

# Getting started with Amazon Keyspaces (for Apache Cassandra)
<a name="getting-started"></a>

If you're new to Apache Cassandra and Amazon Keyspaces, this tutorial guides you through installing the programs and tools that you need to use Amazon Keyspaces successfully. You'll learn how to create a keyspace and table using Cassandra Query Language (CQL), the AWS Management Console, or the AWS Command Line Interface (AWS CLI). You then use CQL to perform create, read, update, and delete (CRUD) operations on data in your Amazon Keyspaces table. 

This tutorial covers the following steps.
+ **Prerequisites** – Before starting the tutorial, follow the AWS setup instructions to sign up for AWS and create an IAM user with access to Amazon Keyspaces. Then, set up the `cqlsh-expansion` and AWS CloudShell. Alternatively, you can use the AWS CLI to create resources in Amazon Keyspaces. 
+ **Step 1: Create a keyspace and table** – In this section, you'll create a keyspace named "catalog" and a table named "book_awards" within it. You'll specify the table's columns, data types, partition key, and clustering columns using the AWS Management Console, CQL, or the AWS CLI. 
+ **Step 2: Perform CRUD operations** – Here, you'll use the `cqlsh-expansion` in CloudShell to insert, read, update, and delete data in the "book_awards" table. You'll learn how to use CQL statements like SELECT, INSERT, UPDATE, and DELETE, and practice filtering and modifying data. 
+ **Step 3: Clean up resources** – To avoid incurring charges for unused resources, this section guides you through deleting the "book_awards" table and the "catalog" keyspace using the console, CQL, or the AWS CLI. 

For tutorials to connect programmatically to Amazon Keyspaces using different Apache Cassandra client drivers, see [Using a Cassandra client driver to access Amazon Keyspaces programmatically](programmatic.drivers.md). For code examples using different AWS SDKs, see [Code examples for Amazon Keyspaces using AWS SDKs](https://docs.aws.amazon.com/keyspaces/latest/devguide/service_code_examples.html).

**Topics**
+ [Tutorial prerequisites and considerations](getting-started.before-you-begin.md)
+ [Create a keyspace in Amazon Keyspaces](getting-started.keyspaces.md)
+ [Check keyspace creation status in Amazon Keyspaces](keyspaces-create.md)
+ [Create a table in Amazon Keyspaces](getting-started.tables.md)
+ [Check table creation status in Amazon Keyspaces](tables-create.md)
+ [Create, read, update, and delete data (CRUD) using CQL in Amazon Keyspaces](getting-started.dml.md)
+ [Delete a table in Amazon Keyspaces](getting-started.clean-up.table.md)
+ [Delete a keyspace in Amazon Keyspaces](getting-started.clean-up.keyspace.md)

# Tutorial prerequisites and considerations
<a name="getting-started.before-you-begin"></a>

Before you can get started with Amazon Keyspaces, follow the AWS setup instructions in [Accessing Amazon Keyspaces (for Apache Cassandra)](accessing.md). These steps include signing up for AWS and creating an AWS Identity and Access Management (IAM) user with access to Amazon Keyspaces.

To complete all the steps of the tutorial, you need to install `cqlsh`. You can follow the setup instructions at [Using `cqlsh` to connect to Amazon Keyspaces](programmatic.cqlsh.md). 

To access Amazon Keyspaces using `cqlsh` or the AWS CLI, we recommend using AWS CloudShell. CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. You can run AWS Command Line Interface (AWS CLI) commands against Amazon Keyspaces using your preferred shell (Bash, PowerShell, or Z shell). To use `cqlsh`, you must install the `cqlsh-expansion`. For `cqlsh-expansion` installation instructions, see [Using the `cqlsh-expansion` to connect to Amazon Keyspaces](programmatic.cqlsh.md#using_cqlsh). For more information about CloudShell, see [Using AWS CloudShell to access Amazon Keyspaces](using-aws-with-cloudshell.md).

To use the AWS CLI to create, view, and delete resources in Amazon Keyspaces, follow the setup instructions at [Downloading and Configuring the AWS CLI](access.cli.md#access.cli.installcli).

After completing the prerequisite steps, proceed to [Create a keyspace in Amazon Keyspaces](getting-started.keyspaces.md).

# Create a keyspace in Amazon Keyspaces
<a name="getting-started.keyspaces"></a>

In this section, you create a keyspace using the console, `cqlsh`, or the AWS CLI.

**Note**  
Before you begin, make sure that you have configured all the [tutorial prerequisites](getting-started.before-you-begin.md). 

A *keyspace* groups related tables that are relevant for one or more applications. A keyspace contains one or more tables and defines the replication strategy for all the tables it contains. For more information about keyspaces, see the following topics:
+ Data definition language (DDL) statements in the CQL language reference: [Keyspaces](cql.ddl.keyspace.md)
+ [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md)

In this tutorial, you create a single-Region keyspace that uses the `SingleRegionStrategy` replication strategy. With `SingleRegionStrategy`, Amazon Keyspaces replicates data across three [Availability Zones](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) in one AWS Region. To learn how to create multi-Region keyspaces, see [Create a multi-Region keyspace in Amazon Keyspaces](keyspaces-mrr-create.md).

## Using the console
<a name="getting-started.keyspaces.con"></a>

**To create a keyspace using the console**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**.

1. Choose **Create keyspace**.

1. In the **Keyspace name** box, enter **catalog** as the name for your keyspace.

   **Name constraints:**
   + The name can't be empty.
   + Allowed characters: alphanumeric characters and underscore ( `_` ).
   + Maximum length is 48 characters.

1. Under **AWS Regions**, confirm that **Single-Region replication** is the replication strategy for the keyspace.

1. To create the keyspace, choose **Create keyspace**.

1. Verify that the keyspace `catalog` was created by doing the following:

   1. In the navigation pane, choose **Keyspaces**.

   1. Locate your keyspace `catalog` in the list of keyspaces.

## Using CQL
<a name="getting-started.keyspaces.cql"></a>

The following procedure creates a keyspace using CQL.

**To create a keyspace using CQL**

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region.

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

   The output of that command should look like this.

   ```
   Connected to Amazon Keyspaces at cassandra.us-east-1.amazonaws.com:9142
   [cqlsh 6.1.0 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
   Use HELP for help.
   cqlsh current consistency level is ONE.
   ```

1. Create your keyspace using the following CQL command.

   ```
   CREATE KEYSPACE catalog WITH REPLICATION = {'class': 'SingleRegionStrategy'};
   ```

   `SingleRegionStrategy` uses a replication factor of three and replicates data across three AWS Availability Zones in its Region.
**Note**  
Amazon Keyspaces defaults all input to lowercase unless you enclose it in quotation marks. 

1. Verify that your keyspace was created.

   ```
   SELECT * from system_schema.keyspaces;
   ```

   The output of this command should look similar to this.

   ```
   cqlsh> SELECT * from system_schema.keyspaces;
   
    keyspace_name           | durable_writes | replication
   -------------------------+----------------+-------------------------------------------------------------------------------------
              system_schema |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
          system_schema_mcs |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
                     system |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
    system_multiregion_info |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
                    catalog |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
   
   (5 rows)
   ```

## Using the AWS CLI
<a name="getting-started.keyspaces.cli"></a>

The following procedure creates a keyspace using the AWS CLI.

**To create a keyspace using the AWS CLI**

1. To confirm that your environment is set up, you can run the following command in CloudShell.

   ```
   aws keyspaces help
   ```

1. Create your keyspace using the following AWS CLI statement.

   ```
   aws keyspaces create-keyspace --keyspace-name 'catalog'
   ```

1. Verify that your keyspace was created with the following AWS CLI statement.

   ```
   aws keyspaces get-keyspace --keyspace-name 'catalog'
   ```

   The output of this command should look similar to this example.

   ```
   {
       "keyspaceName": "catalog",
       "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/catalog/",
       "replicationStrategy": "SINGLE_REGION"
   }
   ```

# Check keyspace creation status in Amazon Keyspaces
<a name="keyspaces-create"></a>

Amazon Keyspaces performs data definition language (DDL) operations, such as creating and deleting keyspaces, asynchronously. 

You can monitor the creation status of new keyspaces in the AWS Management Console, which indicates when a keyspace is pending or active. You can also monitor the creation status of a new keyspace programmatically by using the `system_schema_mcs` keyspace. A keyspace becomes visible in the `system_schema_mcs` `keyspaces` table when it's ready for use. 

The recommended design pattern to check when a new keyspace is ready for use is to poll the Amazon Keyspaces `system_schema_mcs` `keyspaces` table (`system_schema_mcs.*`). For a list of DDL statements for keyspaces, see the [Keyspaces](cql.ddl.keyspace.md) section in the CQL language reference.

The following query shows whether a keyspace has been successfully created.

```
SELECT * FROM system_schema_mcs.keyspaces WHERE keyspace_name = 'mykeyspace';
```

For a keyspace that has been successfully created, the output of the query looks like the following.

```
keyspace_name | durable_writes | replication
--------------+----------------+--------------
   mykeyspace |           true | {...} 1 item
```
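The polling pattern described above can be sketched generically. In the following hypothetical Python sketch, `keyspace_is_ready` is a stand-in for a real check that queries `system_schema_mcs.keyspaces` (for example, through a Cassandra driver session); the function names, timeout, and interval are illustrative assumptions, not part of any Amazon Keyspaces API.

```python
import time

def wait_for_keyspace(is_ready, timeout=60.0, interval=2.0):
    """Poll is_ready() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(interval)
    return False

# Illustrative stub: a real check would run
#   SELECT * FROM system_schema_mcs.keyspaces WHERE keyspace_name = 'mykeyspace'
# and report whether a row came back.
attempts = {"n": 0}
def keyspace_is_ready():
    attempts["n"] += 1
    return attempts["n"] >= 3  # keyspace becomes "visible" on the third poll

print(wait_for_keyspace(keyspace_is_ready, timeout=10.0, interval=0.01))
```

Because DDL operations are asynchronous, a pattern like this avoids race conditions where an application tries to create tables in a keyspace that isn't visible yet.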

# Create a table in Amazon Keyspaces
<a name="getting-started.tables"></a>

In this section, you create a table using the console, `cqlsh`, or the AWS CLI.

A table is where your data is organized and stored. The primary key of your table determines how data is partitioned in your table. The primary key is composed of a required partition key and one or more optional clustering columns. The combined values that compose the primary key must be unique across all the table’s data. For more information about tables, see the following topics:
+ Partition key design: [How to use partition keys effectively in Amazon Keyspaces](bp-partition-key-design.md)
+ Working with tables: [Check table creation status in Amazon Keyspaces](tables-create.md)
+ DDL statements in the CQL language reference: [Tables](cql.ddl.table.md)
+ Table resource management: [Managing serverless resources in Amazon Keyspaces (for Apache Cassandra)](serverless_resource_management.md)
+ Monitoring table resource utilization: [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md)
+ [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md)

When you create a table, you specify the following:
+ The name of the table.
+ The name and data type of each column in the table.
+ The primary key for the table.
  + **Partition key** – Required
  + **Clustering columns** – Optional

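As a mental model (not Amazon Keyspaces code), the following Python sketch shows how a composite primary key like the one used in this tutorial, a partition key of `(year, award)` with clustering columns `(category, rank)`, groups rows into partitions and sorts them within each partition. The sample rows are taken from this tutorial's data set.

```python
from collections import defaultdict

rows = [
    {"year": 2020, "award": "Wolf", "category": "Non-Fiction", "rank": 2, "book_title": "Science Today"},
    {"year": 2020, "award": "Wolf", "category": "Non-Fiction", "rank": 1, "book_title": "History of Ideas"},
    {"year": 2020, "award": "Richard Roe", "category": "Fiction", "rank": 1, "book_title": "Long Summer"},
]

# Group rows by the partition key (year, award)...
partitions = defaultdict(list)
for row in rows:
    partitions[(row["year"], row["award"])].append(row)

# ...then sort each partition by the clustering columns (category, rank), ascending.
for key in partitions:
    partitions[key].sort(key=lambda r: (r["category"], r["rank"]))

for key, part in sorted(partitions.items(), key=lambda kv: str(kv[0])):
    print(key, [r["book_title"] for r in part])
```

Within the `(2020, 'Wolf')` partition, rank 1 sorts before rank 2; rows with different partition key values live in separate partitions.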
Use the following procedure to create a table with the specified columns, data types, partition keys, and clustering columns.

## Using the console
<a name="getting-started.tables.con"></a>

The following procedure creates the table `book_awards` with these columns and data types.

```
year           int
award          text
rank           int 
category       text
book_title     text
author         text
publisher      text
```

**To create a table using the console**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**.

1. Choose `catalog` as the keyspace you want to create this table in.

1. Choose **Create table**.

1. In the **Table name** box, enter **book\$1awards** as a name for your table.

   **Name constraints:**
   + The name can't be empty.
   + Allowed characters: alphanumeric characters and underscore ( `_` ).
   + Maximum length is 48 characters.

1. In the **Columns** section, repeat the following steps for each column that you want to add to this table.

   Add the following columns and data types.

   ```
   year           int
   award          text
   rank           int 
   category       text
   book_title     text
   author         text
   publisher      text
   ```

   1. **Name** – Enter a name for the column.

      **Name constraints:**
      + The name can't be empty.
      + Allowed characters: alphanumeric characters and underscore ( `_` ).
      + Maximum length is 48 characters.

   1. **Type** – In the list of data types, choose the data type for this column.

   1. To add another column, choose **Add column**.

1. Under **Partition key**, choose `year` and `award` as the partition key columns. A partition key is required for each table and can be made of one or more columns. 

1. Add `category` and `rank` as **Clustering columns**. Clustering columns are optional and determine the sort order within each partition.

   1. To add a clustering column, choose **Add clustering column**.

   1. In the **Column** list, choose **category**. In the **Order** list, choose **ASC** to sort in ascending order on the values in this column. (Choose **DESC** for descending order.)

   1. Then select **Add clustering column** and choose **rank**.

1. In the **Table settings** section, choose **Default settings**.

1. Choose **Create table**.

1. Verify that your table was created.

   1. In the navigation pane, choose **Tables**.

   1. Confirm that your table is in the list of tables.

   1. Choose the name of your table.

   1. Confirm that all your columns and data types are correct.
**Note**  
The columns might not be listed in the same order that you added them to the table. 

## Using CQL
<a name="getting-started.tables.cql"></a>

This procedure creates a table with the following columns and data types using CQL. The `year` and `award` columns make up the partition key, and `category` and `rank` are the clustering columns; together, they form the primary key of the table.

```
year           int
award          text
rank           int 
category       text
book_title     text
author         text
publisher      text
```

**To create a table using CQL**

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region.

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

   The output of that command should look like this.

   ```
   Connected to Amazon Keyspaces at cassandra.us-east-1.amazonaws.com:9142
   [cqlsh 6.1.0 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
   Use HELP for help.
   cqlsh current consistency level is ONE.
   ```

1. At the `cqlsh` prompt (`cqlsh>`), create your table by entering the following code into your command window.

   ```
   CREATE TABLE catalog.book_awards (
      year int,
      award text,
      rank int, 
      category text,
      book_title text,
      author text, 
      publisher text,
      PRIMARY KEY ((year, award), category, rank)
      );
   ```
**Note**  
`ASC` is the default clustering order. You can also specify `DESC` for descending order. 

   Note that `year` and `award` are the partition key columns, and `category` and `rank` are the clustering columns, sorted in ascending order (`ASC`). Together, these columns form the primary key of the table. 

1. Verify that your table was created.

   ```
   SELECT * from system_schema.tables WHERE keyspace_name = 'catalog' AND table_name = 'book_awards';
   ```

   The output should include one row for the `book_awards` table. The row also contains the table's settings; those columns are abbreviated here for readability.

   ```
    keyspace_name | table_name  | ...
   ---------------+-------------+-----
          catalog | book_awards | ...

   (1 rows)
   ```

1. Verify your table's structure.

   ```
   SELECT * FROM system_schema.columns WHERE keyspace_name = 'catalog' AND table_name = 'book_awards';
   ```

   The output of this statement should look similar to this example.

   ```
    keyspace_name | table_name  | column_name | clustering_order | column_name_bytes      | kind          | position | type
   ---------------+-------------+-------------+------------------+------------------------+---------------+----------+------
          catalog | book_awards |        year |             none |             0x79656172 | partition_key |        0 |  int
          catalog | book_awards |       award |             none |           0x6177617264 | partition_key |        1 | text
          catalog | book_awards |    category |              asc |     0x63617465676f7279 |    clustering |        0 | text
          catalog | book_awards |        rank |              asc |             0x72616e6b |    clustering |        1 |  int
          catalog | book_awards |      author |             none |         0x617574686f72 |       regular |       -1 | text
          catalog | book_awards |  book_title |             none | 0x626f6f6b5f7469746c65 |       regular |       -1 | text
          catalog | book_awards |   publisher |             none |   0x7075626c6973686572 |       regular |       -1 | text
   
   (7 rows)
   ```

   Confirm that all the columns and data types are as you expected. The order of the columns might be different than in the `CREATE` statement.

## Using the AWS CLI
<a name="getting-started.tables.cli"></a>

This procedure creates a table with the following columns and data types using the AWS CLI. The `year` and `award` columns make up the partition key, with `category` and `rank` as clustering columns.

```
year           int
award          text
rank           int 
category       text
book_title     text
author         text
publisher      text
```

**To create a table using the AWS CLI**

The following command creates a table with the name *book_awards*. The partition key of the table consists of the columns `year` and `award`, and the clustering key consists of the columns `category` and `rank`; both clustering columns use ascending sort order. (For easier readability, the `schema-definition` of the create table command in this section is broken into separate lines.)

1. You can create the table using the following statement.

   ```
   aws keyspaces create-table --keyspace-name 'catalog' \
                         --table-name 'book_awards' \
                         --schema-definition 'allColumns=[{name=year,type=int},{name=award,type=text},{name=rank,type=int},
               {name=category,type=text}, {name=author,type=text},{name=book_title,type=text},{name=publisher,type=text}],
               partitionKeys=[{name=year},{name=award}],clusteringKeys=[{name=category,orderBy=ASC},{name=rank,orderBy=ASC}]'
   ```

   This command results in the following output.

   ```
   {
       "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/catalog/table/book_awards"
   }
   ```

1. To confirm the metadata and properties of the table, you can use the following command.

   ```
   aws keyspaces get-table --keyspace-name 'catalog' --table-name 'book_awards'
   ```

   This command returns the following output.

   ```
   {
       "keyspaceName": "catalog",
       "tableName": "book_awards",
       "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/catalog/table/book_awards",
       "creationTimestamp": "2024-07-11T15:12:55.571000+00:00",
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               {
                   "name": "year",
                   "type": "int"
               },
               {
                   "name": "award",
                   "type": "text"
               },
               {
                   "name": "category",
                   "type": "text"
               },
               {
                   "name": "rank",
                   "type": "int"
               },
               {
                   "name": "author",
                   "type": "text"
               },
               {
                   "name": "book_title",
                   "type": "text"
               },
               {
                   "name": "publisher",
                   "type": "text"
               }
           ],
           "partitionKeys": [
               {
                   "name": "year"
               },
               {
                   "name": "award"
               }
           ],
           "clusteringKeys": [
               {
                   "name": "category",
                   "orderBy": "ASC"
               },
               {
                   "name": "rank",
                   "orderBy": "ASC"
               }
           ],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": "2024-07-11T15:12:55.571000+00:00"
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       },
       "replicaSpecifications": []
   }
   ```
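As a point of comparison, the `schema-definition` shorthand used in the CLI command corresponds to the `schemaDefinition` structure shown in the `get-table` output above. The following Python sketch builds that payload as a plain dictionary so you can inspect it; it does not call AWS, and the helper function name is made up for illustration.

```python
def book_awards_schema():
    """Build a schemaDefinition-style payload for the book_awards table."""
    columns = [
        ("year", "int"), ("award", "text"), ("rank", "int"),
        ("category", "text"), ("author", "text"),
        ("book_title", "text"), ("publisher", "text"),
    ]
    return {
        "allColumns": [{"name": n, "type": t} for n, t in columns],
        "partitionKeys": [{"name": "year"}, {"name": "award"}],
        "clusteringKeys": [
            {"name": "category", "orderBy": "ASC"},
            {"name": "rank", "orderBy": "ASC"},
        ],
    }

schema = book_awards_schema()
print(len(schema["allColumns"]))  # 7 columns, matching the CLI command above
```

With an AWS SDK such as boto3, a dictionary like this could be passed as the `schemaDefinition` parameter of the Keyspaces `create_table` operation, alongside the keyspace and table names.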

To perform CRUD (create, read, update, and delete) operations on the data in your table, proceed to [Create, read, update, and delete data (CRUD) using CQL in Amazon Keyspaces](getting-started.dml.md).

# Check table creation status in Amazon Keyspaces
<a name="tables-create"></a>

Amazon Keyspaces performs data definition language (DDL) operations, such as creating and deleting tables, asynchronously. You can monitor the creation status of new tables in the AWS Management Console, which indicates when a table is pending or active. You can also monitor the creation status of a new table programmatically by using the system schema table. 

A table shows as active in the system schema when it's ready for use. The recommended design pattern to check when a new table is ready for use is to poll the Amazon Keyspaces system schema tables (`system_schema_mcs.*`). For a list of DDL statements for tables, see the [Tables](cql.ddl.table.md) section in the CQL language reference.

The following query shows the status of a table.

```
SELECT keyspace_name, table_name, status FROM system_schema_mcs.tables WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';
```

For a table that is still being created and is pending, the output of the query looks like this.

```
keyspace_name | table_name | status
--------------+------------+--------
   mykeyspace |    mytable | CREATING
```

For a table that has been successfully created and is active, the output of the query looks like the following.

```
keyspace_name | table_name | status
--------------+------------+--------
   mykeyspace |    mytable | ACTIVE
```

# Create, read, update, and delete data (CRUD) using CQL in Amazon Keyspaces
<a name="getting-started.dml"></a>

In this step of the tutorial, you'll learn how to insert, read, update, and delete data in an Amazon Keyspaces table using CQL data manipulation language (DML) statements. In Amazon Keyspaces, DML statements are available only in CQL. In this tutorial, you'll practice running DML statements using the `cqlsh-expansion` with [AWS CloudShell](using-aws-with-cloudshell.md) in the AWS Management Console.
+ **Inserting data** – This section covers inserting single and multiple records into a table using the `INSERT` statement. You'll learn how to upload data from a CSV file and verify successful inserts using `SELECT` queries. 
+ **Reading data** – Here, you'll explore different variations of the `SELECT` statement to retrieve data from a table. Topics include selecting all data, selecting specific columns, filtering rows based on conditions using the `WHERE` clause, and understanding simple and compound conditions. 
+ **Updating data** – In this section, you'll learn how to modify existing data in a table using the `UPDATE` statement. You'll practice updating single and multiple columns while understanding restrictions around updating primary key columns. 
+ **Deleting data** – The final section covers deleting data from a table using the `DELETE` statement. You'll learn how to delete specific cells and entire rows, and the implications of deleting data versus deleting the entire table or keyspace. 

Throughout the tutorial, you'll find examples, tips, and opportunities to practice writing your own CQL queries for various scenarios.

**Topics**
+ [Inserting and loading data into an Amazon Keyspaces table](getting-started.dml.create.md)
+ [Read data from a table using the CQL `SELECT` statement in Amazon Keyspaces](getting-started.dml.read.md)
+ [Update data in an Amazon Keyspaces table using CQL](getting-started.dml.update.md)
+ [Delete data from a table using the CQL `DELETE` statement](getting-started.dml.delete.md)

# Inserting and loading data into an Amazon Keyspaces table
<a name="getting-started.dml.create"></a>

To create data in your `book_awards` table, use the `INSERT` statement to add a single row. 

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region.

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

   The output of that command should look like this.

   ```
   Connected to Amazon Keyspaces at cassandra.us-east-1.amazonaws.com:9142
   [cqlsh 6.1.0 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
   Use HELP for help.
   cqlsh current consistency level is ONE.
   ```

1. Before you can write data to your Amazon Keyspaces table using cqlsh, you must set the write consistency for the current cqlsh session to `LOCAL_QUORUM`. For more information about supported consistency levels, see [Write consistency levels](consistency.md#WriteConsistency). Note that this step is not required if you are using the CQL editor in the AWS Management Console.

   ```
   CONSISTENCY LOCAL_QUORUM;
   ```

1. To insert a single record, run the following command in the CQL editor.

   ```
   INSERT INTO catalog.book_awards (award, year, category, rank, author, book_title, publisher)
   VALUES ('Wolf', 2023, 'Fiction',3,'Shirley Rodriguez','Mountain', 'AnyPublisher') ;
   ```

1. Verify that the data was correctly added to your table by running the following command.

   ```
   SELECT * FROM catalog.book_awards;
   ```

   The output of the statement should look like this.

   ```
    year | award | category | rank | author            | book_title | publisher
   ------+-------+----------+------+-------------------+------------+--------------
    2023 |  Wolf |  Fiction |    3 | Shirley Rodriguez |   Mountain | AnyPublisher
   
   (1 rows)
   ```

**To insert multiple records from a file using cqlsh**

1. Download the sample CSV file (`keyspaces_sample_table.csv`) contained in the archive file [samplemigration.zip](samples/samplemigration.zip). Unzip the archive and take note of the path to `keyspaces_sample_table.csv`.  
![\[Screenshot of a CSV file showing the output of the table after importing the csv file.\]](http://docs.aws.amazon.com/keyspaces/latest/devguide/images/keyspaces-awards.png)

1. Open AWS CloudShell in the AWS Management Console and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region.

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

1. At the `cqlsh` prompt (`cqlsh>`), specify a keyspace.

   ```
   USE catalog ;
   ```

1. Set write consistency to `LOCAL_QUORUM`. For more information about supported consistency levels, see [Write consistency levels](consistency.md#WriteConsistency).

   ```
   CONSISTENCY LOCAL_QUORUM;
   ```

1. In AWS CloudShell, choose **Actions** at the top right of the screen, and then choose **Upload file** to upload the CSV file that you downloaded earlier. Take note of the path to the file.

1. At the keyspace prompt (`cqlsh:catalog>`), run the following statement.

   ```
   COPY book_awards (award, year, category, rank, author, book_title, publisher) FROM '/home/cloudshell-user/keyspaces_sample_table.csv' WITH header=TRUE ;
   ```

   The output of the statement should look similar to this.

   ```
   cqlsh:catalog> COPY book_awards (award, year, category, rank, author, book_title, publisher)                      FROM '/home/cloudshell-user/keyspaces_sample_table.csv' WITH delimiter=',' AND header=TRUE ;
   cqlsh current consistency level is LOCAL_QUORUM.
   Reading options from /home/cloudshell-user/.cassandra/cqlshrc:[copy]: {'numprocesses': '16', 'maxattempts': '1000'}
   Reading options from /home/cloudshell-user/.cassandra/cqlshrc:[copy-from]: {'ingestrate': '1500', 'maxparseerrors': '1000', 'maxinserterrors': '-1', 'maxbatchsize': '10', 'minbatchsize': '1', 'chunksize': '30'}
   Reading options from the command line: {'delimiter': ',', 'header': 'TRUE'}
   Using 16 child processes
   
   Starting copy of catalog.book_awards with columns [award, year, category, rank, author, book_title, publisher].
   OSError: handle is closed      0 rows/s; Avg. rate:       0 rows/s
   Processed: 9 rows; Rate:       0 rows/s; Avg. rate:       0 rows/s
   9 rows imported from 1 files in 0 day, 0 hour, 0 minute, and 26.706 seconds (0 skipped).
   ```

1. Verify that the data was correctly added to your table by running the following query.

   ```
   SELECT * FROM book_awards ;
   ```

   You should see the following output.

   ```
    year | award            | category    | rank | author             | book_title            | publisher
   ------+------------------+-------------+------+--------------------+-----------------------+---------------
    2020 |             Wolf | Non-Fiction |    1 |        Wang Xiulan |      History of Ideas | Example Books
    2020 |             Wolf | Non-Fiction |    2 | Ana Carolina Silva |         Science Today | SomePublisher
    2020 |             Wolf | Non-Fiction |    3 |  Shirley Rodriguez | The Future of Sea Ice |  AnyPublisher
    2020 | Kwesi Manu Prize |     Fiction |    1 |         Akua Mansa |     Where did you go? | SomePublisher
    2020 | Kwesi Manu Prize |     Fiction |    2 |        John Stiles |             Yesterday | Example Books
    2020 | Kwesi Manu Prize |     Fiction |    3 |         Nikki Wolf | Moving to the Chateau |  AnyPublisher
    2020 |      Richard Roe |     Fiction |    1 |  Alejandro Rosalez |           Long Summer | SomePublisher
    2020 |      Richard Roe |     Fiction |    2 |        Arnav Desai |               The Key | Example Books
    2020 |      Richard Roe |     Fiction |    3 |      Mateo Jackson |      Inside the Whale |  AnyPublisher
   
   (9 rows)
   ```

To learn more about using `cqlsh COPY` to upload data from CSV files to an Amazon Keyspaces table, see [Tutorial: Loading data into Amazon Keyspaces using cqlsh](bulk-upload.md).

# Read data from a table using the CQL `SELECT` statement in Amazon Keyspaces
<a name="getting-started.dml.read"></a>

In the [Inserting and loading data into an Amazon Keyspaces table](getting-started.dml.create.md) section, you used the `SELECT` statement to verify that you had successfully added data to your table. In this section, you refine your use of `SELECT` to display specific columns, and only rows that meet specific criteria.

The general form of the `SELECT` statement is as follows.

```
SELECT column_list FROM table_name [WHERE condition [ALLOW FILTERING]] ;
```
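As a quick illustration of this general form, the following Python sketch assembles `SELECT` statements from their parts. The `build_select` helper is hypothetical and for illustration only; it is not part of cqlsh or Amazon Keyspaces.

```python
# Hypothetical helper: builds a CQL SELECT statement string that
# follows the general form shown above.
def build_select(table, columns=None, condition=None, allow_filtering=False):
    column_list = ", ".join(columns) if columns else "*"
    stmt = f"SELECT {column_list} FROM {table}"
    if condition:
        stmt += f" WHERE {condition}"
        if allow_filtering:
            stmt += " ALLOW FILTERING"
    return stmt + " ;"

print(build_select("catalog.book_awards"))
# SELECT * FROM catalog.book_awards ;
print(build_select("catalog.book_awards", ["award", "year"], "year=2020"))
# SELECT award, year FROM catalog.book_awards WHERE year=2020 ;
```

The sections that follow run real statements of each of these shapes against your `book_awards` table.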

**Topics**
+ [Select all the data in your table](#getting-started.dml.read.all)
+ [Select a subset of columns](#getting-started.dml.read.columns)
+ [Select a subset of rows](#getting-started.dml.read.rows)

## Select all the data in your table
<a name="getting-started.dml.read.all"></a>

The simplest form of the `SELECT` statement returns all the data in your table.

**Important**  
 In a production environment, running this statement is not a best practice, because it returns all the data in your table. 

**To select all your table's data**

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region. 

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

1. Run the following query.

   ```
   SELECT * FROM catalog.book_awards ;
   ```

   Using the wildcard character (`*`) as the `column_list` selects all columns. The output of the statement looks like the following example.

   ```
    year | award            | category    | rank | author             | book_title            | publisher
   ------+------------------+-------------+------+--------------------+-----------------------+---------------
    2020 |             Wolf | Non-Fiction |    1 |        Wang Xiulan |      History of Ideas |  AnyPublisher
    2020 |             Wolf | Non-Fiction |    2 | Ana Carolina Silva |         Science Today | SomePublisher
    2020 |             Wolf | Non-Fiction |    3 |  Shirley Rodriguez | The Future of Sea Ice |  AnyPublisher
    2020 | Kwesi Manu Prize |     Fiction |    1 |         Akua Mansa |     Where did you go? | SomePublisher
    2020 | Kwesi Manu Prize |     Fiction |    2 |        John Stiles |             Yesterday | Example Books
    2020 | Kwesi Manu Prize |     Fiction |    3 |         Nikki Wolf | Moving to the Chateau |  AnyPublisher
    2020 |      Richard Roe |     Fiction |    1 |  Alejandro Rosalez |           Long Summer | SomePublisher
    2020 |      Richard Roe |     Fiction |    2 |        Arnav Desai |               The Key | Example Books
    2020 |      Richard Roe |     Fiction |    3 |      Mateo Jackson |      Inside the Whale |  AnyPublisher
   ```

## Select a subset of columns
<a name="getting-started.dml.read.columns"></a>

**To query for a subset of columns**

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region. 

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

1. To retrieve only the `award`, `category`, and `year` columns, run the following query.

   ```
   SELECT award, category, year FROM catalog.book_awards ;
   ```

   The output contains only the specified columns in the order listed in the `SELECT` statement.

   ```
    award            | category    | year
   ------------------+-------------+------
                Wolf | Non-Fiction | 2020
                Wolf | Non-Fiction | 2020
                Wolf | Non-Fiction | 2020
    Kwesi Manu Prize |     Fiction | 2020
    Kwesi Manu Prize |     Fiction | 2020
    Kwesi Manu Prize |     Fiction | 2020
         Richard Roe |     Fiction | 2020
         Richard Roe |     Fiction | 2020
         Richard Roe |     Fiction | 2020
   ```

## Select a subset of rows
<a name="getting-started.dml.read.rows"></a>

When querying a large dataset, you might want only the records that meet certain criteria. To do this, you can append a `WHERE` clause to the end of your `SELECT` statement.

**To query for a subset of rows**

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region. 

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

1. To retrieve only the records for the awards of a given year, run the following query.

   ```
   SELECT * FROM catalog.book_awards WHERE year=2020 AND award='Wolf' ;
   ```

   The preceding `SELECT` statement returns the following output.

   ```
    year | award | category    | rank | author             | book_title            | publisher
   ------+-------+-------------+------+--------------------+-----------------------+---------------
    2020 |  Wolf | Non-Fiction |    1 |        Wang Xiulan |      History of Ideas |  AnyPublisher
    2020 |  Wolf | Non-Fiction |    2 | Ana Carolina Silva |         Science Today | SomePublisher
    2020 |  Wolf | Non-Fiction |    3 |  Shirley Rodriguez | The Future of Sea Ice |  AnyPublisher
   ```

### Understanding the `WHERE` clause
<a name="getting-started.dml.where"></a>

The `WHERE` clause is used to filter the data and return only the data that meets the specified criteria. The specified criteria can be a simple condition or a compound condition. 

**How to use conditions in a `WHERE` clause**
+ A simple condition – A condition on a single column.

  ```
  WHERE column_name=value
  ```

  You can use a simple condition in a `WHERE` clause if any of the following conditions are met:
  + The column is the only partition key column of the table.
  + You add `ALLOW FILTERING` after the condition in the `WHERE` clause.

    Be aware that using `ALLOW FILTERING` can result in inconsistent performance, especially with large tables that span multiple partitions.
+ A compound condition – Multiple simple conditions connected by `AND`.

  ```
  WHERE column_name1=value1 AND column_name2=value2 AND column_name3=value3...
  ```

  You can use compound conditions in a `WHERE` clause if any of the following conditions are met:
  + The columns in the `WHERE` clause must include either all of the columns in the table's partition key or a subset of them. If you use only a subset, it must be a contiguous set of partition key columns from left to right, beginning with the partition key's leading column. For example, if the partition key columns are `year`, `month`, and `award`, then you can use the following columns in the `WHERE` clause: 
    + `year`
    + `year` AND `month`
    + `year` AND `month` AND `award`
  + You add `ALLOW FILTERING` after the compound condition in the `WHERE` clause, as in the following example.

    ```
    SELECT * FROM my_table WHERE col1=5 AND col2='Bob' ALLOW FILTERING ;
    ```

    Be aware that using `ALLOW FILTERING` can result in inconsistent performance, especially with large tables that span multiple partitions.
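The left-prefix rule above can be sketched in a few lines of Python. This is a simplified model that covers only the partition key case described here (it ignores clustering columns and other query shapes); the function name and the example partition key columns `year`, `month`, and `award` follow the text above.

```python
# Simplified model of the left-prefix rule for partition key columns
# in a WHERE clause, using the example partition key from the text.
PARTITION_KEY = ["year", "month", "award"]

def needs_allow_filtering(where_columns):
    """Return False if where_columns form a contiguous left prefix of
    the partition key; True if ALLOW FILTERING would be required."""
    prefix = PARTITION_KEY[:len(where_columns)]
    return list(where_columns) != prefix

print(needs_allow_filtering(["year"]))            # False: leading column
print(needs_allow_filtering(["year", "month"]))   # False: contiguous prefix
print(needs_allow_filtering(["month"]))           # True: skips the leading column
print(needs_allow_filtering(["year", "award"]))   # True: gap in the prefix
```

Running a query that fails this check without `ALLOW FILTERING` returns an error instead of results.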

### Try it
<a name="getting-started.dml.try"></a>

Create your own CQL queries to find the following from your `book_awards` table:
+ Find the winners of the 2020 Wolf awards and display the book titles and authors, ordered by rank.
+ Show the first prize winners for all awards in 2020 and display the book titles and award names.
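If you want to sanity-check your answer to the first exercise offline, you can mimic the filter and ordering in plain Python against a few of the sample rows shown earlier. This is only a model of what the query does, not a substitute for running it in cqlsh.

```python
# Offline sanity check for the first "Try it" query, using sample rows
# from the table above: (award, year, category, rank, author, title).
rows = [
    ("Wolf", 2020, "Non-Fiction", 1, "Wang Xiulan", "History of Ideas"),
    ("Wolf", 2020, "Non-Fiction", 3, "Shirley Rodriguez", "The Future of Sea Ice"),
    ("Wolf", 2020, "Non-Fiction", 2, "Ana Carolina Silva", "Science Today"),
    ("Kwesi Manu Prize", 2020, "Fiction", 1, "Akua Mansa", "Where did you go?"),
]

# Filter on year and award, keep title and author, order by rank.
winners = sorted(
    ((title, author, rank)
     for award, year, _, rank, author, title in rows
     if award == "Wolf" and year == 2020),
    key=lambda t: t[2],
)
for title, author, rank in winners:
    print(rank, author, title)
```

Your CQL query should return the same three books in the same rank order.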

# Update data in an Amazon Keyspaces table using CQL
<a name="getting-started.dml.update"></a>

To update the data in your `book_awards` table, use the `UPDATE` statement.

The general form of the `UPDATE` statement is as follows.

```
UPDATE table_name SET column_name=new_value WHERE primary_key=value ;
```

**Tip**  
You can update multiple columns by using a comma-separated list of `column_names` and values, as in the following example.  

  ```
  UPDATE my_table SET col1='new_value_1', col2='new_value_2' WHERE col3='1' ;
  ```
If the primary key is composed of multiple columns, all primary key columns and their values must be included in the `WHERE` clause.
You cannot update any column in the primary key because that would change the primary key for the record.
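The rules in this tip can be sketched with a small, hypothetical Python helper that refuses to build an `UPDATE` statement unless every primary key column of `book_awards` (`year`, `award`, `category`, and `rank`) appears in the `WHERE` clause, and that rejects attempts to update a key column. The helper is for illustration only; it is not part of cqlsh or Amazon Keyspaces.

```python
# Hypothetical helper: builds an UPDATE statement while enforcing the
# two rules above: the WHERE clause names every primary key column,
# and primary key columns themselves cannot be updated.
PRIMARY_KEY = ["year", "award", "category", "rank"]  # book_awards primary key

def build_update(table, set_values, key_values):
    missing = [c for c in PRIMARY_KEY if c not in key_values]
    if missing:
        raise ValueError(f"WHERE clause must include: {missing}")
    if any(c in set_values for c in PRIMARY_KEY):
        raise ValueError("Cannot update a primary key column")
    sets = ", ".join(f"{c}={v!r}" for c, v in set_values.items())
    conds = " AND ".join(f"{c}={v!r}" for c, v in key_values.items())
    return f"UPDATE {table} SET {sets} WHERE {conds} ;"

print(build_update("book_awards", {"publisher": "new Books"},
                   {"year": 2020, "award": "Wolf",
                    "category": "Non-Fiction", "rank": 1}))
```

The printed statement matches the one you run in the next step.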

**To update a single cell**  
Using your `book_awards` table, change the name of the publisher for the winner of the 2020 non-fiction Wolf award.

```
UPDATE book_awards SET publisher='new Books' WHERE year = 2020 AND award='Wolf' AND category='Non-Fiction' AND rank=1;
```

Verify that the publisher is now `new Books`.

```
SELECT * FROM book_awards WHERE year = 2020 AND award='Wolf' AND category='Non-Fiction' AND rank=1;
```

The statement should return the following output.

```
 year | award | category    | rank | author      | book_title       | publisher
------+-------+-------------+------+-------------+------------------+-----------
 2020 |  Wolf | Non-Fiction |    1 | Wang Xiulan | History of Ideas | new Books
```

## Try it
<a name="getting-started.dml.update.try"></a>

**Advanced:** The winner of the 2020 fiction "Kwesi Manu Prize" has changed their name. Update this record to change the name to `'Akua Mansa-House'`. 

# Delete data from a table using the CQL `DELETE` statement
<a name="getting-started.dml.delete"></a>

To delete data in your `book_awards` table, use the `DELETE` statement.

You can delete data from a row or from a partition. Be careful when deleting data, because deletions are irreversible.

Deleting one or all rows from a table doesn't delete the table, so you can repopulate it with data. Deleting a table deletes the table and all data in it; to use the table again, you must re-create it and add data to it. Deleting a keyspace deletes the keyspace and all tables within it; to use the keyspace and its tables again, you must re-create them and then populate them with data. You can use Amazon Keyspaces point-in-time recovery (PITR) to help restore deleted tables. To learn more, see [Backup and restore data with point-in-time recovery for Amazon Keyspaces](PointInTimeRecovery.md). To learn how to restore a deleted table with PITR enabled, see [Restore a deleted table using Amazon Keyspaces PITR](restoredeleted.md).

## Delete cells
<a name="getting-started.dml.delete-cell"></a>

Deleting a column from a row removes the data from the specified cell. When you display that column using a `SELECT` statement, the data is displayed as *null*, though a null value is not stored in that location.

The general syntax to delete one or more specific columns is as follows.

```
DELETE column_name1[, column_name2...] FROM table_name WHERE condition ;
```
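The effect of a cell delete can be modeled in a few lines of Python: the row survives, and only the deleted cell reads back as null (`None` here). This is a conceptual sketch, not how Amazon Keyspaces stores data internally.

```python
# Conceptual model of a cell delete: the row remains, and only the
# deleted column reads back as null. Row values follow the sample data.
row = {"year": 2020, "award": "Richard Roe", "rank": 1,
       "book_title": "Long Summer", "publisher": "SomePublisher"}

row["book_title"] = None  # DELETE book_title FROM ... removes only this cell
print(row["book_title"])  # None; cqlsh displays this as null
print(row["publisher"])   # other cells are untouched
```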

In your `book_awards` table, you can see that the title of the book that won first prize in the 2020 "Richard Roe" award is "Long Summer". Imagine that this title has been recalled, and you need to delete the data from this cell.

**To delete a specific cell**

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region. 

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

1. Run the following `DELETE` query.

   ```
   DELETE book_title FROM catalog.book_awards WHERE year=2020 AND award='Richard Roe' AND category='Fiction' AND rank=1;
   ```

1. Verify that the delete request was made as expected.

   ```
   SELECT * FROM catalog.book_awards WHERE year=2020 AND award='Richard Roe' AND category='Fiction' AND rank=1;
   ```

   The output of this statement looks like this.

   ```
    year | award       | category | rank | author            | book_title | publisher
   ------+-------------+----------+------+-------------------+------------+---------------
    2020 | Richard Roe |  Fiction |    1 | Alejandro Rosalez |       null | SomePublisher
   ```

## Delete rows
<a name="getting-started.dml.delete-row"></a>

There might be a time when you need to delete an entire row, for example to meet a data deletion request. The general syntax for deleting a row is as follows.

```
DELETE FROM table_name WHERE condition ;
```

**To delete a row**

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region. 

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

1. Run the following `DELETE` query.

   ```
   DELETE FROM catalog.book_awards WHERE year=2020 AND award='Richard Roe' AND category='Fiction' AND rank=1;
   ```

1. Verify that the delete was made as expected.

   ```
   SELECT * FROM catalog.book_awards WHERE year=2020 AND award='Richard Roe' AND category='Fiction' AND rank=1;
   ```

   The output of this statement looks like this after the row has been deleted.

   ```
    year | award | category | rank | author | book_title | publisher
   ------+-------+----------+------+--------+------------+-----------
   
   (0 rows)
   ```

You can delete expired data automatically from your table using Amazon Keyspaces Time to Live. For more information, see [Expire data with Time to Live (TTL) for Amazon Keyspaces (for Apache Cassandra)](TTL.md).

# Delete a table in Amazon Keyspaces
<a name="getting-started.clean-up.table"></a>

To avoid being charged for tables and data that you don't need, delete all the tables that you're not using. When you delete a table, the table and all its data are deleted and you stop accruing charges for them. However, the keyspace remains.

You can delete a table using the console, CQL, or the AWS CLI.

## Using the console
<a name="getting-started.clean-up.table.con"></a>

The following procedure deletes a table and all its data using the AWS Management Console.

**To delete a table using the console**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. In the navigation pane, choose **Tables**.

1. Choose the box to the left of the name of each table that you want to delete.

1. Choose **Delete**.

1. On the **Delete table** screen, enter **Delete** in the box. Then, choose **Delete table**.

1. To verify that the table was deleted, choose **Tables** in the navigation pane, and confirm that the `book_awards` table is no longer listed.

## Using CQL
<a name="getting-started.clean-up.table.cql"></a>

The following procedure deletes a table and all its data using CQL.

**To delete a table using CQL**

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region. 

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

1. Delete your table by entering the following statement.

   ```
   DROP TABLE IF EXISTS catalog.book_awards ;
   ```

1. Verify that your table was deleted.

   ```
   SELECT * FROM system_schema.tables WHERE keyspace_name = 'catalog' ;
   ```

   The output should look like this. Note that this might take some time, so re-run the statement after a minute if you don't see this result.

   ```
   keyspace_name | table_name | bloom_filter_fp_chance | caching | cdc | comment | compaction | compression | crc_check_chance | dclocal_read_repair_chance | default_time_to_live | extensions | flags | gc_grace_seconds | id | max_index_interval | memtable_flush_period_in_ms | min_index_interval | read_repair_chance | speculative_retry
   ---------------+------------+------------------------+---------+-----+---------+------------+-------------+------------------+----------------------------+----------------------+------------+-------+------------------+----+--------------------+-----------------------------+--------------------+--------------------+-------------------
   
   (0 rows)
   ```

## Using the AWS CLI
<a name="getting-started.clean-up.table.cli"></a>

The following procedure deletes a table and all its data using the AWS CLI.

**To delete a table using the AWS CLI**

1. Open AWS CloudShell.

1. Delete your table with the following statement.

   ```
   aws keyspaces delete-table --keyspace-name 'catalog' --table-name 'book_awards'
   ```

1. To verify that your table was deleted, you can list all tables in a keyspace.

   ```
   aws keyspaces list-tables --keyspace-name 'catalog'
   ```

   You should see the following output. Note that this asynchronous operation can take some time. Re-run the command after a short while to confirm that the table has been deleted.

   ```
   {
       "tables": []
   }
   ```

# Delete a keyspace in Amazon Keyspaces
<a name="getting-started.clean-up.keyspace"></a>

To avoid being charged for keyspaces, delete all the keyspaces that you're not using. When you delete a keyspace, the keyspace and all its tables are deleted and you stop accruing charges for them.

You can delete a keyspace using the console, CQL, or the AWS CLI. 

## Using the console
<a name="getting-started.clean-up.keyspace.con"></a>

The following procedure deletes a keyspace and all its tables and data using the console.

**To delete a keyspace using the console**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**.

1. Choose the box to the left of the name of each keyspace that you want to delete.

1. Choose **Delete**.

1. On the **Delete keyspace** screen, enter **Delete** in the box. Then, choose **Delete keyspace**.

1. To verify that the keyspace `catalog` was deleted, choose **Keyspaces** in the navigation pane and confirm that it is no longer listed. Because you deleted its keyspace, the `book_awards` table under **Tables** should also not be listed.

## Using CQL
<a name="getting-started.clean-up.keyspace.cql"></a>

The following procedure deletes a keyspace and all its tables and data using CQL.

**To delete a keyspace using CQL**

1. Open AWS CloudShell and connect to Amazon Keyspaces using the following command. Make sure to update *us-east-1* with your own Region. 

   ```
   cqlsh-expansion cassandra.us-east-1.amazonaws.com 9142 --ssl
   ```

1. Delete your keyspace by entering the following statement.

   ```
   DROP KEYSPACE IF EXISTS catalog ;
   ```

1. Verify that your keyspace was deleted.

   ```
   SELECT * from system_schema.keyspaces ;
   ```

   Your keyspace should not be listed. Note that because this is an asynchronous operation, there can be a delay until the keyspace is deleted. After the keyspace has been deleted, the output of the statement should look like this.

   ```
   keyspace_name           | durable_writes | replication
   -------------------------+----------------+-------------------------------------------------------------------------------------
              system_schema |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
          system_schema_mcs |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
                     system |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
    system_multiregion_info |           True | {'class': 'org.apache.cassandra.locator.SimpleStrategy', 'replication_factor': '3'}
   
   (4 rows)
   ```

## Using the AWS CLI
<a name="getting-started.clean-up.keyspace.cli"></a>

The following procedure deletes a keyspace and all its tables and data using the AWS CLI.

**To delete a keyspace using the AWS CLI**

1. Open AWS CloudShell.

1. Delete your keyspace by entering the following statement.

   ```
   aws keyspaces delete-keyspace --keyspace-name 'catalog' 
   ```

1. Verify that your keyspace was deleted.

   ```
   aws keyspaces list-keyspaces
   ```

   The output of this statement should look similar to this, and only list the system keyspaces. Note that because this is an asynchronous operation, there can be a delay until your keyspace is deleted.

   ```
   {
       "keyspaces": [
           {
               "keyspaceName": "system_schema",
               "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/system_schema/",
               "replicationStrategy": "SINGLE_REGION"
           },
           {
               "keyspaceName": "system_schema_mcs",
               "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/system_schema_mcs/",
               "replicationStrategy": "SINGLE_REGION"
           },
           {
               "keyspaceName": "system",
               "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/system/",
               "replicationStrategy": "SINGLE_REGION"
           },
           {
               "keyspaceName": "system_multiregion_info",
               "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/system_multiregion_info/",
               "replicationStrategy": "SINGLE_REGION"
           }
       ]
   }
   ```