

# NoSQL Workbench for DynamoDB
<a name="workbench"></a>

 NoSQL Workbench for Amazon DynamoDB is a cross-platform, client-side GUI application that you can use for modern database development and operations. It's available for Windows, macOS, and Linux. NoSQL Workbench lets you design DynamoDB data models, define access patterns as real DynamoDB operations, and validate them using sample data. Additionally, you can organize your data models into projects. NoSQL Workbench includes DynamoDB local, which makes it possible to test your tables and indexes before committing your data model to the cloud. To learn more about DynamoDB local and its requirements, see [Setting up DynamoDB local (downloadable version)](DynamoDBLocal.md).

**Data modeler**  
 With NoSQL Workbench for DynamoDB, you can start a new project from scratch or use a sample project that matches your use case. You then design tables and global secondary indexes, define attributes, and configure sample data. You can also visualize your access patterns as real DynamoDB operations (PutItem, UpdateItem, Query, and others) and run these operations against the configured sample data to validate that each access pattern works as intended, adjusting the data model based on the validation results. Finally, once validated, you can commit the model to either DynamoDB local or your AWS account for further testing and production use. For collaboration, you can import and export designed data models. For more information, see [Building data models with NoSQL Workbench](workbench.Modeler.md).

**Operation builder**  
NoSQL Workbench provides a rich graphical user interface for you to develop and test queries. You can use the *operation builder* to view, explore, and query live datasets. The structured operation builder supports projection expressions and condition expressions, and generates sample code in multiple languages. You can directly clone tables from one Amazon DynamoDB account to another, across Regions. You can also directly clone tables between DynamoDB local and an Amazon DynamoDB account for faster copying of your table’s key schema (and optionally GSI schema and items) between your development environments. For more information, see [Exploring datasets and building operations with NoSQL Workbench](workbench.querybuilder.md).

The video below details concepts of data modeling with NoSQL Workbench.

[![AWS Videos](http://img.youtube.com/vi/p5va6ZX9_o0/0.jpg)](http://www.youtube.com/watch?v=p5va6ZX9_o0)


**Topics**
+ [Download NoSQL Workbench for DynamoDB](workbench.settingup.md)
+ [Building data models with NoSQL Workbench](workbench.Modeler.md)
+ [Exploring datasets and building operations with NoSQL Workbench](workbench.querybuilder.md)
+ [Sample data models for NoSQL Workbench](workbench.SampleModels.md)
+ [Release history for NoSQL Workbench](WorkbenchDocumentHistory.md)

# Download NoSQL Workbench for DynamoDB
<a name="workbench.settingup"></a>

Follow these instructions to download NoSQL Workbench and DynamoDB local for Amazon DynamoDB.

**To download NoSQL Workbench and DynamoDB local**
+ Download the appropriate version of NoSQL Workbench for your operating system.  
  [See the AWS documentation website for more details](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/workbench.settingup.html)

**Note**  
NoSQL Workbench includes DynamoDB local as part of the installation process.  
Java Runtime Environment (JRE) version 17.x or newer is required for running DynamoDB local.

**Note**  
NoSQL Workbench supports Ubuntu 12.04, Fedora 21, and Debian 8, or any newer versions of these Linux distributions.  
There are two prerequisite pieces of software required for Ubuntu installs: `libfuse2` and `curl`.  
As of Ubuntu 22.04, `libfuse2` is no longer installed by default. To install it, run `sudo add-apt-repository universe && sudo apt install libfuse2`.  
For cURL, run `sudo apt update && sudo apt upgrade && sudo apt install curl`.

# Building data models with NoSQL Workbench
<a name="workbench.Modeler"></a>

You can use the data modeler tool in NoSQL Workbench for Amazon DynamoDB to build new data models, or to design models based on existing data models that satisfy your application data access patterns. The data modeler includes a few sample data models to help you get started.

**Topics**
+ [Creating a new data model](workbench.Modeler.CreateNew.md)
+ [Importing an existing data model](workbench.Modeler.ImportExisting.md)
+ [Editing an existing data model](workbench.Modeler.Edit.md)
+ [Adding sample data to a data model](workbench.Modeler.AddData.md)
+ [Adding and validating access patterns](workbench.Modeler.AccessPatterns.md)
+ [Importing sample data from a CSV file](workbench.Modeler.ImportCSV.md)
+ [Facets](workbench.Modeler.Facets.md)
+ [Viewing all tables in a data model using aggregate view](workbench.Modeler.AggregateView.md)
+ [Exporting a data model](workbench.Modeler.ExportModel.md)
+ [Committing a data model to DynamoDB](workbench.Modeler.Commit.md)

# Creating a new data model
<a name="workbench.Modeler.CreateNew"></a>

Follow these steps to create a new data model in Amazon DynamoDB using NoSQL Workbench.

**To create a new data model**

1.  Open NoSQL Workbench, and on the main screen, select **Create model manually**. 

    A new page will open with an empty configuration for your first table. NoSQL Workbench creates all new data models with a default name (for example, `untitled-2`) and adds them to the **Drafts** project folder. 

1.  On the **Table configuration** screen, specify the following: 
   +  **Table name** — Enter a unique name for the table. 
   +  **Partition key** — Enter a partition key name, and specify its type. Optionally, you can also select a more granular data type format for sample data generation. 
   +  If you want to add a **Sort key**, specify the sort key name and its type. Optionally, you can select a more granular data type format for sample data generation. 
**Note**  
 To learn more about primary key design, designing and using partition keys effectively, and using sort keys, see the following:   
 [Primary key](HowItWorks.CoreComponents.md#HowItWorks.CoreComponents.PrimaryKey) 
 [Best practices for designing and using partition keys effectively in DynamoDB](bp-partition-key-design.md) 
 [Best practices for using sort keys to organize data in DynamoDB](bp-sort-keys.md) 

1. You can add other attributes to more clearly validate your model and access patterns. To add other attributes:
   +  Choose **Add an attribute**. 
   +  Specify the attribute name and its type. 
   +  Optionally, you can select a more granular data type format for sample data generation. 

1.  If you want to add a global secondary index, choose **Add global secondary index**. Specify the **Global secondary index name**, **Partition key** attribute, and **Projection type**. 

   For more information about working with global secondary indexes in DynamoDB, see [Global secondary indexes](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html).

1.  Optionally, **Add a facet**. A facet is a virtual construct in NoSQL Workbench. It is not a functional construct in DynamoDB. Facets in NoSQL Workbench help you visualize an application's different data access patterns for DynamoDB with only a subset of the data in a table. 
**Note**  
 We recommend you use [Adding and validating access patterns](workbench.Modeler.AccessPatterns.md) to visualize how your application will access data in DynamoDB instead of Facets. Access patterns mirror your actual database interactions and help you build the correct data model for your use case, while facets are non-functional visualizations. 

    Choose **Add facet**. Specify the following: 
   +  The **Facet name**. 
   +  A **Partition key alias** to help distinguish this facet view. 
   +  A **Sort key alias** if you provided a **Sort key** for the table. 
   +  Choose the **Attributes** that are part of this facet. 

    Repeat this step if you want to add more facets. 

1.  Finally, click the **Save** button to create the table. 

1.  If you need other **Tables** or **Global Secondary Indexes**, click the **+** icon above the table you just created. 
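
The table configuration above maps onto the parameters of the DynamoDB `CreateTable` API, which is what NoSQL Workbench ultimately issues when a model is committed. The following is a minimal sketch of such a request; the table, key, and index names (`Orders`, `OrderId`, `ByCustomer`, and so on) are hypothetical examples, not taken from a sample model.

```python
# Sketch of a CreateTable request corresponding to a table designed in the
# data modeler. All names here are hypothetical examples. With boto3, this
# dict would be passed as client("dynamodb").create_table(**create_table_request).
create_table_request = {
    "TableName": "Orders",
    # Every key attribute used by the table or a GSI must be declared here.
    "AttributeDefinitions": [
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "CustomerId", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "OrderId", "KeyType": "HASH"},  # partition key
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "ByCustomer",
            "KeySchema": [{"AttributeName": "CustomerId", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},  # projection type choice
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

The granular sample data type formats chosen in the modeler affect only generated sample data; the underlying key types remain the DynamoDB scalar types (`S`, `N`, `B`).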

# Importing an existing data model
<a name="workbench.Modeler.ImportExisting"></a>

You can use NoSQL Workbench for Amazon DynamoDB to build a data model by importing and modifying an existing model. You can import data models in either NoSQL Workbench model format or in [AWS CloudFormation JSON template format](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html).

**To import a data model**

1.  Open NoSQL Workbench, and on the main screen, select **Import model**. You can import either a NoSQL Workbench model format or CloudFormation JSON template. 

1.  You can add the imported model to an existing project or create a new project. If you don't choose a different project, your model will be added to the drafts project folder. 

1.  Click **Browse your desktop** and choose a model to import from your computer. 

# Editing an existing data model
<a name="workbench.Modeler.Edit"></a>

**To edit an existing model**

1. To begin making changes to an existing model, open the model in the modeler page.

1. In the resource selector panel, you will see the list of **Tables** and **Global Secondary Indexes**.

1. To edit a **Table** or a **Global Secondary Index**, first click its name in the resource selector panel, and then use the action icons at the top. Available actions are **Delete**, **Duplicate**, and **Edit**.

1. If you want to edit **Model details**, click the three-dot icon next to the model name.

1. From there, you can click **Edit model details** and edit the information accordingly.

1. You can also **Duplicate**, **Move**, **Delete**, and **Export** the model from the same menu.

1. To change to another model, you can either go through the main screen again, or use the model selection dropdown. 

# Adding sample data to a data model
<a name="workbench.Modeler.AddData"></a>

**To auto-generate sample data**

1. Open NoSQL Workbench, and on the main screen, click the name of the model that you want to add sample data to.

1. Click the additional actions (three-dot icon) in the main content toolbar and select **Add sample data**.

1. Enter the number of items of sample data that you would like to generate, then select **Confirm**.

1. Auto-generating sample data helps you generate between 1 and 5,000 rows of data for immediate use. You can specify a granular sample data type to create realistic data based on your design and testing needs. To generate realistic data, specify the sample data type format for your attributes in the data modeler. See [Creating a new data model](workbench.Modeler.CreateNew.md) for specifying sample data type formats.

**To add sample data one item at a time**

1. Open the model you wish to edit, then choose the table you want to add sample data to. Click the additional actions (three-dot icon) in the main content toolbar and select **Edit data**.

1. Now you can add, delete, and edit rows. After you have made the necessary changes, click the **Save** button.
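
Each row of sample data corresponds to one DynamoDB item. As a hedged sketch, the row you add in the editor maps onto a `PutItem` request in DynamoDB's typed attribute-value format; the table and attribute names below are hypothetical examples.

```python
# Sketch: one sample-data row expressed in DynamoDB's typed attribute-value
# format, as it would appear in a PutItem request. All names are hypothetical.
sample_item = {
    "OrderId":   {"S": "order-001"},   # string partition key
    "OrderDate": {"S": "2023-05-01"},  # string sort key (date-formatted sample)
    "Total":     {"N": "42.50"},       # number values are sent as strings
    "Shipped":   {"BOOL": False},
}

# With boto3 this would be client("dynamodb").put_item(**put_item_request).
put_item_request = {"TableName": "Orders", "Item": sample_item}
```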

# Adding and validating access patterns
<a name="workbench.Modeler.AccessPatterns"></a>

You can use NoSQL Workbench for Amazon DynamoDB to create, store, and validate *access patterns*.

**Note**  
 See [Identify your data access patterns](https://docs.aws.amazon.com/prescriptive-guidance/latest/dynamodb-data-modeling/step3.html) for more details on identifying the right access patterns. 

**To create an access pattern**

1.  Open NoSQL Workbench, and on the main screen, click the name of the model that you want to add access patterns to. 

1.  On the left side, choose the **Access patterns** tab, and click the **+** icon. 

1.  On the next screen, provide a **Name**, an optional **Description**, the **Type** of the access pattern, and the **Table** or **Global Secondary Index** to test the access pattern against. 
**Note**  
 NoSQL Workbench currently supports the following operations for access patterns: `Scan`, `Query`, `GetItem`, `PutItem`, `UpdateItem`, `DeleteItem`. Amazon DynamoDB supports a broader list of operations. 

1.  After you create an access pattern, you can switch to the **Validate** tab to verify that your data model is designed to return expected results for the access pattern. See [Adding sample data to a data model](workbench.Modeler.AddData.md) for details on how to auto-generate sample data for your tables. Different types of access patterns will support different input parameters. 
**Note**  
To validate access patterns, NoSQL Workbench starts a separate DynamoDB local database on port `8001` (by default) with tables and indexes stored in memory.  
NoSQL Workbench automatically adds the sample data from your model to the temporary tables.
If you edit the sample data or the data model itself, NoSQL Workbench will update the temporary tables.
This temporary database is erased when you close the application.
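
Because validation runs real operations against that temporary DynamoDB local instance, you can reproduce a validated access pattern yourself against the same endpoint. The following is a sketch of a `Query` access pattern; the table and attribute names are hypothetical examples, and the commented boto3 call assumes DynamoDB local is listening on the default port `8001`.

```python
# Sketch of a Query access pattern as NoSQL Workbench would run it against
# the temporary DynamoDB local instance. Names are hypothetical examples.
# To run it yourself (assuming DynamoDB local on the default port 8001):
#   client = boto3.client("dynamodb", endpoint_url="http://localhost:8001")
#   response = client.query(**query_request)
query_request = {
    "TableName": "Orders",
    "KeyConditionExpression": "OrderId = :pk",
    "ExpressionAttributeValues": {":pk": {"S": "order-001"}},
}
```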

**To edit your access patterns**

1.  Open NoSQL Workbench, and on the main screen, click the name of the model that you want to edit access patterns for. 

1.  On the left side, choose the **Access patterns** tab. 

1. To edit an access pattern, select it from the list on the left.

1. In the top bar, click the **Edit** action button.

# Importing sample data from a CSV file
<a name="workbench.Modeler.ImportCSV"></a>

If you have preexisting sample data in a CSV file, you can import it into NoSQL Workbench. This enables you to quickly populate your model with sample data without having to enter it line by line.

The column names in the CSV file must match the attribute names in your data model, but they do not need to be in the same order. For example, if your data model has attributes called `LoginAlias`, `FirstName`, and `LastName`, your CSV columns could be `LastName`, `FirstName`, and `LoginAlias`.

You can import up to 150 rows at a time from a CSV file.

**To import data from a CSV file into NoSQL Workbench**

1. To import CSV data to a **Table**, first click the table name in the resource panel, and then click the additional actions (three-dot icon) in the main content toolbar.

1. Select **Import sample data**.

1. Select your CSV file and choose **Open**. The CSV data appends to your table.
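
The import behavior described above (header-based column matching and the 150-row cap) can be sketched in a few lines. This is an illustration of the matching rule, not NoSQL Workbench's implementation; the attribute names follow the `LoginAlias` example earlier in this section.

```python
import csv
import io

# Sketch of the CSV import rule: columns are matched to attributes by header
# name, so column order doesn't matter. The 150-row cap mirrors the
# NoSQL Workbench import limit.
MAX_ROWS = 150

csv_text = """LastName,FirstName,LoginAlias
Doe,Jane,jdoe
Roe,Richard,rroe
"""

def load_sample_rows(text: str) -> list:
    reader = csv.DictReader(io.StringIO(text))
    rows = list(reader)
    if len(rows) > MAX_ROWS:
        raise ValueError(f"CSV import is limited to {MAX_ROWS} rows")
    return rows

rows = load_sample_rows(csv_text)
# Values are accessible by header name regardless of column order in the file.
assert rows[0]["LoginAlias"] == "jdoe"
```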

# Facets
<a name="workbench.Modeler.Facets"></a>

 In NoSQL Workbench, *Facets* give you a way to view a subset of the data in a table, without having to see records that don't meet the constraints of the facet. Facets are considered a visual data modeling tool, and don't exist as a usable construct in DynamoDB, as they are purely an aid to modeling of access patterns. 

**Note**  
 We recommend you use [Adding and validating access patterns](workbench.Modeler.AccessPatterns.md) to visualize how your application will access data in DynamoDB instead of Facets. Access patterns mirror your actual database interactions and help you build the correct data model for your use case, while facets are non-functional visualizations. 

**To create a facet**

1. In the resource selector panel, choose a **Table** you wish to edit.

1. In the top bar, click the **Edit** action icon.

1. Scroll down to the **Facet filters** section.

1.  Choose **Add facet**. Specify the following: 
   +  The **Facet name**. 
   +  A **Partition key alias** to help distinguish this facet view. 
   +  A **Sort key alias** if you provided a **Sort key** for the table. 
   +  Choose the **Attributes** that are part of this facet. 

    Repeat this step if you want to add more facets. 

# Viewing all tables in a data model using aggregate view
<a name="workbench.Modeler.AggregateView"></a>

The aggregate view in NoSQL Workbench for Amazon DynamoDB allows you to visualize all the tables and indexes in a data model. For each table, the following information appears:
+ Table column names
+ Sample data
+ All global secondary indexes that are associated with the table. NoSQL Workbench displays the following information for each index:
  + Index column names
  + Sample data

**To view all table information**

1. Open NoSQL Workbench, and on the main screen, click the name of the model that you want to open.

1. In the top bar, click **Aggregate view**. You will see the data across all tables and indexes on the same screen.

**To export aggregate view as images**

1. With **Aggregate view** selected, click the three-dot icon and choose **Export aggregate view**.

1. An archive with PNG images of all tables and indexes will be presented for download.

# Exporting a data model
<a name="workbench.Modeler.ExportModel"></a>

After you create a data model using NoSQL Workbench for Amazon DynamoDB, you can save and export the model in either NoSQL Workbench model format or [AWS CloudFormation JSON template format](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html).

**To export a data model**

1. Open NoSQL Workbench, and on the main screen, click on the name of the model that you want to edit.

1. Click the three-dot icon next to the data model name and select **Export model**.

1.  Choose whether to export your data model in NoSQL Workbench model format or CloudFormation JSON template format. 
   +  Choose **NoSQL Workbench model** format if you want to share your model with other team members using NoSQL Workbench or import it into NoSQL Workbench later. 
   +  Choose **CloudFormation JSON template** format if you want to deploy your model directly to AWS or integrate it into your infrastructure-as-code workflow. 
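
To illustrate the CloudFormation option, the following is a minimal example of what an exported template contains for a single table. This is a hypothetical fragment; the actual export reflects the tables, keys, and indexes in your model.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "OrdersTable": {
      "Type": "AWS::DynamoDB::Table",
      "Properties": {
        "TableName": "Orders",
        "AttributeDefinitions": [
          { "AttributeName": "OrderId", "AttributeType": "S" }
        ],
        "KeySchema": [
          { "AttributeName": "OrderId", "KeyType": "HASH" }
        ],
        "BillingMode": "PAY_PER_REQUEST"
      }
    }
  }
}
```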

# Committing a data model to DynamoDB
<a name="workbench.Modeler.Commit"></a>

 When you are satisfied with your data model, you can commit the model to Amazon DynamoDB. 

**Note**  
This action creates server-side resources in AWS for the tables and global secondary indexes represented in the data model.
NoSQL Workbench creates tables and indexes with on-demand capacity by default.

**To commit the data model to DynamoDB**

1. Open NoSQL Workbench, and on the main screen, click on the name of the model that you want to commit.

1. In the top bar, click **Commit**.

1. Choose an existing connection, or create a new connection by clicking the **Add new connection** button.
   + To add a new connection, specify the following information:
     + **Account Alias**
     + **AWS Region**
     + **Access key ID**
     + **Secret access key**

     For more information about how to obtain the access keys, see [Getting an AWS access key](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SettingUp.DynamoWebService.html#SettingUp.DynamoWebService.GetCredentials).
   + You can optionally specify the following:
     + [Temporary security credentials (session token)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html)
     + [IAM ARNs](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns)

1. If you prefer to use [DynamoDB local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html):

   1. Choose the **Local connection** tab.

   1. Click the **Add new connection** button.

   1. Specify the **Connection name** and **Port**.
**Note**  
 To use DynamoDB local, you will need to turn it on by using the **DynamoDB local** toggle at the bottom left of the NoSQL Workbench screen. 

1. Click **Commit**.

# Exploring datasets and building operations with NoSQL Workbench
<a name="workbench.querybuilder"></a>

NoSQL Workbench for Amazon DynamoDB provides a rich graphical user interface for developing and testing queries. You can use the operation builder in NoSQL Workbench to view, explore, and query live datasets. The structured operation builder supports projection expressions and condition expressions, and generates sample code in multiple languages. You can directly clone tables from one Amazon DynamoDB account to another, across Regions. You can also directly clone tables between DynamoDB local and an Amazon DynamoDB account for faster copying of your table’s key schema (and optionally GSI schema and items) between your development environments. You can save as many as 50 DynamoDB data operations in the operation builder.

**Topics**
+ [Connecting to live datasets](workbench.querybuilder.connect.md)
+ [Building complex operations](workbench.querybuilder.operationbuilder.md)
+ [Cloning tables with NoSQL Workbench](workbench.querybuilder.cloning-tables.md)
+ [Exporting data to a CSV file](workbench.querybuilder.exportcsv.md)

# Connecting to live datasets
<a name="workbench.querybuilder.connect"></a>

To connect to your Amazon DynamoDB tables with NoSQL Workbench, you must first connect to your AWS account.

**To add a connection to your database**

1. In NoSQL Workbench, in the navigation pane on the left side, choose the **Operation builder** icon.

1. Choose **Add connection**.

1. Specify the following information:
   + **Connection name**
   + **AWS Region**
   + **Access key ID**
   + **Secret access key**

   For more information about how to obtain the access keys, see [Getting an AWS access key](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SettingUp.DynamoWebService.html#SettingUp.DynamoWebService.GetCredentials).

   You can optionally specify the following:
   + [Temporary security credentials (session token)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html)
   + [IAM ARNs](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns)

1. Choose **Connect**.

    If you don't want to sign up for a free tier account and prefer to use [DynamoDB local (downloadable version)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html): 

   1. Choose the **Local** tab on the connection screen.

   1. Specify the following information:
      + **Connection name**
      + **Port**

   1. Choose **Connect**.
**Note**  
To connect to DynamoDB local, either manually launch DynamoDB local from your terminal (see [deploying DynamoDB local on your computer](DynamoDBLocal.DownloadingAndRunning.md)) or launch it directly using the **DynamoDB local** toggle in the NoSQL Workbench navigation menu. Ensure the connection port is the same as your DynamoDB local port.

1. On the created connection, choose **Open**.

After connecting to your DynamoDB database, the list of available tables appears in the left pane. Choose one of the tables to return a sample of the data stored in the table.

You can now run queries against the selected table.

To run queries on a table, see [Building complex operations](workbench.querybuilder.operationbuilder.md).

# Building complex operations
<a name="workbench.querybuilder.operationbuilder"></a>

The operation builder in NoSQL Workbench for Amazon DynamoDB provides a visual interface where you can perform complex data plane operations. It includes support for projection expressions and condition expressions. Once you've built an operation, you can save it for later use (up to 50 operations can be saved). You can then browse a list of your frequently used data-plane operations in the **Saved Operations** menu, and use them to automatically populate and build a new operation. You can also generate sample code for these operations, in multiple languages.

NoSQL Workbench supports building [PartiQL](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.html) for DynamoDB statements, which allows you to interact with DynamoDB using a SQL-compatible query language. NoSQL Workbench also supports building DynamoDB CRUD API operations.
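
A PartiQL statement built in the operation builder corresponds to the DynamoDB `ExecuteStatement` API. As a hedged sketch (the table and attribute names below are hypothetical examples), the request looks like:

```python
# Sketch: a parameterized PartiQL statement as it maps onto the DynamoDB
# ExecuteStatement API. Names are hypothetical examples. With boto3:
#   client("dynamodb").execute_statement(**execute_statement_request)
execute_statement_request = {
    "Statement": 'SELECT * FROM "Orders" WHERE "OrderId" = ?',
    # Parameters fill the ? placeholders in order, using typed values.
    "Parameters": [{"S": "order-001"}],
}
```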

To use NoSQL Workbench to build operations, in the navigation pane on the left side, choose the **Operation builder** icon.

**Topics**
+ [Building PartiQL statements](workbench.querybuilder.partiql.md)
+ [Building API operations](workbench.querybuilder.operationbuilder.api.md)

# Building PartiQL statements
<a name="workbench.querybuilder.partiql"></a>

To use NoSQL Workbench to build [PartiQL for DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.html) statements, choose **PartiQL editor** near the top of the NoSQL Workbench UI.

You can build the following PartiQL statement types in the operation builder.

**Topics**
+ [Singleton statements](#workbench.querybuilder.partiql.single)
+ [Transactions](#workbench.querybuilder.partiql.transaction)
+ [Batch](#workbench.querybuilder.partiql.batch)

## Singleton statements
<a name="workbench.querybuilder.partiql.single"></a>

To run or generate code for a PartiQL statement, do the following.

1. Choose **PartiQL editor** near the top of the window.

1. Enter a valid [PartiQL statement](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.statements.html).

1. If your statement uses parameters:

   1. Choose **Optional request parameters**.

   1. Choose **Add new parameters**.

   1. Enter the attribute type and value.

   1. If you want to add additional parameters, repeat steps b and c.

1. If you want to generate code, choose **Generate code**.

   Select your desired language from the displayed tabs. You can now copy this code and use it in your application.

1. If you want the operation to be run immediately, choose **Run**.

1. If you want to save this operation for later use, choose **Save operation**. Then enter a name for your operation and choose **Save**.

## Transactions
<a name="workbench.querybuilder.partiql.transaction"></a>

To run or generate code for a PartiQL transaction, do the following.

1. Choose **PartiQLTransaction** from the **More operations** dropdown.

1. Choose **Add a new statement**.

1. Enter a valid [PartiQL statement](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.statements.html).
**Note**  
Read and write operations are not supported in the same PartiQL transaction request. A SELECT statement cannot be in the same request with INSERT, UPDATE, and DELETE statements. See [Performing transactions with PartiQL for DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.multiplestatements.transactions.html) for more details.

1. If your statement uses parameters:

   1. Choose **Optional request parameters**.

   1. Choose **Add new parameters**.

   1. Enter the attribute type and value.

   1. If you want to add additional parameters, repeat steps b and c.

1. If you want to add more statements, repeat steps 2 to 4.

1. If you want to generate code, choose **Generate code**.

   Select your desired language from the displayed tabs. You can now copy this code and use it in your application.

1. If you want the operation to be run immediately, choose **Run**.

1. If you want to save this operation for later use, choose **Save operation**. Then enter a name for your operation and choose **Save**.
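
The steps above map onto the DynamoDB `ExecuteTransaction` API. The sketch below is hypothetical (table and attribute names are examples) and contains only write statements, consistent with the rule that reads and writes can't be mixed in one PartiQL transaction.

```python
# Sketch: a PartiQL transaction as it maps onto the ExecuteTransaction API.
# All statements are writes; names are hypothetical examples. With boto3:
#   client("dynamodb").execute_transaction(**execute_transaction_request)
execute_transaction_request = {
    "TransactStatements": [
        {
            "Statement": 'UPDATE "Orders" SET "Shipped" = ? WHERE "OrderId" = ?',
            "Parameters": [{"BOOL": True}, {"S": "order-001"}],
        },
        {
            "Statement": 'INSERT INTO "Shipments" VALUE {\'OrderId\': ?}',
            "Parameters": [{"S": "order-001"}],
        },
    ]
}
```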

## Batch
<a name="workbench.querybuilder.partiql.batch"></a>

To run or generate code for a PartiQL batch, do the following.

1. Choose **PartiQLBatch** from the **More operations** dropdown.

1. Choose **Add a new statement**.

1. Enter a valid [PartiQL statement](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.statements.html).
**Note**  
 Read and write operations are not supported in the same PartiQL batch request, which means a SELECT statement cannot be in the same request with INSERT, UPDATE, and DELETE statements. Write operations to the same item are not allowed. As with the BatchGetItem operation, only singleton read operations are supported. Scan and query operations are not supported. See [Running batch operations with PartiQL for DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.multiplestatements.batching.html) for more details.

1. If your statement uses parameters:

   1. Choose **Optional request parameters**.

   1. Choose **Add new parameters**.

   1. Enter the attribute type and value.

   1. If you want to add additional parameters, repeat steps b and c.

1. If you want to add more statements, repeat steps 2 to 4.

1. If you want to generate code, choose **Generate code**.

   Select your desired language from the displayed tabs. You can now copy this code and use it in your application.

1. If you want the operation to be run immediately, choose **Run**.

1. If you want to save this operation for later use, choose **Save operation**. Then enter a name for your operation and choose **Save**.
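
A PartiQL batch built this way maps onto the DynamoDB `BatchExecuteStatement` API. The sketch below uses hypothetical names and contains only singleton reads, consistent with the note above that reads and writes can't be mixed and that scans and queries aren't supported.

```python
# Sketch: a PartiQL batch as it maps onto the BatchExecuteStatement API.
# Only singleton statements are allowed; names are hypothetical examples.
# With boto3: client("dynamodb").batch_execute_statement(**batch_request)
batch_request = {
    "Statements": [
        {
            "Statement": 'SELECT * FROM "Orders" WHERE "OrderId" = ?',
            "Parameters": [{"S": "order-001"}],
        },
        {
            "Statement": 'SELECT * FROM "Orders" WHERE "OrderId" = ?',
            "Parameters": [{"S": "order-002"}],
        },
    ]
}
```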

# Building API operations
<a name="workbench.querybuilder.operationbuilder.api"></a>

To use NoSQL Workbench to build DynamoDB CRUD APIs, select **Operation builder** from the left of the NoSQL Workbench user interface.

Then select **Open** and choose a connection.

You can perform the following operations in the operation builder.
+ [Delete Table](#workbench.querybuilder.operationbuilder.DeleteTable)
+ [Create Table](#workbench.querybuilder.operationbuilder.CreateTable)
+ [Update Table](#workbench.querybuilder.operationbuilder.UpdateTable)
+ [Put Item](#workbench.querybuilder.operationbuilder.Put)
+ [Update Item](#workbench.querybuilder.operationbuilder.update)
+ [Delete Item](#workbench.querybuilder.operationbuilder.Delete)
+ [Query](#workbench.querybuilder.operationbuilder.Query)
+ [Scan](#workbench.querybuilder.operationbuilder.scan)
+ [Transact Get Items](#workbench.querybuilder.operationbuilder.transactget)
+ [Transact Write Items](#workbench.querybuilder.operationbuilder.transactwrite)

## Delete table
<a name="workbench.querybuilder.operationbuilder.DeleteTable"></a>

To run a `Delete Table` operation, do the following.

1. Find the table you want to delete from the **Tables** section.

1. Select **Delete Table** from the horizontal ellipsis menu.

1. Confirm you want to delete the table by entering the **Table name**.

1. Select **Delete**.

For more information about this operation, see [Delete table](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DeleteTable.html) in the *Amazon DynamoDB API Reference*. 

## Delete GSI
<a name="workbench.querybuilder.operationbuilder.DeleteGSI"></a>

To run a `Delete GSI` operation, do the following.

1. Find the GSI of a table you want to delete from the **Tables** section.

1. Select **Delete GSI** from the horizontal ellipsis menu.

1. Confirm you want to delete the GSI by entering the **GSI name**.

1. Select **Delete**.

For more information about this operation, see [Update table](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) in the *Amazon DynamoDB API Reference*. 

## Create table
<a name="workbench.querybuilder.operationbuilder.CreateTable"></a>

To run a `Create Table` operation, do the following.

1. Choose the **+** (plus) icon next to the **Tables** section.

1. Enter the desired table name.

1. Create a partition key.

1. Optional: create a sort key.

1. To customize capacity settings, uncheck the box next to **Use default capacity settings**.
   + You can now select either **Provisioned** or **On-demand** capacity.

     With **Provisioned** selected, you can set minimum and maximum read and write capacity units. You can also enable or disable auto scaling.
   + If the table is set to On-demand, you can't specify provisioned throughput.
   + If you switch from On-demand to Provisioned throughput, auto scaling is automatically applied to all GSIs with a minimum of 1, a maximum of 10, and a target utilization of 70%.

1. Select **Skip GSIs and create** to create this table without a GSI. Optionally, you can select **Next** to create a GSI with this new table.

For more information about this operation, see [Create table](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) in the *Amazon DynamoDB API Reference*. 
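The steps above can be sketched as the low-level `CreateTable` request below. The table and attribute names (`Employee`, `LoginAlias`) are hypothetical, and `"S"` marks a string-typed key attribute; a boto3 client would accept the dict as `client.create_table(**params)`.

```python
# Sketch of the low-level CreateTable request produced by the steps above.
# Names are hypothetical.
def build_create_table_request(table_name, partition_key, sort_key=None):
    attrs = [{"AttributeName": partition_key, "AttributeType": "S"}]
    schema = [{"AttributeName": partition_key, "KeyType": "HASH"}]
    if sort_key:  # the sort key is optional (step 4)
        attrs.append({"AttributeName": sort_key, "AttributeType": "S"})
        schema.append({"AttributeName": sort_key, "KeyType": "RANGE"})
    return {
        "TableName": table_name,
        "AttributeDefinitions": attrs,
        "KeySchema": schema,
        "BillingMode": "PAY_PER_REQUEST",  # the on-demand default
    }

params = build_create_table_request("Employee", "LoginAlias")
```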

## Create GSI
<a name="workbench.querybuilder.operationbuilder.CreateGSI"></a>

To run a `Create GSI` operation, do the following.

1. Find a table that you want to add a GSI to.

1. From the horizontal ellipsis menu, select **Create GSI**.

1. Name your GSI under **Index name**.

1. Create a partition key.

1. Optional: create a sort key.

1. Choose a projection type option from the dropdown.

1. Select **Create GSI**.

For more information about this operation, see [Update table](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) in the *Amazon DynamoDB API Reference*. (Adding a GSI to an existing table is performed through the `UpdateTable` API.) 
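Adding a GSI to an existing table sends an `UpdateTable` request with a `Create` entry. A minimal sketch, with hypothetical table, index, and attribute names:

```python
# Sketch of the low-level UpdateTable request that adds a GSI to an existing table.
def build_create_gsi_request(table_name, index_name, partition_key):
    return {
        "TableName": table_name,
        # A new key attribute must be declared before it can be indexed.
        "AttributeDefinitions": [{"AttributeName": partition_key, "AttributeType": "S"}],
        "GlobalSecondaryIndexUpdates": [
            {"Create": {
                "IndexName": index_name,
                "KeySchema": [{"AttributeName": partition_key, "KeyType": "HASH"}],
                "Projection": {"ProjectionType": "ALL"},  # projection type from step 6
            }}
        ],
    }

params = build_create_gsi_request("Employee", "Name", "FirstName")
```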

## Update table
<a name="workbench.querybuilder.operationbuilder.UpdateTable"></a>

To update capacity settings for a table with an `Update Table` operation, do the following.

1. Find the table you want to update capacity settings for.

1. From the horizontal ellipsis menu, select **Update capacity settings**.

1. Select either **Provisioned** or **On-demand** capacity.

   With **Provisioned** selected, you can set minimum and maximum read and write capacity units. You can also enable or disable auto scaling.

1. Select **Update**.

For more information about this operation, see [Update table](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) in the *Amazon DynamoDB API Reference*.
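The capacity-settings dialog maps to an `UpdateTable` request. A sketch with a hypothetical table name; a boto3 client would accept either dict as `client.update_table(**params)`:

```python
# Sketch of the low-level UpdateTable request behind the capacity-settings dialog.
def build_update_capacity_request(table_name, provisioned=False, rcu=5, wcu=5):
    if not provisioned:
        # On-demand: no throughput numbers to manage.
        return {"TableName": table_name, "BillingMode": "PAY_PER_REQUEST"}
    return {
        "TableName": table_name,
        "BillingMode": "PROVISIONED",
        "ProvisionedThroughput": {"ReadCapacityUnits": rcu, "WriteCapacityUnits": wcu},
    }

on_demand = build_update_capacity_request("Employee")
provisioned = build_update_capacity_request("Employee", provisioned=True, rcu=10, wcu=5)
```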

## Update GSI
<a name="workbench.querybuilder.operationbuilder.UpdateGSI"></a>

To update capacity settings for a GSI with an `Update Table` operation, do the following.

**Note**  
By default, global secondary indexes inherit the capacity settings of the base table. Global secondary indexes can have a different capacity mode only when the base table is in provisioned capacity mode. When you create a global secondary index on a provisioned mode table, you must specify read and write capacity units for the expected workload on that index. For more information, see [Provisioned throughput considerations for Global Secondary Indexes](GSI.md#GSI.ThroughputConsiderations).

1. Find the GSI you want to update capacity settings for.

1. From the horizontal ellipsis menu, select **Update capacity settings**.

1. You can now select either **Provisioned** or **On-demand** capacity.

   With **Provisioned** selected, you can set minimum and maximum read and write capacity units. You can also enable or disable auto scaling.

1. Select **Update**.

For more information about this operation, see [Update table](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) in the *Amazon DynamoDB API Reference*.

## Put item
<a name="workbench.querybuilder.operationbuilder.Put"></a>

You create an item by using the `Put Item` operation. To run or generate code for a `Put Item` operation, do the following.

1. Find the table you want to create an item in.

1. From the **Actions** dropdown, select **Create item**.

1. Enter the partition key value.

1. Enter the sort key value, if one exists.

1. If you want to add non-key attributes, do the following:

   1. Select **+ Add other attributes**.

   1. Specify the **Attribute name**, **Type**, and **Value**. 

1. If a condition expression must be satisfied for the `Put Item` operation to succeed, do the following:

   1. Choose **Condition**.

   1. Specify the attribute name, comparison operator, attribute type, and attribute value.

   1. If other conditions are needed, choose **Condition** again.

   For more information, see [DynamoDB condition expression CLI example](Expressions.ConditionExpressions.md).

1. If you want to generate code, choose **Generate code**.

   Select your desired language from the displayed tabs. You can now copy this code and use it in your application.

1. If you want the operation to be run immediately, choose **Run**.

1. If you want to save this operation for later use, choose **Save operation**, then enter a name for your operation and choose **Save**.

For more information about this operation, see [PutItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) in the *Amazon DynamoDB API Reference*. 
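The code that **Generate code** emits for this operation boils down to a `PutItem` request. A minimal sketch with hypothetical names, using the DynamoDB wire format (`{"S": ...}` for strings); a boto3 client would accept the dict as `client.put_item(**params)`:

```python
# Sketch of a PutItem request with an optional condition expression.
def build_put_item_request(table_name, item, condition=None):
    params = {"TableName": table_name, "Item": item}
    if condition:
        params["ConditionExpression"] = condition
    return params

params = build_put_item_request(
    "Employee",
    {"LoginAlias": {"S": "janed"}, "FirstName": {"S": "Jane"}},
    # Only write if no item with this key exists yet.
    condition="attribute_not_exists(LoginAlias)",
)
```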

## Update item
<a name="workbench.querybuilder.operationbuilder.update"></a>

To run or generate code for an `Update Item` operation, do the following:

1. Find the table you want to update an item in.

1. Select the item.

1. Choose an expression from the **Update Expression** dropdown list, then enter the attribute name and attribute value for the selected expression.

1. If you want to add more expressions, choose another expression in the **Update Expression** dropdown list, and then select the **+** icon.

1. If a condition expression must be satisfied for the `Update Item` operation to succeed, do the following:

   1. Choose **Condition**.

   1. Specify the attribute name, comparison operator, attribute type, and attribute value.

   1. If other conditions are needed, choose **Condition** again.

   For more information, see [DynamoDB condition expression CLI example](Expressions.ConditionExpressions.md).

1. If you want to generate code, choose **Generate code**.

   Choose the tab for the language that you want. You can now copy this code and use it in your application.

1. If you want the operation to be run immediately, choose **Run**.

1. If you want to save this operation for later use, choose **Save operation**, then enter a name for your operation and choose **Save**.

For more information about this operation, see [UpdateItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html) in the *Amazon DynamoDB API Reference*. 
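As a sketch, the steps above correspond to an `UpdateItem` request like the one below. All names and values are hypothetical; a boto3 client would accept the dict as `client.update_item(**params)`.

```python
# Sketch of an UpdateItem request matching the steps above.
params = {
    "TableName": "Employee",
    "Key": {"LoginAlias": {"S": "janed"}},               # full primary key of the item
    "UpdateExpression": "SET Designation = :d",           # the selected update expression
    "ExpressionAttributeValues": {":d": {"S": "Manager"}},
    "ConditionExpression": "attribute_exists(LoginAlias)",  # optional condition (step 5)
}
```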

## Delete item
<a name="workbench.querybuilder.operationbuilder.Delete"></a>

To run a `Delete Item` operation, do the following.

1. Find the table you want to delete an item in. 

1. Select the item.

1. From the **Actions** dropdown, select **Delete item**.

1. Confirm you want to delete the item by selecting **Delete**.

For more information about this operation, see [DeleteItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DeleteItem.html) in the *Amazon DynamoDB API Reference*.

## Duplicate item
<a name="workbench.querybuilder.operationbuilder.Duplicate"></a>

You can duplicate an item by creating a new item with the same attributes. To duplicate an item, do the following.

1. Find the table you want to duplicate an item in.

1. Select the item.

1. From the **Actions** dropdown, select **Duplicate item**.

1. Specify a new partition key.

1. Specify a new sort key (if necessary).

1. Select **Run**.

For more information about this operation, see [PutItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) in the *Amazon DynamoDB API Reference*.

## Query
<a name="workbench.querybuilder.operationbuilder.Query"></a>

To run or generate code for a `Query` operation, do the following.

1. Select **Query** from the top of the NoSQL Workbench UI.

1. Specify the partition key value.

1. If a sort key is needed for the `Query` operation:

   1. Select **Sort key**.

   1. Specify the comparison operator, and attribute value.

1. Select **Query** to run this query operation. If more options are needed, select the **More options** checkbox and continue with the following steps.

1. If not all the attributes should be returned with the operation result, select **Projection expression**.

1. Choose the **+** icon.

1. Enter the attribute to return with the query result.

1. If more attributes are needed, choose the **+** icon again.

1. If a condition expression must be satisfied for the `Query` operation to succeed, do the following:

   1. Choose **Condition**.

   1. Specify the attribute name, comparison operator, attribute type, and attribute value.

   1. If other conditions are needed, choose **Condition** again.

   For more information, see [DynamoDB condition expression CLI example](Expressions.ConditionExpressions.md).

1. If you want to generate code, choose **Generate code**.

   Choose the tab for the language that you want. You can now copy this code and use it in your application.

1. If you want the operation to be run immediately, choose **Run**.

1. If you want to save this operation for later use, choose **Save operation**, then enter a name for your operation and choose **Save**.

For more information about this operation, see [Query](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) in the *Amazon DynamoDB API Reference*. 
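The steps above can be sketched as a `Query` request. Table, index, and attribute names are hypothetical; omit `IndexName` to query the base table instead of a GSI. A boto3 client would accept the dict as `client.query(**params)`.

```python
# Sketch of a Query request with a projection expression.
params = {
    "TableName": "Employee",
    "IndexName": "DirectReports",
    "KeyConditionExpression": "ManagerLoginAlias = :m",
    "ProjectionExpression": "FirstName, LastName",   # return only these attributes
    "ExpressionAttributeValues": {":m": {"S": "johns"}},
}
```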

## Scan
<a name="workbench.querybuilder.operationbuilder.scan"></a>

To run or generate code for a `Scan` operation, do the following.

1. Select **Scan** from the top of the NoSQL Workbench UI.

1. Select the **Scan** button to perform this basic scan operation. If more options are needed, select the **More options** checkbox and continue with the following steps.

1. Specify an attribute name to filter your scan results.

1. If not all the attributes should be returned with the operation result, select **Projection expression**.

1. If a condition expression must be satisfied for the scan operation to succeed, do the following:

   1. Choose **Condition**.

   1. Specify the attribute name, comparison operator, attribute type, and attribute value.

   1. If other conditions are needed, choose **Condition** again. 

   For more information, see [DynamoDB condition expression CLI example](Expressions.ConditionExpressions.md).

1. If you want to generate code, choose **Generate code**.

   Choose the tab for the language that you want. You can now copy this code and use it in your application.

1. If you want the operation to be run immediately, choose **Run**.

1. If you want to save this operation for later use, choose **Save operation**, then enter a name for your operation and choose **Save**.

For more information about this operation, see [Scan](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html) in the *Amazon DynamoDB API Reference*. 
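A `Scan` request with a filter can be sketched as below. Names are hypothetical; note that a filter expression is applied after items are read, so the full table is still scanned. A boto3 client would accept the dict as `client.scan(**params)`.

```python
# Sketch of a Scan request with a filter and a projection expression.
params = {
    "TableName": "Employee",
    "FilterExpression": "Designation = :d",          # applied after items are read
    "ExpressionAttributeValues": {":d": {"S": "Developer"}},
    "ProjectionExpression": "LoginAlias, FirstName",
}
```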

## TransactGetItems
<a name="workbench.querybuilder.operationbuilder.transactget"></a>

To run or generate code for a `TransactGetItems` operation, do the following.

1. From the **More operations** dropdown at the top of the NoSQL Workbench UI, choose **TransactGetItems**.

1. Choose the **+** icon near **TransactGetItem**.

1. Specify a partition key.

1. Specify a sort key (if necessary).

1. Select **Run** to perform the operation, **Save operation** to save it, or **Generate code** to generate code for it.

For more information about transactions, see [Amazon DynamoDB transactions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html).
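A `TransactGetItems` request can be sketched as below; names are hypothetical, and each `Get` addresses exactly one item by its full primary key. A boto3 client would accept the dict as `client.transact_get_items(**params)`.

```python
# Sketch of a TransactGetItems request retrieving two items atomically.
params = {
    "TransactItems": [
        {"Get": {"TableName": "Employee", "Key": {"LoginAlias": {"S": "janed"}}}},
        {"Get": {"TableName": "Employee", "Key": {"LoginAlias": {"S": "johns"}}}},
    ]
}
```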

## TransactWriteItems
<a name="workbench.querybuilder.operationbuilder.transactwrite"></a>

To run or generate code for a `TransactWriteItems` operation, do the following.

1. From the **More operations** dropdown at the top of the NoSQL Workbench UI, choose **TransactWriteItems**.

1. Choose the **+** icon near **TransactWriteItem**.

1. In the **Actions** dropdown, choose the operation that you want to perform.
   + For `DeleteItem`, follow the instructions for the [Delete item](#workbench.querybuilder.operationbuilder.Delete) operation. 
   + For `PutItem`, follow the instructions for the [Put item](#workbench.querybuilder.operationbuilder.Put) operation. 
   + For `UpdateItem`, follow the instructions for the [Update item](#workbench.querybuilder.operationbuilder.update) operation.

   To change the order of actions, choose an action in the list on the left side, and then choose the up or down arrows to move it up or down in the list. 

   To delete an action, choose the action in the list, and then choose the **Delete** (trash can) icon. 

1. Select **Run** to perform the operation, **Save operation** to save it, or **Generate code** to generate code for it.

For more information about transactions, see [Amazon DynamoDB transactions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html).
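A `TransactWriteItems` request mixing the three action types can be sketched as below. All names are hypothetical; the actions succeed together or none are applied. A boto3 client would accept the dict as `client.transact_write_items(**params)`.

```python
# Sketch of a TransactWriteItems request with Put, Update, and Delete actions.
params = {
    "TransactItems": [
        {"Put": {"TableName": "Employee",
                 "Item": {"LoginAlias": {"S": "janed"}, "FirstName": {"S": "Jane"}}}},
        {"Update": {"TableName": "Employee",
                    "Key": {"LoginAlias": {"S": "johns"}},
                    "UpdateExpression": "SET Designation = :d",
                    "ExpressionAttributeValues": {":d": {"S": "Manager"}}}},
        {"Delete": {"TableName": "Employee",
                    "Key": {"LoginAlias": {"S": "mateoj"}}}},
    ]
}
```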

# Cloning tables with NoSQL Workbench
<a name="workbench.querybuilder.cloning-tables"></a>

Cloning tables copies a table’s key schema (and optionally GSI schema and items) between your development environments. You can clone a table between DynamoDB local and an Amazon DynamoDB account, and even clone a table from one account to another across Regions for faster experimentation.

**To clone a table**

1. In the **Operation Builder**, select your connection and Region (Region selection is not available for DynamoDB local).

1. Once you are connected to DynamoDB, browse your tables and select the table you want to clone.

1. From the horizontal ellipsis menu, select the **Clone** option.

1. Input your clone destination details:

   1. Select a connection.

   1. Select a Region (Region is not available for DynamoDB local).

   1. Enter a new table name.

   1. Choose a clone option:

      1. **Key schema** is selected by default and can't be deselected. Cloning a table always copies the partition key, and the sort key if one exists.

      1. **GSI schema** is selected by default if the table being cloned has a GSI. Cloning copies each GSI's partition key and sort key, if they exist. You can deselect **GSI schema** to skip cloning it. The clone uses your base table’s capacity settings as the GSI’s capacity settings; you can use the `UpdateTable` operation in Operation Builder to update the GSI capacity settings after cloning is complete.

1. Enter the number of items to clone. To only clone the key schema and optionally the GSI schema, you can keep the **Items to clone** value at 0. The maximum number of items that can be cloned is 5000.

1. Choose a capacity mode:

   1. **On-demand mode** is selected by default. DynamoDB on-demand offers pay-per-request pricing for read and write requests so that you pay only for what you use. To learn more, see [DynamoDB On-demand mode](capacity-mode.md#capacity-mode-on-demand).

   1. **Provisioned mode** lets you specify the number of reads and writes per second that you require for your application. You can use auto scaling to adjust your table’s provisioned capacity automatically in response to traffic changes. To learn more, see [DynamoDB Provisioned mode](provisioned-capacity-mode.md).

1. Select **Clone** to begin cloning.

1. The cloning process runs in the background. The **Operation builder** tab shows a notification when the cloning table status changes. To view this status, select the **Operation builder** tab and then select the arrow button on the cloning table status widget near the bottom of the menu sidebar.

# Exporting data to a CSV file
<a name="workbench.querybuilder.exportcsv"></a>

You can export the results of a query from Operation Builder to a CSV file. This enables you to load the data into a spreadsheet or process it using your preferred programming language.

**Exporting to CSV**

1. In the Operation Builder, run an operation of your choice, such as a Scan or Query.
**Note**  
You can only export results from read API operations and PartiQL statements to a CSV file. You can't export results from transaction read statements.
Currently, you can export results one page at a time to a CSV file. If there are multiple pages of results, you must export each page individually.

1. Select the items you want to export from the results.

1. In the **Actions** dropdown, choose **Export as CSV**.

1. Choose a filename and location for your CSV file and select **Save**.
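Processing the exported file in your preferred language is then straightforward. A minimal sketch using Python's standard `csv` module, with hypothetical item data (the same `DictWriter` pattern also lets you produce an equivalent CSV yourself from query results):

```python
import csv
import io

# Hypothetical query results, already unmarshalled to plain Python values.
items = [
    {"LoginAlias": "janed", "FirstName": "Jane", "Designation": "Manager"},
    {"LoginAlias": "johns", "FirstName": "John", "Designation": "Developer"},
]

# Write one header row followed by one row per item, as the CSV export does.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["LoginAlias", "FirstName", "Designation"])
writer.writeheader()
writer.writerows(items)
csv_text = buf.getvalue()
```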

# Sample data models for NoSQL Workbench
<a name="workbench.SampleModels"></a>

The home page for the modeler displays a number of sample models that ship with the NoSQL Workbench. This section describes these models and their potential uses.

**Topics**
+ [Employee data model](#workbench.SampleModels.EmployeeDataModel)
+ [Discussion forum data model](#workbench.SampleModels.DiscussionForumDataModel)
+ [Music library data model](#workbench.SampleModels.MusicLibraryDataModel)
+ [Ski resort data model](#workbench.SampleModels.SkiResortDataModel)
+ [Credit card offers data model](#workbench.SampleModels.CreditCardOffersDataModel)
+ [Bookmarks data model](#workbench.SampleModels.BookmarksDataModel)

## Employee data model
<a name="workbench.SampleModels.EmployeeDataModel"></a>

This data model is an introductory model. It represents an employee’s basic details such as a unique alias, first name, last name, designation, manager, and skills.

This data model demonstrates a few techniques, such as handling a complex attribute (an employee having more than one skill). The model is also an example of a one-to-many relationship, between a manager and their reporting employees, achieved through the secondary index `DirectReports`.

The access patterns facilitated by this data model are:
+ Retrieval of an employee record using the employee’s login alias, facilitated by a table called `Employee`.
+ Search for employees by name, facilitated by the Employee table’s global secondary index called `Name`.
+ Retrieval of all direct reports of a manager using the manager’s login alias, facilitated by the Employee table’s global secondary index called `DirectReports`.

## Discussion forum data model
<a name="workbench.SampleModels.DiscussionForumDataModel"></a>

This data model represents a discussion forum. Using this model, customers can engage with the developer community, ask questions, and respond to other customers' posts. Each AWS service has a dedicated forum. Anyone can start a new discussion thread by posting a message in a forum, and each thread can receive any number of replies.

The access patterns facilitated by this data model are:
+ Retrieval of a forum record using the forum’s name, facilitated by a table called `Forum`.
+ Retrieval of a specific thread or all threads for a forum, facilitated by a table called `Thread`.
+ Search for replies using the posting user’s email address, facilitated by the Reply table’s global secondary index called `PostedBy-Message-Index`.

## Music library data model
<a name="workbench.SampleModels.MusicLibraryDataModel"></a>

This data model represents a music library that has a large collection of songs and showcases its most downloaded songs in near-real time.

The access patterns facilitated by this data model are:
+ Retrieval of a song record, facilitated by a table called `Songs`.
+ Retrieval of a specific download record or all download records for a song, facilitated by a table called `Songs`.
+ Retrieval of a specific monthly download count record or all monthly download count records for a song, facilitated by a table called `Songs`.
+ Retrieval of all records (including song record, download records, and monthly download count records) for a song, facilitated by a table called `Songs`.
+ Search for most downloaded songs, facilitated by the Songs table’s global secondary index called `DownloadsByMonth`.

## Ski resort data model
<a name="workbench.SampleModels.SkiResortDataModel"></a>

This data model represents a ski resort that has a large collection of data for each ski lift collected daily.

The access patterns facilitated by this data model are:
+ Retrieval of all data for a given ski lift or overall resort, dynamic and static, facilitated by a table called `SkiLifts`.
+ Retrieval of all dynamic data (including unique lift riders, snow coverage, avalanche danger, and lift status) for a ski lift or the overall resort on a specific date, facilitated by a table called `SkiLifts`.
+ Retrieval of all static data (including if the lift is for experienced riders only, vertical feet the lift rises, and lift riding time) for a specific ski lift, facilitated by a table called `SkiLifts`.
+ Retrieval of the dates of data recorded for a specific ski lift or the overall resort, sorted by total unique riders, facilitated by the SkiLifts table's global secondary index called `SkiLiftsByRiders`.

## Credit card offers data model
<a name="workbench.SampleModels.CreditCardOffersDataModel"></a>

This data model is used by a Credit Card Offers Application.

A credit card provider produces offers over time. These offers include balance transfers without fees, increased credit limits, lower interest rates, cash back, and airline miles. After a customer accepts or declines these offers, the respective offer status is updated accordingly.

The access patterns facilitated by this data model are:
+ Retrieval of account records using `AccountId`, as facilitated by the main table.
+ Retrieval of all the accounts with few projected items, as facilitated by the secondary index `AccountIndex`.
+ Retrieval of accounts and all the offer records associated with those accounts by using `AccountId`, as facilitated by the main table.
+ Retrieval of accounts and specific offer records associated with those accounts by using `AccountId` and `OfferId`, as facilitated by the main table.
+ Retrieval of all `ACCEPTED/DECLINED` offer records of specific `OfferType` associated with accounts using `AccountId`, `OfferType`, and `Status`, as facilitated by the secondary index `GSI1`.
+ Retrieval of offers and associated offer item records using `OfferId`, as facilitated by the main table.

## Bookmarks data model
<a name="workbench.SampleModels.BookmarksDataModel"></a>

This data model is used to store bookmarks for customers.

A customer can have many bookmarks and a bookmark can belong to many customers. This data model represents a many-to-many relationship. 

The access patterns facilitated by this data model are:
+ A single query by `customerId` can return customer data as well as bookmarks.
+ A query on the `ByEmail` index returns customer data by email address. Bookmarks are not retrieved by this index.
+ A query on the `ByUrl` index gets bookmark data by URL. The index uses `customerId` as its sort key because the same URL can be bookmarked by multiple customers.
+ A query on the `ByCustomerFolder` index gets bookmarks by folder for each customer.

# Release history for NoSQL Workbench
<a name="WorkbenchDocumentHistory"></a>

The following table describes the important changes in each release of the *NoSQL Workbench* client tool. 


****  

| Version | Change | Description | Date | 
| --- | --- | --- | --- | 
| 3.20.0 | New Data Modeler for DynamoDB | Data Modeler for DynamoDB has an updated user experience. Data Modeler for DynamoDB now supports access patterns. | February 16, 2026 | 
| 3.13.5 | Capacity mode for default table settings is now on-demand | When you create a table with default settings, DynamoDB creates a table that uses on-demand capacity mode instead of provisioned capacity mode. | February 24, 2025 | 
| 3.13.0 | NoSQL Workbench operation builder improvements | NoSQL Workbench now includes native support for dark mode. Improved table and item operations in the operations builder. Item results and operation builder request information is available in JSON format. | April 24, 2024 | 
| 3.12.0 | Cloning tables with NoSQL Workbench and returning capacity consumed | You can now clone tables between DynamoDB local and a DynamoDB web service account or between DynamoDB web service accounts for faster development iterations. View RCU or WCU consumed after running an operation using the Operations Builder. We fixed the overwrite data issue when importing from a CSV file. | February 26, 2024  | 
| 3.11.0 |  DynamoDB local improvements  |  You can now specify port when launching the built-in DynamoDB local instance. NoSQL Workbench can now be installed on Windows without admin rights. We have updated the data model templates.  |  January 17, 2024  | 
| 3.10.0 |  Native support for Apple silicon  |   NoSQL Workbench now includes native support for Mac with Apple silicon. You can now configure sample data generation format for attributes of type `Number`.   |  December 5, 2023  | 
| 3.9.0 |  Data modeler improvements  |   Visualizer now supports committing data models to DynamoDB local with the option to overwrite existing tables.   |  November 3, 2023  | 
| 3.8.0 |  Sample data generation  |  NoSQL Workbench now supports generating sample data for your DynamoDB data models.  |  September 25, 2023  | 
| 3.6.0 |  Improvements in the Operations builder  |  Connections management improvements in the Operations builder. Attribute names in Data Modeler can now be changed without deleting data. Other bug fixes.  |  April 11, 2023  | 
| 3.5.0 |  Support for new AWS Regions  |  NoSQL Workbench now supports the ap-south-2, ap-southeast-3, ap-southeast-4, eu-central-2, eu-south-2, me-central-1, and me-west-1 regions.  |  February 23, 2023  | 
| 3.4.0 |  Support for DynamoDB local  |  NoSQL Workbench now supports installing DynamoDB local as part of the installation process.  |  December 6, 2022  | 
| 3.3.0 |  Support for control plane operations  |  Operation Builder now supports control plane operations.  |  June 1, 2022  | 
| 3.2.0 |  CSV import and export  |  You can now import sample data from a CSV file in the Visualizer tool, and also export the results of an Operation Builder query to a CSV file.  |  October 11, 2021  | 
| 3.1.0 |  Save operations  |  The Operation Builder in NoSQL Workbench now supports saving operations for later use.  |  July 12, 2021  | 
| 3.0.0 | Capacity settings and CloudFormation import/export | NoSQL Workbench for Amazon DynamoDB now supports specifying a read/write capacity mode for tables, and can now import and export data models in CloudFormation format. | April 21, 2021 | 
| 2.2.0 | Support for PartiQL | NoSQL Workbench for Amazon DynamoDB adds support for building PartiQL statements for DynamoDB. | December 4, 2020 | 
| 1.1.0 | Support for Linux. | NoSQL Workbench for Amazon DynamoDB is supported on Linux—Ubuntu, Fedora, and Debian. | May 4, 2020 | 
| 1.0.0 | NoSQL Workbench for Amazon DynamoDB – GA. | NoSQL Workbench for Amazon DynamoDB is generally available. | March 2, 2020 | 
| 0.4.1 | Support for IAM roles and temporary security credentials. | NoSQL Workbench for Amazon DynamoDB adds support for AWS Identity and Access Management (IAM) roles and temporary security credentials. | December 19, 2019 | 
| 0.3.1 | Support for [DynamoDB local (Downloadable Version)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). | The NoSQL Workbench now supports connecting to [DynamoDB local (Downloadable Version)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html) to design, create, query, and manage DynamoDB tables. | November 8, 2019 | 
| 0.2.1 | NoSQL Workbench preview released. | This is the initial release of NoSQL Workbench for DynamoDB. Use NoSQL Workbench to design, create, query, and manage DynamoDB tables. | September 16, 2019 | 