

# Programming with DynamoDB and the AWS SDKs
<a name="Programming"></a>

This section covers developer-related topics. If you want to run code examples instead, see [Running the code examples in this Developer Guide](CodeSamples.md). 

**Note**  
In December 2017, AWS began the process of migrating all Amazon DynamoDB endpoints to use secure certificates issued by Amazon Trust Services (ATS). For more information, see [Troubleshooting SSL/TLS connection establishment issues with DynamoDB](ats-certs.md). 

**Topics**
+ [Overview of AWS SDK support for DynamoDB](Programming.SDKOverview.md)
+ [Programming Amazon DynamoDB with Python and Boto3](programming-with-python.md)
+ [Programming Amazon DynamoDB with JavaScript](programming-with-javascript.md)
+ [Programming DynamoDB with the AWS SDK for Java 2.x](ProgrammingWithJava.md)
+ [Error handling with DynamoDB](Programming.Errors.md)
+ [Using DynamoDB with an AWS SDK](sdk-general-information-section.md)

# Overview of AWS SDK support for DynamoDB
<a name="Programming.SDKOverview"></a>

The following diagram provides a high-level overview of Amazon DynamoDB application programming using the AWS SDKs.

![\[Programming model for using DynamoDB with AWS SDKs.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/SDKSupport.png)


1. You write an application using an AWS SDK for your programming language.

1. Each AWS SDK provides one or more programmatic interfaces for working with DynamoDB. The specific interfaces available depend on which programming language and AWS SDK you use. Options include:
   + [Low-level interfaces that work with DynamoDB](Programming.SDKs.Interfaces.md#Programming.SDKs.Interfaces.LowLevel)
   + [Document interfaces that work with DynamoDB](Programming.SDKs.Interfaces.md#Programming.SDKs.Interfaces.Document)
   + [Object persistence interfaces that work with DynamoDB](Programming.SDKs.Interfaces.md#Programming.SDKs.Interfaces.Mapper)
   + [Higher-level programming interfaces for DynamoDB](HigherLevelInterfaces.md)

1. The AWS SDK constructs HTTP(S) requests for use with the low-level DynamoDB API.

1. The AWS SDK sends the request to the DynamoDB endpoint.

1. DynamoDB runs the request. If the request is successful, DynamoDB returns an HTTP 200 response code (OK). If the request is unsuccessful, DynamoDB returns an HTTP error code and an error message.

1. The AWS SDK processes the response and propagates it back to your application.

Each of the AWS SDKs provides important services to your application, including the following:
+ Formatting HTTP(S) requests and serializing request parameters.
+ Generating a cryptographic signature for each request.
+ Forwarding requests to a DynamoDB endpoint and receiving responses from DynamoDB.
+ Extracting the results from those responses.
+ Implementing basic retry logic in case of errors.

You do not need to write code for any of these tasks.

**Note**  
For more information about AWS SDKs, including installation instructions and documentation, see [Tools for Amazon Web Services](https://aws.amazon.com/tools).

## SDK support for AWS account-based endpoints
<a name="Programming.SDKs.endpoints"></a>

AWS is rolling out SDK support for AWS-account-based endpoints for DynamoDB, starting with the AWS SDK for Java V1 on September 4, 2024. These new endpoints help AWS to ensure high performance and scalability. The updated SDKs will automatically use the new endpoints, which have the format `https://(account-id).ddb.(region).amazonaws.com`.

If you use a single instance of an SDK client to make requests to multiple accounts, your application will have fewer opportunities to reuse connections. AWS recommends modifying your applications to connect to fewer accounts per SDK client instance. Alternatively, you can configure your SDK client to continue using Regional endpoints with the `ACCOUNT_ID_ENDPOINT_MODE` setting, as documented in the [AWS SDKs and Tools Reference Guide](https://docs.aws.amazon.com/sdkref/latest/guide/feature-account-endpoints.html).
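
If you choose to opt out, you can apply the setting without code changes through the environment or the shared AWS config file. The following sketch assumes the setting names documented in the AWS SDKs and Tools Reference Guide.

```
# Keep using Regional endpoints by disabling account-based endpoint resolution.
# Valid values are "preferred" (the default), "disabled", and "required".
export AWS_ACCOUNT_ID_ENDPOINT_MODE=disabled

# Or, in the shared AWS config file (~/.aws/config):
#   [default]
#   account_id_endpoint_mode = disabled
```

The same setting is also available as a per-client configuration option in each SDK.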

# Programmatic interfaces that work with DynamoDB
<a name="Programming.SDKs.Interfaces"></a>

Every [AWS SDK](https://aws.amazon.com/tools) provides one or more programmatic interfaces for working with Amazon DynamoDB. These interfaces range from simple low-level DynamoDB wrappers to object-oriented persistence layers. The available interfaces vary depending on the AWS SDK and programming language that you use.

![\[Programmatic interfaces available in different AWS SDKs for working with DynamoDB.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/SDKSupport.SDKInterfaces.png)


The following sections highlight some of the available interfaces, using the AWS SDK for Java as an example. (Not all interfaces are available in all AWS SDKs.)

**Topics**
+ [Low-level interfaces that work with DynamoDB](#Programming.SDKs.Interfaces.LowLevel)
+ [Document interfaces that work with DynamoDB](#Programming.SDKs.Interfaces.Document)
+ [Object persistence interfaces that work with DynamoDB](#Programming.SDKs.Interfaces.Mapper)

## Low-level interfaces that work with DynamoDB
<a name="Programming.SDKs.Interfaces.LowLevel"></a>

Every language-specific AWS SDK provides a low-level interface for Amazon DynamoDB, with methods that closely resemble low-level DynamoDB API requests.

In some cases, you will need to identify the data types of the attributes using [Data type descriptors](Programming.LowLevelAPI.md#Programming.LowLevelAPI.DataTypeDescriptors), such as `S` for string or `N` for number.
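
For example, a low-level `GetItem` request carries the key as attribute values wrapped in type descriptors. A request body for a hypothetical `Music` table might look like the following.

```
{
    "TableName": "Music",
    "Key": {
        "Artist": {"S": "No One You Know"},
        "SongTitle": {"S": "Call Me Today"}
    }
}
```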

**Note**  
A low-level interface is available in every language-specific AWS SDK.

The following Java program uses the low-level interface of the AWS SDK for Java. 

### Low-level interface example
<a name="low-level-example"></a>

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 *
 * To get an item from an Amazon DynamoDB table using the AWS SDK for Java V2,
 * it is better practice to use the
 * Enhanced Client. See the EnhancedGetItem example.
 */
public class GetItem {
    public static void main(String[] args) {
        final String usage = """

                Usage:
                    <tableName> <key> <keyVal>

                Where:
                    tableName - The Amazon DynamoDB table from which an item is retrieved (for example, Music3).\s
                    key - The key used in the Amazon DynamoDB table (for example, Artist).\s
                    keyVal - The key value that represents the item to get (for example, Famous Band).
                """;

        if (args.length != 3) {
            System.out.println(usage);
            System.exit(1);
        }

        String tableName = args[0];
        String key = args[1];
        String keyVal = args[2];
        System.out.format("Retrieving item \"%s\" from \"%s\"\n", keyVal, tableName);
        Region region = Region.US_EAST_1;
        DynamoDbClient ddb = DynamoDbClient.builder()
                .region(region)
                .build();

        getDynamoDBItem(ddb, tableName, key, keyVal);
        ddb.close();
    }

    public static void getDynamoDBItem(DynamoDbClient ddb, String tableName, String key, String keyVal) {
        HashMap<String, AttributeValue> keyToGet = new HashMap<>();
        keyToGet.put(key, AttributeValue.builder()
                .s(keyVal)
                .build());

        GetItemRequest request = GetItemRequest.builder()
                .key(keyToGet)
                .tableName(tableName)
                .build();

        try {
            // If there is no matching item, GetItem does not return any data.
            Map<String, AttributeValue> returnedItem = ddb.getItem(request).item();
            if (returnedItem.isEmpty())
                System.out.format("No item found with the key %s!\n", key);
            else {
                Set<String> keys = returnedItem.keySet();
                System.out.println("Amazon DynamoDB table attributes: \n");
                for (String key1 : keys) {
                    System.out.format("%s: %s\n", key1, returnedItem.get(key1).toString());
                }
            }

        } catch (DynamoDbException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
}
```

## Document interfaces that work with DynamoDB
<a name="Programming.SDKs.Interfaces.Document"></a>

Many AWS SDKs provide a document interface, allowing you to perform data plane operations (create, read, update, delete) on tables and indexes. With a document interface, you do not need to specify [Data type descriptors](Programming.LowLevelAPI.md#Programming.LowLevelAPI.DataTypeDescriptors). The data types are implied by the semantics of the data itself. These AWS SDKs also provide methods to easily convert JSON documents to and from native Amazon DynamoDB data types.

**Note**  
Document interfaces are available in the AWS SDKs for [Java](https://aws.amazon.com/sdk-for-java), [.NET](https://aws.amazon.com/sdk-for-net), [Node.js](https://aws.amazon.com/sdk-for-node-js), and [JavaScript](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/).

The following Java program uses the document interface of the AWS SDK for Java. The program creates a `Table` object that represents the `Music` table, and then asks that object to use `GetItem` to retrieve a song. The program then prints the year that the song was released.

The `com.amazonaws.services.dynamodbv2.document.DynamoDB` class implements the DynamoDB document interface. Note how `DynamoDB` acts as a wrapper around the low-level client (`AmazonDynamoDB`).

### Document interface example
<a name="document-level-example"></a>

```
package com.amazonaws.codesamples.gsg;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.GetItemOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;

public class MusicDocumentDemo {

    public static void main(String[] args) {

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
        DynamoDB docClient = new DynamoDB(client);

        Table table = docClient.getTable("Music");
        GetItemOutcome outcome = table.getItemOutcome(
                "Artist", "No One You Know",
                "SongTitle", "Call Me Today");

        int year = outcome.getItem().getInt("Year");
        System.out.println("The song was released in " + year);

    }
}
```

## Object persistence interfaces that work with DynamoDB
<a name="Programming.SDKs.Interfaces.Mapper"></a>

Some AWS SDKs provide an object persistence interface where you do not directly perform data plane operations. Instead, you create objects that represent items in Amazon DynamoDB tables and indexes, and interact only with those objects. This allows you to write object-centric code, rather than database-centric code.

**Note**  
Object persistence interfaces are available in the AWS SDKs for Java and .NET. For more information, see [Higher-level programming interfaces for DynamoDB](HigherLevelInterfaces.md).

### Object persistence interface example
<a name="mapper-level-example"></a>

```
import com.example.dynamodb.Customer;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
import software.amazon.awssdk.enhanced.dynamodb.Key;
import software.amazon.awssdk.enhanced.dynamodb.TableSchema;
import software.amazon.awssdk.enhanced.dynamodb.model.GetItemEnhancedRequest;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;

/*
 * Before running this code example, create an Amazon DynamoDB table named Customer with these columns:
 *   - id - the id of the record that is the key. Be sure one of the id values is `id101`
 *   - custName - the customer name
 *   - email - the email value
 *   - registrationDate - an instant value when the item was added to the table. These values
 *                        need to be in the form of `YYYY-MM-DDTHH:mm:ssZ`, such as 2022-07-11T00:00:00Z
 *
 * Also, ensure that you have set up your development environment, including your credentials.
 *
 * For information, see this documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */

public class EnhancedGetItem {
    public static void main(String[] args) {
        Region region = Region.US_EAST_1;
        DynamoDbClient ddb = DynamoDbClient.builder()
                .region(region)
                .build();

        DynamoDbEnhancedClient enhancedClient = DynamoDbEnhancedClient.builder()
                .dynamoDbClient(ddb)
                .build();

        getItem(enhancedClient);
        ddb.close();
    }

    public static String getItem(DynamoDbEnhancedClient enhancedClient) {
        Customer result = null;
        try {
            DynamoDbTable<Customer> table = enhancedClient.table("Customer", TableSchema.fromBean(Customer.class));
            Key key = Key.builder()
                    .partitionValue("id101")
                    .build();

            // Get the item by using the partition key.
            result = table.getItem(
                    (GetItemEnhancedRequest.Builder requestBuilder) -> requestBuilder.key(key));
            System.out.println("******* The customer name is " + result.getCustName());

        } catch (DynamoDbException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
        return result.getCustName();
    }
}
```

# Higher-level programming interfaces for DynamoDB
<a name="HigherLevelInterfaces"></a>

The AWS SDKs provide applications with low-level interfaces for working with Amazon DynamoDB. These client-side classes and methods correspond directly to the low-level DynamoDB API. However, many developers experience a sense of disconnect, or *impedance mismatch*, when they need to map complex data types to items in a database table. With a low-level database interface, developers must write methods for reading or writing object data to database tables, and vice versa. The amount of extra code required for each combination of object type and database table can seem overwhelming.
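
The per-class conversion code that this mismatch forces can be sketched in plain Java, with a `String` map standing in for a DynamoDB item (no SDK types; all names here are illustrative):

```
import java.util.HashMap;
import java.util.Map;

public class MarshallingSketch {
    // A plain domain object.
    public static class Book {
        String title;
        int year;
        Book(String title, int year) { this.title = title; this.year = year; }
    }

    // Without a mapper, you write this conversion for every class/table pair...
    static Map<String, String> toItem(Book b) {
        Map<String, String> item = new HashMap<>();
        item.put("Title", b.title);
        item.put("Year", Integer.toString(b.year)); // numbers travel as strings
        return item;
    }

    // ...and its inverse.
    static Book fromItem(Map<String, String> item) {
        return new Book(item.get("Title"), Integer.parseInt(item.get("Year")));
    }

    public static void main(String[] args) {
        Book roundTripped = fromItem(toItem(new Book("Book 102 Title", 2015)));
        System.out.println(roundTripped.title + " / " + roundTripped.year); // prints: Book 102 Title / 2015
    }
}
```

A mapping layer removes the need to write and maintain this kind of conversion by hand for every class and table.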

To simplify development, the AWS SDKs for Java and .NET provide additional interfaces with higher levels of abstraction. The higher-level interfaces for DynamoDB let you define the relationships between objects in your program and the database tables that store those objects' data. After you define this mapping, you call simple object methods such as `save`, `load`, or `delete`, and the underlying low-level DynamoDB operations are automatically invoked on your behalf. This allows you to write object-centric code, rather than database-centric code.

The higher-level programming interfaces for DynamoDB are available in the AWS SDKs for Java and .NET.

**Java**
+ [Java 1.x: DynamoDBMapper](DynamoDBMapper.md)
+ [Java 2.x: DynamoDB Enhanced Client](DynamoDBEnhanced.md)

**.NET**
+ [Working with the .NET document model in DynamoDB](DotNetSDKMidLevel.md)
+ [Working with the .NET object persistence model and DynamoDB](DotNetSDKHighLevel.md)

# Java 1.x: DynamoDBMapper
<a name="DynamoDBMapper"></a>

**Note**  
The SDK for Java has two versions: 1.x and 2.x. End of support for 1.x was [announced](https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-aws-sdk-for-java-v1-x-on-december-31-2025/) on January 12, 2024, and is scheduled for December 31, 2025. For new development, we highly recommend that you use 2.x.

The AWS SDK for Java provides a `DynamoDBMapper` class, allowing you to map your client-side classes to Amazon DynamoDB tables. To use `DynamoDBMapper`, you define the relationship between items in a DynamoDB table and their corresponding object instances in your code. The `DynamoDBMapper` class enables you to perform various create, read, update, and delete (CRUD) operations on items, and run queries and scans against tables.

**Topics**
+ [DynamoDBMapper Class](DynamoDBMapper.Methods.md)
+ [Supported data types for DynamoDBMapper for Java](DynamoDBMapper.DataTypes.md)
+ [Java Annotations for DynamoDB](DynamoDBMapper.Annotations.md)
+ [Optional configuration settings for DynamoDBMapper](DynamoDBMapper.OptionalConfig.md)
+ [DynamoDB and optimistic locking with version number](DynamoDBMapper.OptimisticLocking.md)
+ [Mapping arbitrary data in DynamoDB](DynamoDBMapper.ArbitraryDataMapping.md)
+ [DynamoDBMapper examples](DynamoDBMapper.Examples.md)

**Note**  
The `DynamoDBMapper` class does not allow you to create, update, or delete tables. To perform those tasks, use the low-level SDK for Java interface instead.

The SDK for Java provides a set of annotation types so that you can map your classes to tables. For example, consider a `ProductCatalog` table that has `Id` as the partition key. 

```
ProductCatalog(Id, ...)
```

You can map a class in your client application to the `ProductCatalog` table as shown in the following Java code. This code defines a plain old Java object (POJO) named `CatalogItem`, which uses annotations to map object fields to DynamoDB attribute names.

**Example**  

```
package com.amazonaws.codesamples;

import java.util.Set;

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBIgnore;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

@DynamoDBTable(tableName="ProductCatalog")
public class CatalogItem {

    private Integer id;
    private String title;
    private String ISBN;
    private Set<String> bookAuthors;
    private String someProp;

    @DynamoDBHashKey(attributeName="Id")
    public Integer getId() { return id; }
    public void setId(Integer id) {this.id = id; }

    @DynamoDBAttribute(attributeName="Title")
    public String getTitle() {return title; }
    public void setTitle(String title) { this.title = title; }

    @DynamoDBAttribute(attributeName="ISBN")
    public String getISBN() { return ISBN; }
    public void setISBN(String ISBN) { this.ISBN = ISBN; }

    @DynamoDBAttribute(attributeName="Authors")
    public Set<String> getBookAuthors() { return bookAuthors; }
    public void setBookAuthors(Set<String> bookAuthors) { this.bookAuthors = bookAuthors; }

    @DynamoDBIgnore
    public String getSomeProp() { return someProp; }
    public void setSomeProp(String someProp) { this.someProp = someProp; }
}
```

In the preceding code, the `@DynamoDBTable` annotation maps the `CatalogItem` class to the `ProductCatalog` table. You can store individual class instances as items in the table. In the class definition, the `@DynamoDBHashKey` annotation maps the `Id` property to the primary key. 

By default, class properties map to table attributes of the same name. For example, the `Title` and `ISBN` properties map to the `Title` and `ISBN` attributes in the table. 

The `@DynamoDBAttribute` annotation is optional when the name of the DynamoDB attribute matches the name of the property declared in the class. When they differ, use this annotation with the `attributeName` parameter to specify which DynamoDB attribute this property corresponds to. 

In the preceding example, the `@DynamoDBAttribute` annotation is added to each property to ensure that the property names match exactly with the tables created in a previous step, and to be consistent with the attribute names used in other code examples in this guide. 

Your class definition can have properties that don't map to any attributes in the table. You identify these properties by adding the `@DynamoDBIgnore` annotation. In the preceding example, the `SomeProp` property is marked with the `@DynamoDBIgnore` annotation. When you upload a `CatalogItem` instance to the table, your `DynamoDBMapper` instance does not include the `SomeProp` property. In addition, the mapper does not return this attribute when you retrieve an item from the table. 

After you define your mapping class, you can use `DynamoDBMapper` methods to write an instance of that class to a corresponding item in the `ProductCatalog` table. The following code example demonstrates this technique.

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

DynamoDBMapper mapper = new DynamoDBMapper(client);

CatalogItem item = new CatalogItem();
item.setId(102);
item.setTitle("Book 102 Title");
item.setISBN("222-2222222222");
item.setBookAuthors(new HashSet<String>(Arrays.asList("Author 1", "Author 2")));
item.setSomeProp("Test");

mapper.save(item);
```

The following code example shows how to retrieve the item and access some of its attributes.

```
CatalogItem partitionKey = new CatalogItem();

partitionKey.setId(102);
DynamoDBQueryExpression<CatalogItem> queryExpression = new DynamoDBQueryExpression<CatalogItem>()
    .withHashKeyValues(partitionKey);

List<CatalogItem> itemList = mapper.query(CatalogItem.class, queryExpression);

for (CatalogItem item : itemList) {
    System.out.println(item.getTitle());
    System.out.println(item.getBookAuthors());
}
```

`DynamoDBMapper` offers an intuitive, natural way of working with DynamoDB data within Java. It also provides several built-in features, such as optimistic locking, ACID transactions, autogenerated partition key and sort key values, and object versioning.

# DynamoDBMapper Class
<a name="DynamoDBMapper.Methods"></a>



The `DynamoDBMapper` class is the entry point to Amazon DynamoDB. It provides access to a DynamoDB endpoint and enables you to access your data in various tables. It also enables you to perform various create, read, update, and delete (CRUD) operations on items, and run queries and scans against tables. This class provides the following methods for working with DynamoDB.

For the corresponding Javadoc documentation, see [DynamoDBMapper](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/datamodeling/DynamoDBMapper.html) in the *AWS SDK for Java API Reference*.

**Topics**
+ [save](#DynamoDBMapper.Methods.save)
+ [load](#DynamoDBMapper.Methods.load)
+ [delete](#DynamoDBMapper.Methods.delete)
+ [query](#DynamoDBMapper.Methods.query)
+ [queryPage](#DynamoDBMapper.Methods.queryPage)
+ [scan](#DynamoDBMapper.Methods.scan)
+ [scanPage](#DynamoDBMapper.Methods.scanPage)
+ [parallelScan](#DynamoDBMapper.Methods.parallelScan)
+ [batchSave](#DynamoDBMapper.Methods.batchSave)
+ [batchLoad](#DynamoDBMapper.Methods.batchLoad)
+ [batchDelete](#DynamoDBMapper.Methods.batchDelete)
+ [batchWrite](#DynamoDBMapper.Methods.batchWrite)
+ [transactionWrite](#DynamoDBMapper.Methods.transactionWrite)
+ [transactionLoad](#DynamoDBMapper.Methods.transactionLoad)
+ [count](#DynamoDBMapper.Methods.count)
+ [generateCreateTableRequest](#DynamoDBMapper.Methods.generateCreateTableRequest)
+ [createS3Link](#DynamoDBMapper.Methods.createS3Link)
+ [getS3ClientCache](#DynamoDBMapper.Methods.getS3ClientCache)

## save
<a name="DynamoDBMapper.Methods.save"></a>

Saves the specified object to the table. The object that you want to save is the only required parameter for this method. You can provide optional configuration parameters using the `DynamoDBMapperConfig` object. 

If an item that has the same primary key does not exist, this method creates a new item in the table. If an item that has the same primary key exists, it updates the existing item. If the partition key and sort key are of type String and are annotated with `@DynamoDBAutoGeneratedKey`, they are given a random universally unique identifier (UUID) if left uninitialized. Version fields that are annotated with `@DynamoDBVersionAttribute` are incremented by one. Additionally, if a version field is updated or a key generated, the object passed in is updated as a result of the operation. 
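
As a sketch, a mapped class that uses both annotations might look like the following (the `Feedback` class and table name are illustrative, not from this guide):

```
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAutoGeneratedKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBVersionAttribute;

@DynamoDBTable(tableName="Feedback")
public class Feedback {

    private String id;       // assigned a random UUID on first save
    private Long version;    // incremented by one on every save

    @DynamoDBHashKey(attributeName="Id")
    @DynamoDBAutoGeneratedKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    @DynamoDBVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }
}
```

After `mapper.save(feedback)` returns, the `id` and `version` fields of the saved object reflect the generated key and the incremented version.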

By default, only attributes corresponding to mapped class properties are updated. Any additional existing attributes on an item are unaffected. However, if you specify `SaveBehavior.CLOBBER`, you can force the item to be completely overwritten.

```
DynamoDBMapperConfig config = DynamoDBMapperConfig.builder()
    .withSaveBehavior(DynamoDBMapperConfig.SaveBehavior.CLOBBER).build();
        
mapper.save(item, config);
```

If you have versioning enabled, the client-side and server-side item versions must match. However, the version does not need to match if the `SaveBehavior.CLOBBER` option is used. For more information about versioning, see [DynamoDB and optimistic locking with version number](DynamoDBMapper.OptimisticLocking.md).

## load
<a name="DynamoDBMapper.Methods.load"></a>

Retrieves an item from a table. You must provide the primary key of the item that you want to retrieve. You can provide optional configuration parameters using the `DynamoDBMapperConfig` object. For example, you can optionally request strongly consistent reads to ensure that this method retrieves only the latest item values as shown in the following Java statement. 

```
DynamoDBMapperConfig config = DynamoDBMapperConfig.builder()
    .withConsistentReads(DynamoDBMapperConfig.ConsistentReads.CONSISTENT).build();

CatalogItem item = mapper.load(CatalogItem.class, item.getId(), config);
```

By default, DynamoDB performs an eventually consistent read for this operation. For information about the eventual consistency model of DynamoDB, see [DynamoDB read consistency](HowItWorks.ReadConsistency.md).

## delete
<a name="DynamoDBMapper.Methods.delete"></a>

Deletes an item from the table. You must pass in an object instance of the mapped class. 

If you have versioning enabled, the client-side and server-side item versions must match. However, the version does not need to match if the `SaveBehavior.CLOBBER` option is used. For more information about versioning, see [DynamoDB and optimistic locking with version number](DynamoDBMapper.OptimisticLocking.md). 

## query
<a name="DynamoDBMapper.Methods.query"></a>

Queries a table or a secondary index.

Assume that you have a table, `Reply`, that stores forum thread replies. Each thread subject can have zero or more replies. The primary key of the `Reply` table consists of the `Id` and `ReplyDateTime` fields, where `Id` is the partition key and `ReplyDateTime` is the sort key of the primary key.

```
Reply ( Id, ReplyDateTime, ... )
```

Assume that you created a mapping between a `Reply` class and the corresponding `Reply` table in DynamoDB. The following Java code uses `DynamoDBMapper` to find all replies in the past two weeks for a specific thread subject.

**Example**  

```
String forumName = "DynamoDB";
String forumSubject = "DynamoDB Thread 1";
String partitionKey = forumName + "#" + forumSubject;

long twoWeeksAgoMilli = (new Date()).getTime() - (14L*24L*60L*60L*1000L);
Date twoWeeksAgo = new Date();
twoWeeksAgo.setTime(twoWeeksAgoMilli);
SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
String twoWeeksAgoStr = df.format(twoWeeksAgo);

Map<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":v1", new AttributeValue().withS(partitionKey));
eav.put(":v2", new AttributeValue().withS(twoWeeksAgoStr));

DynamoDBQueryExpression<Reply> queryExpression = new DynamoDBQueryExpression<Reply>()
    .withKeyConditionExpression("Id = :v1 and ReplyDateTime > :v2")
    .withExpressionAttributeValues(eav);

List<Reply> latestReplies = mapper.query(Reply.class, queryExpression);
```

The query returns a collection of `Reply` objects. 

By default, the `query` method returns a "lazy-loaded" collection. It initially returns only one page of results, and then makes a service call for the next page if needed. To obtain all the matching items, iterate over the `latestReplies` collection. 

Note that calling the `size()` method on the collection will load every result in order to provide an accurate count. This can result in a lot of provisioned throughput being consumed, and on a very large table could even exhaust all the memory in your JVM.
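
The lazy-loading behavior can be pictured as an iterator that makes a new "service call" only when the current page is exhausted. The following is a simplified, SDK-free sketch; the paging logic and page size are illustrative:

```
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class LazyPagedList {
    // Stand-in for the service: returns page n, or an empty list when done.
    static List<String> fetchPage(int n) {
        if (n >= 3) return new ArrayList<>();           // no more pages
        List<String> page = new ArrayList<>();
        for (int i = 0; i < 2; i++) page.add("reply-" + (n * 2 + i));
        return page;                                    // simulated service call
    }

    // Iterates page by page; a new page is fetched only when the current
    // one is exhausted. Computing a total count would drain every page.
    static Iterator<String> lazyIterator() {
        return new Iterator<String>() {
            int pageNum = 0;
            Iterator<String> current = fetchPage(pageNum++).iterator();

            public boolean hasNext() {
                while (!current.hasNext()) {
                    List<String> next = fetchPage(pageNum++);
                    if (next.isEmpty()) return false;
                    current = next.iterator();
                }
                return true;
            }

            public String next() { return current.next(); }
        };
    }

    public static void main(String[] args) {
        Iterator<String> it = lazyIterator();
        while (it.hasNext()) System.out.println(it.next()); // prints reply-0 through reply-5
    }
}
```

Iterating like this touches one page at a time; calling `size()` first would force every page to load up front.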

To query an index, you must first model the index as a mapper class. Suppose that the `Reply` table has a global secondary index named *PostedBy-Message-Index*. The partition key for this index is `PostedBy`, and the sort key is `Message`. The class definition for an item in the index would look like the following.

```
@DynamoDBTable(tableName="Reply")
public class PostedByMessage {
    private String postedBy;
    private String message;

    @DynamoDBIndexHashKey(globalSecondaryIndexName = "PostedBy-Message-Index", attributeName = "PostedBy")
    public String getPostedBy() { return postedBy; }
    public void setPostedBy(String postedBy) { this.postedBy = postedBy; }

    @DynamoDBIndexRangeKey(globalSecondaryIndexName = "PostedBy-Message-Index", attributeName = "Message")
    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }

   // Additional properties go here.
}
```

The `@DynamoDBTable` annotation indicates that this index is associated with the `Reply` table. The `@DynamoDBIndexHashKey` annotation denotes the partition key (*PostedBy*) of the index, and `@DynamoDBIndexRangeKey` denotes the sort key (*Message*) of the index.

Now you can use `DynamoDBMapper` to query the index, retrieving a subset of messages that were posted by a particular user. If there are no conflicting mappings across tables and indexes, you do not need to specify the index name; the mapper infers it from the mapped partition key and sort key. The following code queries a global secondary index. Because global secondary indexes support eventually consistent reads but not strongly consistent reads, you must specify `withConsistentRead(false)`.

```
HashMap<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":v1",  new AttributeValue().withS("User A"));
eav.put(":v2",  new AttributeValue().withS("DynamoDB"));

DynamoDBQueryExpression<PostedByMessage> queryExpression = new DynamoDBQueryExpression<PostedByMessage>()
    .withIndexName("PostedBy-Message-Index")
    .withConsistentRead(false)
    .withKeyConditionExpression("PostedBy = :v1 and begins_with(Message, :v2)")
    .withExpressionAttributeValues(eav);

List<PostedByMessage> iList =  mapper.query(PostedByMessage.class, queryExpression);
```

The query returns a collection of `PostedByMessage` objects.

## queryPage
<a name="DynamoDBMapper.Methods.queryPage"></a>

Queries a table or secondary index and returns a single page of matching results. As with the `query` method, you must specify a partition key value, and you can optionally apply a condition to the sort key attribute. However, `queryPage` returns only the first "page" of data, that is, the amount of data that fits within 1 MB.
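As a sketch, the following code pages through the `PostedBy-Message-Index` shown earlier, one page per call. It assumes the `mapper` and `PostedByMessage` class from the preceding examples; the key value `"User A"` is illustrative.

```java
HashMap<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":v1", new AttributeValue().withS("User A"));

DynamoDBQueryExpression<PostedByMessage> queryExpression = new DynamoDBQueryExpression<PostedByMessage>()
    .withIndexName("PostedBy-Message-Index")
    .withConsistentRead(false)
    .withKeyConditionExpression("PostedBy = :v1")
    .withExpressionAttributeValues(eav);

// Fetch the first page (up to 1 MB of data).
QueryResultPage<PostedByMessage> page = mapper.queryPage(PostedByMessage.class, queryExpression);
for (PostedByMessage m : page.getResults()) {
    System.out.println(m.getMessage());
}

// If more data exists, pass the last evaluated key back in to get the next page.
if (page.getLastEvaluatedKey() != null) {
    queryExpression.setExclusiveStartKey(page.getLastEvaluatedKey());
    page = mapper.queryPage(PostedByMessage.class, queryExpression);
}
```

A `null` value from `getLastEvaluatedKey()` indicates that there are no further pages.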

## scan
<a name="DynamoDBMapper.Methods.scan"></a>

Scans an entire table or a secondary index. You can optionally specify a `FilterExpression` to filter the result set.

Assume that you have a table, `Reply`, that stores forum thread replies. Each thread subject can have zero or more replies. The primary key of the `Reply` table consists of the `Id` and `ReplyDateTime` fields, where `Id` is the partition key and `ReplyDateTime` is the sort key of the primary key.

```
Reply ( Id, ReplyDateTime, ... )
```

If you mapped a Java class to the `Reply` table, you can use the `DynamoDBMapper` to scan the table. For example, the following Java code scans the entire `Reply` table, returning only the replies for a particular year.

**Example**  

```
HashMap<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":v1", new AttributeValue().withS("2015"));

DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
    .withFilterExpression("begins_with(ReplyDateTime,:v1)")
    .withExpressionAttributeValues(eav);

List<Reply> replies =  mapper.scan(Reply.class, scanExpression);
```

By default, the `scan` method returns a "lazy-loaded" collection. It initially returns only one page of results, and then makes a service call for the next page if needed. To obtain all the matching items, iterate over the `replies` collection.

Note that calling the `size()` method on the collection will load every result in order to provide an accurate count. This can result in a lot of provisioned throughput being consumed, and on a very large table could even exhaust all the memory in your JVM.
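Instead of calling `size()`, you can iterate the lazy-loaded collection and count as you go, as in this sketch based on the preceding scan of the `Reply` table:

```java
// Iterate lazily; DynamoDBMapper fetches additional pages on demand
// as the iterator advances, rather than loading everything up front.
int matchCount = 0;
for (Reply reply : replies) {
    System.out.println(reply.getReplyDateTime());
    matchCount++;  // count manually instead of calling size()
}
```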

To scan an index, you must first model the index as a mapper class. Suppose that the `Reply` table has a global secondary index named `PostedBy-Message-Index`. The partition key for this index is `PostedBy`, and the sort key is `Message`. A mapper class for this index is shown in the [query](#DynamoDBMapper.Methods.query) section. It uses the `@DynamoDBIndexHashKey` and `@DynamoDBIndexRangeKey` annotations to specify the index partition key and sort key.

The following code example scans `PostedBy-Message-Index`. It does not use a scan filter, so all of the items in the index are returned to you.

```
DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
    .withIndexName("PostedBy-Message-Index")
    .withConsistentRead(false);

List<PostedByMessage> iList = mapper.scan(PostedByMessage.class, scanExpression);
Iterator<PostedByMessage> indexItems = iList.iterator();
```

## scanPage
<a name="DynamoDBMapper.Methods.scanPage"></a>

Scans a table or secondary index and returns a single page of matching results. As with the `scan` method, you can optionally specify a `FilterExpression` to filter the result set. However, `scanPage` only returns the first "page" of data, that is, the amount of data that fits within 1 MB.
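A minimal sketch of paging through the `Reply` table with `scanPage`, assuming the `mapper` and `Reply` class from the earlier examples:

```java
DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
    .withLimit(25);  // optional: cap the number of items evaluated per page

// Fetch the first page of results.
ScanResultPage<Reply> page = mapper.scanPage(Reply.class, scanExpression);
for (Reply reply : page.getResults()) {
    System.out.println(reply.getReplyDateTime());
}

// Resume from where the previous page left off until no pages remain.
while (page.getLastEvaluatedKey() != null) {
    scanExpression.setExclusiveStartKey(page.getLastEvaluatedKey());
    page = mapper.scanPage(Reply.class, scanExpression);
}
```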

## parallelScan
<a name="DynamoDBMapper.Methods.parallelScan"></a>

Performs a parallel scan of an entire table or secondary index. You specify a number of logical segments for the table, along with a scan expression to filter the results. The `parallelScan` divides the scan task among multiple workers, one for each logical segment; the workers process the data in parallel and return the results.

The following Java code example performs a parallel scan on the `Product` table.

```
int numberOfThreads = 4;

Map<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":n", new AttributeValue().withN("100"));

DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
    .withFilterExpression("Price <= :n")
    .withExpressionAttributeValues(eav);

List<Product> scanResult = mapper.parallelScan(Product.class, scanExpression, numberOfThreads);
```

## batchSave
<a name="DynamoDBMapper.Methods.batchSave"></a>

Saves objects to one or more tables using one or more calls to the `AmazonDynamoDB.batchWriteItem` method. This method does not provide transaction guarantees.

The following Java code saves two items (books) to the `ProductCatalog` table.

```
Book book1 = new Book();
book1.setId(901);
book1.setProductCategory("Book");
book1.setTitle("Book 901 Title");

Book book2 = new Book();
book2.setId(902);
book2.setProductCategory("Book");
book2.setTitle("Book 902 Title");

mapper.batchSave(Arrays.asList(book1, book2));
```

## batchLoad
<a name="DynamoDBMapper.Methods.batchLoad"></a>

Retrieves multiple items from one or more tables using their primary keys.

The following Java code retrieves two items from two different tables.

```
ArrayList<Object> itemsToGet = new ArrayList<Object>();

ForumItem forumItem = new ForumItem();
forumItem.setForumName("Amazon DynamoDB");
itemsToGet.add(forumItem);

ThreadItem threadItem = new ThreadItem();
threadItem.setForumName("Amazon DynamoDB");
threadItem.setSubject("Amazon DynamoDB thread 1 message text");
itemsToGet.add(threadItem);

Map<String, List<Object>> items = mapper.batchLoad(itemsToGet);
```

## batchDelete
<a name="DynamoDBMapper.Methods.batchDelete"></a>

Deletes objects from one or more tables using one or more calls to the `AmazonDynamoDB.batchWriteItem` method. This method does not provide transaction guarantees. 

The following Java code deletes two items (books) from the `ProductCatalog` table.

```
Book book1 = mapper.load(Book.class, 901);
Book book2 = mapper.load(Book.class, 902);
mapper.batchDelete(Arrays.asList(book1, book2));
```

## batchWrite
<a name="DynamoDBMapper.Methods.batchWrite"></a>

Saves objects to and deletes objects from one or more tables using one or more calls to the `AmazonDynamoDB.batchWriteItem` method. This method does not provide transaction guarantees or support versioning (conditional puts or deletes).

The following Java code writes a new item to the `Forum` table, writes a new item to the `Thread` table, and deletes an item from the `ProductCatalog` table.

```
// Create a Forum item to save
Forum forumItem = new Forum();
forumItem.setName("Test BatchWrite Forum");

// Create a Thread item to save
Thread threadItem = new Thread();
threadItem.setForumName("AmazonDynamoDB");
threadItem.setSubject("My sample question");

// Load a ProductCatalog item to delete
Book book3 = mapper.load(Book.class, 903);

List<Object> objectsToWrite = Arrays.asList(forumItem, threadItem);
List<Book> objectsToDelete = Arrays.asList(book3);

mapper.batchWrite(objectsToWrite, objectsToDelete);
```

## transactionWrite
<a name="DynamoDBMapper.Methods.transactionWrite"></a>

Saves objects to and deletes objects from one or more tables using one call to the `AmazonDynamoDB.transactWriteItems` method. 

For a list of transaction-specific exceptions, see [TransactWriteItems errors](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactWriteItems.html#API_TransactWriteItems_Errors). 

For more information about DynamoDB transactions and the provided atomicity, consistency, isolation, and durability (ACID) guarantees, see [Amazon DynamoDB Transactions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html). 

**Note**  
This method does not support [DynamoDBMapperConfig.SaveBehavior](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBMapper.OptionalConfig.html).

The following Java code writes a new item to each of the `Forum` and `Thread` tables, transactionally.

```
Thread s3ForumThread = new Thread();
s3ForumThread.setForumName("S3 Forum");
s3ForumThread.setSubject("Sample Subject 1");
s3ForumThread.setMessage("Sample Question 1");

Forum s3Forum = new Forum();
s3Forum.setName("S3 Forum");
s3Forum.setCategory("Amazon Web Services");
s3Forum.setThreads(1);

TransactionWriteRequest transactionWriteRequest = new TransactionWriteRequest();
transactionWriteRequest.addPut(s3Forum);
transactionWriteRequest.addPut(s3ForumThread);
mapper.transactionWrite(transactionWriteRequest);
```

## transactionLoad
<a name="DynamoDBMapper.Methods.transactionLoad"></a>

Loads objects from one or more tables using one call to the `AmazonDynamoDB.transactGetItems` method. 

For a list of transaction-specific exceptions, see [TransactGetItems errors](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactGetItems.html#API_TransactGetItems_Errors). 

For more information about DynamoDB transactions and the provided atomicity, consistency, isolation, and durability (ACID) guarantees, see [Amazon DynamoDB Transactions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html). 

The following Java code loads one item from each of the `Forum` and `Thread` tables, transactionally.

```
Forum dynamodbForum = new Forum();
dynamodbForum.setName("DynamoDB Forum");
Thread dynamodbForumThread = new Thread();
dynamodbForumThread.setForumName("DynamoDB Forum");

TransactionLoadRequest transactionLoadRequest = new TransactionLoadRequest();
transactionLoadRequest.addLoad(dynamodbForum);
transactionLoadRequest.addLoad(dynamodbForumThread);
mapper.transactionLoad(transactionLoadRequest);
```

## count
<a name="DynamoDBMapper.Methods.count"></a>

Evaluates the specified scan expression and returns the count of matching items. No item data is returned.
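As an illustration, the following sketch counts the replies from a particular year without retrieving the items themselves, reusing the `Reply` class and `mapper` from earlier examples:

```java
HashMap<String, AttributeValue> eav = new HashMap<String, AttributeValue>();
eav.put(":v1", new AttributeValue().withS("2015"));

DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
    .withFilterExpression("begins_with(ReplyDateTime, :v1)")
    .withExpressionAttributeValues(eav);

// Returns only the number of matching items; no item data is transferred.
int matchingReplies = mapper.count(Reply.class, scanExpression);
```

Note that `count` still scans the table, so it consumes read capacity in proportion to the data evaluated.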

## generateCreateTableRequest
<a name="DynamoDBMapper.Methods.generateCreateTableRequest"></a>

Parses a POJO class that represents a DynamoDB table, and returns a `CreateTableRequest` for that table.
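A minimal sketch, assuming the `Reply` mapper class and a low-level `client` are in scope. Provisioned throughput is not part of the POJO model, so you set it on the request before creating the table:

```java
// Derive key schema and attribute definitions from the annotated POJO.
CreateTableRequest request = mapper.generateCreateTableRequest(Reply.class);

// Throughput settings are not modeled on the class; supply them here.
// (Any global secondary indexes on the request need throughput set as well.)
request.setProvisionedThroughput(new ProvisionedThroughput(5L, 5L));

client.createTable(request);
```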

## createS3Link
<a name="DynamoDBMapper.Methods.createS3Link"></a>

Creates a link to an object in Amazon S3. You must specify a bucket name and a key name, which uniquely identifies the object in the bucket.

To use `createS3Link`, your mapper class must define getter and setter methods. The following code example illustrates this by adding a new attribute and getter/setter methods to the `CatalogItem` class.

```
@DynamoDBTable(tableName="ProductCatalog")
public class CatalogItem {

    ...

    public S3Link productImage;

    ...

    @DynamoDBAttribute(attributeName = "ProductImage")
    public S3Link getProductImage() {
            return productImage;
    }

    public void setProductImage(S3Link productImage) {
        this.productImage = productImage;
    }

...
}
```

The following Java code defines a new item to be written to the `Product` table. The item includes a link to a product image; the image data is uploaded to Amazon S3.

```
CatalogItem item = new CatalogItem();

item.setId(150);
item.setTitle("Book 150 Title");

String myBucketName = "amzn-s3-demo-bucket";
String myS3Key = "productImages/book_150_cover.jpg";
item.setProductImage(mapper.createS3Link(myBucketName, myS3Key));

item.getProductImage().uploadFrom(new File("/file/path/book_150_cover.jpg"));

mapper.save(item);
```

The `S3Link` class provides many other methods for manipulating objects in Amazon S3. For more information, see the [Javadocs for `S3Link`](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/datamodeling/S3Link.html).

## getS3ClientCache
<a name="DynamoDBMapper.Methods.getS3ClientCache"></a>

Returns the underlying `S3ClientCache` for accessing Amazon S3. An `S3ClientCache` is a smart Map for `AmazonS3Client` objects. If you have multiple clients, an `S3ClientCache` can help you keep the clients organized by AWS Region, and can create new Amazon S3 clients on demand.

# Supported data types for DynamoDBMapper for Java
<a name="DynamoDBMapper.DataTypes"></a>

This section describes the supported primitive Java data types, collections, and arbitrary data types in Amazon DynamoDB. 

Amazon DynamoDB supports the following primitive Java data types and primitive wrapper classes. 
+ `String`
+ `Boolean`, `boolean`
+ `Byte`, `byte`
+ `Date` (as [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) millisecond-precision string, shifted to UTC)
+ `Calendar` (as [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) millisecond-precision string, shifted to UTC)
+ `Long`, `long`
+ `Integer`, `int`
+ `Double`, `double`
+ `Float`, `float`
+ `BigDecimal`
+ `BigInteger`

**Note**  
For more information about DynamoDB naming rules and the various supported data types, see [Supported data types and naming rules in Amazon DynamoDB](HowItWorks.NamingRulesDataTypes.md). 
Empty Binary values are supported by the DynamoDBMapper.
Empty String values are supported by AWS SDK for Java 2.x.  
In AWS SDK for Java 1.x, DynamoDBMapper supports reading of empty String attribute values, however, it will not write empty String attribute values because these attributes are dropped from the request.

DynamoDB supports the Java [Set](http://docs.oracle.com/javase/6/docs/api/java/util/Set.html), [List](http://docs.oracle.com/javase/6/docs/api/java/util/List.html), and [Map](http://docs.oracle.com/javase/6/docs/api/java/util/Map.html) collection types. The following table summarizes how these Java types map to the DynamoDB types.


****  

| Java type | DynamoDB type | 
| --- | --- | 
|  All number types  |  `N` (number type)  | 
|  Strings  |  `S` (string type)   | 
|  Boolean  |  `BOOL` (Boolean type), `true` or `false`.  | 
|  ByteBuffer  |  `B` (binary type)  | 
|  Date  |  `S` (string type). The Date values are stored as ISO-8601 formatted strings.  | 
| [Set](http://docs.oracle.com/javase/6/docs/api/java/util/Set.html) collection types |  `SS` (string set) type, `NS` (number set) type, or `BS` (binary set) type.  | 

 The `DynamoDBTypeConverter` interface lets you map your own arbitrary data types to a data type that is natively supported by DynamoDB. For more information, see [Mapping arbitrary data in DynamoDB](DynamoDBMapper.ArbitraryDataMapping.md). 

# Java Annotations for DynamoDB
<a name="DynamoDBMapper.Annotations"></a>

This section describes the annotations that are available for mapping your classes and properties to tables and attributes in Amazon DynamoDB.

For the corresponding Javadoc documentation, see [Annotation Types Summary](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/datamodeling/package-summary.html) in the [AWS SDK for Java API Reference](https://docs.aws.amazon.com/sdk-for-java/latest/reference/).

**Note**  
Of the following annotations, only `DynamoDBTable` and `DynamoDBHashKey` are required. 

**Topics**
+ [DynamoDBAttribute](#DynamoDBMapper.Annotations.DynamoDBAttribute)
+ [DynamoDBAutoGeneratedKey](#DynamoDBMapper.Annotations.DynamoDBAutoGeneratedKey)
+ [DynamoDBAutoGeneratedTimestamp](#DynamoDBMapper.Annotations.DynamoDBAutoGeneratedTimestamp)
+ [DynamoDBDocument](#DynamoDBMapper.Annotations.DynamoDBDocument)
+ [DynamoDBHashKey](#DynamoDBMapper.Annotations.DynamoDBHashKey)
+ [DynamoDBIgnore](#DynamoDBMapper.Annotations.DynamoDBIgnore)
+ [DynamoDBIndexHashKey](#DynamoDBMapper.Annotations.DynamoDBIndexHashKey)
+ [DynamoDBIndexRangeKey](#DynamoDBMapper.Annotations.DynamoDBIndexRangeKey)
+ [DynamoDBRangeKey](#DynamoDBMapper.Annotations.DynamoDBRangeKey)
+ [DynamoDBTable](#DynamoDBMapper.Annotations.DynamoDBTable)
+ [DynamoDBTypeConverted](#DynamoDBMapper.Annotations.DynamoDBTypeConverted)
+ [DynamoDBTyped](#DynamoDBMapper.Annotations.DynamoDBTyped)
+ [DynamoDBVersionAttribute](#DynamoDBMapper.Annotations.DynamoDBVersionAttribute)

## DynamoDBAttribute
<a name="DynamoDBMapper.Annotations.DynamoDBAttribute"></a>

Maps a property to a table attribute. By default, each class property maps to an item attribute with the same name. However, if the names are not the same, you can use this annotation to map a property to the attribute. In the following Java snippet, the `DynamoDBAttribute` maps the `BookAuthors` property to the `Authors` attribute name in the table.

```
@DynamoDBAttribute(attributeName = "Authors")
public List<String> getBookAuthors() { return BookAuthors; }
public void setBookAuthors(List<String> BookAuthors) { this.BookAuthors = BookAuthors; }
```

The `DynamoDBMapper` uses `Authors` as the attribute name when saving the object to the table. 

## DynamoDBAutoGeneratedKey
<a name="DynamoDBMapper.Annotations.DynamoDBAutoGeneratedKey"></a>

Marks a partition key or sort key property as being autogenerated. `DynamoDBMapper` generates a random [UUID](http://docs.oracle.com/javase/6/docs/api/java/util/UUID.html) when saving these attributes. Only String properties can be marked as autogenerated keys. 

The following example demonstrates using autogenerated keys.

```
@DynamoDBTable(tableName="AutoGeneratedKeysExample")
public class AutoGeneratedKeys {
    private String id;
    private String payload;

    @DynamoDBHashKey(attributeName = "Id")
    @DynamoDBAutoGeneratedKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    @DynamoDBAttribute(attributeName="payload")
    public String getPayload() { return this.payload; }
    public void setPayload(String payload) { this.payload = payload; }

    public static void saveItem() {
        AutoGeneratedKeys obj = new AutoGeneratedKeys();
        obj.setPayload("abc123");

        // id field is null at this point
        DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient);
        mapper.save(obj);

        System.out.println("Object was saved with id " + obj.getId());
    }
}
```

## DynamoDBAutoGeneratedTimestamp
<a name="DynamoDBMapper.Annotations.DynamoDBAutoGeneratedTimestamp"></a>

Automatically generates a timestamp.

```
@DynamoDBAutoGeneratedTimestamp(strategy=DynamoDBAutoGenerateStrategy.ALWAYS)
public Date getLastUpdatedDate() { return lastUpdatedDate; }
public void setLastUpdatedDate(Date lastUpdatedDate) { this.lastUpdatedDate = lastUpdatedDate; }
```

Optionally, the auto-generation strategy can be defined by providing a strategy attribute. The default is `ALWAYS`.

## DynamoDBDocument
<a name="DynamoDBMapper.Annotations.DynamoDBDocument"></a>

Indicates that a class can be serialized as an Amazon DynamoDB document.

For example, suppose that you wanted to map a JSON document to a DynamoDB attribute of type Map (`M`). The following code example defines an item containing a nested attribute (Pictures) of type Map.

```
public class ProductCatalogItem {

    private Integer id;  //partition key
    private Pictures pictures;
    /* ...other attributes omitted... */

    @DynamoDBHashKey(attributeName="Id")
    public Integer getId() { return id;}
    public void setId(Integer id) {this.id = id;}

    @DynamoDBAttribute(attributeName="Pictures")
    public Pictures getPictures() { return pictures;}
    public void setPictures(Pictures pictures) {this.pictures = pictures;}

    // Additional properties go here.

    @DynamoDBDocument
    public static class Pictures {
        private String frontView;
        private String rearView;
        private String sideView;

        @DynamoDBAttribute(attributeName = "FrontView")
        public String getFrontView() { return frontView; }
        public void setFrontView(String frontView) { this.frontView = frontView; }

        @DynamoDBAttribute(attributeName = "RearView")
        public String getRearView() { return rearView; }
        public void setRearView(String rearView) { this.rearView = rearView; }

        @DynamoDBAttribute(attributeName = "SideView")
        public String getSideView() { return sideView; }
        public void setSideView(String sideView) { this.sideView = sideView; }

     }
}
```

You could then save a new `ProductCatalog` item, with `Pictures`, as shown in the following example.

```
ProductCatalogItem item = new ProductCatalogItem();

Pictures pix = new Pictures();
pix.setFrontView("http://example.com/products/123_front.jpg");
pix.setRearView("http://example.com/products/123_rear.jpg");
pix.setSideView("http://example.com/products/123_left_side.jpg");
item.setPictures(pix);

item.setId(123);

mapper.save(item);
```

The resulting `ProductCatalog` item would look like the following (in JSON format).

```
{
  "Id" : 123,
  "Pictures" : {
    "SideView" : "http://example.com/products/123_left_side.jpg",
    "RearView" : "http://example.com/products/123_rear.jpg",
    "FrontView" : "http://example.com/products/123_front.jpg"
  }
}
```

## DynamoDBHashKey
<a name="DynamoDBMapper.Annotations.DynamoDBHashKey"></a>

Maps a class property to the partition key of the table. The property must be one of the scalar string, number, or binary types. The property can't be a collection type. 

Assume that you have a table, `ProductCatalog`, that has `Id` as the primary key. The following Java code defines a `CatalogItem` class and maps its `Id` property to the primary key of the `ProductCatalog` table using the `@DynamoDBHashKey` tag.

```
@DynamoDBTable(tableName="ProductCatalog")
public class CatalogItem {
    private Integer Id;

    @DynamoDBHashKey(attributeName="Id")
    public Integer getId() { return Id; }
    public void setId(Integer Id) { this.Id = Id; }

    // Additional properties go here.
}
```

## DynamoDBIgnore
<a name="DynamoDBMapper.Annotations.DynamoDBIgnore"></a>

Indicates to the `DynamoDBMapper` instance that the associated property should be ignored. When saving data to the table, the `DynamoDBMapper` does not save this property to the table.

 Applied to the getter method or the class field for a non-modeled property. If the annotation is applied directly to the class field, the corresponding getter and setter must be declared in the same class. 
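As a sketch, the following hypothetical property holds runtime-only state and is never written to the table:

```java
private String localCachePath;  // runtime-only state, not persisted

// @DynamoDBIgnore keeps this property out of save and load operations.
@DynamoDBIgnore
public String getLocalCachePath() { return localCachePath; }
public void setLocalCachePath(String localCachePath) { this.localCachePath = localCachePath; }
```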

## DynamoDBIndexHashKey
<a name="DynamoDBMapper.Annotations.DynamoDBIndexHashKey"></a>

Maps a class property to the partition key of a global secondary index. The property must be one of the scalar string, number, or binary types. The property can't be a collection type. 

Use this annotation if you need to `Query` a global secondary index. You must specify the index name (`globalSecondaryIndexName`). If the name of the class property is different from the index partition key, you also must specify the name of that index attribute (`attributeName`).

## DynamoDBIndexRangeKey
<a name="DynamoDBMapper.Annotations.DynamoDBIndexRangeKey"></a>

Maps a class property to the sort key of a global secondary index or a local secondary index. The property must be one of the scalar string, number, or binary types. The property can't be a collection type. 

Use this annotation if you need to `Query` a local secondary index or a global secondary index and want to refine your results using the index sort key. You must specify the index name (either `globalSecondaryIndexName` or `localSecondaryIndexName`). If the name of the class property is different from the index sort key, you must also specify the name of that index attribute (`attributeName`).

## DynamoDBRangeKey
<a name="DynamoDBMapper.Annotations.DynamoDBRangeKey"></a>

Maps a class property to the sort key of the table. The property must be one of the scalar string, number, or binary types. It cannot be a collection type. 

If the primary key is composite (partition key and sort key), you can use this tag to map your class field to the sort key. For example, assume that you have a `Reply` table that stores replies for forum threads. Each thread can have many replies, so the primary key of this table is composite: `ThreadId` is the partition key, and `ReplyDateTime` is the sort key. 

The following Java code defines a `Reply` class and maps it to the `Reply` table. It uses both the `@DynamoDBHashKey` and `@DynamoDBRangeKey` tags to identify class properties that map to the primary key.

```
@DynamoDBTable(tableName="Reply")
public class Reply {
    private Integer id;
    private String replyDateTime;

    @DynamoDBHashKey(attributeName="Id")
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }

    @DynamoDBRangeKey(attributeName="ReplyDateTime")
    public String getReplyDateTime() { return replyDateTime; }
    public void setReplyDateTime(String replyDateTime) { this.replyDateTime = replyDateTime; }

   // Additional properties go here.
}
```

## DynamoDBTable
<a name="DynamoDBMapper.Annotations.DynamoDBTable"></a>

Identifies the target table in DynamoDB. For example, the following Java code defines a class `Developer` and maps it to the `People` table in DynamoDB. 

```
@DynamoDBTable(tableName="People")
public class Developer { ...}
```

The `@DynamoDBTable` annotation can be inherited. Any new class that inherits from the `Developer` class also maps to the `People` table. For example, assume that you create a `Lead` class that inherits from the `Developer` class. Because you mapped the `Developer` class to the `People` table, the `Lead` class objects are also stored in the same table.

The `@DynamoDBTable` can also be overridden. Any new class that inherits from the `Developer` class by default maps to the same `People` table. However, you can override this default mapping. For example, if you create a class that inherits from the `Developer` class, you can explicitly map it to another table by adding the `@DynamoDBTable` annotation as shown in the following Java code example.

```
@DynamoDBTable(tableName="Managers")
public class Manager extends Developer { ...}
```

## DynamoDBTypeConverted
<a name="DynamoDBMapper.Annotations.DynamoDBTypeConverted"></a>

An annotation to mark a property as using a custom type converter. Can be annotated on a user-defined annotation to pass additional properties to the `DynamoDBTypeConverter`. 

 The `DynamoDBTypeConverter` interface lets you map your own arbitrary data types to a data type that is natively supported by DynamoDB. For more information, see [Mapping arbitrary data in DynamoDB](DynamoDBMapper.ArbitraryDataMapping.md).
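As a sketch, the following hypothetical converter stores a `java.util.Currency` property as its ISO 4217 code string (the property and class names are illustrative):

```java
// Converts between the model type (Currency) and the stored type (String).
public static class CurrencyConverter implements DynamoDBTypeConverter<String, Currency> {
    @Override
    public String convert(Currency object) {
        return object.getCurrencyCode();   // e.g. "USD"
    }

    @Override
    public Currency unconvert(String object) {
        return Currency.getInstance(object);
    }
}

@DynamoDBTypeConverted(converter = CurrencyConverter.class)
public Currency getPriceCurrency() { return priceCurrency; }
```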

## DynamoDBTyped
<a name="DynamoDBMapper.Annotations.DynamoDBTyped"></a>

An annotation to override the standard attribute type binding. Standard types do not require the annotation if applying the default attribute binding for that type. 
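For example, a sketch of overriding the binding for a `Boolean` property so that it is stored as the native `BOOL` type (older mapper defaults persist Booleans as `N`, 0 or 1); the property name is illustrative:

```java
// Force native BOOL storage instead of the numeric default.
@DynamoDBTyped(DynamoDBMapperFieldModel.DynamoDBAttributeType.BOOL)
public Boolean getInStock() { return inStock; }
public void setInStock(Boolean inStock) { this.inStock = inStock; }
```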

## DynamoDBVersionAttribute
<a name="DynamoDBMapper.Annotations.DynamoDBVersionAttribute"></a>

Identifies a class property for storing an optimistic locking version number. `DynamoDBMapper` assigns a version number to this property when it saves a new item, and increments it each time you update the item. Only number scalar types are supported. For more information about data types, see [Data types](HowItWorks.NamingRulesDataTypes.md#HowItWorks.DataTypes). For more information about versioning, see [DynamoDB and optimistic locking with version number](DynamoDBMapper.OptimisticLocking.md).

# Optional configuration settings for DynamoDBMapper
<a name="DynamoDBMapper.OptionalConfig"></a>

When you create an instance of `DynamoDBMapper`, it has certain default behaviors; you can override these defaults by using the `DynamoDBMapperConfig` class. 

The following code snippet creates a `DynamoDBMapper` with custom settings:

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();

DynamoDBMapperConfig mapperConfig = DynamoDBMapperConfig.builder()
        .withSaveBehavior(DynamoDBMapperConfig.SaveBehavior.CLOBBER)
        .withConsistentReads(DynamoDBMapperConfig.ConsistentReads.CONSISTENT)
        .withTableNameOverride(null)
        .withPaginationLoadingStrategy(DynamoDBMapperConfig.PaginationLoadingStrategy.EAGER_LOADING)
    .build();

DynamoDBMapper mapper = new DynamoDBMapper(client, mapperConfig);
```

For more information, see the [DynamoDBMapperConfig](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/datamodeling/DynamoDBMapperConfig.html) class in the [AWS SDK for Java API Reference](https://docs.aws.amazon.com/sdk-for-java/latest/reference/).

You can use the following arguments for an instance of `DynamoDBMapperConfig`:
+ A `DynamoDBMapperConfig.ConsistentReads` enumeration value:
  + `EVENTUAL`—the mapper instance uses an eventually consistent read request.
  + `CONSISTENT`—the mapper instance uses a strongly consistent read request. You can use this optional setting with `load`, `query`, or `scan` operations. Strongly consistent reads have implications for performance and billing; see the DynamoDB [product detail page](https://aws.amazon.com/dynamodb) for more information.

  If you do not specify a read consistency setting for your mapper instance, the default is `EVENTUAL`.
**Note**  
This value is applied in the `query`, `queryPage`, `load`, and `batchLoad` operations of the `DynamoDBMapper`.
+ A `DynamoDBMapperConfig.PaginationLoadingStrategy` enumeration value—Controls how the mapper instance processes a paginated list of data, such as the results from a `query` or `scan`:
  + `LAZY_LOADING`—the mapper instance loads data when possible, and keeps all loaded results in memory.
  + `EAGER_LOADING`—the mapper instance loads the data as soon as the list is initialized.
  + `ITERATION_ONLY`—you can only use an Iterator to read from the list. During the iteration, the list will clear all the previous results before loading the next page, so that the list will keep at most one page of the loaded results in memory. This also means the list can only be iterated once. This strategy is recommended when handling large items, in order to reduce memory overhead.

  If you do not specify a pagination loading strategy for your mapper instance, the default is `LAZY_LOADING`.
+ A `DynamoDBMapperConfig.SaveBehavior` enumeration value - Specifies how the mapper instance should deal with attributes during save operations:
  + `UPDATE`—during a save operation, all modeled attributes are updated, and unmodeled attributes are unaffected. Primitive number types (byte, int, long) are set to 0. Object types are set to null. 
  + `CLOBBER`—clears and replaces all attributes, including unmodeled ones, during a save operation. This is done by deleting the item and re-creating it. Versioned field constraints are also disregarded.

   If you do not specify the save behavior for your mapper instance, the default is `UPDATE`.
**Note**  
DynamoDBMapper transactional operations do not support the `DynamoDBMapperConfig.SaveBehavior` enumeration. 
+ A `DynamoDBMapperConfig.TableNameOverride` object—Instructs the mapper instance to ignore the table name specified by a class's `DynamoDBTable` annotation, and instead use a different table name that you supply. This is useful when partitioning your data into multiple tables at runtime. 

You can override the default configuration object for `DynamoDBMapper` per operation, as needed.
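For example, a sketch of per-operation overrides, assuming the `mapper` and `CatalogItem` class from earlier examples (the key value `150` is illustrative):

```java
// Override the save behavior for this one call only;
// other operations keep the mapper's default configuration.
mapper.save(item, DynamoDBMapperConfig.builder()
        .withSaveBehavior(DynamoDBMapperConfig.SaveBehavior.CLOBBER)
        .build());

// Force a strongly consistent read for a single load.
CatalogItem result = mapper.load(CatalogItem.class, 150,
        DynamoDBMapperConfig.builder()
            .withConsistentReads(DynamoDBMapperConfig.ConsistentReads.CONSISTENT)
            .build());
```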

# DynamoDB and optimistic locking with version number
<a name="DynamoDBMapper.OptimisticLocking"></a>

*Optimistic locking* is a strategy to ensure that the client-side item that you are updating (or deleting) is the same as the item in Amazon DynamoDB. If you use this strategy, your database writes are protected from being overwritten by the writes of others, and vice versa.

With optimistic locking, each item has an attribute that acts as a version number. If you retrieve an item from a table, the application records the version number of that item. You can update the item, but only if the version number on the server side has not changed. If there is a version mismatch, it means that someone else has modified the item before you did. The update attempt fails, because you have a stale version of the item. If this happens, try again by retrieving the item and then trying to update it. Optimistic locking prevents you from accidentally overwriting changes that were made by others. It also prevents others from accidentally overwriting your changes.

While you can implement your own optimistic locking strategy, the AWS SDK for Java provides the `@DynamoDBVersionAttribute` annotation. In the mapping class for your table, you designate one property to store the version number, and mark it using this annotation. When you save an object, the corresponding item in the DynamoDB table will have an attribute that stores the version number. The `DynamoDBMapper` assigns a version number when you first save the object, and it automatically increments the version number each time you update the item. Your update or delete requests succeed only if the client-side object version matches the corresponding version number of the item in the DynamoDB table.

A `ConditionalCheckFailedException` is thrown if:
+  You use optimistic locking with `@DynamoDBVersionAttribute` and the version value on the server is different from the value on the client side. 
+  You specify your own conditional constraints while saving data by using `DynamoDBMapper` with `DynamoDBSaveExpression`, and these constraints fail. 

**Note**  
DynamoDB global tables use a "last writer wins" reconciliation between concurrent updates. If you use global tables, the last writer's update is preserved, so the optimistic locking strategy does not work as expected.
`DynamoDBMapper` transactional write operations do not support the `@DynamoDBVersionAttribute` annotation and condition expressions on the same object. If an object within a transactional write is annotated with `@DynamoDBVersionAttribute` and also has a condition expression, an `SdkClientException` is thrown.

For example, the following Java code defines a `CatalogItem` class that has several properties. The `Version` property is tagged with the `@DynamoDBVersionAttribute` annotation.

**Example**  

```
@DynamoDBTable(tableName="ProductCatalog")
public class CatalogItem {

    private Integer id;
    private String title;
    private String ISBN;
    private Set<String> bookAuthors;
    private String someProp;
    private Long version;

    @DynamoDBHashKey(attributeName="Id")
    public Integer getId() { return id; }
    public void setId(Integer Id) { this.id = Id; }

    @DynamoDBAttribute(attributeName="Title")
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    @DynamoDBAttribute(attributeName="ISBN")
    public String getISBN() { return ISBN; }
    public void setISBN(String ISBN) { this.ISBN = ISBN;}

    @DynamoDBAttribute(attributeName = "Authors")
    public Set<String> getBookAuthors() { return bookAuthors; }
    public void setBookAuthors(Set<String> bookAuthors) { this.bookAuthors = bookAuthors; }

    @DynamoDBIgnore
    public String getSomeProp() { return someProp;}
    public void setSomeProp(String someProp) {this.someProp = someProp;}

    @DynamoDBVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version;}
}
```

You can apply the `@DynamoDBVersionAttribute` annotation to nullable types, such as the primitive wrapper classes `Long` and `Integer`. 

Optimistic locking has the following impact on these `DynamoDBMapper` methods:
+ `save` — For a new item, the `DynamoDBMapper` assigns an initial version number of 1. If you retrieve an item, update one or more of its properties, and attempt to save the changes, the save operation succeeds only if the version number on the client side and the server side match. The `DynamoDBMapper` increments the version number automatically.
+ `delete` — The `delete` method takes an object as a parameter, and the `DynamoDBMapper` performs a version check before deleting the item. The version check can be disabled if `DynamoDBMapperConfig.SaveBehavior.CLOBBER` is specified in the request.

  The internal implementation of optimistic locking within `DynamoDBMapper` uses conditional update and conditional delete support provided by DynamoDB. 
+ `transactionWrite` —
  + `Put` — For a new item, the `DynamoDBMapper` assigns an initial version number of 1. If you retrieve an item, update one or more of its properties, and attempt to save the changes, the put operation succeeds only if the version number on the client side and the server side match. The `DynamoDBMapper` increments the version number automatically.
  + `Update` — For a new item, the `DynamoDBMapper` assigns an initial version number of 1. If you retrieve an item, update one or more of its properties, and attempt to save the changes, the update operation succeeds only if the version number on the client side and the server side match. The `DynamoDBMapper` increments the version number automatically.
  + `Delete` — The `DynamoDBMapper` performs a version check before deleting the item. The delete operation succeeds only if the version number on the client side and the server side match.
  + `ConditionCheck` — The `@DynamoDBVersionAttribute` annotation is not supported for `ConditionCheck` operations. An `SdkClientException` is thrown when a `ConditionCheck` item is annotated with `@DynamoDBVersionAttribute`.

## Disabling optimistic locking
<a name="DynamoDBMapper.OptimisticLocking.Disabling"></a>

To disable optimistic locking, you can change the `DynamoDBMapperConfig.SaveBehavior` enumeration value from `UPDATE` to `CLOBBER`. You can do this by creating a `DynamoDBMapperConfig` instance that skips version checking and using this instance for all your requests. For information about `DynamoDBMapperConfig.SaveBehavior` and other optional `DynamoDBMapper` parameters, see [Optional configuration settings for DynamoDBMapper](DynamoDBMapper.OptionalConfig.md). 

You can also set locking behavior for a specific operation only. For example, the following Java snippet uses the `DynamoDBMapper` to save a catalog item. It specifies `DynamoDBMapperConfig.SaveBehavior` by adding the optional `DynamoDBMapperConfig` parameter to the `save` method. 

**Note**  
The `transactionWrite` method does not support the `DynamoDBMapperConfig.SaveBehavior` configuration. Disabling optimistic locking for `transactionWrite` is not supported.

**Example**  

```
DynamoDBMapper mapper = new DynamoDBMapper(client);

// Load a catalog item.
CatalogItem item = mapper.load(CatalogItem.class, 101);
item.setTitle("This is a new title for the item");
...
// Save the item.
mapper.save(item,
    new DynamoDBMapperConfig(
        DynamoDBMapperConfig.SaveBehavior.CLOBBER));
```

# Mapping arbitrary data in DynamoDB
<a name="DynamoDBMapper.ArbitraryDataMapping"></a>

In addition to the supported Java types (see [Supported data types for DynamoDBMapper for Java](DynamoDBMapper.DataTypes.md)), you can use types in your application for which there is no direct mapping to the Amazon DynamoDB types. To map these types, you must provide an implementation that converts your complex type to a DynamoDB supported type and vice versa, and annotate the complex type accessor method using the `@DynamoDBTypeConverted` annotation. The converter code transforms data when objects are saved or loaded. It is also used for all operations that consume complex types. Note that when comparing data during query and scan operations, the comparisons are made against the data stored in DynamoDB.

For example, consider the following `CatalogItem` class that defines a property, `Dimension`, that is of `DimensionType`. This property stores the item dimensions as height, width, and thickness. Assume that you decide to store these item dimensions as a string (such as 8.5x11x.05) in DynamoDB. The following example provides converter code that converts the `DimensionType` object to a string and a string to the `DimensionType`.



**Note**  
This code example assumes that you have already loaded data into DynamoDB for your account by following the instructions in the [Creating tables and loading data for code examples in DynamoDB](SampleData.md) section.  
For step-by-step instructions to run the following example, see [Java code examples](CodeSamples.Java.md).

**Example**  

```
public class DynamoDBMapperExample {

    static AmazonDynamoDB client;

    public static void main(String[] args) throws IOException {

        // Set the AWS region you want to access.
        Regions usWest2 = Regions.US_WEST_2;
        client = AmazonDynamoDBClientBuilder.standard().withRegion(usWest2).build();

        DimensionType dimType = new DimensionType();
        dimType.setHeight("8.00");
        dimType.setLength("11.0");
        dimType.setThickness("1.0");

        Book book = new Book();
        book.setId(502);
        book.setTitle("Book 502");
        book.setISBN("555-5555555555");
        book.setBookAuthors(new HashSet<String>(Arrays.asList("Author1", "Author2")));
        book.setDimensions(dimType);

        DynamoDBMapper mapper = new DynamoDBMapper(client);
        mapper.save(book);

        Book bookRetrieved = mapper.load(Book.class, 502);
        System.out.println("Book info: " + "\n" + bookRetrieved);

        bookRetrieved.getDimensions().setHeight("9.0");
        bookRetrieved.getDimensions().setLength("12.0");
        bookRetrieved.getDimensions().setThickness("2.0");

        mapper.save(bookRetrieved);

        bookRetrieved = mapper.load(Book.class, 502);
        System.out.println("Updated book info: " + "\n" + bookRetrieved);
    }

    @DynamoDBTable(tableName = "ProductCatalog")
    public static class Book {
        private int id;
        private String title;
        private String ISBN;
        private Set<String> bookAuthors;
        private DimensionType dimensionType;

        // Partition key
        @DynamoDBHashKey(attributeName = "Id")
        public int getId() {
            return id;
        }

        public void setId(int id) {
            this.id = id;
        }

        @DynamoDBAttribute(attributeName = "Title")
        public String getTitle() {
            return title;
        }

        public void setTitle(String title) {
            this.title = title;
        }

        @DynamoDBAttribute(attributeName = "ISBN")
        public String getISBN() {
            return ISBN;
        }

        public void setISBN(String ISBN) {
            this.ISBN = ISBN;
        }

        @DynamoDBAttribute(attributeName = "Authors")
        public Set<String> getBookAuthors() {
            return bookAuthors;
        }

        public void setBookAuthors(Set<String> bookAuthors) {
            this.bookAuthors = bookAuthors;
        }

        @DynamoDBTypeConverted(converter = DimensionTypeConverter.class)
        @DynamoDBAttribute(attributeName = "Dimensions")
        public DimensionType getDimensions() {
            return dimensionType;
        }

        @DynamoDBAttribute(attributeName = "Dimensions")
        public void setDimensions(DimensionType dimensionType) {
            this.dimensionType = dimensionType;
        }

        @Override
        public String toString() {
            return "Book [ISBN=" + ISBN + ", bookAuthors=" + bookAuthors + ", dimensionType= "
                    + dimensionType.getHeight() + " X " + dimensionType.getLength() + " X "
                    + dimensionType.getThickness()
                    + ", Id=" + id + ", Title=" + title + "]";
        }
    }

    static public class DimensionType {

        private String length;
        private String height;
        private String thickness;

        public String getLength() {
            return length;
        }

        public void setLength(String length) {
            this.length = length;
        }

        public String getHeight() {
            return height;
        }

        public void setHeight(String height) {
            this.height = height;
        }

        public String getThickness() {
            return thickness;
        }

        public void setThickness(String thickness) {
            this.thickness = thickness;
        }
    }

    // Converts the complex type DimensionType to a string and vice-versa.
    static public class DimensionTypeConverter implements DynamoDBTypeConverter<String, DimensionType> {

        @Override
        public String convert(DimensionType object) {
            DimensionType itemDimensions = object;
            String dimension = null;
            try {
                if (itemDimensions != null) {
                    dimension = String.format("%s x %s x %s", itemDimensions.getLength(), itemDimensions.getHeight(),
                            itemDimensions.getThickness());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            return dimension;
        }

        @Override
        public DimensionType unconvert(String s) {

            DimensionType itemDimension = new DimensionType();
            try {
                if (s != null && s.length() != 0) {
                    String[] data = s.split("x");
                    itemDimension.setLength(data[0].trim());
                    itemDimension.setHeight(data[1].trim());
                    itemDimension.setThickness(data[2].trim());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }

            return itemDimension;
        }
    }
}
```
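
The converter logic above can be exercised on its own, without a DynamoDB connection. The following standalone sketch reproduces just the `convert`/`unconvert` round trip (the helper names are illustrative, not SDK types):

```java
// Standalone round-trip check for the dimension converter logic shown above.
// No SDK dependency; it mirrors only the string formatting and parsing.
public class DimensionConverterSketch {

    // Mirrors convert(): joins the three dimension strings with " x ".
    static String convert(String length, String height, String thickness) {
        return String.format("%s x %s x %s", length, height, thickness);
    }

    // Mirrors unconvert(): splits on "x" and trims each component.
    static String[] unconvert(String s) {
        String[] data = s.split("x");
        return new String[] { data[0].trim(), data[1].trim(), data[2].trim() };
    }

    public static void main(String[] args) {
        String stored = convert("11.0", "8.00", "1.0");
        System.out.println(stored); // 11.0 x 8.00 x 1.0

        String[] parts = unconvert(stored);
        System.out.println(parts[0] + "/" + parts[1] + "/" + parts[2]); // 11.0/8.00/1.0
    }
}
```

Note that because the delimiter is a bare `x`, dimension values themselves must not contain the letter `x`; a production converter would want a more robust format.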

# DynamoDBMapper examples
<a name="DynamoDBMapper.Examples"></a>

The AWS SDK for Java provides a `DynamoDBMapper` class, allowing you to map your client-side classes to DynamoDB tables. To use `DynamoDBMapper`, you define the relationship between items in a DynamoDB table and their corresponding object instances in your code. The `DynamoDBMapper` class enables you to perform various create, read, update, and delete (CRUD) operations on items, and run queries and scans against tables.

To learn more about how to use `DynamoDBMapper`, see [DynamoDB Examples Using the AWS SDK for Java](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-dynamodb.html) in the *AWS SDK for Java 1.x Developer Guide*. 

# Java 2.x: DynamoDB Enhanced Client
<a name="DynamoDBEnhanced"></a>

The DynamoDB enhanced client is a high-level library that is part of the AWS SDK for Java version 2 (v2). It offers a straightforward way to map client-side classes to DynamoDB tables. You define the relationships between tables and their corresponding model classes in your code. After you define those relationships, you can intuitively perform various create, read, update, or delete (CRUD) operations on tables or items in DynamoDB.

For more information on how you can use the enhanced client with DynamoDB, see [Using the DynamoDB Enhanced Client in the AWS SDK for Java 2.x](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/dynamodb-enhanced-client.html). 

# Working with the .NET document model in DynamoDB
<a name="DotNetSDKMidLevel"></a>

The AWS SDK for .NET provides document model classes that wrap some of the low-level Amazon DynamoDB operations, further simplifying your coding. In the document model, the primary classes are `Table` and `Document`. The `Table` class provides data operation methods such as `PutItem`, `GetItem`, and `DeleteItem`. It also provides the `Query` and the `Scan` methods. The `Document` class represents a single item in a table.

The preceding document model classes are available in the `Amazon.DynamoDBv2.DocumentModel` namespace.

**Note**  
You can't use the document model classes to create, update, and delete tables. However, the document model does support most common data operations.

**Topics**
+ [Supported data types](#MidLevelAPILimitations.SupportedTypes)

## Supported data types
<a name="MidLevelAPILimitations.SupportedTypes"></a>

The document model supports a set of primitive .NET data types and collection data types. The model supports the following primitive data types. 
+ `bool`
+ `byte` 
+ `char`
+ `DateTime`
+ `decimal`
+ `double`
+ `float`
+ `Guid`
+ `Int16`
+ `Int32`
+ `Int64`
+ `SByte`
+ `string`
+ `UInt16`
+ `UInt32`
+ `UInt64`

The following table summarizes the mapping of the preceding .NET types to the DynamoDB types.



| .NET primitive type | DynamoDB type | 
| --- | --- | 
|  All number types  |  `N` (number type)  | 
|  All string types  |  `S` (string type)   | 
|  MemoryStream, byte[]  |  `B` (binary type)   | 
| `bool` | `N` (number type). 0 represents false and 1 represents true. | 
| `DateTime` | `S` (string type). `DateTime` values are stored as ISO-8601 formatted strings. | 
| `Guid` | `S` (string type). | 
| Collection types (`List`, `HashSet`, and array) | `BS` (binary set) type, `SS` (string set) type, and `NS` (number set) type | 
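
As the table notes, `DateTime` values are persisted as ISO-8601 formatted strings. Purely to illustrate that string form (the sketch below is generic Java, not AWS SDK for .NET serialization code):

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

// Illustration of the ISO-8601 string form used for stored timestamps.
// The timestamp value here is arbitrary.
public class Iso8601Sketch {
    public static void main(String[] args) {
        Instant ts = Instant.parse("2015-12-21T17:42:34Z");
        String stored = DateTimeFormatter.ISO_INSTANT.format(ts);
        System.out.println(stored); // 2015-12-21T17:42:34Z
    }
}
```

Because ISO-8601 strings sort lexicographically in chronological order, this representation also works naturally as a DynamoDB sort key.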

The AWS SDK for .NET defines types for mapping DynamoDB's Boolean, null, list, and map types to the .NET document model API:
+ Use `DynamoDBBool` for the Boolean type.
+ Use `DynamoDBNull` for the null type.
+ Use `DynamoDBList` for the list type.
+ Use `Document` for the map type.

**Note**  
Empty binary values are supported.
Reading empty string values is supported. When writing to DynamoDB, empty string attribute values are supported within attribute values of the string Set type. Empty string attribute values of the string type, and empty string values contained within List or Map types, are dropped from write requests.

# Working with the .NET object persistence model and DynamoDB
<a name="DotNetSDKHighLevel"></a>

The AWS SDK for .NET provides an object persistence model that enables you to map your client-side classes to Amazon DynamoDB tables. Each object instance then maps to an item in the corresponding tables. To save your client-side objects to the tables, the object persistence model provides the `DynamoDBContext` class, an entry point to DynamoDB. This class provides you with a connection to DynamoDB and enables you to access tables, perform various CRUD operations, and run queries.

The object persistence model provides a set of attributes to map client-side classes to tables, and properties/fields to table attributes.

**Note**  
The object persistence model does not provide an API to create, update, or delete tables. It provides only data operations. You can use only the AWS SDK for .NET low-level API to create, update, and delete tables.

The following example shows how the object persistence model works. It starts with the `ProductCatalog` table. It has `Id` as the primary key.

```
ProductCatalog(Id, ...)
```

Suppose that you have a `Book` class with `Title`, `ISBN`, and `Authors` properties. You can map the `Book` class to the `ProductCatalog` table by adding the attributes defined by the object persistence model, as shown in the following C# code example.

**Example**  

```
[DynamoDBTable("ProductCatalog")]
  public class Book
  {
    [DynamoDBHashKey]
    public int Id { get; set; }

    public string Title { get; set; }
    public int ISBN { get; set; }

    [DynamoDBProperty("Authors")]
    public List<string> BookAuthors { get; set; }

    [DynamoDBIgnore]
    public string CoverPage { get; set; }
  }
```

In the preceding example, the `DynamoDBTable` attribute maps the `Book` class to the `ProductCatalog` table.

The object persistence model supports both the explicit and default mapping between class properties and table attributes.
+ **Explicit mapping—**To map a property to a primary key, you must use the `DynamoDBHashKey` and `DynamoDBRangeKey` object persistence model attributes. Additionally, for the nonprimary key attributes, if a property name in your class and the corresponding table attribute to which you want to map it are not the same, you must define the mapping by explicitly adding the `DynamoDBProperty` attribute.

  In the preceding example, the `Id` property maps to the primary key with the same name, and the `BookAuthors` property maps to the `Authors` attribute in the `ProductCatalog` table.
+ **Default mapping—**By default, the object persistence model maps the class properties to the attributes with the same name in the table.

  In the preceding example, the properties `Title` and `ISBN` map to the attributes with the same name in the `ProductCatalog` table.

You don't have to map every class property. To exclude a property from mapping, add the `DynamoDBIgnore` attribute to it. When you save a `Book` instance to the table, `DynamoDBContext` does not include the `CoverPage` property. It also does not return this property when you retrieve the book instance.

You can map properties of .NET primitive types such as int and string. You also can map any arbitrary data types as long as you provide an appropriate converter to map the arbitrary data to one of the DynamoDB types. To learn about mapping arbitrary types, see [Mapping arbitrary data with DynamoDB using the AWS SDK for .NET object persistence model](DynamoDBContext.ArbitraryDataMapping.md).

The object persistence model supports optimistic locking. During an update operation, this ensures that you have the latest copy of the item you are about to update. For more information, see [Optimistic locking using DynamoDB and the AWS SDK for .NET object persistence model](DynamoDBContext.VersionSupport.md).

For more information, see the topics below.

**Topics**
+ [Supported data types](#DotNetDynamoDBContext.SupportedTypes)
+ [DynamoDB attributes from the .NET object persistence model](DeclarativeTagsList.md)
+ [DynamoDBContext class from the .NET object persistence model](DotNetDynamoDBContext.md)
+ [Optimistic locking using DynamoDB and the AWS SDK for .NET object persistence model](DynamoDBContext.VersionSupport.md)
+ [Mapping arbitrary data with DynamoDB using the AWS SDK for .NET object persistence model](DynamoDBContext.ArbitraryDataMapping.md)

## Supported data types
<a name="DotNetDynamoDBContext.SupportedTypes"></a>

The object persistence model supports a set of primitive .NET data types, collections, and arbitrary data types. The model supports the following primitive data types. 
+ `bool`
+ `byte` 
+ `char`
+ `DateTime`
+ `decimal`
+ `double`
+ `float`
+ `Int16`
+ `Int32`
+ `Int64`
+ `SByte`
+ `string`
+ `UInt16`
+ `UInt32`
+ `UInt64`

The object persistence model also supports the .NET collection types. `DynamoDBContext` is able to convert concrete collection types and simple Plain Old CLR Objects (POCOs).

The following table summarizes the mapping of the preceding .NET types to the DynamoDB types.



| .NET primitive type | DynamoDB type | 
| --- | --- | 
|  All number types  |  `N` (number type)  | 
|  All string types  |  `S` (string type)   | 
|  MemoryStream, byte[]  |  `B` (binary type)   | 
| `bool` | `N` (number type). 0 represents false and 1 represents true. | 
| Collection types | `BS` (binary set) type, `SS` (string set) type, and `NS` (number set) type | 
| `DateTime` | `S` (string type). `DateTime` values are stored as ISO-8601 formatted strings. | 

The object persistence model also supports arbitrary data types. However, you must provide converter code to map the complex types to the DynamoDB types.

**Note**  
Empty binary values are supported.
Reading empty string values is supported. When writing to DynamoDB, empty string attribute values are supported within attribute values of the string Set type. Empty string attribute values of the string type, and empty string values contained within List or Map types, are dropped from write requests.

# DynamoDB attributes from the .NET object persistence model
<a name="DeclarativeTagsList"></a>

This section describes the attributes that the object persistence model offers so that you can map your classes and properties to DynamoDB tables and attributes.

**Note**  
In the following attributes, only `DynamoDBTable` and `DynamoDBHashKey` are required.

## DynamoDBGlobalSecondaryIndexHashKey
<a name="w2aac17b9c21c23c37b7"></a>

Maps a class property to the partition key of a global secondary index. Use this attribute if you need to `Query` a global secondary index.

## DynamoDBGlobalSecondaryIndexRangeKey
<a name="w2aac17b9c21c23c37b9"></a>

Maps a class property to the sort key of a global secondary index. Use this attribute if you need to `Query` a global secondary index and want to refine your results using the index sort key.

## DynamoDBHashKey
<a name="w2aac17b9c21c23c37c11"></a>

Maps a class property to the partition key of the table's primary key. Primary key attributes cannot be collection types.

The following C# code example maps the `Book` class to the `ProductCatalog` table, and the `Id` property to the table's primary key partition key.

```
[DynamoDBTable("ProductCatalog")]
public class Book 
{
    [DynamoDBHashKey]
    public int Id { get; set; }

    // Additional properties go here.
}
```

## DynamoDBIgnore
<a name="w2aac17b9c21c23c37c13"></a>

Indicates that the associated property should be ignored. If you don't want to save a class property, add this attribute to instruct `DynamoDBContext` not to include it when saving objects to the table.

## DynamoDBLocalSecondaryIndexRangeKey
<a name="w2aac17b9c21c23c37c15"></a>

Maps a class property to the sort key of a local secondary index. Use this attribute if you need to `Query` a local secondary index and want to refine your results using the index sort key.

## DynamoDBProperty
<a name="w2aac17b9c21c23c37c17"></a>

Maps a class property to a table attribute. If the class property maps to a table attribute of the same name, you don't need to specify this attribute. However, if the names are not the same, you can use this tag to provide the mapping. In the following C# statement, the `DynamoDBProperty` maps the `BookAuthors` property to the `Authors` attribute in the table. 

```
[DynamoDBProperty("Authors")]
public List<string> BookAuthors { get; set; }
```

`DynamoDBContext` uses this mapping information to create the `Authors` attribute when saving object data to the corresponding table.

## DynamoDBRenamable
<a name="w2aac17b9c21c23c37c19"></a>

Specifies an alternative name for a class property. This is useful if you are writing a custom converter for mapping arbitrary data to a DynamoDB table where the name of a class property is different from a table attribute.

## DynamoDBRangeKey
<a name="w2aac17b9c21c23c37c21"></a>

Maps a class property to the sort key of the table's primary key. If the table has a composite primary key (partition key and sort key), you must specify both the `DynamoDBHashKey` and `DynamoDBRangeKey` attributes in your class mapping.

For example, suppose that the `Reply` table has a composite primary key made of the `ThreadId` partition key and the `Replenishment` sort key. The following C# code example maps the `Reply` class to the `Reply` table. The class definition also indicates that two of its properties map to the primary key.

```
[DynamoDBTable("Reply")]
public class Reply 
{
   [DynamoDBHashKey]
   public int ThreadId { get; set; }
   [DynamoDBRangeKey]
   public string Replenishment { get; set; }
   
   // Additional properties go here.
}
```

## DynamoDBTable
<a name="w2aac17b9c21c23c37c23"></a>

Identifies the target table in DynamoDB to which the class maps. For example, the following C# code example maps the `Developer` class to the `People` table in DynamoDB.

```
[DynamoDBTable("People")]
public class Developer { ...}
```

This attribute can be inherited or overridden.
+ The `DynamoDBTable` attribute can be inherited. In the preceding example, if you add a new class, `Lead`, that inherits from the `Developer` class, it also maps to the `People` table. Both the `Developer` and `Lead` objects are stored in the `People` table.
+ The `DynamoDBTable` attribute can also be overridden. In the following C# code example, the `Manager` class inherits from the `Developer` class. However, the explicit addition of the `DynamoDBTable` attribute maps the class to another table (`Managers`).

  ```
  [DynamoDBTable("Managers")]
  public class Manager : Developer { ...}
  ```

You can add the optional parameter, `LowerCamelCaseProperties`, to request that the first letter of each property name be lowercased when storing the objects to a table, as shown in the following C# example.

```
[DynamoDBTable("People", LowerCamelCaseProperties=true)]
public class Developer 
{
    string DeveloperName;
    ...
}
```

When saving instances of the `Developer` class, `DynamoDBContext` saves the `DeveloperName` property as `developerName`.
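
The renaming that `LowerCamelCaseProperties` performs amounts to lowercasing only the first character of each property name. The following sketch (written in Java purely to illustrate the transformation; the method name is hypothetical) shows the idea:

```java
// Sketch of the name transformation applied by LowerCamelCaseProperties:
// only the first character of the property name is lowercased.
public class LowerCamelCaseSketch {

    static String toLowerCamelCase(String propertyName) {
        if (propertyName == null || propertyName.isEmpty()) {
            return propertyName;
        }
        return Character.toLowerCase(propertyName.charAt(0)) + propertyName.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(toLowerCamelCase("DeveloperName")); // developerName
    }
}
```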

## DynamoDBVersion
<a name="w2aac17b9c21c23c37c25"></a>

Identifies a class property for storing the item version number. For more information about versioning, see [Optimistic locking using DynamoDB and the AWS SDK for .NET object persistence model](DynamoDBContext.VersionSupport.md).

# DynamoDBContext class from the .NET object persistence model
<a name="DotNetDynamoDBContext"></a>

The `DynamoDBContext` class is the entry point to the Amazon DynamoDB database. It provides a connection to DynamoDB and enables you to access your data in various tables, perform various CRUD operations, and run queries. The `DynamoDBContext` class provides the following methods.

**Topics**
+ [Create​MultiTable​BatchGet](#w2aac17b9c21c23c39b7)
+ [Create​MultiTable​BatchWrite](#w2aac17b9c21c23c39b9)
+ [CreateBatchGet](#w2aac17b9c21c23c39c11)
+ [CreateBatchWrite](#w2aac17b9c21c23c39c13)
+ [Delete](#w2aac17b9c21c23c39c15)
+ [Dispose](#w2aac17b9c21c23c39c17)
+ [Execute​Batch​Get](#w2aac17b9c21c23c39c19)
+ [Execute​Batch​Write](#w2aac17b9c21c23c39c21)
+ [FromDocument](#w2aac17b9c21c23c39c23)
+ [FromQuery](#w2aac17b9c21c23c39c25)
+ [FromScan](#w2aac17b9c21c23c39c27)
+ [Get​Target​Table](#w2aac17b9c21c23c39c29)
+ [Load](#w2aac17b9c21c23c39c31)
+ [Query](#w2aac17b9c21c23c39c33)
+ [Save](#w2aac17b9c21c23c39c35)
+ [Scan](#w2aac17b9c21c23c39c37)
+ [ToDocument](#w2aac17b9c21c23c39c39)
+ [Specifying optional parameters for DynamoDBContext](#OptionalConfigParams)

## Create​MultiTable​BatchGet
<a name="w2aac17b9c21c23c39b7"></a>

Creates a `MultiTableBatchGet` object, composed of multiple individual `BatchGet` objects. Each of these `BatchGet` objects can be used for retrieving items from a single DynamoDB table.

To retrieve the items from tables, use the `ExecuteBatchGet` method, passing the `MultiTableBatchGet` object as a parameter.

## Create​MultiTable​BatchWrite
<a name="w2aac17b9c21c23c39b9"></a>

Creates a `MultiTableBatchWrite` object, composed of multiple individual `BatchWrite` objects. Each of these `BatchWrite` objects can be used for writing or deleting items in a single DynamoDB table.

To write to tables, use the `ExecuteBatchWrite` method, passing the `MultiTableBatchWrite` object as a parameter.

## CreateBatchGet
<a name="w2aac17b9c21c23c39c11"></a>

Creates a `BatchGet` object that you can use to retrieve multiple items from a table. 

## CreateBatchWrite
<a name="w2aac17b9c21c23c39c13"></a>

Creates a `BatchWrite` object that you can use to put multiple items into a table, or to delete multiple items from a table. 

## Delete
<a name="w2aac17b9c21c23c39c15"></a>

Deletes an item from the table. The method requires the primary key of the item you want to delete. You can provide either the primary key value or a client-side object containing a primary key value as a parameter to this method.
+ If you specify a client-side object as a parameter and you have enabled optimistic locking, the delete succeeds only if the client-side and the server-side versions of the object match.
+ If you specify only the primary key value as a parameter, the delete succeeds regardless of whether you have enabled optimistic locking or not.

**Note**  
To perform this operation in the background, use the `DeleteAsync` method instead.

## Dispose
<a name="w2aac17b9c21c23c39c17"></a>

Disposes of all managed and unmanaged resources.

## Execute​Batch​Get
<a name="w2aac17b9c21c23c39c19"></a>

Reads data from one or more tables, processing all of the `BatchGet` objects in a `MultiTableBatchGet`.

**Note**  
To perform this operation in the background, use the `ExecuteBatchGetAsync` method instead.

## ExecuteBatchWrite
<a name="w2aac17b9c21c23c39c21"></a>

Writes or deletes data in one or more tables, processing all of the `BatchWrite` objects in a `MultiTableBatchWrite`.

**Note**  
To perform this operation in the background, use the `ExecuteBatchWriteAsync` method instead.

## FromDocument
<a name="w2aac17b9c21c23c39c23"></a>

Given an instance of a `Document`, the `FromDocument` method returns an instance of a client-side class.

This is helpful if you want to use the document model classes along with the object persistence model to perform any data operations. For more information about the document model classes provided by the AWS SDK for .NET, see [Working with the .NET document model in DynamoDB](DotNetSDKMidLevel.md).

Suppose that you have a `Document` object named `doc` that contains a representation of a `Forum` item. (To see how to construct this object, see the description for the `ToDocument` method later in this topic.) You can use `FromDocument` to retrieve the `Forum` item from the `Document`, as shown in the following C# code example.

**Example**  

```
Forum forum101 = context.FromDocument<Forum>(doc);
```

**Note**  
If your `Document` object implements the `IEnumerable` interface, you can use the `FromDocuments` method instead. This allows you to iterate over all of the class instances in the `Document`.

## FromQuery
<a name="w2aac17b9c21c23c39c25"></a>

Runs a `Query` operation, with the query parameters defined in a `QueryOperationConfig` object.

**Note**  
To perform this operation in the background, use the `FromQueryAsync` method instead.
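
For example, the following sketch queries the sample `Reply` table through a `QueryOperationConfig`, assuming a mapped `Reply` class; the `Filter` and `Limit` values shown are illustrative.

**Example**  

```csharp
var queryConfig = new QueryOperationConfig
{
    // The filter must include the partition key condition.
    Filter = new QueryFilter("Id", QueryOperator.Equal, "DynamoDB#DynamoDB Thread 1"),
    Limit = 10 // Page size for each underlying Query call.
};

IEnumerable<Reply> replies = context.FromQuery<Reply>(queryConfig);
```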

## FromScan
<a name="w2aac17b9c21c23c39c27"></a>

Runs a `Scan` operation, with the scan parameters defined in a `ScanOperationConfig` object.

**Note**  
To perform this operation in the background, use the `FromScanAsync` method instead.
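
For example, the following sketch scans the `ProductCatalog` table through a `ScanOperationConfig`, assuming a mapped `Book` class.

**Example**  

```csharp
var scanFilter = new ScanFilter();
scanFilter.AddCondition("ProductCategory", ScanOperator.Equal, "Book");

var scanConfig = new ScanOperationConfig
{
    Filter = scanFilter,
    Limit = 25 // Page size for each underlying Scan call.
};

IEnumerable<Book> books = context.FromScan<Book>(scanConfig);
```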

## GetTargetTable
<a name="w2aac17b9c21c23c39c29"></a>

Retrieves the target table for the specified type. This is useful if you are writing a custom converter for mapping arbitrary data to a DynamoDB table, and you need to determine which table is associated with a custom data type.
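
For example, the following sketch retrieves the document model `Table` that a mapped `Book` class targets. The returned table name reflects the `DynamoDBTable` attribute and any `TableNamePrefix` configured on the context.

**Example**  

```csharp
Table booksTable = context.GetTargetTable<Book>();
Console.WriteLine(booksTable.TableName); // For example, "ProductCatalog".
```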

## Load
<a name="w2aac17b9c21c23c39c31"></a>

Retrieves an item from a table. The method requires only the primary key of the item you want to retrieve. 

By default, DynamoDB returns the item with values that are eventually consistent. For information about the eventual consistency model, see [DynamoDB read consistency](HowItWorks.ReadConsistency.md).

The `Load` and `LoadAsync` methods call the [GetItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_GetItem.html) operation, which requires you to specify the table's primary key. Because `GetItem` ignores the `IndexName` parameter, you can't load an item using an index's partition or sort key. You must use the table's primary key to load an item.

**Note**  
To perform this operation in the background, use the `LoadAsync` method instead. The following example uses `LoadAsync` (along with the other asynchronous methods) to perform high-level CRUD operations on a DynamoDB table.

```
    /// <summary>
    /// Shows how to perform high-level CRUD operations on an Amazon DynamoDB
    /// table.
    /// </summary>
    public class HighLevelItemCrud
    {
        public static async Task Main()
        {
            var client = new AmazonDynamoDBClient();
            DynamoDBContext context = new DynamoDBContext(client);
            await PerformCRUDOperations(context);
        }

        public static async Task PerformCRUDOperations(IDynamoDBContext context)
        {
            int bookId = 1001; // Some unique value.
            Book myBook = new Book
            {
                Id = bookId,
                Title = "object persistence-AWS SDK for.NET SDK-Book 1001",
                Isbn = "111-1111111001",
                BookAuthors = new List<string> { "Author 1", "Author 2" },
            };

            // Save the book to the ProductCatalog table.
            await context.SaveAsync(myBook);

            // Retrieve the book from the ProductCatalog table.
            Book bookRetrieved = await context.LoadAsync<Book>(bookId);

            // Update some properties.
            bookRetrieved.Isbn = "222-2222221001";

            // Update existing authors list with the following values.
            bookRetrieved.BookAuthors = new List<string> { " Author 1", "Author x" };
            await context.SaveAsync(bookRetrieved);

            // Retrieve the updated book. This time, add the optional
            // ConsistentRead parameter using DynamoDBContextConfig object.
            await context.LoadAsync<Book>(bookId, new DynamoDBContextConfig
            {
                ConsistentRead = true,
            });

            // Delete the book.
            await context.DeleteAsync<Book>(bookId);

            // Try to retrieve deleted book. It should return null.
            Book deletedBook = await context.LoadAsync<Book>(bookId, new DynamoDBContextConfig
            {
                ConsistentRead = true,
            });

            if (deletedBook == null)
            {
                Console.WriteLine("Book is deleted");
            }
        }
    }
```

## Query
<a name="w2aac17b9c21c23c39c33"></a>

Queries a table based on query parameters you provide.

You can query a table only if it has a composite primary key (partition key and sort key). When querying, you must specify a partition key and a condition that applies to the sort key.

Suppose that you have a client-side `Reply` class mapped to the `Reply` table in DynamoDB. The following C# code example queries the `Reply` table to find forum thread replies posted in the past two weeks. The `Reply` table's primary key is composed of the `Id` partition key and the `ReplyDateTime` sort key.

**Example**  

```
DynamoDBContext context = new DynamoDBContext(client);

string replyId = "DynamoDB#DynamoDB Thread 1"; //Partition key
DateTime twoWeeksAgoDate = DateTime.UtcNow.Subtract(new TimeSpan(14, 0, 0, 0)); // Date to compare.
IEnumerable<Reply> latestReplies = context.Query<Reply>(replyId, QueryOperator.GreaterThan, twoWeeksAgoDate);
```

This returns a collection of `Reply` objects. 

The `Query` method returns a "lazy-loaded" `IEnumerable` collection. It initially returns only one page of results, and then makes a service call for the next page if needed. To obtain all the matching items, you need to iterate only over the `IEnumerable`.
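
Because the collection is lazy-loaded, a plain `foreach` loop is enough to walk every matching item; additional pages are fetched transparently as the enumeration advances.

**Example**  

```csharp
foreach (Reply reply in latestReplies)
{
    // A new Query request is sent behind the scenes whenever
    // the current page of results is exhausted.
    Console.WriteLine("{0}: {1}", reply.Id, reply.ReplyDateTime);
}
```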

If your table has a simple primary key (partition key), you can't use the `Query` method. Instead, you can use the `Load` method and provide the partition key to retrieve the item.

**Note**  
To perform this operation in the background, use the `QueryAsync` method instead.

## Save
<a name="w2aac17b9c21c23c39c35"></a>

Saves the specified object to the table. If the primary key specified in the input object doesn't exist in the table, the method adds a new item to the table. If the primary key exists, the method updates the existing item.

If you have optimistic locking configured, the update succeeds only if the client and the server-side versions of the item match. For more information, see [Optimistic locking using DynamoDB and the AWS SDK for .NET object persistence model](DynamoDBContext.VersionSupport.md).

**Note**  
To perform this operation in the background, use the `SaveAsync` method instead.
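
A minimal sketch, assuming the `Book` class shown later in this topic:

**Example**  

```csharp
Book book = new Book
{
    Id = 3001,
    Title = "Sample title",
    ISBN = "333-3333333333"
};

// Adds the item, or replaces it if an item with Id 3001 already exists.
context.Save(book);
```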

## Scan
<a name="w2aac17b9c21c23c39c37"></a>

Performs an entire table scan. 

You can filter scan results by specifying a scan condition. The condition can be evaluated against any attribute in the table. Suppose that you have a client-side class `Book` mapped to the `ProductCatalog` table in DynamoDB. The following C# example scans the table and returns only the book items priced less than 0.

**Example**  

```
decimal price = 0; // Threshold used to find incorrectly priced items.
IEnumerable<Book> itemsWithWrongPrice = context.Scan<Book>(
    new ScanCondition("Price", ScanOperator.LessThan, price),
    new ScanCondition("ProductCategory", ScanOperator.Equal, "Book")
);
```

The `Scan` method returns a "lazy-loaded" `IEnumerable` collection. It initially returns only one page of results, and then makes a service call for the next page if needed. To obtain all the matching items, you only need to iterate over the `IEnumerable`.

For performance reasons, you should use `Query` operations instead of full table scans whenever possible.

**Note**  
To perform this operation in the background, use the `ScanAsync` method instead.

## ToDocument
<a name="w2aac17b9c21c23c39c39"></a>

Returns an instance of the `Document` document model class from your class instance. 

This is helpful if you want to use the document model classes along with the object persistence model to perform any data operations. For more information about the document model classes provided by the AWS SDK for .NET, see [Working with the .NET document model in DynamoDB](DotNetSDKMidLevel.md). 

Suppose that you have a client-side class mapped to the sample `Forum` table. You can then use a `DynamoDBContext` to get an item as a `Document` object from the `Forum` table, as shown in the following C# code example.

**Example**  

```
DynamoDBContext context = new DynamoDBContext(client);

Forum forum101 = context.Load<Forum>(101); // Retrieve a forum by primary key.
Document doc = context.ToDocument<Forum>(forum101);
```

## Specifying optional parameters for DynamoDBContext
<a name="OptionalConfigParams"></a>

When using the object persistence model, you can specify the following optional parameters for the `DynamoDBContext`.
+ **`ConsistentRead`—**When retrieving data using the `Load`, `Query`, or `Scan` operations, you can add this optional parameter to request the latest values for the data.
+ **`IgnoreNullValues`—**This parameter informs `DynamoDBContext` to ignore null values on attributes during a `Save` operation. If this parameter is false (or if it is not set), then a null value is interpreted as a directive to delete the specific attribute. 
+ **`SkipVersionCheck`—** This parameter informs `DynamoDBContext` not to compare versions when saving or deleting an item. For more information about versioning, see [Optimistic locking using DynamoDB and the AWS SDK for .NET object persistence model](DynamoDBContext.VersionSupport.md).
+ **`TableNamePrefix`—** Prefixes all table names with a specific string. If this parameter is null (or if it is not set), then no prefix is used.
+ **`DynamoDBEntryConversion`—**Specifies the conversion schema that is used by the client. You can set this parameter to version V1 or V2. V1 is the default version.

  Based on the version that you set, the behavior of this parameter changes. For example:
  + In V1, the `bool` data type is converted to the `N` number type, where 0 represents false and 1 represents true. In V2, `bool` is converted to `BOOL`.
  + In V2, lists and arrays aren’t grouped together with HashSets. Lists and arrays of numerics, string-based types, and binary-based types are converted to the `L` (List) type, which can be sent empty to update a list. This is unlike V1, where an empty list isn't sent over the wire.

    In V1, collection types, such as List, HashSet, and arrays are treated the same. List, HashSet, and array of numerics is converted to the `NS` (number set) type. 

  The following example sets the conversion schema version to V2, which changes the conversion behavior between .NET types and DynamoDB data types.

  ```
  var config = new DynamoDBContextConfig
  {
      Conversion = DynamoDBEntryConversion.V2
  };
  var contextV2 = new DynamoDBContext(client, config);
  ```

The following C# example creates a new `DynamoDBContext` by specifying two of the preceding optional parameters, `ConsistentRead` and `SkipVersionCheck`.

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
...
DynamoDBContext context =
       new DynamoDBContext(client, new DynamoDBContextConfig { ConsistentRead = true, SkipVersionCheck = true});
```

`DynamoDBContext` includes these optional parameters with each request that you send using this context. 

Instead of setting these parameters at the `DynamoDBContext` level, you can specify them for individual operations you run using `DynamoDBContext`, as shown in the following C# code example. The example loads a specific book item. The `Load` method of `DynamoDBContext` specifies the `ConsistentRead` and `SkipVersionCheck` optional parameters.

**Example**  

```
AmazonDynamoDBClient client = new AmazonDynamoDBClient();
...
DynamoDBContext context = new DynamoDBContext(client);
Book bookItem = context.Load<Book>(productId, new DynamoDBContextConfig { ConsistentRead = true, SkipVersionCheck = true });
```

In this case, `DynamoDBContext` includes these parameters only when sending the `Get` request.

# Optimistic locking using DynamoDB and the AWS SDK for .NET object persistence model
<a name="DynamoDBContext.VersionSupport"></a>

Optimistic locking support in the object persistence model ensures that the item version for your application is the same as the item version on the server side before updating or deleting the item. Suppose that you retrieve an item for update. However, before you send your updates back, some other application updates the same item. Now your application has a stale copy of the item. Without optimistic locking, any update you perform will overwrite the update made by the other application. 

The optimistic locking feature of the object persistence model provides the `DynamoDBVersion` tag that you can use to enable optimistic locking. To use this feature, you add a property to your class for storing the version number. You add the `DynamoDBVersion` attribute to the property. When you first save the object, the `DynamoDBContext` assigns a version number and increments this value each time you update the item. 

Your update or delete request succeeds only if the client-side object version matches the corresponding version number of the item on the server side. If your application has a stale copy, it must get the latest version from the server before it can update or delete that item.

The following C# code example defines a `Book` class with object persistence attributes that map it to the `ProductCatalog` table. The `VersionNumber` property, decorated with the `DynamoDBVersion` attribute, stores the version number.

**Example**  

```
[DynamoDBTable("ProductCatalog")]
public class Book
{
    [DynamoDBHashKey] // Partition key
    public int Id { get; set; }
    [DynamoDBProperty]
    public string Title { get; set; }
    [DynamoDBProperty]
    public string ISBN { get; set; }
    [DynamoDBProperty("Authors")]
    public List<string> BookAuthors { get; set; }
    [DynamoDBVersion]
    public int? VersionNumber { get; set; }
}
```

**Note**  
You can apply the `DynamoDBVersion` attribute only to a nullable numeric primitive type (such as `int?`). 

Optimistic locking has the following impact on `DynamoDBContext` operations:
+ For a new item, `DynamoDBContext` assigns initial version number 0. If you retrieve an existing item, update one or more of its properties, and try to save the changes, the save operation succeeds only if the version number on the client side and the server side match. `DynamoDBContext` increments the version number. You don't need to set the version number.
+ The `Delete` method provides overloads that can take either a primary key value or an object as a parameter, as shown in the following C# code example.  
**Example**  

  ```
  DynamoDBContext context = new DynamoDBContext(client);
  ...
  // Load a book.
  Book book = context.Load<Book>(111);
  // Do other operations.
  // Delete 1 - Pass in the book object.
  context.Delete<Book>(book);

  // Delete 2 - Pass in the Id (primary key).
  context.Delete<Book>(222);
  ```

  If you provide an object as the parameter, the delete succeeds only if the object version matches the corresponding server-side item version. However, if you provide a primary key value as the parameter, `DynamoDBContext` is unaware of any version numbers, and it deletes the item without making the version check. 

  Note that the internal implementation of optimistic locking in the object persistence model code uses the conditional update and the conditional delete API actions in DynamoDB.
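
Because of this, a stale client-side copy surfaces as a `ConditionalCheckFailedException` (from the `Amazon.DynamoDBv2.Model` namespace). One way to handle it, sketched below, is to reload the latest version and reapply the change before saving again.

**Example**  

```csharp
try
{
    context.Save(book);
}
catch (ConditionalCheckFailedException)
{
    // Another writer updated the item first. Reload the current
    // version (which refreshes VersionNumber) and retry the save.
    book = context.Load<Book>(book.Id);
    book.Title = "Updated title";
    context.Save(book);
}
```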

## Disabling optimistic locking
<a name="DotNetDynamoDBContext.DisablingOptimisticLocking"></a>

To disable optimistic locking, you use the `SkipVersionCheck` configuration property. You can set this property when creating `DynamoDBContext`. In this case, optimistic locking is disabled for any requests that you make using the context. For more information, see [Specifying optional parameters for DynamoDBContext](DotNetDynamoDBContext.md#OptionalConfigParams). 

Instead of setting the property at the context level, you can disable optimistic locking for a specific operation, as shown in the following C# code example. The example uses the context to delete a book item. The `Delete` method sets the optional `SkipVersionCheck` property to true, disabling version checking.

**Example**  

```
DynamoDBContext context = new DynamoDBContext(client);
// Load a book.
Book book = context.Load<Book>(111);
...
// Delete the book.
context.Delete<Book>(book, new DynamoDBContextConfig { SkipVersionCheck = true });
```

# Mapping arbitrary data with DynamoDB using the AWS SDK for .NET object persistence model
<a name="DynamoDBContext.ArbitraryDataMapping"></a>

In addition to the supported .NET types (see [Supported data types](DotNetSDKHighLevel.md#DotNetDynamoDBContext.SupportedTypes)), you can use types in your application for which there is no direct mapping to the Amazon DynamoDB types. The object persistence model supports storing data of arbitrary types as long as you provide the converter to convert data from the arbitrary type to the DynamoDB type and vice versa. The converter code transforms data during both the saving and loading of the objects.

You can create any types on the client side. However, the data stored in the tables is one of the DynamoDB types, and during query and scan operations, any data comparisons are made against the data stored in DynamoDB.

The following C# code example defines a `Book` class with `Id`, `Title`, `ISBN`, and `Dimensions` properties. The `Dimensions` property is of the `DimensionType` type, which describes `Length`, `Height`, and `Thickness` properties. The example code provides the converter methods `ToEntry` and `FromEntry` to convert data between the `DimensionType` and the DynamoDB string types. For example, when saving a `Book` instance, the converter creates a book dimension string such as "8.5 x 11 x 0.5". When you retrieve a book, it converts the string back to a `DimensionType` instance.

The example maps the `Book` type to the `ProductCatalog` table. It saves a sample `Book` instance, retrieves it, updates its dimensions, and saves the updated `Book` again.

For step-by-step instructions for testing the following example, see [.NET code examples](CodeSamples.DotNet.md).

**Example**  

```
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Amazon.DynamoDBv2.DocumentModel;
using Amazon.Runtime;

namespace com.amazonaws.codesamples
{
    class HighLevelMappingArbitraryData
    {
        private static AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        static void Main(string[] args)
        {
            try
            {
                DynamoDBContext context = new DynamoDBContext(client);

                // 1. Create a book.
                DimensionType myBookDimensions = new DimensionType()
                {
                    Length = 8M,
                    Height = 11M,
                    Thickness = 0.5M
                };

                Book myBook = new Book
                {
                    Id = 501,
                    Title = "AWS SDK for .NET Object Persistence Model Handling Arbitrary Data",
                    ISBN = "999-9999999999",
                    BookAuthors = new List<string> { "Author 1", "Author 2" },
                    Dimensions = myBookDimensions
                };

                context.Save(myBook);

                // 2. Retrieve the book.
                Book bookRetrieved = context.Load<Book>(501);

                // 3. Update property (book dimensions).
                bookRetrieved.Dimensions.Height += 1;
                bookRetrieved.Dimensions.Length += 1;
                bookRetrieved.Dimensions.Thickness += 0.2M;
                // Update the book.
                context.Save(bookRetrieved);

                Console.WriteLine("To continue, press Enter");
                Console.ReadLine();
            }
            catch (AmazonDynamoDBException e) { Console.WriteLine(e.Message); }
            catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
            catch (Exception e) { Console.WriteLine(e.Message); }
        }
    }
    [DynamoDBTable("ProductCatalog")]
    public class Book
    {
        [DynamoDBHashKey] //Partition key
        public int Id
        {
            get; set;
        }
        [DynamoDBProperty]
        public string Title
        {
            get; set;
        }
        [DynamoDBProperty]
        public string ISBN
        {
            get; set;
        }
        // Multi-valued (set type) attribute.
        [DynamoDBProperty("Authors")]
        public List<string> BookAuthors
        {
            get; set;
        }
        // Arbitrary type, with a converter to map it to DynamoDB type.
        [DynamoDBProperty(typeof(DimensionTypeConverter))]
        public DimensionType Dimensions
        {
            get; set;
        }
    }

    public class DimensionType
    {
        public decimal Length
        {
            get; set;
        }
        public decimal Height
        {
            get; set;
        }
        public decimal Thickness
        {
            get; set;
        }
    }

    // Converts the complex type DimensionType to string and vice-versa.
    public class DimensionTypeConverter : IPropertyConverter
    {
        public DynamoDBEntry ToEntry(object value)
        {
            DimensionType bookDimensions = value as DimensionType;
            if (bookDimensions == null) throw new ArgumentOutOfRangeException();

            string data = string.Format("{1}{0}{2}{0}{3}", " x ",
                            bookDimensions.Length, bookDimensions.Height, bookDimensions.Thickness);

            DynamoDBEntry entry = new Primitive
            {
                Value = data
            };
            return entry;
        }

        public object FromEntry(DynamoDBEntry entry)
        {
            Primitive primitive = entry as Primitive;
            if (primitive == null || !(primitive.Value is String) || string.IsNullOrEmpty((string)primitive.Value))
                throw new ArgumentOutOfRangeException();

            string[] data = ((string)(primitive.Value)).Split(new string[] { " x " }, StringSplitOptions.None);
            if (data.Length != 3) throw new ArgumentOutOfRangeException();

            DimensionType complexData = new DimensionType
            {
                Length = Convert.ToDecimal(data[0]),
                Height = Convert.ToDecimal(data[1]),
                Thickness = Convert.ToDecimal(data[2])
            };
            return complexData;
        }
    }
}
```

# Running the code examples in this Developer Guide
<a name="CodeSamples"></a>

The AWS SDKs provide broad support for Amazon DynamoDB in the following languages:
+ [Java](https://aws.amazon.com/sdk-for-java)
+ [JavaScript in the browser](https://aws.amazon.com/sdk-for-browser)
+ [.NET](https://aws.amazon.com/sdk-for-net)
+ [Node.js](https://aws.amazon.com/sdk-for-node-js)
+ [PHP](https://aws.amazon.com/sdk-for-php)
+ [Python](https://aws.amazon.com/sdk-for-python)
+ [Ruby](https://aws.amazon.com/sdk-for-ruby)
+ [C++](https://aws.amazon.com/sdk-for-cpp)
+ [Go](https://aws.amazon.com/sdk-for-go)
+ [Android](https://aws.amazon.com/mobile/sdk/)
+ [iOS](https://aws.amazon.com/mobile/sdk/)

The code examples in this developer guide provide more in-depth coverage of DynamoDB operations, using the following programming languages:
+ [Java code examples](CodeSamples.Java.md)
+ [.NET code examples](CodeSamples.DotNet.md)

Before you can begin with this exercise, you need to create an AWS account, get your access key and secret key, and set up the AWS Command Line Interface (AWS CLI) on your computer. For more information, see [Setting up DynamoDB (web service)](SettingUp.DynamoWebService.md).

**Note**  
If you are using the downloadable version of DynamoDB, you need to use the AWS CLI to create the tables and sample data. You also need to specify the `--endpoint-url` parameter with each AWS CLI command. For more information, see [Setting the local endpoint](DynamoDBLocal.UsageNotes.md#DynamoDBLocal.Endpoint).

# Creating tables and loading data for code examples in DynamoDB
<a name="SampleData"></a>

The following steps cover the basics of creating a table in DynamoDB, loading a sample dataset, querying the data, and updating the data.
+ [Step 1: Create a table in DynamoDB](getting-started-step-1.md)
+ [Step 2: Write data to a DynamoDB table](getting-started-step-2.md)
+ [Step 3: Read data from a DynamoDB table](getting-started-step-3.md)
+ [Step 4: Update data in a DynamoDB table](getting-started-step-4.md)

# Java code examples
<a name="CodeSamples.Java"></a>

**Topics**
+ [Java: Setting your AWS credentials](#CodeSamples.Java.Credentials)
+ [Java: Setting the AWS Region and endpoint](#CodeSamples.Java.RegionAndEndpoint)

This Developer Guide contains Java code snippets and ready-to-run programs. You can find these code examples in the following sections:
+ [Working with items and attributes in DynamoDB](WorkingWithItems.md)
+ [Working with tables and data in DynamoDB](WorkingWithTables.md)
+ [Querying tables in DynamoDB](Query.md)
+ [Scanning tables in DynamoDB](Scan.md)
+ [Improving data access with secondary indexes in DynamoDB](SecondaryIndexes.md)
+ [Java 1.x: DynamoDBMapper](DynamoDBMapper.md)
+ [Change data capture for DynamoDB Streams](Streams.md)

You can get started quickly by using Eclipse with the [AWS Toolkit for Eclipse](https://aws.amazon.com/eclipse/). In addition to a full-featured IDE, you also get the AWS SDK for Java with automatic updates, and preconfigured templates for building AWS applications.

**To run the Java code examples (using Eclipse)**

1. Download and install the [Eclipse](http://www.eclipse.org) IDE.

1. Download and install the [AWS Toolkit for Eclipse](https://aws.amazon.com/eclipse/).

1. Start Eclipse, and on the **Eclipse** menu, choose **File**, **New**, and then **Other**.

1. In **Select a wizard**, choose **AWS**, choose **AWS Java Project**, and then choose **Next**.

1. In **Create an AWS Java**, do the following:

   1. In **Project name**, enter a name for your project.

   1. In **Select Account**, choose your credentials profile from the list.

      If this is your first time using the [AWS Toolkit for Eclipse](https://aws.amazon.com/eclipse/), choose **Configure AWS Accounts** to set up your AWS credentials.

1. Choose **Finish** to create the project.

1. From the **Eclipse** menu, choose **File**, **New**, and then **Class**.

1. In **Java Class**, enter a name for your class in **Name** (use the same name as the code example that you want to run), and then choose **Finish** to create the class.

1. Copy the code example from the documentation page into the Eclipse editor.

1. To run the code, choose **Run** on the Eclipse menu.

The SDK for Java provides thread-safe clients for working with DynamoDB. As a best practice, your applications should create one client and reuse the client between threads.

For more information, see the [AWS SDK for Java](https://aws.amazon.com/sdk-for-java).

**Note**  
The code examples in this guide are intended for use with the latest version of the AWS SDK for Java.  
If you are using the AWS Toolkit for Eclipse, you can configure automatic updates for the SDK for Java. To do this in Eclipse, go to **Preferences** and choose **AWS Toolkit**, **AWS SDK for Java**, **Download new SDKs automatically**.

## Java: Setting your AWS credentials
<a name="CodeSamples.Java.Credentials"></a>

The SDK for Java requires that you provide AWS credentials to your application at runtime. The code examples in this guide assume that you are using an AWS credentials file, as described in [Set up your AWS credentials](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/set-up-creds.html) in the *AWS SDK for Java Developer Guide*.

The following is an example of an AWS credentials file named `~/.aws/credentials`, where the tilde character (`~`) represents your home directory.

```
[default]
aws_access_key_id = AWS access key ID goes here
aws_secret_access_key = Secret key goes here
```

## Java: Setting the AWS Region and endpoint
<a name="CodeSamples.Java.RegionAndEndpoint"></a>

By default, the code examples access DynamoDB in the US West (Oregon) Region. You can change the Region by configuring the client builder with a different value.

The following code example instantiates a new `AmazonDynamoDB` client.

```
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.regions.Regions;
...
// This client will default to US West (Oregon).
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
    .withRegion(Regions.US_WEST_2)
    .build();
```

You can use the `withRegion` method to run your code against DynamoDB in any Region where it is available. For a complete list, see [AWS regions and endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#ddb_region) in the *Amazon Web Services General Reference*.

If you want to run the code examples using DynamoDB locally on your computer, set the endpoint as follows.

### AWS SDK V1
<a name="CodeSamples.Java.RegionAndEndpoint.V1"></a>

```
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
    .withEndpointConfiguration(
        new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
    .build();
```

### AWS SDK V2
<a name="CodeSamples.Java.RegionAndEndpoint.V2"></a>

```
DynamoDbClient client = DynamoDbClient.builder()
    .endpointOverride(URI.create("http://localhost:8000"))
    // The Region is meaningless for local DynamoDB, but is required by the client builder.
    .region(Region.US_EAST_1)
    .credentialsProvider(StaticCredentialsProvider.create(
        AwsBasicCredentials.create("dummy-key", "dummy-secret")))
    .build();
```

# .NET code examples
<a name="CodeSamples.DotNet"></a>

**Topics**
+ [.NET: Setting your AWS credentials](#CodeSamples.DotNet.Credentials)
+ [.NET: Setting the AWS Region and endpoint](#CodeSamples.DotNet.RegionAndEndpoint)

This guide contains .NET code snippets and ready-to-run programs. You can find these code examples in the following sections:
+ [Working with items and attributes in DynamoDB](WorkingWithItems.md)
+ [Working with tables and data in DynamoDB](WorkingWithTables.md)
+ [Querying tables in DynamoDB](Query.md)
+ [Scanning tables in DynamoDB](Scan.md)
+ [Improving data access with secondary indexes in DynamoDB](SecondaryIndexes.md)
+ [Working with the .NET document model in DynamoDB](DotNetSDKMidLevel.md)
+ [Working with the .NET object persistence model and DynamoDB](DotNetSDKHighLevel.md)
+ [Change data capture for DynamoDB Streams](Streams.md)

You can get started quickly by using the AWS SDK for .NET with the Toolkit for Visual Studio.

**To run the .NET code examples (using Visual Studio)**

1. Download and install [Microsoft Visual Studio](https://www.visualstudio.com).

1. (Optional) Download and install the [Toolkit for Visual Studio](https://aws.amazon.com/visualstudio/).

1. Set up your AWS credentials. Configure a credentials profile in your shared AWS credentials file (`~/.aws/credentials`). For more information, see [Configure AWS credentials](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-creds.html) in the *AWS SDK for .NET Developer Guide*.

1. Start Visual Studio. Choose **File**, **New**, **Project**.

1. Search for **Console App**, select the C# template that targets .NET, and then choose **Next**. Configure your project name and location, and then choose **Create**.

1. Add the AWS SDK for DynamoDB NuGet package to your project:

   1. In Solution Explorer, open the context (right-click) menu for your project, and then choose **Manage NuGet Packages**.

   1. In NuGet Package Manager, choose **Browse**.

   1. In the search box, enter **AWSSDK.DynamoDBv2**, and wait for the search to complete.

   1. Choose **AWSSDK.DynamoDBv2**, and then choose **Install**.

1. In your Visual Studio project, open `Program.cs`. Replace the contents with the code example from the documentation page that you want to run.

1. To run the code, choose **Start** in the Visual Studio toolbar.

The SDK for .NET provides thread-safe clients for working with DynamoDB. As a best practice, your applications should create one client and reuse the client between threads.

For more information, see [AWS SDK for .NET](https://aws.amazon.com/sdk-for-net).

**Note**  
The code examples in this guide are intended for use with the latest version of the AWS SDK for .NET.

## .NET: Setting your AWS credentials
<a name="CodeSamples.DotNet.Credentials"></a>

The SDK for .NET requires that you provide AWS credentials to your application at runtime. The code examples in this guide assume that you are using the SDK Store to manage your AWS credentials file, as described in [Using the SDK store](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-creds.html#sdk-store) in the *AWS SDK for .NET Developer Guide*.

The Toolkit for Visual Studio supports multiple sets of credentials from any number of accounts. Each set is referred to as a *profile*. Visual Studio adds entries to the project's `App.config` file so that your application can find the AWS credentials at runtime.

The following example shows the default `App.config` file that is generated when you create a new project using Toolkit for Visual Studio.

```
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <appSettings>
    <add key="AWSProfileName" value="default"/>
    <add key="AWSRegion" value="us-west-2" />
 </appSettings>
</configuration>
```

At runtime, the program uses the `default` set of AWS credentials, as specified by the `AWSProfileName` entry. The AWS credentials themselves are kept in the SDK Store in encrypted form. The Toolkit for Visual Studio provides a graphical user interface for managing your credentials, all from within Visual Studio. For more information, see [Specifying credentials](https://docs.aws.amazon.com/AWSToolkitVS/latest/UserGuide/tkv_setup.html#creds) in the *AWS Toolkit for Visual Studio User Guide*.

**Note**  
By default, the code examples access DynamoDB in the US West (Oregon) Region. You can change the Region by modifying the `AWSRegion` entry in the `App.config` file. You can set `AWSRegion` to any Region where DynamoDB is available. For a complete list, see [AWS regions and endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#ddb_region) in the *Amazon Web Services General Reference*.

## .NET: Setting the AWS Region and endpoint
<a name="CodeSamples.DotNet.RegionAndEndpoint"></a>

By default, the code examples access DynamoDB in the US West (Oregon) Region. You can change the Region by modifying the `AWSRegion` entry in the `App.config` file. Or, you can change the Region by modifying the `AmazonDynamoDBClient` properties.

The following code example instantiates a new `AmazonDynamoDBClient`. The client is modified so that the code runs against DynamoDB in a different Region.

```
AmazonDynamoDBConfig clientConfig = new AmazonDynamoDBConfig();
// This client will access the US East 1 region.
clientConfig.RegionEndpoint = RegionEndpoint.USEast1;
AmazonDynamoDBClient client = new AmazonDynamoDBClient(clientConfig);
```

For a complete list of Regions, see [AWS regions and endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#ddb_region) in the *Amazon Web Services General Reference*.

If you want to run the code examples using DynamoDB locally on your computer, set the endpoint as follows.

```
AmazonDynamoDBConfig clientConfig = new AmazonDynamoDBConfig();
// Set the endpoint URL
clientConfig.ServiceURL = "http://localhost:8000";
AmazonDynamoDBClient client = new AmazonDynamoDBClient(clientConfig);
```

# DynamoDB low-level API
<a name="Programming.LowLevelAPI"></a>

The Amazon DynamoDB *low-level API* is the protocol-level interface for DynamoDB. At this level, every HTTP(S) request must be correctly formatted and carry a valid digital signature.

The AWS SDKs construct low-level DynamoDB API requests on your behalf and process the responses from DynamoDB. This lets you focus on your application logic, instead of low-level details. However, you can still benefit from a basic knowledge of how the low-level DynamoDB API works.

For more information about the low-level DynamoDB API, see the [Amazon DynamoDB API Reference](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/).

**Note**  
DynamoDB Streams has its own low-level API, which is separate from that of DynamoDB and is fully supported by the AWS SDKs.  
For more information, see [Change data capture for DynamoDB Streams](Streams.md). For the low-level DynamoDB Streams API, see the [Amazon DynamoDB Streams API Reference](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Operations_Amazon_DynamoDB_Streams.html).

The low-level DynamoDB API uses JavaScript Object Notation (JSON) as a wire protocol format. JSON presents data in a hierarchy so that both data values and data structure are conveyed simultaneously. Name-value pairs are defined in the format `name:value`. The data hierarchy is defined by nested brackets of name-value pairs.

DynamoDB uses JSON only as a transport protocol, not as a storage format. The AWS SDKs use JSON to send data to DynamoDB, and DynamoDB responds with JSON. DynamoDB does not store data persistently in JSON format.

**Note**  
For more information about JSON, see [Introducing JSON](http://json.org) on the `JSON.org` website.

**Topics**
+ [Request format](#Programming.LowLevelAPI.RequestFormat)
+ [Response format](#Programming.LowLevelAPI.ResponseFormat)
+ [Data type descriptors](#Programming.LowLevelAPI.DataTypeDescriptors)
+ [Numeric data](#Programming.LowLevelAPI.Numbers)
+ [Binary data](#Programming.LowLevelAPI.Binary)

![\[DynamoDB low-level API and how AWS SDKs handle the protocol-level requests and responses.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/SDKSupport.DDBLowLevelAPI.png)


## Request format
<a name="Programming.LowLevelAPI.RequestFormat"></a>

The DynamoDB low-level API accepts HTTP(S) `POST` requests as input. The AWS SDKs construct these requests for you.

Suppose that you have a table named `Pets`, with a key schema consisting of `AnimalType` (partition key) and `Name` (sort key). Both of these attributes are of type `string`. To retrieve an item from `Pets`, the AWS SDK constructs the following request.

```
POST / HTTP/1.1
Host: dynamodb.<region>.<domain>;
Accept-Encoding: identity
Content-Length: <PayloadSizeBytes>
User-Agent: <UserAgentString>
Content-Type: application/x-amz-json-1.0
Authorization: AWS4-HMAC-SHA256 Credential=<Credential>, SignedHeaders=<Headers>, Signature=<Signature>
X-Amz-Date: <Date> 
X-Amz-Target: DynamoDB_20120810.GetItem

{
    "TableName": "Pets",
    "Key": {
        "AnimalType": {"S": "Dog"},
        "Name": {"S": "Fido"}
    }
}
```

Note the following about this request:
+ The `Authorization` header contains information required for DynamoDB to authenticate the request. For more information, see [Signing AWS API requests](https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) and [Signature Version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *Amazon Web Services General Reference*.
+ The `X-Amz-Target` header contains the name of a DynamoDB operation: `GetItem`. (This is also accompanied by the low-level API version, in this case `20120810`.)
+ The payload (body) of the request contains the parameters for the operation, in JSON format. For the `GetItem` operation, the parameters are `TableName` and `Key`.
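As a rough illustration, the payload and the operation-identifying headers above can be assembled with nothing beyond standard JSON serialization. This sketch omits request signing; in practice the AWS SDKs compute the `Authorization` header for you:

```python
import json

# Sketch of the GetItem wire request from the example above.
# A real request also needs a Signature Version 4 Authorization header,
# which the AWS SDKs generate on your behalf.
payload = {
    "TableName": "Pets",
    "Key": {
        "AnimalType": {"S": "Dog"},
        "Name": {"S": "Fido"},
    },
}
body = json.dumps(payload)
headers = {
    "Content-Type": "application/x-amz-json-1.0",
    "X-Amz-Target": "DynamoDB_20120810.GetItem",  # operation + API version
    "Content-Length": str(len(body)),
}
```

Note how the operation name and low-level API version travel in the `X-Amz-Target` header while the operation parameters travel in the JSON body.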

## Response format
<a name="Programming.LowLevelAPI.ResponseFormat"></a>

Upon receipt of the request, DynamoDB processes it and returns a response. For the request shown previously, the HTTP(S) response payload contains the results from the operation, as shown in the following example.

```
HTTP/1.1 200 OK
x-amzn-RequestId: <RequestId>
x-amz-crc32: <Checksum>
Content-Type: application/x-amz-json-1.0
Content-Length: <PayloadSizeBytes>
Date: <Date>
{
    "Item": {
        "Age": {"N": "8"},
        "Colors": {
            "L": [
                {"S": "White"},
                {"S": "Brown"},
                {"S": "Black"}
            ]
        },
        "Name": {"S": "Fido"},
        "Vaccinations": {
            "M": {
                "Rabies": {
                    "L": [
                        {"S": "2009-03-17"},
                        {"S": "2011-09-21"},
                        {"S": "2014-07-08"}
                    ]
                },
                "Distemper": {"S": "2015-10-13"}
            }
        },
        "Breed": {"S": "Beagle"},
        "AnimalType": {"S": "Dog"}
    }
}
```

At this point, the AWS SDK returns the response data to your application for further processing.
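Because the response body is plain JSON, pulling attribute values out of it is straightforward. The following sketch parses a truncated copy of the `Item` from the response above; the AWS SDKs do this parsing for you:

```python
import json

# A truncated copy of the GetItem response payload shown above.
response_body = '''
{
    "Item": {
        "Age": {"N": "8"},
        "Name": {"S": "Fido"},
        "Breed": {"S": "Beagle"},
        "AnimalType": {"S": "Dog"}
    }
}
'''

item = json.loads(response_body)["Item"]
name = item["Name"]["S"]
age = int(item["Age"]["N"])  # numeric values arrive as strings
```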

**Note**  
If DynamoDB can't process a request, it returns an HTTP error code and message. The AWS SDK propagates these to your application in the form of exceptions. For more information, see [Error handling with DynamoDB](Programming.Errors.md).

## Data type descriptors
<a name="Programming.LowLevelAPI.DataTypeDescriptors"></a>

The low-level DynamoDB API protocol requires each attribute to be accompanied by a data type descriptor. *Data type descriptors* are tokens that tell DynamoDB how to interpret each attribute.

The examples in [Request format](#Programming.LowLevelAPI.RequestFormat) and [Response format](#Programming.LowLevelAPI.ResponseFormat) show examples of how data type descriptors are used. The `GetItem` request specifies `S` for the `Pets` key schema attributes (`AnimalType` and `Name`), which are of type `string`. The `GetItem` response contains a *Pets* item with attributes of type `string` (`S`), `number` (`N`), `map` (`M`), and `list` (`L`).

The following is a complete list of DynamoDB data type descriptors:
+ **`S`** – String
+ **`N`** – Number
+ **`B`** – Binary
+ **`BOOL`** – Boolean
+ **`NULL`** – Null
+ **`M`** – Map
+ **`L`** – List
+ **`SS`** – String Set
+ **`NS`** – Number Set
+ **`BS`** – Binary Set

The following table shows the correct JSON format for each data type descriptor. Note that numbers are represented as strings to preserve precision, while booleans and nulls use their native JSON types.


| Descriptor | JSON format | Notes | 
| --- | --- | --- | 
| S | `{"S": "Hello"}` | Value is a JSON string. | 
| N | `{"N": "123.45"}` | Value is a string, not a JSON number. This preserves precision across languages. | 
| B | `{"B": "dGhpcyBpcyBhIHRlc3Q="}` | Value is a base64-encoded string. | 
| BOOL | `{"BOOL": true}` | Value is a JSON boolean (true or false), not a string. | 
| NULL | `{"NULL": true}` | Value is the JSON boolean true to indicate null. | 
| M | `{"M": {"Name": {"S": "Joe"}}}` | Value is a JSON object of attribute name–value pairs. | 
| L | `{"L": [{"S": "Red"}, {"N": "5"}]}` | Value is a JSON array of attribute values. | 
| SS | `{"SS": ["Red", "Blue"]}` | Value is a JSON array of strings. | 
| NS | `{"NS": ["1", "2.5"]}` | Value is a JSON array of number strings. | 
| BS | `{"BS": ["U3Vubnk=", "UmFpbnk="]}` | Value is a JSON array of base64-encoded strings. | 

**Note**  
 For detailed descriptions of DynamoDB data types, see [Data types](HowItWorks.NamingRulesDataTypes.md#HowItWorks.DataTypes).
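To make the mapping concrete, here is a small hypothetical helper (not part of any SDK, and omitting the set types for brevity) that converts native Python values into the attribute-value format listed in the table above; the AWS SDKs perform an equivalent conversion internally:

```python
import base64

def to_attribute_value(value):
    """Convert a Python value to a DynamoDB JSON attribute value (illustrative only)."""
    if isinstance(value, bool):            # test bool before int: bool subclasses int
        return {"BOOL": value}
    if value is None:
        return {"NULL": True}
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, (int, float)):    # numbers travel as strings
        return {"N": str(value)}
    if isinstance(value, bytes):           # binary travels as base64 text
        return {"B": base64.b64encode(value).decode("ascii")}
    if isinstance(value, list):
        return {"L": [to_attribute_value(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_attribute_value(v) for k, v in value.items()}}
    raise TypeError(f"Unsupported type: {type(value).__name__}")
```

For example, `to_attribute_value(["Red", 5])` yields `{"L": [{"S": "Red"}, {"N": "5"}]}`, matching the `L` row in the table.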

## Numeric data
<a name="Programming.LowLevelAPI.Numbers"></a>

Different programming languages offer different levels of support for JSON. In some cases, you might decide to use a third-party library for validating and parsing JSON documents.

Some third-party libraries build upon the JSON number type, providing their own types such as `int`, `long`, or `double`. However, the native number data type in DynamoDB does not map exactly to these other data types, so these type distinctions can cause conflicts. In addition, many JSON libraries do not handle fixed-precision numeric values, and they automatically infer a double data type for digit sequences that contain a decimal point.

To solve these problems, DynamoDB provides a single numeric type with no data loss. To avoid unwanted implicit conversions to a double value, DynamoDB uses strings for the data transfer of numeric values. This approach provides flexibility for updating attribute values while maintaining proper sorting semantics, such as putting the values "01", "2", and "03" in the proper sequence.

If number precision is important to your application, you should convert numeric values to strings before you pass them to DynamoDB.
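A quick way to see why strings are used for transfer: round-tripping a high-precision value through a native float loses digits, while a string representation (or Python's `Decimal`, which Boto3 uses for DynamoDB numbers) does not. This sketch uses only the standard library:

```python
from decimal import Decimal

high_precision = "3.141592653589793238462643"

# A float silently rounds to roughly 17 significant digits...
assert str(float(high_precision)) != high_precision

# ...while Decimal (like a plain string) preserves every digit.
assert str(Decimal(high_precision)) == high_precision
```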

## Binary data
<a name="Programming.LowLevelAPI.Binary"></a>

DynamoDB supports binary attributes. However, JSON does not natively support encoding binary data. To send binary data in a request, you will need to encode it in base64 format. Upon receiving the request, DynamoDB decodes the base64 data back to binary. 

The base64 encoding scheme used by DynamoDB is described at [RFC 4648](http://tools.ietf.org/html/rfc4648) on the Internet Engineering Task Force (IETF) website.
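For example, the value used in the `B` row of the data type descriptor table decodes back to its original bytes with the standard library's `base64` module:

```python
import base64

original = b"this is a test"

# Encode for the JSON wire format...
encoded = base64.b64encode(original).decode("ascii")

# ...and decode on the way back, as DynamoDB does on receipt.
decoded = base64.b64decode(encoded)
```

Here `encoded` is `dGhpcyBpcyBhIHRlc3Q=`, the same value shown in the earlier table.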

# Programming Amazon DynamoDB with Python and Boto3
<a name="programming-with-python"></a>

This guide provides an orientation to programmers wanting to use Amazon DynamoDB with Python. Learn about the different abstraction layers, configuration management, error handling, controlling retry policies, managing keep-alive, and more.

**Topics**
+ [About Boto](#programming-with-python-about)
+ [Using the Boto documentation](#programming-with-python-documentation)
+ [Understanding the client and resource abstraction layers](#programming-with-python-client-resource)
+ [Using the table resource batch_writer](#programming-with-python-batch-writer)
+ [Additional code examples that explore the client and resource layers](#programming-with-python-additional-code)
+ [Understanding how the Client and Resource objects interact with sessions and threads](#programming-with-python-sessions-thread-safety)
+ [Customizing the Config object](#programming-with-python-config)
+ [Error handling](#programming-with-python-error-handling)
+ [Logging](#programming-with-python-logging)
+ [Event hooks](#programming-with-python-event-hooks)
+ [Pagination and the Paginator](#programming-with-python-pagination)
+ [Waiters](#programming-with-python-waiters)

## About Boto
<a name="programming-with-python-about"></a>

You can access DynamoDB from Python by using the official AWS SDK for Python, commonly referred to as **Boto3**. The name Boto (pronounced boh-toh) comes from a freshwater dolphin native to the Amazon River. Boto3 is the third major version of the library, first released in 2015. The Boto3 library is quite large, as it supports all AWS services, not just DynamoDB. This orientation targets only the parts of Boto3 relevant to DynamoDB.

Boto is maintained and published by AWS as an open-source project hosted on GitHub. It’s split into two packages: [Botocore](https://github.com/boto/botocore) and [Boto3](https://github.com/boto/boto3).
+ **Botocore** provides the low-level functionality. In Botocore you’ll find the client, session, credentials, config, and exception classes. 
+ **Boto3** builds on top of Botocore. It offers a higher-level, more Pythonic interface. Specifically, it exposes a DynamoDB table as a Resource and offers a simpler, more elegant interface compared to the lower-level, service-oriented client interface.

Because these projects are hosted on GitHub, you can view the source code, track open issues, or submit your own issues.

## Using the Boto documentation
<a name="programming-with-python-documentation"></a>

Get started with the Boto documentation with the following resources:
+ Begin with the [Quickstart section](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html), which provides a solid starting point for package installation. Go there for instructions on getting Boto3 installed if it’s not already (Boto3 is often automatically available within AWS services such as AWS Lambda).
+ After that, focus on the documentation’s [DynamoDB guide](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/dynamodb.html). It shows you how to perform the basic DynamoDB activities: create and delete a table, manipulate items, run batch operations, run a query, and perform a scan. Its examples use the **resource** interface. When you see `boto3.resource('dynamodb')`, that indicates you’re using the higher-level **resource** interface.
+ After the guide, you can review the [DynamoDB reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html). This landing page provides an exhaustive list of the classes and methods available to you. At the top, you’ll see the `DynamoDB.Client` class. This provides low-level access to all the control-plane and data-plane operations. At the bottom, look at the `DynamoDB.ServiceResource` class. This is the higher-level Pythonic interface. With it you can create a table, do batch operations across tables, or obtain a `DynamoDB.ServiceResource.Table` instance for table-specific actions.

## Understanding the client and resource abstraction layers
<a name="programming-with-python-client-resource"></a>

The two interfaces you'll be working with are the **client** interface and the **resource** interface. 
+ The low-level **client** interface provides a 1-to-1 mapping to the underlying service API. Every API offered by DynamoDB is available through the client. This means the client interface can provide complete functionality, but it's often more verbose and complex to use.
+ The higher-level **resource** interface does not provide a 1-to-1 mapping of the underlying service API. However, it provides methods that make it more convenient for you to access the service such as `batch_writer`.

Here’s an example of inserting an item using the client interface. Notice how all values are passed as a map with the key indicating their type ('S' for string, 'N' for number) and their value as a string. This is known as DynamoDB JSON format.

```
import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.put_item(
    TableName='YourTableName',
    Item={
        'pk': {'S': 'id#1'},
        'sk': {'S': 'cart#123'},
        'name': {'S': 'SomeName'},
        'inventory': {'N': '500'},
        # ... more attributes ...
    }
)
```

Here's the same `PutItem` operation using the resource interface. The data typing is implicit:

```
import boto3

dynamodb = boto3.resource('dynamodb')

table = dynamodb.Table('YourTableName')

table.put_item(
    Item={
        'pk': 'id#1',
        'sk': 'cart#123',
        'name': 'SomeName',
        'inventory': 500,
        # ... more attributes ...
    }
)
```

If needed, you can convert between regular JSON and DynamoDB JSON using the `TypeSerializer` and `TypeDeserializer` classes provided with boto3:

```
from boto3.dynamodb.types import TypeDeserializer, TypeSerializer

def dynamo_to_python(dynamo_object: dict) -> dict:
    deserializer = TypeDeserializer()
    return {
        k: deserializer.deserialize(v) 
        for k, v in dynamo_object.items()
    }  
  
def python_to_dynamo(python_object: dict) -> dict:
    serializer = TypeSerializer()
    return {
        k: serializer.serialize(v)
        for k, v in python_object.items()
    }
```

Here is how to perform a query using the client interface. It expresses the query as a JSON construct. It uses a `KeyConditionExpression` string which requires variable substitution to handle any potential keyword conflicts:

```
import boto3

client = boto3.client('dynamodb')

# Construct the query
response = client.query(
    TableName='YourTableName',
    KeyConditionExpression='pk = :pk_val AND begins_with(sk, :sk_val)',
    FilterExpression='#name = :name_val',
    ExpressionAttributeValues={
        ':pk_val': {'S': 'id#1'},
        ':sk_val': {'S': 'cart#'},
        ':name_val': {'S': 'SomeName'},
    },
    ExpressionAttributeNames={
        '#name': 'name',
    }
)
```

The same query operation using the resource interface can be shortened and simplified:

```
import boto3
from boto3.dynamodb.conditions import Key, Attr

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('YourTableName')

response = table.query(
    KeyConditionExpression=Key('pk').eq('id#1') & Key('sk').begins_with('cart#'),
    FilterExpression=Attr('name').eq('SomeName')
)
```

As a final example, imagine you want to get the approximate size of a table (which is metadata kept on the table that is updated about every 6 hours). With the client interface, you do a `describe_table()` operation and pull the answer from the JSON structure returned:

```
import boto3

dynamodb = boto3.client('dynamodb')

response = dynamodb.describe_table(TableName='YourTableName')
size = response['Table']['TableSizeBytes']
```

With the resource interface, the table performs the describe operation implicitly and presents the data directly as an attribute:

```
import boto3

dynamodb = boto3.resource('dynamodb')

table = dynamodb.Table('YourTableName')
size = table.table_size_bytes
```

**Note**  
When considering whether to develop using the client or resource interface, be aware that new features will not be added to the resource interface per the [resource documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/resources.html): “The AWS Python SDK team does not intend to add new features to the resources interface in boto3. Existing interfaces will continue to operate during boto3’s lifecycle. Customers can find access to newer service features through the client interface.”

## Using the table resource batch_writer
<a name="programming-with-python-batch-writer"></a>

One convenience available only with the higher-level table resource is the `batch_writer`. DynamoDB supports batch write operations allowing up to 25 put or delete operations in one network request. Batching like this improves efficiency by minimizing network round trips.

With the low-level client library, you use the `client.batch_write_item()` operation to run batches. You must manually split your work into batches of 25. After each operation, you also have to request to receive a list of unprocessed items (some of the write operations may succeed while others could fail). You then have to pass those unprocessed items again into a later `batch_write_item()` operation. There's a significant amount of boilerplate code.

The [Table.batch_writer](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb/table/batch_writer.html) method creates a context manager for writing objects in a batch. It presents an interface where it seems as if you're writing items one at a time, but internally it's buffering and sending the items in batches. It also handles unprocessed item retries implicitly.

```
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('YourTableName')

movies = [...]  # long list of movies in {'pk': 'val', 'sk': 'val', ...} format
with table.batch_writer() as writer:
    for movie in movies:
        writer.put_item(Item=movie)
```

## Additional code examples that explore the client and resource layers
<a name="programming-with-python-additional-code"></a>

You can also refer to the following code sample repositories that explore usage of the various functions, using both client and resource:
+ [Official AWS single-action code examples.](https://docs.aws.amazon.com/code-library/latest/ug/python_3_dynamodb_code_examples.html)
+ [Official AWS scenario-oriented code examples.](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/python)
+ [Community-maintained single-action code examples.](https://github.com/aws-samples/aws-dynamodb-examples/tree/master/examples/SDK/python)

## Understanding how the Client and Resource objects interact with sessions and threads
<a name="programming-with-python-sessions-thread-safety"></a>

The Resource object is not thread safe and should not be shared across threads or processes. Refer to the [guide on Resources](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/resources.html#multithreading-or-multiprocessing-with-resources) for more details.

The Client object, in contrast, is generally thread safe, except for specific advanced features. Refer to the [guide on Clients](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/clients.html#multithreading-or-multiprocessing-with-clients) for more details.

The Session object is not thread safe. So, each time you make a Client or Resource in a multi-threaded environment, you should create a new Session first and then make the Client or Resource from the Session. Refer to the [guide on Sessions](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/session.html#multithreading-or-multiprocessing-with-sessions) for more details.

When you call `boto3.resource()`, you’re implicitly using the default Session. This is convenient for writing single-threaded code. When writing multi-threaded code, you’ll want to first construct a new Session for each thread and then retrieve the resource from that Session:

```
# Explicitly create a new Session for this thread 
session = boto3.Session()
dynamodb = session.resource('dynamodb')
```

## Customizing the Config object
<a name="programming-with-python-config"></a>

When constructing a Client or Resource object, you can pass optional named parameters to customize behavior. The parameter named `config` unlocks a variety of functionality. It’s an instance of `botocore.client.Config`, and the [reference documentation for Config](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html) shows everything it exposes for you to control. The [guide to Configuration](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html) provides a good overview.

**Note**  
You can modify many of these behavioral settings at the Session level, within the AWS configuration file, or as environment variables.

**Config for timeouts**

One use of a custom config is to adjust networking behaviors:
+ **`connect_timeout` (float or int)** – The time in seconds until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
+ **`read_timeout` (float or int)** – The time in seconds until a timeout exception is thrown when attempting to read from a connection. The default is 60 seconds.

Timeouts of 60 seconds are excessive for DynamoDB. It means a transient network glitch will cause a minute’s delay for the client before it can try again. The following code shortens the timeouts to a second:

```
import boto3
from botocore.config import Config

my_config = Config(
   connect_timeout = 1.0,
   read_timeout = 1.0
)
dynamodb = boto3.resource('dynamodb', config=my_config)
```

For more discussion about timeouts, see [Tuning AWS Java SDK HTTP request settings for latency-aware DynamoDB applications](https://aws.amazon.com/blogs/database/tuning-aws-java-sdk-http-request-settings-for-latency-aware-amazon-dynamodb-applications/). Note the Java SDK has more timeout configurations than Python.

**Config for keep-alive**

If you're using botocore 1.27.84 or later, you can also control **TCP Keep-Alive**:
+ **`tcp_keepalive` (bool)** – Enables the TCP Keep-Alive socket option used when creating new connections if set to `True` (defaults to `False`).

Setting TCP Keep-Alive to `True` can reduce average latencies. Here's sample code that conditionally sets TCP Keep-Alive to true when you have the right botocore version:

```
import botocore
import boto3
from botocore.config import Config

# tcp_keepalive is available in botocore 1.27.84 and later
required_version = (1, 27, 84)
current_version = tuple(int(part) for part in botocore.__version__.split('.')[:3])

my_config = Config(
   connect_timeout = 0.5,
   read_timeout = 0.5
)
if current_version >= required_version:
    my_config = my_config.merge(Config(tcp_keepalive = True))

dynamodb = boto3.resource('dynamodb', config=my_config)
```

**Note**  
TCP Keep-Alive is different than HTTP Keep-Alive. With TCP Keep-Alive, small packets are sent by the underlying operating system over the socket connection to keep the connection alive and immediately detect any drops. With HTTP Keep-Alive, the web connection built on the underlying socket gets reused. HTTP Keep-Alive is always enabled with boto3.

There's a limit to how long an idle connection can be kept alive. Consider sending periodic requests (say every minute) if you have an idle connection but want the next request to use an already-established connection.

**Config for retries**

The config also accepts a dictionary called **retries** where you can specify your desired retry behavior. Retries happen within the SDK when the SDK receives an error and the error is of a transient type. If an error is retried internally (and a retry eventually produces a successful response), there's no error seen from the calling code's perspective, just a slightly elevated latency. Here are the values you can specify:
+ **`max_attempts`** – An integer representing the maximum number of retry attempts that will be made on a single request. For example, setting this value to 2 will result in the request being retried at most two times after the initial request. Setting this value to 0 will result in no retries ever being attempted after the initial request.
+ **`total_max_attempts`** – An integer representing the maximum number of total attempts that will be made on a single request. This includes the initial request, so a value of 1 indicates that no requests will be retried. If `total_max_attempts` and `max_attempts` are both provided, `total_max_attempts` takes precedence. `total_max_attempts` is preferred over `max_attempts` because it maps to the `AWS_MAX_ATTEMPTS` environment variable and the `max_attempts` config file value.
+ **mode** – A string representing the type of retry mode botocore should use. Valid values are:
  + **legacy** – The default mode. Waits 50ms before the first retry, then uses exponential backoff with a base factor of 2. For DynamoDB, it performs up to 10 total max attempts (unless overridden with the above).
**Note**  
With exponential backoff, the last attempt will wait almost 13 seconds.
  + **standard** – Named standard because it’s more consistent with other AWS SDKs. Waits a random time from 0ms to 1,000ms for the first retry. If another retry is necessary, it picks another random time from 0ms to 1,000ms and multiplies it by 2. If an additional retry is necessary, it does the same random pick multiplied by 4, and so on. Each wait is capped at 20 seconds. This mode performs retries on more detected failure conditions than the `legacy` mode. For DynamoDB, it performs up to 3 total max attempts (unless overridden with the above).
  + **adaptive** – An experimental retry mode that includes all the functionality of standard mode but adds automatic client-side throttling. With adaptive rate limiting, SDKs can slow down the rate at which requests are sent to better accommodate the capacity of AWS services. This is a provisional mode whose behavior might change.

An expanded definition of these retry modes can be found in the [guide to retries](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html) as well as in the [Retry behavior topic in the SDK reference](https://docs.aws.amazon.com/sdkref/latest/guide/feature-retry-behavior.html).
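As a rough, standalone illustration (plain Python, not SDK internals) of the schedules described above, hypothetical helpers might compute the waits like this:

```python
import random

def legacy_delays(total_max_attempts=10):
    # Retry n waits 50 ms * 2^(n-1); 10 total attempts means 9 retries,
    # so the final wait is 0.05 * 2**8 = 12.8 seconds ("almost 13").
    return [0.05 * 2 ** (n - 1) for n in range(1, total_max_attempts)]

def standard_delay(retry_number, cap=20.0):
    # Each retry draws a fresh random value from 0 to 1 second,
    # scales it by 2^(n-1), and caps the result at 20 seconds.
    return min(random.uniform(0, 1) * 2 ** (retry_number - 1), cap)

print(legacy_delays()[-1])  # 12.8
```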

Here’s an example that explicitly uses the `legacy` retry policy with a maximum of 3 total requests (2 retries):

```
import boto3
from botocore.config import Config

my_config = Config(
   connect_timeout = 1.0,
   read_timeout = 1.0,
   retries = {
     'mode': 'legacy',
     'total_max_attempts': 3
   }
)
dynamodb = boto3.resource('dynamodb', config=my_config)
```

Because DynamoDB is a highly available, low-latency system, you might want to retry faster than the built-in retry policies allow. You can implement your own retry policy by setting `max_attempts` to 0, catching the exceptions yourself, and retrying as appropriate from your own code instead of relying on boto3 to do implicit retries.

If you manage your own retry policy, you'll want to differentiate between throttles and errors:
+ A **throttle** (indicated by a `ProvisionedThroughputExceededException` or `ThrottlingException`) indicates a healthy service that's informing you that you've exceeded your read or write capacity on a DynamoDB table or partition. Every millisecond that passes, a bit more read or write capacity is made available, so you can retry quickly (such as every 50ms) to attempt to access that newly released capacity. With throttles, you don't especially need exponential backoff because throttles are lightweight for DynamoDB to return and incur no per-request charge to you. Exponential backoff assigns longer delays to client threads that have already waited the longest, which statistically extends the p50 and p99 outward.
+ An **error** (indicated by an `InternalServerError` or a `ServiceUnavailable`, among others) indicates a transient issue with the service. This can be for the whole table or possibly just the partition you're reading from or writing to. With errors, you can pause longer before retries (such as 250ms or 500ms) and use jitter to stagger the retries.
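To sketch how that differentiation might look in practice, here's a hypothetical helper (the delay values and attempt cap are illustrative choices, not SDK defaults) that returns how long to pause before the next retry, or `None` to re-raise:

```python
import random

THROTTLE_CODES = {'ProvisionedThroughputExceededException', 'ThrottlingException'}
TRANSIENT_CODES = {'InternalServerError', 'ServiceUnavailable'}

def retry_delay(error_code, attempt, max_attempts=5):
    """Seconds to sleep before retry number `attempt`, or None to stop retrying."""
    if attempt >= max_attempts:
        return None
    if error_code in THROTTLE_CODES:
        # Throttles are lightweight for DynamoDB to return: retry quickly, flat.
        return 0.05
    if error_code in TRANSIENT_CODES:
        # Transient errors: pause longer, with jitter to stagger clients.
        return 0.25 * attempt + random.uniform(0, 0.1)
    return None  # Not transient (for example, a validation error): don't retry.
```

In the calling code, you would catch `botocore.exceptions.ClientError`, look up `err.response['Error']['Code']`, sleep for the returned delay, and retry (or re-raise when `None` comes back).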

**Config for max pool connections**

Lastly, the config lets you control the connection pool size:
+ **max_pool_connections** (int) – The maximum number of connections to keep in a connection pool. If this value is not set, the default value of 10 is used.

This option controls the maximum number of HTTP connections to keep pooled for reuse. A different pool is kept per Session. If you anticipate more than 10 threads going against clients or resources built off the same Session, you should consider raising this, so threads don't have to wait on other threads using a pooled connection.

```
import boto3
from botocore.config import Config

my_config = Config(
   max_pool_connections = 20
)

# Set up a single session; pass the config to each client or resource
# built from it so each keeps a pool of up to 20 connections
session = boto3.Session()

# Create resources against that session for handing to threads
# Notice the single-threaded access to the Session and each Resource
resource1 = session.resource('dynamodb', config=my_config)
resource2 = session.resource('dynamodb', config=my_config)
# etc
```

## Error handling
<a name="programming-with-python-error-handling"></a>

AWS service exceptions aren’t all statically defined in Boto3. This is because errors and exceptions from AWS services vary widely and are subject to change. Boto3 wraps all service exceptions as a `ClientError` and exposes the details as structured JSON. For example, an error response might be structured like this:

```
{
    'Error': {
        'Code': 'SomeServiceException',
        'Message': 'Details/context around the exception or error'
    },
    'ResponseMetadata': {
        'RequestId': '1234567890ABCDEF',
        'HostId': 'host ID data will appear here as a hash',
        'HTTPStatusCode': 400,
        'HTTPHeaders': {'header metadata key/values will appear here'},
        'RetryAttempts': 0
    }
}
```

The following code catches any `ClientError` exceptions and looks at the string value of the `Code` within the `Error` to determine what action to take:

```
import botocore
import boto3

dynamodb = boto3.client('dynamodb')

try:
    response = dynamodb.put_item(...)

except botocore.exceptions.ClientError as err:
    print('Error Code: {}'.format(err.response['Error']['Code']))
    print('Error Message: {}'.format(err.response['Error']['Message']))
    print('Http Code: {}'.format(err.response['ResponseMetadata']['HTTPStatusCode']))
    print('Request ID: {}'.format(err.response['ResponseMetadata']['RequestId']))

    if err.response['Error']['Code'] in ('ProvisionedThroughputExceededException', 'ThrottlingException'):
        print("Received a throttle")
    elif err.response['Error']['Code'] == 'InternalServerError':
        print("Received a server error")
    else:
        raise err
```

Some (but not all) exception codes have been materialized as top-level classes. You can choose to handle these directly. When using the Client interface, these exceptions are dynamically populated on your client and you catch these exceptions using your client instance, like this:

```
except ddb_client.exceptions.ProvisionedThroughputExceededException:
```

When using the Resource interface, you have to use `.meta.client` to traverse from the resource to the underlying Client to access the exceptions, like this:

```
except ddb_resource.meta.client.exceptions.ProvisionedThroughputExceededException:
```

To review the list of materialized exception types, you can generate the list dynamically:

```
ddb = boto3.client("dynamodb")
print([e for e in dir(ddb.exceptions) if e.endswith('Exception') or e.endswith('Error')])
```

When performing a write operation with a condition expression, you can request that, if the expression fails, the current value of the item be returned in the error response.

```
try:
    response = table.put_item(
        Item=item,
        ConditionExpression='attribute_not_exists(pk)',
        ReturnValuesOnConditionCheckFailure='ALL_OLD'
    )
except table.meta.client.exceptions.ConditionalCheckFailedException as e:
    print('Item already exists:', e.response['Item'])
```

For further reading on error handling and exceptions:
+ The [boto3 guide on error handling](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/error-handling.html) has more information on error handling techniques. 
+ The [DynamoDB developer guide section on programming errors](Programming.Errors.md) lists what errors you might encounter. 
+ The [Common Errors section in the API reference](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/CommonErrors.html) .
+ The documentation on each API operation lists what errors that call might generate (for example [BatchWriteItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchWriteItem.html)).

## Logging
<a name="programming-with-python-logging"></a>

The boto3 library integrates with Python's built-in logging module for tracking what happens during a session. To control logging levels, you can configure the logging module:

```
import logging

logging.basicConfig(level=logging.INFO)
```

This configures the root logger to log messages at the `INFO` level and above. Messages less severe than the configured level are ignored. Logging levels include `DEBUG`, `INFO`, `WARNING`, `ERROR`, and `CRITICAL`. The default is `WARNING`.

Loggers in boto3 are hierarchical. The library uses a few different loggers, each corresponding to different parts of the library. You can separately control the behavior of each:
+ **boto3**: The main logger for the boto3 module.
+ **botocore**: The main logger for the botocore package.
+ **botocore.auth**: Used for logging AWS signature creation for requests.
+ **botocore.credentials**: Used for logging the process of credential fetching and refresh.
+ **botocore.endpoint**: Used for logging request creation before it's sent over the network.
+ **botocore.hooks**: Used for logging events triggered in the library.
+ **botocore.loaders**: Used for logging when parts of AWS service models are loaded.
+ **botocore.parsers**: Used for logging AWS service responses before they're parsed.
+ **botocore.retryhandler**: Used for logging the processing of AWS service request retries (legacy mode).
+ **botocore.retries.standard**: Used for logging the processing of AWS service request retries (standard or adaptive mode).
+ **botocore.utils**: Used for logging miscellaneous activities in the library.
+ **botocore.waiter**: Used for logging the functionality of waiters, which poll an AWS service until a certain state is reached. 

Other libraries log as well. Internally, boto3 uses the third-party urllib3 library for HTTP connection handling. When latency is important, you can watch its logs to confirm that your pool is being well utilized, by seeing when urllib3 establishes a new connection or closes an idle one.
+ **urllib3.connectionpool**: Used for logging connection pool handling events.

The following code snippet sets most logging to `INFO` with `DEBUG` logging for endpoint and connection pool activity:

```
import logging

logging.getLogger('boto3').setLevel(logging.INFO)
logging.getLogger('botocore').setLevel(logging.INFO)
logging.getLogger('botocore.endpoint').setLevel(logging.DEBUG)
logging.getLogger('urllib3.connectionpool').setLevel(logging.DEBUG)
```

## Event hooks
<a name="programming-with-python-event-hooks"></a>

Botocore emits events during various parts of its execution. You can register handlers for these events so that whenever an event is emitted, your handler will be called. This lets you extend the behavior of botocore without having to modify the internals.

For instance, let's say you want to keep track of every time a `PutItem` operation is called on any DynamoDB table in your application. You might register on the `'provide-client-params.dynamodb.PutItem'` event to catch and log every time a `PutItem` operation is invoked on the associated Session. Here's an example:

```
import boto3
import botocore
import logging

def log_put_params(params, **kwargs):
    if 'TableName' in params and 'Item' in params:
        logging.info(f"PutItem on table {params['TableName']}: {params['Item']}")

logging.basicConfig(level=logging.INFO)

session = boto3.Session()
event_system = session.events

# Register our interest in hooking in when the parameters are provided to PutItem
event_system.register('provide-client-params.dynamodb.PutItem', log_put_params)

# Now, every time you use this session to put an item in DynamoDB,
# it will log the table name and item data.
dynamodb = session.resource('dynamodb')
table = dynamodb.Table('YourTableName')
table.put_item(
    Item={
        'pk': '123',
        'sk': 'cart#123',
        'item_data': 'YourItemData',
        # ... more attributes ...
    }
)
```

Within the handler, you can even manipulate the params programmatically to change behavior:

```
params['TableName'] = "NewTableName"
```

For more information on events, see the [botocore documentation on events](https://botocore.amazonaws.com/v1/documentation/api/latest/topics/events.html) and the [boto3 documentation on events](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/events.html).

## Pagination and the Paginator
<a name="programming-with-python-pagination"></a>

Some requests, such as Query and Scan, limit the size of data returned on a single request and require you to make repeated requests to pull subsequent pages.

You can control the maximum number of items to be read for each page with the `Limit` parameter. For example, if you want the last 10 items, you can use `Limit` to retrieve only the last 10. Note that the limit is how many items are read from the table before any filtering is applied. There's no way to specify that you want exactly 10 items after filtering; you can only control the pre-filtered count and check client-side when you've actually retrieved 10. Regardless of the limit, every response has a maximum size of 1 MB.
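Since only the pre-filtered count is controllable, collecting exactly 10 post-filter items takes a client-side loop. Here's a sketch in which `query_page` is a hypothetical stand-in for a call such as `table.query`, accepting an optional `ExclusiveStartKey` and returning a response dict:

```python
def collect_items(query_page, wanted=10):
    # Keep paginating until `wanted` items are collected or pages run out.
    items, start_key = [], None
    while len(items) < wanted:
        kwargs = {'ExclusiveStartKey': start_key} if start_key is not None else {}
        page = query_page(**kwargs)
        items.extend(page['Items'])
        start_key = page.get('LastEvaluatedKey')
        if start_key is None:
            break
    return items[:wanted]
```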

If the response includes a `LastEvaluatedKey`, it indicates that the response ended because it hit a count or size limit. The key is the last key evaluated for the response. You can retrieve this `LastEvaluatedKey` and pass it to a follow-up call as `ExclusiveStartKey` to read the next chunk from that starting point. When no `LastEvaluatedKey` is returned, it means there are no more items matching the Query or Scan.

Here's a simple example (using the Resource interface, but the Client interface has the same pattern) that reads at most 100 items per page and loops until all items have been read.

```
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('YourTableName')

query_params = {
    'KeyConditionExpression': Key('pk').eq('123') & Key('sk').gt(1000),
    'Limit': 100
}

while True:
    response = table.query(**query_params)

    # Process the items however you like
    for item in response['Items']:
        print(item)

    # No LastEvaluatedKey means no more items to retrieve
    if 'LastEvaluatedKey' not in response:
        break

    # If there are possibly more items, update the start key for the next page
    query_params['ExclusiveStartKey'] = response['LastEvaluatedKey']
```

For convenience, boto3 can do this for you with Paginators. However, it only works with the Client interface. Here's the code rewritten to use Paginators:

```
import boto3

dynamodb = boto3.client('dynamodb')

paginator = dynamodb.get_paginator('query')

query_params = {
    'TableName': 'YourTableName',
    'KeyConditionExpression': 'pk = :pk_val AND sk > :sk_val',
    'ExpressionAttributeValues': {
        ':pk_val': {'S': '123'},
        ':sk_val': {'N': '1000'},
    },
    'Limit': 100
}

page_iterator = paginator.paginate(**query_params)

for page in page_iterator:
    # Process the items however you like
    for item in page['Items']:
        print(item)
```

For more information, see the [Guide on Paginators](https://botocore.amazonaws.com/v1/documentation/api/latest/topics/paginators.html) and the [API reference for DynamoDB.Paginator.Query](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb/paginator/Query.html).

**Note**  
Paginators also have their own configuration settings named `MaxItems`, `StartingToken`, and `PageSize`. For paginating with DynamoDB, you should ignore these settings.

## Waiters
<a name="programming-with-python-waiters"></a>

Waiters provide the ability to wait for something to complete before proceeding. At present, they only support waiting for a table to be created or deleted. In the background, the waiter polls for you every 20 seconds, up to 25 times. You could do this yourself, but using a waiter is elegant when writing automation.

This code shows how to wait for a particular table to have been created:

```
# Create a table, wait until it exists, and print its ARN
response = client.create_table(...)
waiter = client.get_waiter('table_exists')
waiter.wait(TableName='YourTableName')
print('Table created:', response['TableDescription']['TableArn'])
```
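If the 20-second cadence or 25-attempt cap doesn't suit your automation, you can override them with the `WaiterConfig` parameter on `wait()`. For example (a sketch that assumes a table named `YourTableName` is being created), to poll every 5 seconds up to 60 times:

```python
import boto3

client = boto3.client('dynamodb')

# Poll every 5 seconds, up to 60 checks, instead of the defaults
waiter = client.get_waiter('table_exists')
waiter.wait(
    TableName='YourTableName',
    WaiterConfig={'Delay': 5, 'MaxAttempts': 60}
)
```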

For more information, see the [Guide to Waiters](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/clients.html#waiters) and [Reference on Waiters](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#waiters).

# Programming Amazon DynamoDB with JavaScript
<a name="programming-with-javascript"></a>

This guide provides an orientation to programmers wanting to use Amazon DynamoDB with JavaScript. Learn about the AWS SDK for JavaScript, abstraction layers available, configuring connections, handling errors, defining retry policies, managing keep-alive, and more.

**Topics**
+ [About AWS SDK for JavaScript](#programming-with-javascript-about)
+ [Using the AWS SDK for JavaScript V3](#programming-with-javascript-using-the-sdk)
+ [Accessing JavaScript documentation](#programming-with-javascript-documentation)
+ [Abstraction layers](#programming-with-javascript-abstraction-layers)
+ [Using the marshall utility function](#programming-with-javascript-using-marshall-utility)
+ [Reading items](#programming-with-javascript-reading-items)
+ [Conditional writes](#programming-with-javascript-conditional-writes)
+ [Pagination](#programming-with-javascript-pagination)
+ [Specifying configuration](#programming-with-javascript-config)
+ [Waiters](#programming-with-javascript-waiters)
+ [Error handling](#programming-with-javascript-error-handling)
+ [Logging](#programming-with-javascript-logging)
+ [Considerations](#programming-with-javascript-considerations)

## About AWS SDK for JavaScript
<a name="programming-with-javascript-about"></a>

The AWS SDK for JavaScript provides access to AWS services using either browser scripts or Node.js. This documentation focuses on the latest version of the SDK (V3). The AWS SDK for JavaScript V3 is maintained by AWS as an [open-source project hosted on GitHub](https://github.com/aws/aws-sdk-js-v3). Issues and feature requests are public and you can access them on the issues page for the GitHub repository.

JavaScript V2 is similar to V3, but contains syntax differences. V3 is more modular, making it easier to ship smaller dependencies, and has first-class TypeScript support. We recommend using the latest version of the SDK.

## Using the AWS SDK for JavaScript V3
<a name="programming-with-javascript-using-the-sdk"></a>

You can add the SDK to your Node.js application using the Node Package Manager. The examples below show how to add the most common SDK packages for working with DynamoDB.
+ `npm install @aws-sdk/client-dynamodb`
+ `npm install @aws-sdk/lib-dynamodb`
+ `npm install @aws-sdk/util-dynamodb`

Installing packages adds references to the dependencies section of your package.json project file. You have the option to use the newer ECMAScript module syntax. For further details on these two approaches, see the Considerations section.

## Accessing JavaScript documentation
<a name="programming-with-javascript-documentation"></a>

Get started with JavaScript documentation with the following resources:
+ Access the [Developer guide](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/welcome.html) for core JavaScript documentation. Installation instructions are located in the **Setting up** section.
+ Access the [API reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/introduction/) documentation to explore all available classes and methods.
+ The SDK for JavaScript supports many AWS services other than DynamoDB. Use the following procedure to locate specific API coverage for DynamoDB:

  1. From **Services**, choose **DynamoDB and Libraries**. This documents the low-level client.

  1. Choose **lib-dynamodb**. This documents the high-level client. The two clients represent two different abstraction layers that you have the choice to use. See the section below for more information about abstraction layers.

## Abstraction layers
<a name="programming-with-javascript-abstraction-layers"></a>

The SDK for JavaScript V3 has a low-level client (`DynamoDBClient`) and a high-level client (`DynamoDBDocumentClient`).

**Topics**
+ [Low-level client (`DynamoDBClient`)](#programming-with-javascript-low-level-client)
+ [High-level client (`DynamoDBDocumentClient`)](#programming-with-javascript-high-level-client)

### Low-level client (`DynamoDBClient`)
<a name="programming-with-javascript-low-level-client"></a>

The low-level client provides no extra abstractions over the underlying wire protocol. It gives you full control over all aspects of communication, but because there are no abstractions, you must do things like provide item definitions using the DynamoDB JSON format. 

As the example below shows, with this format data types must be stated explicitly. An *S* indicates a string value and an *N* indicates a number value. Numbers on the wire are always sent as strings tagged as number types to ensure no loss in precision. The low-level API calls have a naming pattern such as `PutItemCommand` and `GetItemCommand`.

The following example uses the low-level client with an `Item` defined in DynamoDB JSON:

```
const { DynamoDBClient, PutItemCommand } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({});

async function addProduct() {
  const params = {
    TableName: "products",
    Item: {
      "id": { S: "Product01" },
      "description": { S: "Hiking Boots" },
      "category": { S: "footwear" },
      "sku": { S: "hiking-sku-01" },
      "size": { N: "9" }
    }
  };

  try {
    const data = await client.send(new PutItemCommand(params));
    console.log('result : ' + JSON.stringify(data));
  } catch (error) {
    console.error("Error:", error);
  }
}
addProduct();
```

### High-level client (`DynamoDBDocumentClient`)
<a name="programming-with-javascript-high-level-client"></a>

The high-level DynamoDB document client offers built-in convenience features, such as eliminating the need to manually marshal data and allowing for direct reads and writes using standard JavaScript objects. The [documentation for `lib-dynamodb`](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-lib-dynamodb/) provides the list of advantages.

To instantiate the `DynamoDBDocumentClient`, construct a low-level `DynamoDBClient` and then wrap it with a `DynamoDBDocumentClient`. The function naming convention differs slightly between the two packages. For instance, the low-level client uses `PutItemCommand` while the high-level client uses `PutCommand`. The distinct names allow both sets of functions to coexist in the same context, so you can mix both in the same script.

```
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, PutCommand } = require("@aws-sdk/lib-dynamodb");

const client = new DynamoDBClient({});

const docClient = DynamoDBDocumentClient.from(client);

async function addProduct() {
  const params = {
    TableName: "products",
    Item: {
      id: "Product01",
      description: "Hiking Boots",
      category: "footwear",
      sku: "hiking-sku-01",
      size: 9,
    },
  };

  try {
    const data = await docClient.send(new PutCommand(params));
    console.log('result : ' + JSON.stringify(data));
  } catch (error) {
    console.error("Error:", error);
  }
}

addProduct();
```

The pattern of usage is consistent when you're reading items using API operations such as `GetItem`, `Query`, or `Scan`.

## Using the marshall utility function
<a name="programming-with-javascript-using-marshall-utility"></a>

You can use the low-level client and marshall or unmarshall the data types on your own. The utility package, [util-dynamodb](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-util-dynamodb/), has a `marshall()` utility function that accepts JSON and produces DynamoDB JSON, as well as an `unmarshall()` function that does the reverse. The following example uses the low-level client with data marshalling handled by the `marshall()` call.

```
const { DynamoDBClient, PutItemCommand } = require("@aws-sdk/client-dynamodb");
const { marshall } = require("@aws-sdk/util-dynamodb");

const client = new DynamoDBClient({});

async function addProduct() {
  const params = {
    TableName: "products",
    Item: marshall({
      id: "Product01",
      description: "Hiking Boots",
      category: "footwear",
      sku: "hiking-sku-01",
      size: 9,
    }),
  };

  try {
    const data = await client.send(new PutItemCommand(params));
  } catch (error) {
    console.error("Error:", error);
  }
}
addProduct();
```

## Reading items
<a name="programming-with-javascript-reading-items"></a>

To read a single item from DynamoDB, you use the `GetItem` API operation. Similar to the `PutItem` command, you have the choice to use either the low-level client or the high-level Document client. The example below demonstrates using the high-level Document client to retrieve an item.

```
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, GetCommand } = require("@aws-sdk/lib-dynamodb");

const client = new DynamoDBClient({});

const docClient = DynamoDBDocumentClient.from(client);

async function getProduct() {
  const params = {
    TableName: "products",
    Key: {
      id: "Product01",
    },
  };

  try {
    const data = await docClient.send(new GetCommand(params));
    console.log('result : ' + JSON.stringify(data));
  } catch (error) {
    console.error("Error:", error);
  }
}

getProduct();
```

Use the `Query` API operation to read multiple items. You can use the low-level client or the Document client. The example below uses the high-level Document client.

```
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const {
  DynamoDBDocumentClient,
  QueryCommand,
} = require("@aws-sdk/lib-dynamodb");

const client = new DynamoDBClient({});

const docClient = DynamoDBDocumentClient.from(client);

async function productSearch() {
  const params = {
    TableName: "products",
    IndexName: "GSI1",
    KeyConditionExpression: "#category = :category and begins_with(#sku, :sku)",
    ExpressionAttributeNames: {
      "#category": "category",
      "#sku": "sku",
    },
    ExpressionAttributeValues: {
      ":category": "footwear",
      ":sku": "hiking",
    },
  };

  try {
    const data = await docClient.send(new QueryCommand(params));
    console.log('result : ' + JSON.stringify(data));
  } catch (error) {
    console.error("Error:", error);
  }
}

productSearch();
```

## Conditional writes
<a name="programming-with-javascript-conditional-writes"></a>

DynamoDB write operations can specify a logical condition expression that must evaluate to true for the write to proceed. If the condition does not evaluate to true, the write operation generates an exception. The condition expression can check if the item already exists or if its attributes match certain constraints.

`ConditionExpression = "version = :ver AND size(VideoClip) < :maxsize"`

When the conditional expression fails, you can use `ReturnValuesOnConditionCheckFailure` to request that the error response include the item that didn't satisfy the conditions to deduce what the problem was. For more details, see [Handle conditional write errors in high concurrency scenarios with Amazon DynamoDB](https://aws.amazon.com/blogs/database/handle-conditional-write-errors-in-high-concurrency-scenarios-with-amazon-dynamodb/).

```
try {
    const response = await client.send(new PutCommand({
        TableName: "YourTableName",
        Item: item,
        ConditionExpression: "attribute_not_exists(pk)",
        ReturnValuesOnConditionCheckFailure: "ALL_OLD"
    }));
} catch (e) {
    if (e.name === 'ConditionalCheckFailedException') {
        console.log('Item already exists:', e.Item);
    } else {
        throw e;
    }
}
```

Additional code examples showing other aspects of JavaScript SDK V3 usage are available in the [JavaScript SDK V3 Documentation](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/javascript_dynamodb_code_examples.html) and under the [DynamoDB-SDK-Examples GitHub repository](https://github.com/aws-samples/aws-dynamodb-examples/tree/master/examples/SDK/node.js).

## Pagination
<a name="programming-with-javascript-pagination"></a>

**Topics**
+ [Using the `paginateScan` convenience method](#using-the-paginatescan-convenience-method)

Read requests such as `Scan` or `Query` will likely return multiple items in a dataset. If you perform a `Scan` or `Query` with a `Limit` parameter, then once the system has read that many items, a partial response will be sent, and you'll need to paginate to retrieve additional items.

The system reads a maximum of 1 megabyte of data per request. If you include a `Filter` expression, the system still reads at most a megabyte of data from disk, but returns only the items from that megabyte that match the filter. A filtered page can contain 0 items while still requiring further pagination before the search is exhausted.

You should look for `LastEvaluatedKey` in the response and use it as the `ExclusiveStartKey` parameter in a subsequent request to continue data retrieval. It serves as a bookmark, as the following example shows.

**Note**  
The sample passes an undefined `lastEvaluatedKey` as the `ExclusiveStartKey` on the first iteration, which is allowed.

Example using the `LastEvaluatedKey`:

```
const { DynamoDBClient, ScanCommand } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({});

async function paginatedScan() {
  let lastEvaluatedKey;
  let pageCount = 0;

  do {
    const params = {
      TableName: "products",
      ExclusiveStartKey: lastEvaluatedKey,
    };

    const response = await client.send(new ScanCommand(params));
    pageCount++;
    console.log(`Page ${pageCount}, Items:`, response.Items);
    lastEvaluatedKey = response.LastEvaluatedKey;
  } while (lastEvaluatedKey);
}

paginatedScan().catch((err) => {
  console.error(err);
});
```

### Using the `paginateScan` convenience method
<a name="using-the-paginatescan-convenience-method"></a>



The SDK provides convenience methods called `paginateScan` and `paginateQuery` that do this work for you, making the repeated requests behind the scenes. Specify the maximum number of items to read per request using the standard `Limit` parameter.

```
const { DynamoDBClient, paginateScan } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({});

async function paginatedScanUsingPaginator() {
  const params = {
    TableName: "products",
    Limit: 100
  };

  const paginator = paginateScan({client}, params);

  let pageCount = 0;

  for await (const page of paginator) {
    pageCount++;
    console.log(`Page ${pageCount}, Items:`, page.Items);
  }
}

paginatedScanUsingPaginator().catch((err) => {
  console.error(err);
});
```

**Note**  
Performing full table scans regularly is not a recommended access pattern unless the table is small.

## Specifying configuration
<a name="programming-with-javascript-config"></a>

**Topics**
+ [Config for timeouts](#programming-with-javascript-config-timeouts)
+ [Config for keep-alive](#programming-with-javascript-config-keep-alive)
+ [Config for retries](#programming-with-javascript-config-retries)

When setting up the `DynamoDBClient`, you can specify various configuration overrides by passing a configuration object to the constructor. For example, you can specify the Region to connect to if it's not already known to the calling context or the endpoint URL to use. This is useful if you want to target a DynamoDB Local instance for development purposes.

```
const client = new DynamoDBClient({
  region: "eu-west-1",
  endpoint: "http://localhost:8000",
});
```

### Config for timeouts
<a name="programming-with-javascript-config-timeouts"></a>

DynamoDB uses HTTPS for client-server communication. You can control some aspects of the HTTP layer by providing a `NodeHttpHandler` object. For example, you can adjust the key timeout values `connectionTimeout` and `requestTimeout`. The `connectionTimeout` is the maximum duration, in milliseconds, that the client will wait while trying to establish a connection before giving up.

The `requestTimeout` defines how long the client will wait for a response after a request has been sent, also in milliseconds. The defaults for both are zero, meaning the timeout is disabled and there's no limit on how long the client will wait if the response doesn't arrive. Set both timeouts to reasonable values so that, in the event of a network issue, the request errors out and a new request can be initiated. For example:

```
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { NodeHttpHandler } from "@smithy/node-http-handler";

const requestHandler = new NodeHttpHandler({
  connectionTimeout: 2000,
  requestTimeout: 2000,
});

const client = new DynamoDBClient({
  requestHandler
});
```

**Note**  
The preceding example uses the [Smithy](https://smithy.io/2.0/index.html) import. Smithy is an open-source language for defining services and SDKs, maintained by AWS.

In addition to configuring timeout values, you can set the maximum number of sockets, which allows for an increased number of concurrent connections per origin. The developer guide includes [details on configuring the `maxSockets` parameter](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-configuring-maxsockets.html).

### Config for keep-alive
<a name="programming-with-javascript-config-keep-alive"></a>

When using HTTPS, the first request always takes some back-and-forth communication to establish a secure connection. HTTP Keep-Alive allows subsequent requests to reuse the already-established connection, making the requests more efficient and lowering latency. HTTP Keep-Alive is enabled by default with JavaScript V3. 

There's a limit to how long an idle connection can be kept alive. Consider sending periodic requests, maybe every minute, if you have an idle connection but want the next request to use an already-established connection.
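The periodic-request idea can be sketched as a small helper. Everything here is illustrative: `sendPing` is a hypothetical async callback standing in for whatever inexpensive call you choose to send.

```javascript
// Sketch: fire a lightweight request on an interval so the connection
// stays warm. sendPing is a hypothetical async callback supplied by you.
function keepWarm(sendPing, intervalMs = 60_000) {
  const timer = setInterval(() => {
    // Ignore ping failures; the next real request will surface any problem.
    sendPing().catch(() => {});
  }, intervalMs);
  timer.unref?.(); // don't let the ping timer keep the Node.js process alive
  return () => clearInterval(timer); // call the returned function to stop
}
```

For example, `keepWarm(() => client.send(someCheapCommand))` would keep the connection warm, where `someCheapCommand` is any lightweight request you're comfortable sending once a minute.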

**Note**  
In the older V2 of the SDK, keep-alive was off by default, meaning each connection was closed immediately after use. If you're using V2, you can override this setting.

### Config for retries
<a name="programming-with-javascript-config-retries"></a>

When the SDK receives an error response and determines that the error is retryable, such as a throttling exception or a temporary service exception, it retries the request. This happens invisibly to you as the caller, except that you might notice the request took longer to succeed.

The SDK for JavaScript V3 will make 3 total requests, by default, before giving up and passing the error into the calling context. You can adjust the number and frequency of these retries.

The `DynamoDBClient` constructor accepts a `maxAttempts` setting that limits how many attempts will happen. The following example raises the value from the default of 3 to a total of 5. Setting it to 0 or 1 indicates that you don't want any automatic retries and want to handle any retryable errors yourself within your catch block.

```
const client = new DynamoDBClient({
  maxAttempts: 5,
});
```

You can also control the timing of the retries with a custom retry strategy. To do this, import the `util-retry` utility package and create a custom backoff function that calculates the wait time between retries based on the current retry count.

The following example makes a maximum of 5 attempts, with delays of 15, 30, 90, and 360 milliseconds should the first attempt fail. The custom backoff function, `calculateRetryBackoff`, calculates the delays by accepting the retry attempt number (starting at 1 for the first retry) and returning how many milliseconds to wait for that request.

```
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { ConfiguredRetryStrategy } = require("@aws-sdk/util-retry");

const calculateRetryBackoff = (attempt) => {
  const backoffTimes = [15, 30, 90, 360];
  return backoffTimes[attempt - 1] || 0;
};

const client = new DynamoDBClient({
  retryStrategy: new ConfiguredRetryStrategy(
    5, // max attempts.
    calculateRetryBackoff // backoff function.
  ),
});
```

## Waiters
<a name="programming-with-javascript-waiters"></a>

The DynamoDB client includes two useful [waiter functions](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/wait/index.html#cli-aws-dynamodb-wait) that you can use when creating, modifying, or deleting tables and you want your code to block until the table modification has finished. For example, you can create a table, call the `waitUntilTableExists` function, and your code will block until the table has become **ACTIVE**. The waiter internally polls the DynamoDB service with a `describe-table` call every 20 seconds.

```
import {waitUntilTableExists, waitUntilTableNotExists} from "@aws-sdk/client-dynamodb";

… <create table details>

const results = await waitUntilTableExists({client: client, maxWaitTime: 180}, {TableName: "products"});
if (results.state == 'SUCCESS') {
  return results.reason.Table
}
console.error(`${results.state} ${results.reason}`);
```

The `waitUntilTableExists` function returns control only when it can perform a `describe-table` command that shows the table status as **ACTIVE**. This means you can use `waitUntilTableExists` to wait for the completion of table creation, as well as modifications such as adding a global secondary index (GSI), which may take some time to apply before the table returns to **ACTIVE** status.

## Error handling
<a name="programming-with-javascript-error-handling"></a>

In the early examples here, we've caught all errors broadly. However, in practical applications, it's important to discern between various error types and implement more precise error handling.

DynamoDB error responses contain metadata, including the name of the error. You can catch errors then match against the possible string names of error conditions to determine how to proceed. For server-side errors, you can leverage the `instanceof` operator with the error types exported by the `@aws-sdk/client-dynamodb` package to manage error handling efficiently.

It's important to note that these errors only manifest after all retries have been exhausted. If an error is retried and is eventually followed by a successful call, then from the code's perspective there's no error, just slightly elevated latency. Retries do show up in Amazon CloudWatch charts as unsuccessful requests, such as throttled or errored requests. If the client reaches the maximum retry count, it gives up and generates an exception. This is the client's way of saying it's not going to retry.

Below is a snippet to catch the error and take action based on the type of error that was returned.

```
import {
  ResourceNotFoundException,
  ProvisionedThroughputExceededException,
  DynamoDBServiceException,
} from "@aws-sdk/client-dynamodb";

try {
  await client.send(someCommand);
} catch (e) {
    if (e instanceof ResourceNotFoundException) {
      // Handle ResourceNotFoundException
    } else if (e instanceof ProvisionedThroughputExceededException) {
      // Handle ProvisionedThroughputExceededException
    } else if (e instanceof DynamoDBServiceException) {
      // Handle DynamoDBServiceException
    } else {
      // Other errors such as those from the SDK
      if (e.name === "TimeoutError") {
        // Handle SDK TimeoutError.
      } else {
        // Handle other errors.
      }
    }
}
```

See [Error handling with DynamoDB](Programming.Errors.md) for common error strings in the *DynamoDB Developer Guide*. The exact errors possible with any particular API call can be found in the documentation for that API call, such as the [Query API docs](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html).

The metadata of errors includes additional properties, depending on the error. For a `TimeoutError`, the metadata includes the number of attempts that were made and the `totalRetryDelay`, as shown below.

```
{
  "name": "TimeoutError",
  "$metadata": {
    "attempts": 3,
    "totalRetryDelay": 199
  }
}
```

If you manage your own retry policy, you'll want to differentiate between throttles and errors:
+ A **throttle** (indicated by a `ProvisionedThroughputExceededException` or `ThrottlingException`) indicates a healthy service that's informing you that you've exceeded your read or write capacity on a DynamoDB table or partition. Every millisecond that passes, a bit more read or write capacity is made available, so you can retry quickly, such as every 50 ms, to attempt to access that newly released capacity.

   With throttles you don't especially need exponential backoff because throttles are lightweight for DynamoDB to return and incur no per-request charge to you. Exponential backoff assigns longer delays to client threads that have already waited the longest, which statistically extends the p50 and p99 outward.
+ An **error** (indicated by an `InternalServerError` or a `ServiceUnavailable`, among others) indicates a transient issue with the service, possibly affecting the whole table or just the partition you're reading from or writing to. With errors, you can pause longer before retries, such as 250 ms or 500 ms, and use jitter to stagger the retries.
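A minimal sketch of this policy, assuming you handle retries yourself (the error names come from the bullets above; the exact delays are illustrative):

```javascript
// Sketch: choose a retry delay based on whether the failure was a
// throttle (retry quickly) or a transient server error (pause, plus jitter).
const THROTTLE_ERRORS = new Set([
  "ProvisionedThroughputExceededException",
  "ThrottlingException",
]);

function retryDelayMs(errorName, attempt) {
  if (THROTTLE_ERRORS.has(errorName)) {
    return 50; // capacity is released continuously, so retry quickly
  }
  // Transient error: back off longer and add jitter to stagger clients.
  const base = attempt === 1 ? 250 : 500;
  return base + Math.floor(Math.random() * 100);
}
```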

## Logging
<a name="programming-with-javascript-logging"></a>

Turn on logging to get more details about what the SDK is doing. You can set a parameter on the `DynamoDBClient` as shown in the example below. More log information will appear in the console and includes metadata such as the status code and the consumed capacity. If you run the code locally in a terminal window, the logs appear there. If you run the code in AWS Lambda, and you have Amazon CloudWatch logs set up, then the console output will be written there.

```
const client = new DynamoDBClient({
  logger: console
});
```

You can also hook into the internal SDK activities and perform custom logging as certain events happen. The example below uses the client's `middlewareStack` to intercept each request as it's being sent from the SDK and logs it as it's happening.

```
const client = new DynamoDBClient({});

client.middlewareStack.add(
  (next) => async (args) => {
    console.log("Sending request from AWS SDK", { request: args.request });
    return next(args);
  },
  {
    step: "build",
    name: "log-ddb-calls",
  }
);
```

The `MiddlewareStack` provides a powerful hook for observing and controlling SDK behavior. See the blog [Introducing Middleware Stack in Modular AWS SDK for JavaScript](https://aws.amazon.com/blogs/developer/middleware-stack-modular-aws-sdk-js/), for more information.
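As another sketch of the same pattern, here's a middleware that times each call. It follows the `(next) => async (args) => …` shape shown above and could be registered with `client.middlewareStack.add` in the same way; the logging format is illustrative.

```javascript
// Sketch: a middleware that logs how long each SDK call takes. It wraps
// the next handler in the chain and passes the arguments through unchanged.
const timingMiddleware = (next) => async (args) => {
  const start = Date.now();
  const result = await next(args);
  console.log(`SDK call took ${Date.now() - start} ms`);
  return result;
};
```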

## Considerations
<a name="programming-with-javascript-considerations"></a>

When implementing the AWS SDK for JavaScript in your project, here are some further factors to consider.

**Module systems**  
The SDK supports two module systems, CommonJS and ES (ECMAScript). CommonJS uses the `require` function, while ES uses the `import` keyword.  

1. **CommonJS** – `const { DynamoDBClient, PutItemCommand } = require("@aws-sdk/client-dynamodb");`

1. **ES (ECMAScript)** – `import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";`

The project type dictates the module system to use and is specified in the `type` section of your `package.json` file. The default is CommonJS. Use `"type": "module"` to indicate an ES project. If you have an existing Node.js project that uses the CommonJS package format, you can still add functions that use the more modern SDK V3 import syntax by naming your function files with the `.mjs` extension. This allows those code files to be treated as ES (ECMAScript) modules.

**Asynchronous operations**  
You'll see many code samples using callbacks and promises to handle the result of DynamoDB operations. With modern JavaScript this complexity is no longer needed and developers can take advantage of the more succinct and readable async/await syntax for asynchronous operations.
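For illustration, here's the same hypothetical call written both ways; `client.send(command)` stands in for any SDK operation, and the function names are made up for this sketch:

```javascript
// Promise chaining: workable, but nesting grows with each dependent call.
function getItemsWithPromises(client, command) {
  return client.send(command).then((response) => response.Items);
}

// async/await: the same logic reads top to bottom, and errors can be
// handled with an ordinary try/catch at the call site.
async function getItemsWithAwait(client, command) {
  const response = await client.send(command);
  return response.Items;
}
```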

**Web browser runtime**  
Web and mobile developers building with React or React Native can use the SDK for JavaScript in their projects. With the earlier V2 of the SDK, web developers had to load the full SDK into the browser, referencing an SDK script hosted at https://sdk.amazonaws.com/js/.   
With V3, it's possible to bundle just the required V3 client modules and all required JavaScript functions into a single JavaScript file using Webpack, and add it in a script tag in the `<head>` of your HTML pages, as explained in the [Getting started in a browser script](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/getting-started-browser.html) section of the SDK documentation.

**DAX data plane operations**  
The Amazon DynamoDB Accelerator (DAX) data plane operations are supported by the SDK for JavaScript V3.

# Programming DynamoDB with the AWS SDK for Java 2.x
<a name="ProgrammingWithJava"></a>

This programming guide provides an orientation for programmers who want to use Amazon DynamoDB with Java. The guide covers different concepts including abstraction layers, configuration management, error handling, controlling retry policies, and managing keep-alive.

**Topics**
+ [About the AWS SDK for Java 2.x](#AboutProgrammingWithJavaSDK)
+ [Getting started](#GetStartedProgrammingWithJavaSDK)
+ [SDK for Java 2.x documentation](#ProgrammingWithJavaUseDoc)
+ [Supported interfaces](#JavaInterfaces)
+ [Additional code examples](#AdditionalCodeEx)
+ [Sync and async programming](#SyncAsyncProgramming)
+ [HTTP clients](#HttpClients)
+ [Config](#ConfigHttpClient)
+ [Error handling](#JavaErrorHandling)
+ [AWS request ID](#JavaRequestID)
+ [Logging](#JavaLogging)
+ [Pagination](#JavaPagination)
+ [Data class annotations](#JavaDataClassAnnotation)

## About the AWS SDK for Java 2.x
<a name="AboutProgrammingWithJavaSDK"></a>

You can access DynamoDB from Java using the official AWS SDK for Java. The SDK for Java has two versions: 1.x and 2.x. The end-of-support for 1.x was [announced](https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-aws-sdk-for-java-v1-x-on-december-31-2025/) on January 12, 2024. It will enter maintenance mode on July 31, 2024, and its end-of-support is due on December 31, 2025. For new development, we highly recommend that you use 2.x, which was first released in 2018. This guide exclusively targets 2.x and focuses only on the parts of the SDK relevant to DynamoDB.

For information about maintenance and support for the AWS SDKs, see [AWS SDK and Tools maintenance policy](https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html) and [AWS SDKs and Tools version support matrix](https://docs.aws.amazon.com/sdkref/latest/guide/version-support-matrix.html) in the *AWS SDKs and Tools Reference Guide*.

The AWS SDK for Java 2.x is a major rewrite of the 1.x code base. The SDK for Java 2.x supports modern Java features, such as the non-blocking I/O introduced in Java 8. The SDK for Java 2.x also adds support for pluggable HTTP client implementations to provide more network connection flexibility and configuration options.

A noticeable change between the SDK for Java 1.x and the SDK for Java 2.x is the use of a new package name. The Java 1.x SDK uses the `com.amazonaws` package name, while the Java 2.x SDK uses `software.amazon.awssdk`. Similarly, Maven artifacts for the Java 1.x SDK use the `com.amazonaws` `groupId`, while Java 2.x SDK artifacts use the `software.amazon.awssdk` `groupId`.

**Important**  
The AWS SDK for Java 1.x has a DynamoDB package named `com.amazonaws.dynamodbv2`. The "v2" in the package name doesn't indicate that it's for Java 2 (J2SE). Rather, "v2" indicates that the package supports the [second version](CurrentAPI.md) of the DynamoDB low-level API instead of the [original version](Appendix.APIv20111205.md) of the low-level API.

### Support for Java versions
<a name="SupportedJavaVersions"></a>

The AWS SDK for Java 2.x provides full support for long-term support (LTS) [Java releases](https://github.com/aws/aws-sdk-java-v2?tab=readme-ov-file#maintenance-and-support-for-java-versions).

## Getting started with the AWS SDK for Java 2.x
<a name="GetStartedProgrammingWithJavaSDK"></a>

The following tutorial shows you how to use [Apache Maven](https://maven.apache.org/) for defining dependencies for the SDK for Java 2.x. This tutorial also shows you how to write the code that connects to DynamoDB for listing the available DynamoDB tables. The tutorial in this guide is based on the tutorial [Get started with the AWS SDK for Java 2.x](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html) in the *AWS SDK for Java 2.x Developer Guide*. We've edited this tutorial to make calls to DynamoDB instead of Amazon S3.

**Topics**
+ [Step 1: Set up for this tutorial](#GetStartedJavaSetup)
+ [Step 2: Create the project](#GetStartedJavaProjectSetup)
+ [Step 3: Write the code](#GetStartedJavaCode)
+ [Step 4: Build and run the application](#GetStartedRunJava)

### Step 1: Set up for this tutorial
<a name="GetStartedJavaSetup"></a>

Before you begin this tutorial, you need the following:
+ Permission to access DynamoDB.
+ A Java development environment that's configured with single sign-on access to AWS services using the AWS access portal.

To set up for this tutorial, follow the instructions in [Setup overview](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/setup.html#setup-overview) in the *AWS SDK for Java 2.x Developer Guide*. After you [configure your development environment with single sign-on access](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/setup.html#setup-credentials) for the Java SDK and you have an [active AWS access portal session](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/setup.html#setup-login-sso), then continue to [Step 2](#GetStartedJavaProjectSetup) of this tutorial.

### Step 2: Create the project
<a name="GetStartedJavaProjectSetup"></a>

To create the project for this tutorial, you run a Maven command that prompts you for input on how to configure the project. After all input is entered and confirmed, Maven finishes building out the project by creating a `pom.xml` file and creating stub Java files.

1. Open a terminal or command prompt window and navigate to a directory of your choice, for example, your `Desktop` or `Home` folder.

1. Enter the following command at the terminal, and then press **Enter**.

   ```
   mvn archetype:generate \
      -DarchetypeGroupId=software.amazon.awssdk \
      -DarchetypeArtifactId=archetype-app-quickstart \
      -DarchetypeVersion=2.22.0
   ```

1. For each prompt, enter the value listed in the second column.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProgrammingWithJava.html)

1. After you enter the last value, Maven lists the choices that you made. To confirm, enter **Y**. Or, enter **N**, and then re-enter your choices.

Maven creates a project folder named `getstarted` based on the `artifactId` value that you entered. Inside the `getstarted` folder, find a file named `README.md` that you can review, a `pom.xml` file, and a `src` directory.

Maven builds the following directory tree.

```
getstarted
 ├── README.md
 ├── pom.xml
 └── src
     ├── main
     │   ├── java
     │   │   └── org
     │   │       └── example
     │   │           ├── App.java
     │   │           ├── DependencyFactory.java
     │   │           └── Handler.java
     │   └── resources
     │       └── simplelogger.properties
     └── test
         └── java
             └── org
                 └── example
                     └── HandlerTest.java
 
 10 directories, 7 files
```

The following shows the contents of the `pom.xml` project file.

#### `pom.xml`
<a name="ProjectSetupCollapse2"></a>

The `dependencyManagement` section contains a dependency to the AWS SDK for Java 2.x, and the `dependencies` section has a dependency for DynamoDB. Specifying these dependencies forces Maven to include the relevant `.jar` files in your Java class path. By default, the AWS SDK doesn't include all the classes for all AWS services. For DynamoDB, if you use the low-level interface, you need a dependency on the `dynamodb` artifact; if you use the high-level interface, you need a dependency on the `dynamodb-enhanced` artifact. If you don't include the relevant dependencies, your code can't compile. The project uses Java 1.8 because of the `1.8` value in the `maven.compiler.source` and `maven.compiler.target` properties.

```
<?xml version="1.0" encoding="UTF-8"?>
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
     <modelVersion>4.0.0</modelVersion>
 
     <groupId>org.example</groupId>
     <artifactId>getstarted</artifactId>
     <version>1.0-SNAPSHOT</version>
     <packaging>jar</packaging>
     <properties>
         <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
         <maven.compiler.source>1.8</maven.compiler.source>
         <maven.compiler.target>1.8</maven.compiler.target>
         <maven.shade.plugin.version>3.2.1</maven.shade.plugin.version>
         <maven.compiler.plugin.version>3.6.1</maven.compiler.plugin.version>
         <exec-maven-plugin.version>1.6.0</exec-maven-plugin.version>
         <aws.java.sdk.version>2.22.0</aws.java.sdk.version> <!-- SDK version picked up from archetype version. -->
         <slf4j.version>1.7.28</slf4j.version>
         <junit5.version>5.8.1</junit5.version>
     </properties>
 
     <dependencyManagement>
         <dependencies>
             <dependency>
                 <groupId>software.amazon.awssdk</groupId>
                 <artifactId>bom</artifactId>
                 <version>${aws.java.sdk.version}</version>
                 <type>pom</type>
                 <scope>import</scope>
             </dependency>
         </dependencies>
     </dependencyManagement>
 
     <dependencies>
         <dependency>
             <groupId>software.amazon.awssdk</groupId>
             <artifactId>dynamodb</artifactId>  <!-- DynamoDB dependency -->
             <exclusions>
                 <exclusion>
                     <groupId>software.amazon.awssdk</groupId>
                     <artifactId>netty-nio-client</artifactId>
                 </exclusion>
                 <exclusion>
                     <groupId>software.amazon.awssdk</groupId>
                     <artifactId>apache-client</artifactId>
                 </exclusion>
             </exclusions>
         </dependency>
 
         <dependency>
             <groupId>software.amazon.awssdk</groupId>
             <artifactId>sso</artifactId> <!-- Required for identity center authentication. -->
         </dependency>
 
         <dependency>
             <groupId>software.amazon.awssdk</groupId>
             <artifactId>ssooidc</artifactId> <!-- Required for identity center authentication. -->
         </dependency>
 
         <dependency>
             <groupId>software.amazon.awssdk</groupId>
             <artifactId>apache-client</artifactId> <!-- HTTP client specified. -->
             <exclusions>
                 <exclusion>
                     <groupId>commons-logging</groupId>
                     <artifactId>commons-logging</artifactId>
                 </exclusion>
             </exclusions>
         </dependency>
 
         <dependency>
             <groupId>org.slf4j</groupId>
             <artifactId>slf4j-api</artifactId>
             <version>${slf4j.version}</version>
         </dependency>
 
         <dependency>
             <groupId>org.slf4j</groupId>
             <artifactId>slf4j-simple</artifactId>
             <version>${slf4j.version}</version>
         </dependency>
 
         <!-- Needed to adapt Apache Commons Logging used by Apache HTTP Client to Slf4j to avoid
         ClassNotFoundException: org.apache.commons.logging.impl.LogFactoryImpl during runtime -->
         <dependency>
             <groupId>org.slf4j</groupId>
             <artifactId>jcl-over-slf4j</artifactId>
             <version>${slf4j.version}</version>
         </dependency>
 
         <!-- Test Dependencies -->
         <dependency>
             <groupId>org.junit.jupiter</groupId>
             <artifactId>junit-jupiter</artifactId>
             <version>${junit5.version}</version>
             <scope>test</scope>
         </dependency>
     </dependencies>
 
     <build>
         <plugins>
             <plugin>
                 <groupId>org.apache.maven.plugins</groupId>
                 <artifactId>maven-compiler-plugin</artifactId>
                 <version>${maven.compiler.plugin.version}</version>
             </plugin>
         </plugins>
     </build>
 
 </project>
```

### Step 3: Write the code
<a name="GetStartedJavaCode"></a>

The following code shows the `App` class that Maven creates. The `main` method is the entry point into the application, which creates an instance of the `Handler` class and then calls its `sendRequest` method.

#### `App` class
<a name="projectsetup-collapse2"></a>

```
package org.example;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 public class App {
     private static final Logger logger = LoggerFactory.getLogger(App.class);
 
     public static void main(String... args) {
         logger.info("Application starts");
 
         Handler handler = new Handler();
         handler.sendRequest();
 
         logger.info("Application ends");
     }
 }
```

The `DependencyFactory` class that Maven creates contains the `dynamoDbClient` factory method that builds and returns a [`DynamoDbClient`](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/DynamoDbClient.html) instance. The `DynamoDbClient` instance uses an instance of the Apache-based HTTP client. This is because you specified `apache-client` when Maven prompted you for which HTTP client to use.

The following code shows the `DependencyFactory` class.

#### DependencyFactory class
<a name="code-collapse2"></a>

```
package org.example;
 
 import software.amazon.awssdk.http.apache.ApacheHttpClient;
 import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
 
 /**
  * The module containing all dependencies required by the {@link Handler}.
  */
 public class DependencyFactory {
 
     private DependencyFactory() {}
 
     /**
      * @return an instance of DynamoDbClient
      */
     public static DynamoDbClient dynamoDbClient() {
         return DynamoDbClient.builder()
                        .httpClientBuilder(ApacheHttpClient.builder())
                        .build();
     }
 }
```

The `Handler` class contains the main logic of your program. When an instance of `Handler` is created in the `App` class, the `DependencyFactory` furnishes the `DynamoDbClient` service client. Your code uses the `DynamoDbClient` instance to call DynamoDB.

Maven generates the following `Handler` class with a `TODO` comment. The next step in the tutorial replaces the *`TODO`* comment with code.

#### `Handler` class, Maven-generated
<a name="code-collapsible3"></a>

```
package org.example;
 
 import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
 
 
 public class Handler {
     private final DynamoDbClient dynamoDbClient;
 
     public Handler() {
         dynamoDbClient = DependencyFactory.dynamoDbClient();
     }
 
     public void sendRequest() {
         // TODO: invoking the API calls using dynamoDbClient.
     }
 }
```

To fill in the logic, replace the entire contents of the `Handler` class with the following code. The `sendRequest` method is filled in and the necessary imports are added.

#### `Handler` class, implemented
<a name="code-collapse4"></a>

The following code uses the [`DynamoDbClient`](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/DynamoDbClient.html) instance to retrieve a list of existing tables. If tables exist for a given account and AWS Region, then the code uses the `Logger` instance to log the names of these tables.

```
package org.example;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
 import software.amazon.awssdk.services.dynamodb.model.ListTablesResponse;
 
 
 public class Handler {
     private final DynamoDbClient dynamoDbClient;
 
     public Handler() {
         dynamoDbClient = DependencyFactory.dynamoDbClient();
     }
 
     public void sendRequest() {
         Logger logger = LoggerFactory.getLogger(Handler.class);
 
         logger.info("calling the DynamoDB API to get a list of existing tables");
         ListTablesResponse response = dynamoDbClient.listTables();
 
         if (!response.hasTableNames()) {
             logger.info("No existing tables found for the configured account & region");
         } else {
             response.tableNames().forEach(tableName -> logger.info("Table: " + tableName));
         }
     }
 }
```

### Step 4: Build and run the application
<a name="GetStartedRunJava"></a>

After you create the project and it contains the complete `Handler` class, build and run the application.

1. Make sure that you have an active AWS IAM Identity Center session. To confirm, run the AWS Command Line Interface (AWS CLI) command `aws sts get-caller-identity` and check the response. If you don't have an active session, then see [Sign in using the AWS CLI](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/setup.html#setup-login-sso) for instructions.

1. Open a terminal or command prompt window and navigate to your project directory `getstarted`.

1. To build your project, run the following command:

   ```
   mvn clean package
   ```

1. To run the application, run the following command:

   ```
   mvn exec:java -Dexec.mainClass="org.example.App"
   ```

The application logs the names of your existing DynamoDB tables, or reports that no tables were found for the configured account and Region.

#### Success
<a name="GetStartedSuccessJava"></a>

If your Maven project built and ran without error, then congratulations! You've successfully built your first Java application using the SDK for Java 2.x.

#### Cleanup
<a name="GetStartedCleanupJava"></a>

To clean up the resources that you created during this tutorial, delete the project folder `getstarted`.

## Reviewing the AWS SDK for Java 2.x documentation
<a name="ProgrammingWithJavaUseDoc"></a>

The [AWS SDK for Java 2.x Developer Guide](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/home.html) covers all aspects of the SDK across all AWS services. We recommend that you review the following topics:
+ [Migrate from version 1.x to 2.x](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration.html) – Includes a detailed explanation of the differences between 1.x and 2.x. This topic also contains instructions about how to use both major versions side by side.
+ [DynamoDB guide for Java 2.x SDK](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-dynamodb.html) – Shows you how to perform basic DynamoDB operations: creating a table, manipulating items, and retrieving items. These examples use the low-level interface. Java has several interfaces, as explained in the following section: [Supported interfaces](#JavaInterfaces).

**Tip**  
After you review these topics, bookmark the [AWS SDK for Java 2.x API Reference](https://sdk.amazonaws.com/java/api/latest/). It covers all AWS services, and we recommend that you use it as your main API reference.

## Supported interfaces
<a name="JavaInterfaces"></a>

The AWS SDK for Java 2.x supports the following interfaces, depending on the level of abstraction that you want.

**Topics**
+ [Low-level interface](#LowLevelInterface)
+ [High-level interface](#HighLevelInterface)
+ [Document interface](#DocumentInterface)
+ [Comparing interfaces with a `Query` example](#CompareJavaInterfacesQueryEx)

### Low-level interface
<a name="LowLevelInterface"></a>

The low-level interface provides a one-to-one mapping to the underlying service API. Every DynamoDB API operation is available through this interface. This means that the low-level interface provides complete functionality, but it's often more verbose and complex to use. For example, you have to use the `.s()` methods to hold strings and the `.n()` methods to hold numbers. The following example of [PutItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) inserts an item using the low-level interface.

```
import org.slf4j.*;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;

import java.util.Map;

public class PutItem {

    // Create a DynamoDB client with the default settings connected to the DynamoDB
    // endpoint in the default region based on the default credentials provider chain.
    private static final DynamoDbClient DYNAMODB_CLIENT = DynamoDbClient.create();
    private static final Logger LOGGER = LoggerFactory.getLogger(PutItem.class);

    private void putItem() {
        PutItemResponse response = DYNAMODB_CLIENT.putItem(PutItemRequest.builder()
                .item(Map.of(
                        "pk", AttributeValue.builder().s("123").build(),
                        "sk", AttributeValue.builder().s("cart#123").build(),
                        "item_data", AttributeValue.builder().s("YourItemData").build(),
                        "inventory", AttributeValue.builder().n("500").build()
                        // ... more attributes ...
                ))
                .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL)
                .tableName("YourTableName")
                .build());
        LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Units (WCU)");
    }
}
```

### High-level interface
<a name="HighLevelInterface"></a>

The high-level interface in the AWS SDK for Java 2.x is called the DynamoDB enhanced client. This interface provides a more idiomatic code authoring experience.

The enhanced client offers a way to map between client-side data classes and the DynamoDB tables designed to store that data. You define the relationships between tables and their corresponding model classes in your code. Then you can rely on the SDK to manage the data type conversions. For more information about the enhanced client, see [DynamoDB enhanced client API](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/dynamodb-enhanced-client.html) in the *AWS SDK for Java 2.x Developer Guide*.

The following example of [PutItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) uses the high-level interface. In this example, the `@DynamoDbBean`-annotated class `YourItem` creates a `TableSchema` that enables its direct use as input for the `putItem()` call.

```
import org.slf4j.*;
import software.amazon.awssdk.enhanced.dynamodb.*;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.*;
import software.amazon.awssdk.enhanced.dynamodb.model.*;
import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity;

public class DynamoDbEnhancedClientPutItem {
    private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build();
    private static final DynamoDbTable<YourItem> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.fromBean(YourItem.class));
    private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedClientPutItem.class);

    private void putItem() {
        PutItemEnhancedResponse<YourItem> response = DYNAMODB_TABLE.putItemWithResponse(PutItemEnhancedRequest.builder(YourItem.class)
                .item(new YourItem("123", "cart#123", "YourItemData", 500))
                .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL)
                .build());
        LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Units (WCU)");
    }

    @DynamoDbBean
    public static class YourItem {

        public YourItem() {}

        public YourItem(String pk, String sk, String itemData, int inventory) {
            this.pk = pk;
            this.sk = sk;
            this.itemData = itemData;
            this.inventory = inventory;
        }

        private String pk;
        private String sk;
        private String itemData;

        private int inventory;

        @DynamoDbPartitionKey
        public void setPk(String pk) {
            this.pk = pk;
        }

        public String getPk() {
            return pk;
        }

        @DynamoDbSortKey
        public void setSk(String sk) {
            this.sk = sk;
        }

        public String getSk() {
            return sk;
        }

        public void setItemData(String itemData) {
            this.itemData = itemData;
        }

        public String getItemData() {
            return itemData;
        }

        public void setInventory(int inventory) {
            this.inventory = inventory;
        }

        public int getInventory() {
            return inventory;
        }
    }
}
```

The AWS SDK for Java 1.x has its own high-level interface, often referred to by its main class `DynamoDBMapper`. In the AWS SDK for Java 2.x, the high-level interface is published in a separate package (and Maven artifact) named `software.amazon.awssdk.enhanced.dynamodb`, and it's often referred to by its main class `DynamoDbEnhancedClient`.

#### High-level interface using immutable data classes
<a name="HighLevelInterfaceImmutableDataClasses"></a>

The mapping feature of the DynamoDB enhanced client API also works with immutable data classes. An immutable class has only getters and requires a builder class that the SDK uses to create instances of the class. Immutability is a commonly used style in Java that developers use to create classes without side effects. Such classes behave more predictably in complex multi-threaded applications. Instead of using the `@DynamoDbBean` annotation as shown in the [high-level interface example](#HighLevelInterface), immutable classes use the `@DynamoDbImmutable` annotation, which takes the builder class as its input.

The following example creates a table schema from the immutable class `YourImmutableItem` and then provides the schema as input for the [PutItem](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) API call.

```
import org.slf4j.*;
import software.amazon.awssdk.enhanced.dynamodb.*;
import software.amazon.awssdk.enhanced.dynamodb.model.*;
import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity;

public class DynamoDbEnhancedClientImmutablePutItem {
    private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build();
    private static final DynamoDbTable<YourImmutableItem> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.fromImmutableClass(YourImmutableItem.class));
    private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedClientImmutablePutItem.class);

    private void putItem() {
        PutItemEnhancedResponse<YourImmutableItem> response = DYNAMODB_TABLE.putItemWithResponse(PutItemEnhancedRequest.builder(YourImmutableItem.class)
                .item(YourImmutableItem.builder()
                                        .pk("123")
                                        .sk("cart#123")
                                        .itemData("YourItemData")
                                        .inventory(500)
                                        .build())
                .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL)
                .build());
        LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Units (WCU)");
    }
}
```

The following example shows the immutable data class.

```
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.*;

@DynamoDbImmutable(builder = YourImmutableItem.YourImmutableItemBuilder.class)
class YourImmutableItem {
    private final String pk;
    private final String sk;
    private final String itemData;
    private final int inventory;
    public YourImmutableItem(YourImmutableItemBuilder builder) {
        this.pk = builder.pk;
        this.sk = builder.sk;
        this.itemData = builder.itemData;
        this.inventory = builder.inventory;
    }

    public static YourImmutableItemBuilder builder() { return new YourImmutableItemBuilder(); }

    @DynamoDbPartitionKey
    public String getPk() {
        return pk;
    }

    @DynamoDbSortKey
    public String getSk() {
        return sk;
    }

    public String getItemData() {
        return itemData;
    }

    public int getInventory() {
        return inventory;
    }

    static final class YourImmutableItemBuilder {
        private String pk;
        private String sk;
        private String itemData;
        private int inventory;

        private YourImmutableItemBuilder() {}

        public YourImmutableItemBuilder pk(String pk) { this.pk = pk; return this; }
        public YourImmutableItemBuilder sk(String sk) { this.sk = sk; return this; }
        public YourImmutableItemBuilder itemData(String itemData) { this.itemData = itemData; return this; }
        public YourImmutableItemBuilder inventory(int inventory) { this.inventory = inventory; return this; }

        public YourImmutableItem build() { return new YourImmutableItem(this); }
    }
}
```

#### High-level interface using immutable data classes and third-party boilerplate generation libraries
<a name="ImmutableDataClassesThirdPartyBoilerplateGenLib"></a>

Immutable data classes (shown in the previous example) require some boilerplate code, such as the getter logic on the data classes and the `Builder` classes. Third-party libraries, such as [Project Lombok](https://projectlombok.org/), can generate that boilerplate for you. Eliminating most of the boilerplate reduces the amount of code needed to work with immutable data classes and the AWS SDK, which improves the productivity and readability of your code. For more information, see [Use third-party libraries, such as Lombok](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/ddb-en-client-use-immut.html#ddb-en-client-use-immut-lombok) in the *AWS SDK for Java 2.x Developer Guide*.

The following example demonstrates how Project Lombok simplifies the code needed to use the DynamoDB enhanced client API.

```
import org.slf4j.*;
import software.amazon.awssdk.enhanced.dynamodb.*;
import software.amazon.awssdk.enhanced.dynamodb.model.*;
import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity;

public class DynamoDbEnhancedClientImmutableLombokPutItem {

    private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build();
    private static final DynamoDbTable<YourImmutableLombokItem> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.fromImmutableClass(YourImmutableLombokItem.class));
    private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedClientImmutableLombokPutItem.class);

    private void putItem() {
        PutItemEnhancedResponse<YourImmutableLombokItem> response = DYNAMODB_TABLE.putItemWithResponse(PutItemEnhancedRequest.builder(YourImmutableLombokItem.class)
                .item(YourImmutableLombokItem.builder()
                        .pk("123")
                        .sk("cart#123")
                        .itemData("YourItemData")
                        .inventory(500)
                        .build())
                .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL)
                .build());
        LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Units (WCU)");
    }
}
```

The following example shows the immutable data class itself.

```
import lombok.*;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.*;

@Builder
@DynamoDbImmutable(builder = YourImmutableLombokItem.YourImmutableLombokItemBuilder.class)
@Value
public class YourImmutableLombokItem {

    @Getter(onMethod_=@DynamoDbPartitionKey)
    String pk;
    @Getter(onMethod_=@DynamoDbSortKey)
    String sk;
    String itemData;
    int inventory;
}
```

The `YourImmutableLombokItem` class uses the following annotations that Project Lombok and the AWS SDK provide:
+ [@Builder](https://projectlombok.org/features/Builder) – Produces a builder API for the data class. Project Lombok provides this annotation.
+ [@DynamoDbImmutable](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/enhanced/dynamodb/mapper/annotations/DynamoDbImmutable.html) – Identifies the annotated class as a DynamoDB mappable entity. The AWS SDK provides this annotation.
+ [@Value](https://projectlombok.org/features/Value) – The immutable variant of `@Data`. By default, all fields are made private and final, and setters are not generated. Project Lombok provides this annotation.

### Document interface
<a name="DocumentInterface"></a>

The AWS SDK for Java 2.x Document interface avoids the need to specify data type descriptors; the data types are implied by the semantics of the data itself. This Document interface is similar to the Document interface in the AWS SDK for Java 1.x, but it has been redesigned.

The following example shows a `PutItem` call expressed using the Document interface and `EnhancedDocument`. To run commands against a DynamoDB table using the enhanced document API, you must first associate the table with your document table schema to create a `DynamoDbTable` resource object. The document table schema builder requires the primary index key and attribute converter providers.

You can use `AttributeConverterProvider.defaultProvider()` to convert document attributes of default types. You can change the overall default behavior with a custom `AttributeConverterProvider` implementation. You can also change the converter for a single attribute. The [AWS SDKs and Tools Reference Guide](https://docs.aws.amazon.com/sdkref/latest/guide/version-support-matrix.html) provides more details and examples about how to use custom converters. Their primary use is for attributes of your domain classes that don't have a default converter available. Using a custom converter, you can provide the SDK with the information it needs to write to or read from DynamoDB.

```
import org.slf4j.*;
import software.amazon.awssdk.enhanced.dynamodb.*;
import software.amazon.awssdk.enhanced.dynamodb.document.EnhancedDocument;
import software.amazon.awssdk.enhanced.dynamodb.model.*;
import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity;

public class DynamoDbEnhancedDocumentClientPutItem {
    private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build();
    private static final DynamoDbTable<EnhancedDocument> DYNAMODB_TABLE =
            ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.documentSchemaBuilder()
                            .addIndexPartitionKey(TableMetadata.primaryIndexName(),"pk", AttributeValueType.S)
                            .addIndexSortKey(TableMetadata.primaryIndexName(), "sk", AttributeValueType.S)
                            .attributeConverterProviders(AttributeConverterProvider.defaultProvider())
                            .build());

    private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedDocumentClientPutItem.class);

    private void putItem() {
        PutItemEnhancedResponse<EnhancedDocument> response = DYNAMODB_TABLE.putItemWithResponse(
                        PutItemEnhancedRequest.builder(EnhancedDocument.class)
                                .item(
                                    EnhancedDocument.builder()
                                            .attributeConverterProviders(AttributeConverterProvider.defaultProvider())
                                            .putString("pk", "123")
                                            .putString("sk", "cart#123")
                                            .putString("item_data", "YourItemData")
                                            .putNumber("inventory", 500)
                                            .build())
                                .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL)
                                .build());
        LOGGER.info("PutItem call consumed [" + response.consumedCapacity().capacityUnits() + "] Write Capacity Units (WCU)");
    }

}
```

To convert JSON documents to and from the native Amazon DynamoDB data types, you can use the following utility methods:
+ [EnhancedDocument.fromJson()](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/enhanced/dynamodb/document/EnhancedDocument.html#fromJson(java.lang.String)) – Creates a new `EnhancedDocument` instance from a JSON string.
+ [EnhancedDocument.toJson()](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/enhanced/dynamodb/document/EnhancedDocument.html#toJson()) – Creates a JSON string representation of the document that you can use in your application like any other JSON object.
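To illustrate the round trip, the following sketch builds a document from a JSON string and converts it back. This is a minimal example that assumes the enhanced client module is on the classpath; the cart item data is illustrative.

```java
import software.amazon.awssdk.enhanced.dynamodb.document.EnhancedDocument;

public class JsonRoundTrip {
    public static void main(String[] args) {
        // Build a document from a JSON string; attribute types are inferred
        // from the JSON values (strings map to S, numbers map to N, and so on).
        EnhancedDocument doc = EnhancedDocument.fromJson(
                "{\"pk\":\"123\",\"sk\":\"cart#123\",\"inventory\":500}");

        // Read a typed attribute back out of the document.
        System.out.println("sk: " + doc.getString("sk"));

        // Convert the document back to a JSON string for use elsewhere.
        System.out.println(doc.toJson());
    }
}
```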

### Comparing interfaces with a `Query` example
<a name="CompareJavaInterfacesQueryEx"></a>

This section shows the same [Query](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) call expressed using the various interfaces. To fine-tune the results of these queries, note the following:
+ A `Query` targets one specific partition key value, so you must specify the complete partition key.
+ To have the query target only cart items, the sort key has a key condition expression that uses `begins_with`.
+ We use `limit()` to limit the query to a maximum of 100 returned items.
+ We set `scanIndexForward` to false. By default, results are returned in ascending order of the sort key's UTF-8 bytes, which usually means the cart item with the lowest number is returned first. Setting `scanIndexForward` to false reverses the order, so the cart item with the highest number is returned first.
+ We apply a filter to remove any result that doesn't match the criteria. The filtered data consumes read capacity whether or not the item matches the filter.

**Example `Query` using the low-level interface**  
The following example queries a table named `YourTableName` using a `keyConditionExpression`. This limits the query to a specific partition key value and sort key value that begin with a specific prefix value. These key conditions limit the amount of data read from DynamoDB. Finally, the query applies a filter on the data retrieved from DynamoDB using a `filterExpression`.  

```
import org.slf4j.*;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;

import java.util.Map;

public class Query {

    // Create a DynamoDB client with the default settings connected to the DynamoDB 
    // endpoint in the default region based on the default credentials provider chain.
    private static final DynamoDbClient DYNAMODB_CLIENT = DynamoDbClient.builder().build();
    private static final Logger LOGGER = LoggerFactory.getLogger(Query.class);

    private static void query() {
        QueryResponse response = DYNAMODB_CLIENT.query(QueryRequest.builder()
                .expressionAttributeNames(Map.of("#name", "name"))
                .expressionAttributeValues(Map.of(
                    ":pk_val", AttributeValue.fromS("id#1"),
                    ":sk_val", AttributeValue.fromS("cart#"),
                    ":name_val", AttributeValue.fromS("SomeName")))
                .filterExpression("#name = :name_val")
                .keyConditionExpression("pk = :pk_val AND begins_with(sk, :sk_val)")
                .limit(100)
                .scanIndexForward(false)
                .tableName("YourTableName")
                .build());

        LOGGER.info("nr of items: " + response.count());
        LOGGER.info("First item pk: " + response.items().get(0).get("pk"));
        LOGGER.info("First item sk: " + response.items().get(0).get("sk"));
    }
}
```

**Example `Query` using the Document interface**  
The following example queries a table named `YourTableName` using the Document interface.  

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.enhanced.dynamodb.*;
import software.amazon.awssdk.enhanced.dynamodb.document.EnhancedDocument;
import software.amazon.awssdk.enhanced.dynamodb.model.*;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

import java.util.Map;

public class DynamoDbEnhancedDocumentClientQuery {

    // Create a DynamoDB client with the default settings connected to the DynamoDB 
    // endpoint in the default region based on the default credentials provider chain.
    private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build();
    private static final DynamoDbTable<EnhancedDocument> DYNAMODB_TABLE =
            ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.documentSchemaBuilder()
                    .addIndexPartitionKey(TableMetadata.primaryIndexName(),"pk", AttributeValueType.S)
                    .addIndexSortKey(TableMetadata.primaryIndexName(), "sk", AttributeValueType.S)
                    .attributeConverterProviders(AttributeConverterProvider.defaultProvider())
                    .build());
    private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedDocumentClientQuery.class);

    private void query() {
        PageIterable<EnhancedDocument> response = DYNAMODB_TABLE.query(QueryEnhancedRequest.builder()
                .filterExpression(Expression.builder()
                        .expression("#name = :name_val")
                        .expressionNames(Map.of("#name", "name"))
                        .expressionValues(Map.of(":name_val", AttributeValue.fromS("SomeName")))
                        .build())
                .limit(100)
                .queryConditional(QueryConditional.sortBeginsWith(Key.builder()
                        .partitionValue("id#1")
                        .sortValue("cart#")
                        .build()))
                .scanIndexForward(false)
                .build());

        LOGGER.info("nr of items: " + response.items().stream().count());
        LOGGER.info("First item pk: " + response.items().iterator().next().getString("pk"));
        LOGGER.info("First item sk: " + response.items().iterator().next().getString("sk"));

    }
}
```

**Example `Query` using the high-level interface**  
The following example queries a table named `YourTableName` using the DynamoDB enhanced client API.  

```
import org.slf4j.*;
import software.amazon.awssdk.enhanced.dynamodb.*;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.*;
import software.amazon.awssdk.enhanced.dynamodb.model.*;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

import java.util.Map;

public class DynamoDbEnhancedClientQuery {

    private static final DynamoDbEnhancedClient ENHANCED_DYNAMODB_CLIENT = DynamoDbEnhancedClient.builder().build();
    private static final DynamoDbTable<YourItem> DYNAMODB_TABLE = ENHANCED_DYNAMODB_CLIENT.table("YourTableName", TableSchema.fromBean(DynamoDbEnhancedClientQuery.YourItem.class));
    private static final Logger LOGGER = LoggerFactory.getLogger(DynamoDbEnhancedClientQuery.class);

    private void query() {
        PageIterable<YourItem> response = DYNAMODB_TABLE.query(QueryEnhancedRequest.builder()
                .filterExpression(Expression.builder()
                        .expression("#name = :name_val")
                        .expressionNames(Map.of("#name", "name"))
                        .expressionValues(Map.of(":name_val", AttributeValue.fromS("SomeName")))
                        .build())
                .limit(100)
                .queryConditional(QueryConditional.sortBeginsWith(Key.builder()
                        .partitionValue("id#1")
                        .sortValue("cart#")
                        .build()))
                .scanIndexForward(false)
                .build());

        LOGGER.info("nr of items: " + response.items().stream().count());
        LOGGER.info("First item pk: " + response.items().iterator().next().getPk());
        LOGGER.info("First item sk: " + response.items().iterator().next().getSk());
    }

    @DynamoDbBean
    public static class YourItem {

        public YourItem() {}

        public YourItem(String pk, String sk, String name) {
            this.pk = pk;
            this.sk = sk;
            this.name = name;
        }

        private String pk;
        private String sk;
        private String name;

        @DynamoDbPartitionKey
        public void setPk(String pk) {
            this.pk = pk;
        }

        public String getPk() {
            return pk;
        }

        @DynamoDbSortKey
        public void setSk(String sk) {
            this.sk = sk;
        }

        public String getSk() {
            return sk;
        }

        public void setName(String name) {
            this.name = name;
        }

        public String getName() {
            return name;
        }
    }
}
```
**High-level interface using immutable data classes**  
When you perform a `Query` with immutable data classes, the code is the same as the high-level interface example except for the construction of the entity class `YourImmutableItem` instead of `YourItem`. For more information, see the earlier [PutItem](#HighLevelInterfaceImmutableDataClasses) example.

**High-level interface using immutable data classes and third-party boilerplate generation libraries**  
When you perform a `Query` with Lombok-generated immutable data classes, the code is the same as the high-level interface example except for the construction of the entity class `YourImmutableLombokItem` instead of `YourItem`. For more information, see the earlier [PutItem](#ImmutableDataClassesThirdPartyBoilerplateGenLib) example.

## Additional code examples
<a name="AdditionalCodeEx"></a>

For additional examples of how to use DynamoDB with the SDK for Java 2.x, refer to the following code example repositories:
+ [Official AWS single-action code examples](https://docs.aws.amazon.com/code-library/latest/ug/java_2_dynamodb_code_examples.html)
+ [Community-maintained single-action code examples](https://github.com/aws-samples/aws-dynamodb-examples/tree/master/examples/SDK/java)
+ [Official AWS scenario-oriented code examples](https://github.com/aws-samples/aws-dynamodb-examples/tree/master/examples/SDK/java)

## Synchronous and asynchronous programming
<a name="SyncAsyncProgramming"></a>

The AWS SDK for Java 2.x provides both *synchronous* and *asynchronous* clients for AWS services, such as DynamoDB.

The `DynamoDbClient` and `DynamoDbEnhancedClient` classes provide synchronous methods that block your thread's execution until the client receives a response from the service. This client is the most straightforward way of interacting with DynamoDB if you have no need for asynchronous operations.

The `DynamoDbAsyncClient` and `DynamoDbEnhancedAsyncClient` classes provide asynchronous methods that return immediately, giving control back to the calling thread without waiting for a response. The non-blocking client supports high concurrency across a small number of threads, which provides efficient handling of I/O requests with minimal compute resources. This improves throughput and responsiveness.

The AWS SDK for Java 2.x uses Java's native support for non-blocking I/O, whereas the AWS SDK for Java 1.x had to simulate non-blocking I/O.

The asynchronous methods return before a response is available, so you need a way to get the response when it's ready. The asynchronous methods in the AWS SDK for Java return a [CompletableFuture](https://docs.oracle.com/javase/8/docs/api/index.html?java/util/concurrent/CompletableFuture.html) object that will contain the results of the asynchronous operation in the future. When you call `get()` or `join()` on a `CompletableFuture` object, your code blocks until the result is available. If you call these methods immediately after making the request, the behavior is similar to a plain synchronous call.

For more information about asynchronous programming, see [Use asynchronous programming](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/asynchronous.html) in the *AWS SDK for Java 2.x Developer Guide*.
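As a language-level illustration of the `CompletableFuture` behavior described above, the following sketch uses a plain stand-in task in place of an asynchronous SDK call (no AWS dependencies are involved):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    public static void main(String[] args) {
        // Stand-in for an async SDK call such as DynamoDbAsyncClient.getItem(...),
        // which returns immediately with a CompletableFuture.
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(() -> "item-data");

        // The calling thread is free to do other work here.

        // join() blocks until the result is available; calling it right after
        // making the request behaves like a plain synchronous call.
        System.out.println("Result: " + future.join());
    }
}
```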

## HTTP clients
<a name="HttpClients"></a>

Every service client is backed by an HTTP client that handles communication with AWS services. You can plug in alternative HTTP clients, choosing the one whose characteristics best fit your application. Some are more lightweight; some have more configuration options.

Some HTTP clients support only synchronous use, while others support only asynchronous use. For a flowchart that can help you select the optimal HTTP client for your workload, see [HTTP client recommendations](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/http-configuration.html#http-clients-recommend) in the *AWS SDK for Java 2.x Developer Guide*.

The following list presents some of the possible HTTP clients:

**Topics**
+ [Apache-based HTTP client](#ApacheHttpClient)
+ [`URLConnection`-based HTTP client](#URLConnHttpClient)
+ [Netty-based HTTP client](#NettyHttpClient)
+ [AWS CRT-based HTTP client](#AWSCRTHttpClient)

### Apache-based HTTP client
<a name="ApacheHttpClient"></a>

The [ApacheHttpClient](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/apache/ApacheHttpClient.html) class supports synchronous service clients. It's the default HTTP client for synchronous use. For information about configuring the `ApacheHttpClient` class, see [Configure the Apache-based HTTP client](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/http-configuration-apache.html) in the *AWS SDK for Java 2.x Developer Guide*.

### `URLConnection`-based HTTP client
<a name="URLConnHttpClient"></a>

The [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/urlconnection/UrlConnectionHttpClient.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/urlconnection/UrlConnectionHttpClient.html) class is another option for synchronous clients. It loads more quickly than the Apache-based HTTP client, but has fewer features. For information about configuring the `UrlConnectionHttpClient` class, see [Configure the URLConnection-based HTTP client](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/http-configuration-url.html) in the *AWS SDK for Java 2.x Developer Guide*.

### Netty-based HTTP client
<a name="NettyHttpClient"></a>

The `NettyNioAsyncHttpClient` class supports async clients. It's the default choice for async use. For information about configuring the `NettyNioAsyncHttpClient` class, see [Configure the Netty-based HTTP client](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/http-configuration-netty.html) in the *AWS SDK for Java 2.x Developer Guide*.

### AWS CRT-based HTTP client
<a name="AWSCRTHttpClient"></a>

The newer `AwsCrtHttpClient` and `AwsCrtAsyncHttpClient` classes from the AWS Common Runtime (CRT) libraries are additional options that support synchronous and asynchronous clients, respectively. Compared to other HTTP clients, AWS CRT offers:
+ Faster SDK startup time
+ Smaller memory footprint
+ Reduced latency
+ Connection health management
+ DNS load balancing

For information about configuring the `AwsCrtHttpClient` and `AwsCrtAsyncHttpClient` classes, see [Configure the AWS CRT-based HTTP clients](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/http-configuration-crt.html) in the *AWS SDK for Java 2.x Developer Guide*.

The AWS CRT-based HTTP client isn't the default because that would break backward compatibility for existing applications. However, for DynamoDB we recommend that you use the AWS CRT-based HTTP client for both sync and async uses.

For an introduction to the AWS CRT-based HTTP client, see [Announcing availability of the AWS CRT HTTP Client in the AWS SDK for Java 2.x](https://aws.amazon.com/blogs/developer/announcing-availability-of-the-aws-crt-http-client-in-the-aws-sdk-for-java-2-x/) on the *AWS Developer Tools Blog*.

## Configuring an HTTP client
<a name="ConfigHttpClient"></a>

When configuring a client, you can provide various configuration options, including:
+ Setting timeouts for different aspects of API calls.
+ Enabling TCP Keep-Alive.
+ Controlling the retry policy when encountering errors.
+ Specifying execution attributes that [Execution interceptor](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/interceptors.html) instances can modify. With execution interceptors, you can write code that intercepts the execution of your API requests and responses. This enables you to perform tasks such as publishing metrics and modifying requests in flight.
+ Adding or manipulating HTTP headers.
+ Enabling the tracking of [client-side performance metrics](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/metrics.html). Using this feature helps you to collect metrics about the service clients in your application and analyze the output in Amazon CloudWatch.
+ Specifying an alternate executor service to be used for scheduling tasks, such as async retry attempts and timeout tasks.

You control the configuration by providing a [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/client/config/ClientOverrideConfiguration.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/client/config/ClientOverrideConfiguration.html) object to the service client `Builder` class. You'll see this in some code examples in the following sections.

The `ClientOverrideConfiguration` provides standard configuration choices. The different pluggable HTTP clients have implementation-specific configuration possibilities as well.

**Topics**
+ [Timeout configuration](#TimeoutConfig)
+ [RetryMode](#RetryMode)
+ [DefaultsMode](#DefaultsMode)
+ [Keep-Alive configuration](#KeepAliveConfig)

### Timeout configuration
<a name="TimeoutConfig"></a>

You can adjust the client configuration to control various timeouts related to the service calls. DynamoDB provides lower latencies compared to other AWS services. Therefore, you might want to adjust these properties to lower timeout values so that you can fail fast if there's a networking issue.

You can customize the latency-related behavior by using `ClientOverrideConfiguration` on the DynamoDB client or by changing detailed configuration options on the underlying HTTP client implementation.

You can configure the following impactful properties using `ClientOverrideConfiguration`:
+ `apiCallAttemptTimeout` – The amount of time to wait for a single attempt for an HTTP request to complete before giving up and timing out.
+ `apiCallTimeout` – The amount of time that the client has to completely execute an API call. This includes the request handler execution that consists of all the HTTP requests, including retries.

The AWS SDK for Java 2.x provides [default values](https://github.com/aws/aws-sdk-java-v2/blob/a0c8a0af1fa572b16b5bd78f310594d642324156/http-client-spi/src/main/java/software/amazon/awssdk/http/SdkHttpConfigurationOption.java#L134) for some timeout options, such as connection timeout and socket timeouts. The SDK doesn't provide default values for API call timeouts or individual API call attempt timeouts. If these timeouts aren't set in the `ClientOverrideConfiguration`, then the SDK uses the socket timeout value instead for the overall API call timeout. The socket timeout has a default value of 30 seconds.

### RetryMode
<a name="RetryMode"></a>

A configuration related to timeouts that you should also consider is the `RetryMode` configuration object. This object defines a collection of retry behaviors.

The SDK for Java 2.x supports the following retry modes:
+ `legacy` – The default retry mode if you don't explicitly change it. This retry mode is specific to the Java SDK. It's characterized by up to three retries, or more for services such as DynamoDB, which has up to eight retries.
+ `standard` – Named "standard" because it's more consistent with other AWS SDKs. This mode waits for a random amount of time ranging from 0 ms to 1,000 ms for the first retry. If another retry is necessary, it picks another random time from 0 ms to 1,000 ms and multiplies it by two. If an additional retry is necessary, it does the same random pick multiplied by four, and so on. Each wait is capped at 20 seconds. This mode performs retries on more detected failure conditions than the `legacy` mode. For DynamoDB, it performs up to three total attempts unless you override this with [numRetries](#numRetries).
+ `adaptive` – Builds on `standard` mode and dynamically limits the rate of AWS requests to maximize success rate. This can occur at the expense of request latency. We don't recommend adaptive retry mode when predictable latency is important.

You can find an expanded definition of these retry modes in the [Retry behavior](https://docs.aws.amazon.com/sdkref/latest/guide/feature-retry-behavior.html) topic in the *AWS SDKs and Tools Reference Guide*.

#### Retry policies
<a name="RetryPolicies"></a>

All `RetryMode` configurations have a [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/RetryPolicy.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/RetryPolicy.html), which is built based on one or more [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/conditions/RetryCondition.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/conditions/RetryCondition.html) configurations. The [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/conditions/TokenBucketRetryCondition.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/conditions/TokenBucketRetryCondition.html) is especially important to the retry behavior of the DynamoDB SDK client implementation. This condition limits the number of retries that the SDK makes using a token bucket algorithm. Depending on the selected retry mode, throttling exceptions may or may not subtract tokens from the `TokenBucket`.

When a client encounters a retryable error, such as a throttling exception or a temporary server error, then the SDK automatically retries the request. You can control how many times and how quickly these retries happen.

When configuring a client, you can provide a `RetryPolicy` that supports the following parameters:
+ `numRetries` – The maximum number of retries that should be applied before a request is considered to be failed. The default value is 8 regardless of the retry mode that you use.
**Warning**  
Change this default value only after due consideration.
+ `backoffStrategy` – The [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/backoff/BackoffStrategy.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/backoff/BackoffStrategy.html) to apply to the retries, with [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/backoff/FullJitterBackoffStrategy.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/backoff/FullJitterBackoffStrategy.html) being the default strategy. This strategy computes an exponential delay between retries based on the current number of retries, a base delay, and a maximum backoff time, and then adds jitter to provide a bit of randomness. The base delay used in the exponential delay is 25 ms regardless of the retry mode.
+ `retryCondition` – The [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/conditions/RetryCondition.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/retry/conditions/RetryCondition.html) determines whether to retry a request at all. By default, it retries a specific set of HTTP status codes and exceptions that it believes are retryable. For most situations, the default configuration should be sufficient.
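
To make the default behavior concrete, the following sketch reproduces the full-jitter calculation in plain Java: an exponential ceiling derived from the base delay, capped at the maximum backoff time, with the actual wait drawn at random below that ceiling. This is an illustration of the algorithm, not SDK code, and the cap value shown is only an assumed example.

```
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

Duration baseDelay = Duration.ofMillis(25);   // SDK default base delay
Duration maxBackoff = Duration.ofSeconds(20); // assumed cap for illustration

int retryNumber = 3;
// Exponential ceiling: 25 ms * 2^3 = 200 ms, limited by the cap.
long ceilingMs = Math.min(maxBackoff.toMillis(),
        baseDelay.toMillis() * (1L << retryNumber));
// Full jitter: the actual wait is a random value from 0 up to the ceiling.
long delayMs = ThreadLocalRandom.current().nextLong(ceilingMs + 1);
System.out.println("Retry " + retryNumber + " waits " + delayMs + " ms");
```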

The following code provides an alternative retry policy. It specifies a total of five retries (six total requests). The first retry should occur after a delay of approximately 100ms, with each additional retry doubling that time exponentially, up to a maximum delay of one second.

```
DynamoDbClient client = DynamoDbClient.builder()
    .overrideConfiguration(ClientOverrideConfiguration.builder()
        .retryPolicy(RetryPolicy.builder()
            .backoffStrategy(FullJitterBackoffStrategy.builder()
                .baseDelay(Duration.ofMillis(100))
                .maxBackoffTime(Duration.ofSeconds(1))
                .build())
            .numRetries(5)
            .build())
        .build())
    .build();
```

### DefaultsMode
<a name="DefaultsMode"></a>

The timeout properties that `ClientOverrideConfiguration` and the `RetryMode` don't manage are typically configured implicitly by specifying a `DefaultsMode`.

The AWS SDK for Java 2.x (version 2.17.102 or later) introduced support for `DefaultsMode`. This feature provides a set of default values for common configurable settings, such as HTTP communication settings, retry behavior, service Regional endpoint settings, and potentially any SDK-related configuration. When you use this feature, you can get new configuration defaults tailored to common usage scenarios.

The default modes are standardized across all of the AWS SDKs. The SDK for Java 2.x supports the following default modes:
+ `legacy` – Provides default settings that vary by AWS SDK and that existed before `DefaultsMode` was established.
+ `standard` – Provides default non-optimized settings for most scenarios.
+ `in-region` – Builds on the standard mode and includes settings tailored for applications that call AWS services from within the same AWS Region.
+ `cross-region` – Builds on the standard mode and includes settings with high timeouts for applications that call AWS services in a different Region.
+ `mobile` – Builds on the standard mode and includes settings with high timeouts tailored for mobile applications with higher latencies.
+ `auto` – Builds on the standard mode and includes experimental features. The SDK attempts to discover the runtime environment to determine the appropriate settings automatically. The auto-detection is heuristics-based and does not provide 100% accuracy. If the runtime environment can't be determined, then standard mode is used. The auto-detection might query [Instance metadata and user data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html), which might introduce latency. If startup latency is critical to your application, we recommend choosing an explicit `DefaultsMode` instead.

You can configure the defaults mode in the following ways:
+ Directly on a client, through `AwsClientBuilder.Builder#defaultsMode(DefaultsMode)`.
+ On a configuration profile, through the `defaults_mode` profile file property.
+ Globally, through the `aws.defaultsMode` system property.
+ Globally, through the `AWS_DEFAULTS_MODE` environment variable.

**Note**  
For any mode other than `legacy`, the vended default values might change as best practices evolve. Therefore, if you're using a mode other than `legacy`, then we encourage you to perform testing when upgrading the SDK.

The [Smart configuration defaults](https://docs.aws.amazon.com/sdkref/latest/guide/feature-smart-config-defaults.html) in the *AWS SDKs and Tools Reference Guide* provides a list of configuration properties and their default values in the different default modes.

You choose the defaults mode value based on your application's characteristics and the AWS service that the application interacts with.

These values are configured with a broad selection of AWS services in mind. For a typical DynamoDB deployment in which both your DynamoDB tables and application are deployed in one Region, the `in-region` defaults mode is most relevant among the `standard` default modes.

**Example DynamoDB SDK client configuration tuned for low-latency calls**  
The following example adjusts the timeouts to lower values for an expected low-latency DynamoDB call.  

```
DynamoDbAsyncClient asyncClient = DynamoDbAsyncClient.builder()
    .defaultsMode(DefaultsMode.IN_REGION)
    .httpClientBuilder(AwsCrtAsyncHttpClient.builder())
    .overrideConfiguration(ClientOverrideConfiguration.builder()
        .apiCallTimeout(Duration.ofSeconds(3))
        .apiCallAttemptTimeout(Duration.ofMillis(500))
        .build())
    .build();
```
The individual HTTP client implementation may provide you with even more granular control over the timeout and connection usage behavior. For example, for the AWS CRT-based client, you can enable `ConnectionHealthConfiguration`, which enables the client to actively monitor the health of the used connections. For more information, see [Advanced configuration of AWS CRT-based HTTP clients](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/http-configuration-crt.html#configuring-the-crt-based-http-client) in the *AWS SDK for Java 2.x Developer Guide*.

### Keep-Alive configuration
<a name="KeepAliveConfig"></a>

Enabling keep-alive can reduce latencies by reusing connections. There are two different kinds of keep-alive: HTTP Keep-Alive and TCP Keep-Alive.
+ HTTP Keep-Alive attempts to maintain the HTTPS connection between the client and server so that later requests can reuse that connection. This avoids repeating the heavyweight connection establishment and TLS handshake on later requests. HTTP Keep-Alive is enabled by default on all clients.
+ TCP Keep-Alive requests that the underlying operating system sends small packets over the socket connection to provide extra assurance that the socket is kept alive and to immediately detect any drops. This ensures that a later request won't spend time trying to use a dropped socket. By default, TCP Keep-Alive is disabled on all clients. The following code examples show how to enable it on each HTTP client. When enabled for all non-CRT based HTTP clients, the actual Keep-Alive mechanism is dependent on the operating system. Therefore, you must configure additional TCP Keep-Alive values, such as timeout and number of packets, through the operating system. You can do this using `sysctl` on Linux or macOS, or using registry values on Windows.

**Example to enable TCP Keep-Alive on an Apache-based HTTP client**  

```
DynamoDbClient client = DynamoDbClient.builder()
    .httpClientBuilder(ApacheHttpClient.builder().tcpKeepAlive(true))
    .build();
```

**`URLConnection`-based HTTP client**  
The `URLConnection`-based HTTP client is built on [https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html](https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html), which doesn't provide a [mechanism](https://docs.oracle.com/javase/8/docs/api/java/net/doc-files/net-properties.html) to enable TCP Keep-Alive.

**Example to enable TCP Keep-Alive on a Netty-based HTTP client**  

```
DynamoDbAsyncClient client = DynamoDbAsyncClient.builder()
    .httpClientBuilder(NettyNioAsyncHttpClient.builder().tcpKeepAlive(true))
    .build();
```

**Example to enable TCP Keep-Alive on an AWS CRT-based HTTP client**  
With the AWS CRT-based HTTP client, you can enable TCP keep-alive and control the duration.  

```
DynamoDbClient client = DynamoDbClient.builder()
    .httpClientBuilder(AwsCrtHttpClient.builder()
    .tcpKeepAliveConfiguration(TcpKeepAliveConfiguration.builder()
        .keepAliveInterval(Duration.ofSeconds(50))
        .keepAliveTimeout(Duration.ofSeconds(5))
        .build()))
    .build();
```
When using the asynchronous DynamoDB client, you can enable TCP Keep-Alive as shown in the following code.  

```
DynamoDbAsyncClient client = DynamoDbAsyncClient.builder()
    .httpClientBuilder(AwsCrtAsyncHttpClient.builder()
    .tcpKeepAliveConfiguration(TcpKeepAliveConfiguration.builder()
        .keepAliveInterval(Duration.ofSeconds(50))
        .keepAliveTimeout(Duration.ofSeconds(5))
        .build()))
    .build();
```

## Error handling
<a name="JavaErrorHandling"></a>

When it comes to exception handling, the AWS SDK for Java 2.x uses runtime (unchecked) exceptions.

All exceptions that the SDK throws are subclasses of Java's unchecked `RuntimeException`. The base exception for errors returned by an AWS service is [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/exception/SdkServiceException.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/exception/SdkServiceException.html). If you catch this, you'll catch all service exceptions that the SDK throws.

`SdkServiceException` has a subclass called [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/awscore/exception/AwsServiceException.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/awscore/exception/AwsServiceException.html). This subclass indicates any issue in communication with the AWS service. It has a subclass called [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/DynamoDbException.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/DynamoDbException.html), which indicates an issue in communication with DynamoDB. If you catch this, you'll catch all exceptions related to DynamoDB, but no other SDK exceptions.

There are more specific [exception types](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/DynamoDbException.html) under `DynamoDbException`. Some of these exception types apply to control-plane operations such as [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/TableAlreadyExistsException.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/TableAlreadyExistsException.html). Others apply to data-plane operations. The following is an example of a common data-plane exception:
+ [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/ConditionalCheckFailedException.html](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/ConditionalCheckFailedException.html) – You specified a condition in the request that evaluated to false. For example, you might have tried to perform a conditional update on an item, but the actual value of the attribute did not match the expected value in the condition. A request that fails in this manner isn't retried.

Other situations don't have a specific exception defined. For example, when your requests are throttled, the specific `ProvisionedThroughputExceededException` might be thrown, while in other cases the more generic `DynamoDbException` is thrown. In either case, you can determine whether throttling caused the exception by checking whether `isThrottlingException()` returns `true`.

Depending on your application's needs, you can catch all `AwsServiceException` or `DynamoDbException` instances. However, you often need different behavior in different situations: the logic for handling a condition check failure differs from the logic for handling throttling. Define which exceptional paths you want to handle, and test the alternative paths to make sure that you can deal with all relevant scenarios.
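
A minimal catch structure along these lines might look like the following sketch. The `client`, `updateRequest`, and the two handler methods are placeholders, and catching the specific exception before the general `DynamoDbException` matters because `ConditionalCheckFailedException` is one of its subclasses.

```
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;

try {
    client.updateItem(updateRequest); // a conditional write (placeholder request)
} catch (ConditionalCheckFailedException e) {
    // The condition expression evaluated to false; the SDK doesn't retry this.
    handleConditionFailure(e);
} catch (DynamoDbException e) {
    if (e.isThrottlingException()) {
        handleThrottling(e);
    } else {
        throw e; // e.requestId() is useful here when working with AWS Support
    }
}
```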

For lists of common errors that you might encounter, see [Error handling with DynamoDB](Programming.Errors.md). Also see [Common Errors](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/CommonErrors.html) in the *Amazon DynamoDB API Reference*. The API Reference also provides the exact errors possible for each API operation, such as for the [https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) operation. For information about handling exceptions, see [Exception handling for the AWS SDK for Java 2.x](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/handling-exceptions.html) in the *AWS SDK for Java 2.x Developer Guide*.

## AWS request ID
<a name="JavaRequestID"></a>

Each request includes a request ID, which can be useful to pull if you're working with AWS Support to diagnose an issue. Each exception derived from `SdkServiceException` has a [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/exception/SdkServiceException.html#requestId()](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/exception/SdkServiceException.html#requestId()) method available to retrieve the request ID.

## Logging
<a name="JavaLogging"></a>

Using the logging that the SDK provides can be useful both for catching any important messages from the client libraries and for more in-depth debugging. Loggers are hierarchical, and the SDK uses `software.amazon.awssdk` as its root logger. You can configure the level with one of `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `ALL`, or `OFF`. The configured level applies to that logger and downward through the logger hierarchy.

For its logging, the AWS SDK for Java 2.x uses the Simple Logging Façade for Java (SLF4J). This acts as an abstraction layer around other loggers, and you can use it to plug in the logger that you prefer. For instructions about plugging in loggers, see the [SLF4J user manual](https://www.slf4j.org/manual.html).

Each logger has a particular behavior. By default, the Log4j 2.x logger creates a `ConsoleAppender`, which appends log events to `System.out` and defaults to the `ERROR` log level.

The SimpleLogger logger included in SLF4J outputs by default to `System.err` and defaults to the `INFO` log level.

We recommend that you set the level to `WARN` for `software.amazon.awssdk` for any production deployments to catch any important messages from the SDK's client libraries while limiting the output quantity.

If SLF4J can't find a supported logger on the classpath (no SLF4J binding), then it defaults to a [no operation implementation](https://www.slf4j.org/codes.html#noProviders). This implementation logs messages to `System.err` explaining that SLF4J could not find a logger implementation on the classpath. To prevent this situation, you must add a logger implementation. To do this, you can add a dependency in your Apache Maven `pom.xml` on an artifact such as `org.slf4j:slf4j-simple` or `org.apache.logging.log4j:log4j-slf4j2-impl`.

For information about how to configure the logging in the SDK, including adding logging dependencies to your application configuration, see [Logging with the SDK for Java 2.x](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/logging-slf4j.html) in the *AWS SDK for Java 2.x Developer Guide*.

The following configuration in the `Log4j2.xml` file shows how to adjust the logging behavior if you use the Apache Log4j 2 logger. This configuration sets the root logger level to `WARN`. All loggers in the hierarchy inherit this log level, including the `software.amazon.awssdk` logger.

By default, the output goes to `System.out`. The following example also overrides the default Log4j appender to apply a tailored Log4j `PatternLayout`.

**Example of a `Log4j2.xml` configuration file**  
The following configuration logs messages to the console at the `ERROR` and `WARN` levels for all logger hierarchies.

```
<Configuration status="WARN">
  <Appenders>
    <Console name="ConsoleAppender" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{YYYY-MM-dd HH:mm:ss} [%t] %-5p %c:%L - %m%n" />
    </Console>
  </Appenders>

  <Loggers>
    <Root level="WARN">
      <AppenderRef ref="ConsoleAppender"/>
    </Root>
  </Loggers>
</Configuration>
```

### AWS request ID logging
<a name="JavaReqIDLogging"></a>

When something goes wrong, you can find request IDs within exceptions. However, if you want the request IDs for requests that aren't generating exceptions, then you can use logging.

The `software.amazon.awssdk.request` logger outputs request IDs at the `DEBUG` level. The following example extends the previous [configuration example](#Log4j2ConfigEg) to keep the root logger level at `ERROR`, the `software.amazon.awssdk` logger at the `WARN` level, and the `software.amazon.awssdk.request` logger at the `DEBUG` level. Setting these levels helps to catch the request IDs and other request-related details, such as the endpoint and status code.

```
<Configuration status="WARN">
  <Appenders>
    <Console name="ConsoleAppender" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{YYYY-MM-dd HH:mm:ss} [%t] %-5p %c:%L - %m%n" />
    </Console>
  </Appenders>

  <Loggers>
    <Root level="ERROR">
      <AppenderRef ref="ConsoleAppender"/>
    </Root>
    <Logger name="software.amazon.awssdk" level="WARN" />
    <Logger name="software.amazon.awssdk.request" level="DEBUG" />
  </Loggers>
</Configuration>
```

Here is an example of the log output:

```
2022-09-23 16:02:08 [main] DEBUG software.amazon.awssdk.request:85 - Sending Request: DefaultSdkHttpFullRequest(httpMethod=POST, protocol=https, host=dynamodb.us-east-1.amazonaws.com, encodedPath=/, headers=[amz-sdk-invocation-id, Content-Length, Content-Type, User-Agent, X-Amz-Target], queryParameters=[])
 2022-09-23 16:02:08 [main] DEBUG software.amazon.awssdk.request:85 - Received successful response: 200, Request ID: QS9DUMME2NHEDH8TGT9N5V53OJVV4KQNSO5AEMVJF66Q9ASUAAJG, Extended Request ID: not available
```

## Pagination
<a name="JavaPagination"></a>

Some requests, such as [https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) and [https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html), limit the size of the data returned in a single request and require you to make repeated requests to pull subsequent pages.

You can control the maximum number of items to read for each page with the `Limit` parameter. For example, you can use the `Limit` parameter to retrieve only 10 items per request. This limit specifies how many items to read from the table before any filtering is applied. There's no way to request exactly 10 items after filtering; you can control only the pre-filtered count and check on the client side when you've actually retrieved 10 items. Regardless of the limit, each response has a maximum size of 1 MB.

A `LastEvaluatedKey` might be included in the API response. This indicates that the response ended because it reached a count limit or a size limit. This key is the last key evaluated for that response. By interacting directly with the API, you can retrieve this `LastEvaluatedKey` and pass it to a follow-up call as `ExclusiveStartKey` to read the next chunk from that starting point. If no `LastEvaluatedKey` is returned, it means that there are no more items that match the `Query` or `Scan` API call.

The following example uses the low-level interface to limit the items to 100 based on the `keyConditionExpression` parameter.

```
QueryRequest.Builder queryRequestBuilder = QueryRequest.builder()
        .expressionAttributeValues(Map.of(
                ":pk_val", AttributeValue.fromS("123"),
                ":sk_val", AttributeValue.fromN("1000")))
        .keyConditionExpression("pk = :pk_val AND sk > :sk_val")
        .limit(100)
        .tableName(TABLE_NAME);

while (true) {
    QueryResponse queryResponse = DYNAMODB_CLIENT.query(queryRequestBuilder.build());

    queryResponse.items().forEach(item -> {
        LOGGER.info("item PK: [" + item.get("pk") + "] and SK: [" + item.get("sk") + "]");
    });

    if (!queryResponse.hasLastEvaluatedKey()) {
        break;
    }
    queryRequestBuilder.exclusiveStartKey(queryResponse.lastEvaluatedKey());
}
```

The AWS SDK for Java 2.x can simplify this interaction with DynamoDB by providing auto-pagination methods that make multiple service calls to automatically get the next pages of results for you. This simplifies your code, but it takes away some control of resource usage that you would keep by manually reading pages.

By using the `Iterable` methods available in the DynamoDB client, such as [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/DynamoDbClient.html#queryPaginator(software.amazon.awssdk.services.dynamodb.model.QueryRequest)](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/DynamoDbClient.html#queryPaginator(software.amazon.awssdk.services.dynamodb.model.QueryRequest)) and [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/DynamoDbClient.html#scanPaginator(software.amazon.awssdk.services.dynamodb.model.ScanRequest)](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/DynamoDbClient.html#scanPaginator(software.amazon.awssdk.services.dynamodb.model.ScanRequest)), you let the SDK take care of the pagination. The return type of these methods is a custom iterable that you can use to iterate through all the pages; the SDK handles the service calls internally. The asynchronous client provides an equivalent `queryPaginator` method, which returns a `QueryPublisher` that you can subscribe to, as shown in the following example.

```
QueryIterable queryIterable =
    DYNAMODB_CLIENT.queryPaginator(QueryRequest.builder()
        .expressionAttributeValues(Map.of(
            ":pk_val", AttributeValue.fromS("123"),
            ":sk_val", AttributeValue.fromN("1000")))
        .keyConditionExpression("pk = :pk_val AND sk > :sk_val")
        .limit(100)
        .tableName("YourTableName")
        .build());

// The SDK fetches additional pages transparently as you iterate the results.
queryIterable.items().stream()
    .forEach(item -> System.out.println(item.get("itemData")));
```

## Data class annotations
<a name="JavaDataClassAnnotation"></a>

The Java SDK provides several annotations that you can put on the attributes of your data class. These annotations influence how the SDK interacts with the attributes. By adding an annotation, you can have an attribute behave as an implicit atomic counter, maintain an auto-generated timestamp value, or track an item version number. For more information, see [Data class annotations](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/ddb-en-client-anno-index.html).
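For example, a data class for the DynamoDB Enhanced Client might combine several of these annotations. The following is a minimal sketch (the class and attribute names are illustrative, and it assumes the `dynamodb-enhanced` module of the AWS SDK for Java 2.x is on the classpath):

```java
import java.time.Instant;
import software.amazon.awssdk.enhanced.dynamodb.extensions.annotations.DynamoDbAtomicCounter;
import software.amazon.awssdk.enhanced.dynamodb.extensions.annotations.DynamoDbAutoGeneratedTimestampAttribute;
import software.amazon.awssdk.enhanced.dynamodb.extensions.annotations.DynamoDbVersionAttribute;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey;

@DynamoDbBean
public class Order {
    private String orderId;
    private Long views;        // behaves as an implicit atomic counter
    private Instant updatedAt; // refreshed automatically on each write
    private Long version;      // tracks an item version for optimistic locking

    @DynamoDbPartitionKey
    public String getOrderId() { return orderId; }
    public void setOrderId(String orderId) { this.orderId = orderId; }

    @DynamoDbAtomicCounter
    public Long getViews() { return views; }
    public void setViews(Long views) { this.views = views; }

    @DynamoDbAutoGeneratedTimestampAttribute
    public Instant getUpdatedAt() { return updatedAt; }
    public void setUpdatedAt(Instant updatedAt) { this.updatedAt = updatedAt; }

    @DynamoDbVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }
}
```

Some of these annotations require the corresponding extension (for example, the atomic counter and auto-generated timestamp extensions) to be enabled on the enhanced client; see the linked SDK documentation for details.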

# Error handling with DynamoDB
<a name="Programming.Errors"></a>

This section describes runtime errors and how to handle them. It also describes error messages and codes that are specific to Amazon DynamoDB. For a list of common errors that apply to all AWS services, see [Common Errors](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/CommonErrors.html) in the *Amazon DynamoDB API Reference*.

**Topics**
+ [Error components](#Programming.Errors.Components)
+ [Transactional errors](#Programming.Errors.TransactionalErrors)
+ [Error messages and codes](#Programming.Errors.MessagesAndCodes)
+ [Error handling in your application](#Programming.Errors.Handling)
+ [Error retries and exponential backoff](#Programming.Errors.RetryAndBackoff)
+ [Batch operations and error handling](#Programming.Errors.BatchOperations)

## Error components
<a name="Programming.Errors.Components"></a>

When your program sends a request, DynamoDB attempts to process it. If the request is successful, DynamoDB returns an HTTP success status code (`200 OK`), along with the results from the requested operation.

If the request is unsuccessful, DynamoDB returns an error. Each error has three components: 
+ An HTTP status code (such as `400`).
+ An exception name (such as `ResourceNotFoundException`).
+ An error message (such as `Requested resource not found: Table: tablename not found`).

The AWS SDKs take care of propagating errors to your application so that you can take appropriate action. For example, in a Java program, you can write `try-catch` logic to handle a `ResourceNotFoundException`.

If you are not using an AWS SDK, you need to parse the content of the low-level response from DynamoDB. The following is an example of such a response.

```
HTTP/1.1 400 Bad Request
x-amzn-RequestId: LDM6CJP8RMQ1FHKSC1RBVJFPNVV4KQNSO5AEMF66Q9ASUAAJG
Content-Type: application/x-amz-json-1.0
Content-Length: 240
Date: Thu, 15 Mar 2012 23:56:23 GMT

{"__type":"com.amazonaws.dynamodb.v20120810#ResourceNotFoundException",
"message":"Requested resource not found: Table: tablename not found"}
```
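If you handle raw responses like this yourself, the exception name can be recovered from the `__type` value, which has the form `prefix#ExceptionName`. The following is a minimal sketch (the class name is illustrative and not part of any SDK):

```java
public class DynamoDbErrorType {

    /**
     * Extracts the exception name (for example, "ResourceNotFoundException")
     * from the "__type" value of a DynamoDB error response body.
     */
    public static String exceptionName(String typeValue) {
        int hash = typeValue.indexOf('#');
        return (hash >= 0) ? typeValue.substring(hash + 1) : typeValue;
    }

    public static void main(String[] args) {
        System.out.println(exceptionName(
            "com.amazonaws.dynamodb.v20120810#ResourceNotFoundException"));
    }
}
```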

## Transactional errors
<a name="Programming.Errors.TransactionalErrors"></a>

For information on transactional errors, see [Transaction conflict handling in DynamoDB](transaction-apis.md#transaction-conflict-handling).

## Error messages and codes
<a name="Programming.Errors.MessagesAndCodes"></a>

The following is a list of exceptions returned by DynamoDB, grouped by HTTP status code. If *OK to retry?* is *Yes*, you can submit the same request again. If *OK to retry?* is *No*, you need to fix the problem on the client side before you submit a new request.

### HTTP status code 400
<a name="Programming.Errors.MessagesAndCodes.http400"></a>

An HTTP `400` status code indicates a problem with your request, such as authentication failure, missing required parameters, or exceeding a table's provisioned throughput. Unless the error is marked as retryable in the following list, you have to fix the issue in your application before submitting the request again.

**AccessDeniedException**  
Message: *Access denied.*  
The client did not correctly sign the request. If you are using an AWS SDK, requests are signed for you automatically; otherwise, go to the [Signature version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference*.  
OK to retry? No

**ConditionalCheckFailedException**  
Message: *The conditional request failed.*  
You specified a condition that evaluated to false. For example, you might have tried to perform a conditional update on an item, but the actual value of the attribute did not match the expected value in the condition.  
OK to retry? No

**IncompleteSignatureException**  
Message: *The request signature does not conform to AWS standards.*  
The request signature did not include all of the required components. If you are using an AWS SDK, requests are signed for you automatically; otherwise, go to the [Signature Version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference*.  
OK to retry? No

**ItemCollectionSizeLimitExceededException**  
Message: *Collection size exceeded.*  
For a table with a local secondary index, a group of items with the same partition key value has exceeded the maximum size limit of 10 GB. For more information on item collections, see [Item collections in Local Secondary Indexes](LSI.md#LSI.ItemCollections).  
OK to retry? Yes

**LimitExceededException**  
Message: *Too many operations for a given subscriber.*  
There are too many concurrent control plane operations. The cumulative number of tables and indexes in the `CREATING`, `DELETING`, or `UPDATING` state cannot exceed 500.  
OK to retry? Yes

**MissingAuthenticationTokenException**  
Message: *Request must contain a valid (registered) AWS Access Key ID.*  
The request did not include the required authorization header, or it was malformed. See [DynamoDB low-level API](Programming.LowLevelAPI.md).  
OK to retry? No

**ProvisionedThroughputExceededException**  
Message: *You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. To view performance metrics for provisioned throughput vs. consumed throughput, open the [Amazon CloudWatch console](https://console.aws.amazon.com/cloudwatch/home).*  
The error includes a list of `ThrottlingReason` fields that provides specific context about why throttling occurred, following the format `ResourceType+OperationType+LimitType` (e.g., `TableReadProvisionedThroughputExceeded`) and the ARN of the affected resource. This helps you identify which resource is being throttled (table or index), what operation type triggered the throttling (read or write), and the specific limit that was exceeded (in this case: provisioned capacity). For more information about diagnosing and resolving throttling issues, see [Diagnosing throttling](throttling-diagnosing-workflow.md).  
Example: Your request rate is too high. The AWS SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful, unless your retry queue is too large to finish. Reduce the frequency of requests using [Error retries and exponential backoff](#Programming.Errors.RetryAndBackoff).   
OK to retry? Yes

**ReplicatedWriteConflictException**  
Message: *One or more items in this request are being modified by a request in another Region.*  
Example: A write operation was requested for an item in a multi-Region strongly consistent (MRSC) global table that is currently being modified by a request in another Region.   
OK to retry? Yes

**RequestLimitExceeded**  
Message: *Throughput exceeds the current throughput limit for your account. To request a limit increase, contact AWS Support at [https://aws.amazon.com/support](https://aws.amazon.com/support)*.  
The error includes a list of `ThrottlingReason` fields that provides specific context about why throttling occurred, following the format `ResourceType+OperationType+LimitType` (e.g., `TableWriteAccountLimitExceeded` or `IndexReadAccountLimitExceeded`) and the ARN of the affected resource. This helps you identify which resource is being throttled (table or index), what operation type triggered the throttling (read or write), and that you've exceeded account-level service quotas. For more information about diagnosing and resolving throttling issues, see [Diagnosing throttling](throttling-diagnosing-workflow.md).   
Example: Rate of on-demand requests exceeds the allowed account throughput and the table cannot be scaled further.  
OK to retry? Yes

**ResourceInUseException**  
Message: *The resource which you are attempting to change is in use.*  
Example: You tried to re-create an existing table, or delete a table currently in the `CREATING` state.   
OK to retry? No

**ResourceNotFoundException**  
Message: *Requested resource not found.*  
Example: The table that is being requested does not exist, or is still in the `CREATING` state.  
OK to retry? No

**ThrottlingException**  
Message: *Rate of requests exceeds the allowed throughput.*  
This exception is returned as an AmazonServiceException response with a THROTTLING\_EXCEPTION status code. This exception might be returned if you perform [control plane](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.API.html#HowItWorks.API.ControlPlane) API operations too rapidly.  
For tables using on-demand mode, this exception might be returned for any [data plane](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.API.html#HowItWorks.API.DataPlane) API operation if your request rate is too high. To learn more about on-demand scaling, see [Initial throughput and scaling properties](on-demand-capacity-mode.md#on-demand-capacity-mode-initial).   
The error includes a list of `ThrottlingReason` fields that provides specific context about why throttling occurred, following the format `ResourceType+OperationType+LimitType` (e.g., `TableReadKeyRangeThroughputExceeded` or `IndexWriteMaxOnDemandThroughputExceeded`) and the ARN of the affected resource. This information helps you identify which resource is being throttled (table or index), what operation type triggered the throttling (read or write), and the specific limit that was exceeded (partition limits or on-demand maximum throughput). For more information about diagnosing and resolving throttling issues, see [Diagnosing throttling](throttling-diagnosing-workflow.md).  
OK to retry? Yes

**UnrecognizedClientException**  
Message: *The Access Key ID or security token is invalid.*  
The request signature is incorrect. The most likely cause is an invalid AWS access key ID or secret key.  
OK to retry? Yes

**ValidationException**  
Message: *Varies, depending upon the specific error(s) encountered.*  
This error can occur for several reasons, such as a required parameter that is missing, a value that is out of range, or mismatched data types. The error message contains details about the specific part of the request that caused the error.  
OK to retry? No

### HTTP status code 5xx
<a name="Programming.Errors.MessagesAndCodes.http5xx"></a>

An HTTP `5xx` status code indicates a problem that must be resolved by AWS. This might be a transient error, in which case you can retry your request until it succeeds. Otherwise, go to the [AWS Service Health Dashboard](http://status.aws.amazon.com/) to see if there are any operational issues with the service.

For more information, see [How do I resolve HTTP 5xx errors in Amazon DynamoDB?](https://aws.amazon.com/premiumsupport/knowledge-center/dynamodb-http-5xx-errors/)

**InternalServerError (HTTP 500)**  
DynamoDB could not process your request.  
OK to retry? Yes  
You might encounter internal server errors while working with items. These are expected during the lifetime of a table. Any failed requests can be retried immediately.  
When you receive a status code 500 on a write operation, the operation may have succeeded or failed. If the write operation is a `TransactWriteItems` request, then it is OK to retry the operation. If the write operation is a single-item write request such as `PutItem`, `UpdateItem`, or `DeleteItem`, then your application should read the state of the item before retrying the operation, or use a [condition expression](Expressions.ConditionExpressions.md) to ensure that the item remains in a correct state after retrying, regardless of whether the prior operation succeeded or failed. If idempotency is a requirement for the write operation, use [`TransactWriteItems`](transaction-apis.md#transaction-apis-txwriteitems), which supports idempotent requests through the `ClientRequestToken` parameter to disambiguate multiple attempts to perform the same action.

**ServiceUnavailable (HTTP 503)**  
DynamoDB is currently unavailable. (This should be a temporary state.)  
OK to retry? Yes
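As noted above for `InternalServerError`, a condition expression can make a retried single-item write safe. The following sketch retries a `PutItem` only when the item was not already created; it assumes the AWS SDK for Java 2.x, a `DYNAMODB_CLIENT` and `item` map as in the earlier examples, and illustrative table and key attribute names:

```java
// The write succeeds only if no item with this key exists yet, so retrying
// after an ambiguous HTTP 500 cannot apply the same write twice.
PutItemRequest request = PutItemRequest.builder()
    .tableName("YourTableName")
    .item(item)
    .conditionExpression("attribute_not_exists(pk)")
    .build();

try {
    DYNAMODB_CLIENT.putItem(request);
} catch (ConditionalCheckFailedException e) {
    // The earlier attempt already succeeded; treat this as success.
}
```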

## Error handling in your application
<a name="Programming.Errors.Handling"></a>

For your application to run smoothly, you need to add logic to catch and respond to errors. Typical approaches include using `try-catch` blocks or `if-then` statements.

The AWS SDKs perform their own retries and error checking. If you encounter an error while using one of the AWS SDKs, the error code and description can help you troubleshoot it.

You should also see a `Request ID` in the response. The `Request ID` can be helpful if you need to work with AWS Support to diagnose an issue.

## Error retries and exponential backoff
<a name="Programming.Errors.RetryAndBackoff"></a>

Numerous components on a network, such as DNS servers, switches, load balancers, and others, can generate errors anywhere in the life of a given request. The usual technique for dealing with these error responses in a networked environment is to implement retries in the client application. This technique increases the reliability of the application.

Each AWS SDK implements retry logic automatically. You can modify the retry parameters to your needs. For example, consider a Java application that requires a fail-fast strategy, with no retries allowed in case of an error. With the AWS SDK for Java, you could use the `ClientConfiguration` class and provide a `maxErrorRetry` value of `0` to turn off the retries. For more information, see the AWS SDK documentation for your programming language.
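As a sketch of that fail-fast configuration, the SDK for Java 1.x uses `ClientConfiguration`, and the SDK for Java 2.x uses a retry policy on the client override configuration (both snippets assume the relevant SDK dependency is on the classpath):

```java
// AWS SDK for Java 1.x: turn off automatic retries entirely.
ClientConfiguration clientConfig = new ClientConfiguration()
    .withMaxErrorRetry(0);
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
    .withClientConfiguration(clientConfig)
    .build();

// AWS SDK for Java 2.x equivalent: a retry policy with no retries.
DynamoDbClient clientV2 = DynamoDbClient.builder()
    .overrideConfiguration(ClientOverrideConfiguration.builder()
        .retryPolicy(RetryPolicy.none())
        .build())
    .build();
```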

If you're not using an AWS SDK, you should retry original requests that receive server errors (5xx). However, client errors (4xx, other than a `ThrottlingException` or a `ProvisionedThroughputExceededException`) indicate that you need to revise the request itself to correct the problem before trying again. For detailed recommendations to address specific throttling scenarios, refer to the [DynamoDB throttling troubleshooting](TroubleshootingThrottling.md) section.

In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. The concept behind exponential backoff is to use progressively longer waits between retries for consecutive error responses. For example, up to 50 milliseconds before the first retry, up to 100 milliseconds before the second, up to 200 milliseconds before the third, and so on. However, after a minute, if the request has not succeeded, the problem might be the request size exceeding your provisioned throughput, and not the request rate. Set your maximum number of retries so that retrying stops after about one minute. If the request is still not successful, investigate your provisioned throughput options. 

**Note**  
The AWS SDKs implement automatic retry logic and exponential backoff.

Most exponential backoff algorithms use jitter (randomized delay) to prevent successive collisions. Because you aren't trying to avoid such collisions in these cases, you do not need to use this random number. However, if you use concurrent clients, jitter can help your requests succeed faster. For more information, see the blog post about [Exponential backoff and jitter](http://www.awsarchitectureblog.com/2015/03/backoff.html).
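The waits described above can be computed in a few lines of code. The following sketch (the class name and maximum delay cap are illustrative) doubles a 50-millisecond base delay on each attempt, caps it, and applies "full jitter" by picking a random wait up to the exponential delay:

```java
import java.util.concurrent.ThreadLocalRandom;

public class ExponentialBackoff {
    static final long BASE_DELAY_MS = 50;     // matches the example above
    static final long MAX_DELAY_MS = 20_000;  // illustrative cap

    /** Capped exponential delay for a given retry attempt (0-based). */
    public static long delayMs(int attempt) {
        long exponential = BASE_DELAY_MS << Math.min(attempt, 20);
        return Math.min(MAX_DELAY_MS, exponential);
    }

    /** "Full jitter": a random wait between 0 and the exponential delay. */
    public static long delayWithJitterMs(int attempt) {
        return ThreadLocalRandom.current().nextLong(delayMs(attempt) + 1);
    }
}
```

Attempt 0 yields up to 50 ms, attempt 1 up to 100 ms, attempt 2 up to 200 ms, and so on until the cap is reached.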

## Batch operations and error handling
<a name="Programming.Errors.BatchOperations"></a>

The DynamoDB low-level API supports batch operations for reads and writes. `BatchGetItem` reads items from one or more tables, and `BatchWriteItem` puts or deletes items in one or more tables. These batch operations are implemented as wrappers around other non-batch DynamoDB operations. In other words, `BatchGetItem` invokes `GetItem` once for each item in the batch. Similarly, `BatchWriteItem` invokes `DeleteItem` or `PutItem`, as appropriate, for each item in the batch.

A batch operation can tolerate the failure of individual requests in the batch. For example, consider a `BatchGetItem` request to read five items. Even if some of the underlying `GetItem` requests fail, this does not cause the entire `BatchGetItem` operation to fail. However, if all five read operations fail, then the entire `BatchGetItem` fails.

The batch operations return information about individual requests that fail so that you can diagnose the problem and retry the operation. For `BatchGetItem`, the tables and primary keys in question are returned in the `UnprocessedKeys` value of the response. For `BatchWriteItem`, similar information is returned in `UnprocessedItems`. 

The most likely cause of a failed read or a failed write is *throttling*. For `BatchGetItem`, one or more of the tables in the batch request does not have enough provisioned read capacity to support the operation. For `BatchWriteItem`, one or more of the tables does not have enough provisioned write capacity.

If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, *we strongly recommend that you use an exponential backoff algorithm*. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
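Put together, a `BatchWriteItem` retry loop might look like the following sketch. It assumes the AWS SDK for Java 2.x, a `DYNAMODB_CLIENT` as in the earlier examples, and illustrative names (`requestItems`, `MAX_ATTEMPTS`); the surrounding method is assumed to declare `throws InterruptedException`:

```java
Map<String, List<WriteRequest>> pending = requestItems; // the initial batch
for (int attempt = 0; !pending.isEmpty() && attempt < MAX_ATTEMPTS; attempt++) {
    BatchWriteItemResponse response = DYNAMODB_CLIENT.batchWriteItem(
        BatchWriteItemRequest.builder().requestItems(pending).build());

    // Retry only the items that DynamoDB could not process.
    pending = response.unprocessedItems();

    if (!pending.isEmpty()) {
        // Exponential backoff before the next attempt: 50 ms, 100 ms, 200 ms, ...
        Thread.sleep(50L << attempt);
    }
}
```

A production version would also add jitter to the sleep, as described in the previous section.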

**Note**  
Some AWS SDKs provide higher-level clients that handle unprocessed item retries automatically, so you don't need to implement this retry logic yourself. For example:  
**Java** – The [DynamoDB Enhanced Client](DynamoDBEnhanced.md) in the AWS SDK for Java v2 and the [DynamoDBMapper](DynamoDBMapper.md) in v1 both automatically retry unprocessed items when performing batch operations.
**Python** – The boto3 Table resource `batch_writer` handles unprocessed item retries implicitly for batch write operations. For more information, see [Using the table resource batch\_writer](programming-with-python.md#programming-with-python-batch-writer).
If you are using a low-level client or an SDK that does not provide this behavior, you must implement the retry logic yourself as described above.

# Using DynamoDB with an AWS SDK
<a name="sdk-general-information-section"></a>

AWS software development kits (SDKs) are available for many popular programming languages. Each SDK provides an API, code examples, and documentation that make it easier for developers to build applications in their preferred language.


| SDK documentation | Code examples | 
| --- | --- | 
| [AWS SDK for C\+\+](https://docs.aws.amazon.com/sdk-for-cpp) | [AWS SDK for C\+\+ code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/cpp) | 
| [AWS CLI](https://docs.aws.amazon.com/cli) | [AWS CLI code examples](https://docs.aws.amazon.com/code-library/latest/ug/cli_2_code_examples.html) | 
| [AWS SDK for Go](https://docs.aws.amazon.com/sdk-for-go) | [AWS SDK for Go code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/gov2) | 
| [AWS SDK for Java](https://docs.aws.amazon.com/sdk-for-java) | [AWS SDK for Java code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2) | 
| [AWS SDK for JavaScript](https://docs.aws.amazon.com/sdk-for-javascript) | [AWS SDK for JavaScript code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javascriptv3) | 
| [AWS SDK for Kotlin](https://docs.aws.amazon.com/sdk-for-kotlin) | [AWS SDK for Kotlin code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/kotlin) | 
| [AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net) | [AWS SDK for .NET code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/dotnetv3) | 
| [AWS SDK for PHP](https://docs.aws.amazon.com/sdk-for-php) | [AWS SDK for PHP code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/php) | 
| [AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell) | [AWS Tools for PowerShell code examples](https://docs.aws.amazon.com/code-library/latest/ug/powershell_5_code_examples.html) | 
| [AWS SDK for Python (Boto3)](https://docs.aws.amazon.com/pythonsdk) | [AWS SDK for Python (Boto3) code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/python) | 
| [AWS SDK for Ruby](https://docs.aws.amazon.com/sdk-for-ruby) | [AWS SDK for Ruby code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/ruby) | 
| [AWS SDK for Rust](https://docs.aws.amazon.com/sdk-for-rust) | [AWS SDK for Rust code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/rustv1) | 
| [AWS SDK for SAP ABAP](https://docs.aws.amazon.com/sdk-for-sapabap) | [AWS SDK for SAP ABAP code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/sap-abap) | 
| [AWS SDK for Swift](https://docs.aws.amazon.com/sdk-for-swift) | [AWS SDK for Swift code examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/swift) | 

For examples specific to DynamoDB, see [Code examples for DynamoDB using AWS SDKs](service_code_examples.md).

**Example availability**  
Can't find what you need? Request a code example by using the **Provide feedback** link at the bottom of this page.