

# VTL resolver tutorials for AWS AppSync
<a name="tutorials"></a>

**Note**  
We now primarily support the APPSYNC\_JS runtime and its documentation. Please consider using the APPSYNC\_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

Data sources and resolvers are used by AWS AppSync to translate GraphQL requests and fetch information from your AWS resources. AWS AppSync supports automatic provisioning and connections with certain data source types. AWS AppSync also supports AWS Lambda, Amazon DynamoDB, relational databases (Amazon Aurora Serverless), Amazon OpenSearch Service, and HTTP endpoints as data sources. You can use a GraphQL API with your existing AWS resources or build data sources and resolvers from scratch. The following sections walk through some of the more common GraphQL use cases in the form of tutorials.

AWS AppSync uses *mapping templates* written in Apache Velocity Template Language (VTL) for resolvers. For more information about using mapping templates, see the [Resolver mapping template reference](resolver-mapping-template-reference.md#aws-appsync-resolver-mapping-template-reference). More information about working with VTL is available in the [Resolver mapping template programming guide](resolver-mapping-template-reference-programming-guide.md#aws-appsync-resolver-mapping-template-reference-programming-guide).

AWS AppSync supports the automatic provisioning of DynamoDB tables from a GraphQL schema as described in Provision from schema (optional) and Launch a sample schema. You can also import from an existing DynamoDB table, which creates a schema and connects resolvers. This is outlined in Import from Amazon DynamoDB (optional).

**Topics**
+ [Creating a simple post application using DynamoDB resolvers](tutorial-dynamodb-resolvers.md)
+ [Using AWS Lambda resolvers](tutorial-lambda-resolvers.md)
+ [Using OpenSearch Service resolvers](tutorial-elasticsearch-resolvers.md)
+ [Using local resolvers](tutorial-local-resolvers.md)
+ [Combining GraphQL resolvers](tutorial-combining-graphql-resolvers.md)
+ [Using DynamoDB batch operations](tutorial-dynamodb-batch.md)
+ [Performing DynamoDB transactions](tutorial-dynamodb-transact.md)
+ [Using HTTP resolvers](tutorial-http-resolvers.md)
+ [Using Aurora Serverless v2 resolvers](tutorial-rds-resolvers.md)
+ [Using pipeline resolvers](tutorial-pipeline-resolvers.md)
+ [Using Delta Sync operations on versioned data sources](tutorial-delta-sync.md)

# Creating a simple post application using DynamoDB resolvers
<a name="tutorial-dynamodb-resolvers"></a>

**Note**  
We now primarily support the APPSYNC\_JS runtime and its documentation. Please consider using the APPSYNC\_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

This tutorial shows how you can bring your own Amazon DynamoDB tables to AWS AppSync and connect them to a GraphQL API.

You can let AWS AppSync provision DynamoDB resources on your behalf. Or, if you prefer, you can connect your existing tables to a GraphQL schema by creating a data source and a resolver. In either case, you’ll be able to read and write to your DynamoDB database through GraphQL statements and subscribe to real-time data.

There are specific configuration steps that need to be completed in order for GraphQL statements to be translated to DynamoDB operations, and for responses to be translated back into GraphQL. This tutorial outlines the configuration process through several real-world scenarios and data access patterns.

## Setting up your DynamoDB tables
<a name="setting-up-your-ddb-tables"></a>

To begin this tutorial, first provision AWS resources by following these steps.

1. Provision AWS resources using the following AWS CloudFormation template in the CLI:

   ```
   aws cloudformation create-stack \
       --stack-name AWSAppSyncTutorialForAmazonDynamoDB \
       --template-url https://s3.us-west-2.amazonaws.com/awsappsync/resources/dynamodb/AmazonDynamoDBCFTemplate.yaml \
       --capabilities CAPABILITY_NAMED_IAM
   ```

   Alternatively, you can launch the following CloudFormation stack in the US West (Oregon) Region (us-west-2) in your AWS account.

   [https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/dynamodb/AmazonDynamoDBCFTemplate.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/dynamodb/AmazonDynamoDBCFTemplate.yaml)

   This creates the following:
   + A DynamoDB table called `AppSyncTutorial-Post` that will hold `Post` data.
   + An IAM role and associated IAM managed policy to allow AWS AppSync to interact with the `Post` table.

1. To see more details about the stack and the created resources, run the following CLI command:

   ```
   aws cloudformation describe-stacks --stack-name AWSAppSyncTutorialForAmazonDynamoDB
   ```

1. To delete the resources later, you can run the following:

   ```
   aws cloudformation delete-stack --stack-name AWSAppSyncTutorialForAmazonDynamoDB
   ```

## Creating your GraphQL API
<a name="creating-your-graphql-api"></a>

To create the GraphQL API in AWS AppSync:

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose **Create API**.

1. Under the **Customize your API or import from Amazon DynamoDB** window, choose **Build from scratch**.

   1. Choose **Start** to the right of the same window.

1. In the **API name** field, set the name of the API to `AWSAppSyncTutorial`.

1. Choose **Create**.

The AWS AppSync console creates a new GraphQL API for you using the API key authentication mode. For the rest of this tutorial, you can use the console to set up the GraphQL API and run queries against it.

## Defining a basic post API
<a name="defining-a-basic-post-api"></a>

Now that you have created an AWS AppSync GraphQL API, you can set up a basic schema that allows the basic creation, retrieval, and deletion of post data.

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose the API you just created.

1. In the **Sidebar**, choose **Schema**.

   1. In the **Schema** pane, replace the contents with the following code:

     ```
     schema {
         query: Query
         mutation: Mutation
     }
     
     type Query {
         getPost(id: ID): Post
     }
     
     type Mutation {
         addPost(
             id: ID!
             author: String!
             title: String!
             content: String!
             url: String!
         ): Post!
     }
     
     type Post {
         id: ID!
         author: String
         title: String
         content: String
         url: String
         ups: Int!
         downs: Int!
         version: Int!
     }
     ```

1. Choose **Save**.

This schema defines a `Post` type and operations to add and get `Post` objects.

## Configuring the Data Source for the DynamoDB Tables
<a name="configuring-the-data-source-for-the-ddb-tables"></a>

Next, link the queries and mutations defined in the schema to the `AppSyncTutorial-Post` DynamoDB table.

First, AWS AppSync needs to be aware of your tables. You do this by setting up a data source in AWS AppSync:

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Data Sources**.

1. Choose **Create data source**.

   1. For **Data source name**, enter `PostDynamoDBTable`. 

   1. For **Data source type**, choose **Amazon DynamoDB table**.

   1. For **Region**, choose **US-WEST-2**.

   1. For **Table name**, choose the **AppSyncTutorial-Post** DynamoDB table.

   1. Create a new IAM role (recommended) or choose an existing role that grants AWS AppSync the required DynamoDB permissions on the table. Existing roles need a trust policy, as explained in the [Attaching a data source](attaching-a-data-source.md) section. 

      The following is an example IAM policy that has the required permissions to perform operations on the resource:

------
#### [ JSON ]

****  

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "dynamodb:GetItem",
                      "dynamodb:PutItem",
                      "dynamodb:UpdateItem",
                      "dynamodb:DeleteItem",
                      "dynamodb:Query",
                      "dynamodb:Scan"
                  ],
                  "Resource": [
                      "arn:aws:dynamodb:us-west-2:111122223333:table/AppSyncTutorial-Post",
                      "arn:aws:dynamodb:us-west-2:111122223333:table/AppSyncTutorial-Post/*"
                  ]
              }
          ]
      }
      ```

------

------

1. Choose **Create**.

## Setting up the addPost resolver (DynamoDB PutItem)
<a name="setting-up-the-addpost-resolver-dynamodb-putitem"></a>

After AWS AppSync is aware of the DynamoDB table, you can link it to individual queries and mutations by defining **Resolvers**. The first resolver you create is the `addPost` resolver, which enables you to create a post in the `AppSyncTutorial-Post` DynamoDB table.

A resolver has the following components:
+ The location in the GraphQL schema to attach the resolver. In this case, you are setting up a resolver on the `addPost` field on the `Mutation` type. This resolver will be invoked when the caller calls `mutation { addPost(...){...} }`.
+ The data source to use for this resolver. In this case, you want to use the `PostDynamoDBTable` data source you defined earlier, so you can add entries into the `AppSyncTutorial-Post` DynamoDB table.
+ The request mapping template. The purpose of the request mapping template is to take the incoming request from the caller and translate it into instructions for AWS AppSync to perform against DynamoDB.
+ The response mapping template. The job of the response mapping template is to take the response from DynamoDB and translate it back into something that GraphQL expects. This is useful if the shape of the data in DynamoDB is different from the `Post` type in GraphQL; in this case they have the same shape, so you just pass the data through.

To set up the resolver:

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.


1. Choose the **Schema** tab.

1. In the **Data types** pane on the right, find the **addPost** field on the **Mutation** type, and then choose **Attach**.

1. In the **Action menu**, choose **Update runtime**, then choose **Unit Resolver (VTL only)**.

1. In **Data source name**, choose **PostDynamoDBTable**.

1. In **Configure the request mapping template**, paste the following:

   ```
   {
       "version" : "2017-02-28",
       "operation" : "PutItem",
       "key" : {
           "id" : $util.dynamodb.toDynamoDBJson($context.arguments.id)
       },
       "attributeValues" : {
           "author" : $util.dynamodb.toDynamoDBJson($context.arguments.author),
           "title" : $util.dynamodb.toDynamoDBJson($context.arguments.title),
           "content" : $util.dynamodb.toDynamoDBJson($context.arguments.content),
           "url" : $util.dynamodb.toDynamoDBJson($context.arguments.url),
           "ups" : { "N" : 1 },
           "downs" : { "N" : 0 },
           "version" : { "N" : 1 }
       }
   }
   ```

   **Note:** A *type* is specified on all the keys and attribute values. The `$util.dynamodb.toDynamoDBJson` helper produces these typed values for you. For example, `$util.dynamodb.toDynamoDBJson($context.arguments.author)` resolves to something like `{ "S" : "AUTHORNAME" }`; the `S` indicates to AWS AppSync and DynamoDB that the value is a string, and the actual value is populated from the `author` argument. Similarly, the `version` field is a number field because it uses `N` for the type. Finally, you’re also initializing the `ups`, `downs`, and `version` fields.

   For this tutorial, you’ve specified that the GraphQL `ID!` value, which keys the new item inserted into DynamoDB, comes as part of the client arguments. AWS AppSync also provides a utility for automatic ID generation, `$util.autoId()`, which you could have used in the form `"id" : $util.dynamodb.toDynamoDBJson($util.autoId())`. Then you could simply leave `id: ID!` out of the schema definition of `addPost()` and the ID would be generated automatically. You won’t use this technique in this tutorial, but it is a good practice when writing to DynamoDB tables.

   For more information about mapping templates, see the [Resolver Mapping Template Overview](resolver-mapping-template-reference-overview.md#aws-appsync-resolver-mapping-template-reference-overview) reference documentation. For more information about `PutItem` request mapping, see the [PutItem](aws-appsync-resolver-mapping-template-reference-dynamodb-putitem.md) reference documentation. For more information about types, see the [Type System (Request Mapping)](aws-appsync-resolver-mapping-template-reference-dynamodb-typed-values-request.md) reference documentation.

1. In **Configure the response mapping template**, paste the following:

   ```
   $utils.toJson($context.result)
   ```

    **Note:** Because the shape of the data in the `AppSyncTutorial-Post` table exactly matches the shape of the `Post` type in GraphQL, the response mapping template just passes the results straight through. Also note that all of the examples in this tutorial use the same response mapping template, so you only have to define it once.

1. Choose **Save**.
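
To make the translation concrete, the following Python sketch builds the same PutItem request mapping document locally. This is an illustration only: `to_dynamodb_json` is a hypothetical stand-in for `$util.dynamodb.toDynamoDBJson`, not AWS code, and it represents numbers as strings because that is how DynamoDB's wire format encodes the `N` type.

```python
import json

def to_dynamodb_json(value):
    """Hypothetical stand-in for $util.dynamodb.toDynamoDBJson:
    convert a Python value to a typed DynamoDB attribute value."""
    if isinstance(value, bool):
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}  # DynamoDB encodes numbers as strings
    if value is None:
        return {"NULL": True}
    return {"S": str(value)}

def build_put_item_request(args):
    """Build the PutItem request mapping document for addPost."""
    return {
        "version": "2017-02-28",
        "operation": "PutItem",
        "key": {"id": to_dynamodb_json(args["id"])},
        "attributeValues": {
            "author": to_dynamodb_json(args["author"]),
            "title": to_dynamodb_json(args["title"]),
            "content": to_dynamodb_json(args["content"]),
            "url": to_dynamodb_json(args["url"]),
            # New posts start with one up-vote, no down-votes, version 1
            "ups": {"N": "1"},
            "downs": {"N": "0"},
            "version": {"N": "1"},
        },
    }

doc = build_put_item_request({
    "id": "123",
    "author": "AUTHORNAME",
    "title": "Our first post!",
    "content": "This is our first post.",
    "url": "https://aws.amazon.com/appsync/",
})
print(json.dumps(doc, indent=2))
```

Running this prints a document with the same shape as the one AWS AppSync generates in the "Here's what happened" walkthrough below.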

### Call the API to Add a Post
<a name="call-the-api-to-add-a-post"></a>

Now that the resolver is set up, AWS AppSync can translate an incoming `addPost` mutation to a DynamoDB PutItem operation. You can now run a mutation to put something in the table.
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following mutation:

  ```
  mutation addPost {
    addPost(
      id: 123
      author: "AUTHORNAME"
      title: "Our first post!"
      content: "This is our first post."
      url: "https://aws.amazon.com/appsync/"
    ) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The results of the newly created post should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "addPost": {
        "id": "123",
        "author": "AUTHORNAME",
        "title": "Our first post!",
        "content": "This is our first post.",
        "url": "https://aws.amazon.com/appsync/",
        "ups": 1,
        "downs": 0,
        "version": 1
      }
    }
  }
  ```

Here’s what happened:
+ AWS AppSync received an `addPost` mutation request.
+ AWS AppSync took the request, and the request mapping template, and generated a request mapping document. This would have looked like:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "PutItem",
      "key" : {
          "id" : { "S" : "123" }
      },
      "attributeValues" : {
          "author": { "S" : "AUTHORNAME" },
          "title": { "S" : "Our first post!" },
          "content": { "S" : "This is our first post." },
          "url": { "S" : "https://aws.amazon.com/appsync/" },
          "ups" : { "N" : 1 },
          "downs" : { "N" : 0 },
          "version" : { "N" : 1 }
      }
  }
  ```
+ AWS AppSync used the request mapping document to generate and execute a DynamoDB `PutItem` request.
+ AWS AppSync took the results of the `PutItem` request and converted them back to GraphQL types.

  ```
  {
      "id" : "123",
      "author": "AUTHORNAME",
      "title": "Our first post!",
      "content": "This is our first post.",
      "url": "https://aws.amazon.com/appsync/",
      "ups" : 1,
      "downs" : 0,
      "version" : 1
  }
  ```
+ AWS AppSync passed the result through the response mapping template, which passed it through unchanged.
+ AWS AppSync returned the newly created object in the GraphQL response.

## Setting Up the getPost Resolver (DynamoDB GetItem)
<a name="setting-up-the-getpost-resolver-ddb-getitem"></a>

Now that you’re able to add data to the `AppSyncTutorial-Post` DynamoDB table, you need to set up the `getPost` query so it can retrieve that data. To do this, you set up another resolver.
+ Choose the **Schema** tab.
+ In the **Data types** pane on the right, find the **getPost** field on the **Query** type, and then choose **Attach**.
+ In the **Action menu**, choose **Update runtime**, then choose **Unit Resolver (VTL only)**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "GetItem",
      "key" : {
          "id" : $util.dynamodb.toDynamoDBJson($ctx.args.id)
      }
  }
  ```
+ In **Configure the response mapping template**, paste the following:

  ```
  $utils.toJson($context.result)
  ```
+ Choose **Save**.

### Call the API to Get a Post
<a name="call-the-api-to-get-a-post"></a>

Now that the resolver is set up, AWS AppSync knows how to translate an incoming `getPost` query to a DynamoDB `GetItem` operation. You can now run a query to retrieve the post you created earlier.
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following:

  ```
  query getPost {
    getPost(id:123) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The post retrieved from DynamoDB should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "getPost": {
        "id": "123",
        "author": "AUTHORNAME",
        "title": "Our first post!",
        "content": "This is our first post.",
        "url": "https://aws.amazon.com/appsync/",
        "ups": 1,
        "downs": 0,
        "version": 1
      }
    }
  }
  ```

Here’s what happened:
+ AWS AppSync received a `getPost` query request.
+ AWS AppSync took the request, and the request mapping template, and generated a request mapping document. This would have looked like:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "GetItem",
      "key" : {
          "id" : { "S" : "123" }
      }
  }
  ```
+ AWS AppSync used the request mapping document to generate and execute a DynamoDB GetItem request.
+ AWS AppSync took the results of the `GetItem` request and converted it back to GraphQL types.

  ```
  {
      "id" : "123",
      "author": "AUTHORNAME",
      "title": "Our first post!",
      "content": "This is our first post.",
      "url": "https://aws.amazon.com/appsync/",
      "ups" : 1,
      "downs" : 0,
      "version" : 1
  }
  ```
+ AWS AppSync passed the result through the response mapping template, which passed it through unchanged.
+ AWS AppSync returned the retrieved object in the response.

Alternatively, take the following example:

```
query getPost {
  getPost(id:123) {
    id
    author
    title
  }
}
```

If your `getPost` query only needs the `id`, `author`, and `title` fields, you can change your request mapping template to use a projection expression that requests only those attributes from your DynamoDB table, avoiding unnecessary data transfer from DynamoDB to AWS AppSync. For example, the request mapping template might look like the following snippet:

```
{
    "version" : "2017-02-28",
    "operation" : "GetItem",
    "key" : {
        "id" : $util.dynamodb.toDynamoDBJson($ctx.args.id)
    },
    "projection" : {
     "expression" : "#author, id, title",
     "expressionNames" : { "#author" : "author"}
    }
}
```
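
As a local illustration of how a `projection` section like the one above can be assembled, the following Python sketch builds one from a list of requested fields, aliasing words that might collide with DynamoDB's reserved-word list via `expressionNames`. The helper and the `RESERVED` set are hypothetical; the real reserved-word list is much longer.

```python
# Tiny illustrative subset; DynamoDB's actual reserved-word list has ~570 entries.
RESERVED = {"author", "url", "name", "status"}

def build_projection(fields):
    """Build a DynamoDB projection section, aliasing reserved words."""
    names = {}
    parts = []
    for field in fields:
        if field.lower() in RESERVED:
            alias = f"#{field}"       # e.g. "#author"
            names[alias] = field      # map the alias back to the real name
            parts.append(alias)
        else:
            parts.append(field)
    projection = {"expression": ", ".join(parts)}
    if names:
        projection["expressionNames"] = names
    return projection

print(build_projection(["author", "id", "title"]))
# → {'expression': '#author, id, title', 'expressionNames': {'#author': 'author'}}
```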

## Create an updatePost Mutation (DynamoDB UpdateItem)
<a name="create-an-updatepost-mutation-ddb-updateitem"></a>

So far you can create and retrieve `Post` objects in DynamoDB. Next, you’ll set up a new mutation that allows you to update an object. You’ll do this using the `UpdateItem` DynamoDB operation.
+ Choose the **Schema** tab.
+ In the **Schema** pane, modify the `Mutation` type to add a new `updatePost` mutation as follows:

  ```
  type Mutation {
      updatePost(
          id: ID!,
          author: String!,
          title: String!,
          content: String!,
          url: String!
      ): Post
      addPost(
          id: ID!
          author: String!
          title: String!
          content: String!
          url: String!
      ): Post!
  }
  ```
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created **updatePost** field on the **Mutation** type and then choose **Attach**.
+ In the **Action menu**, choose **Update runtime**, then choose **Unit Resolver (VTL only)**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "UpdateItem",
      "key" : {
          "id" : $util.dynamodb.toDynamoDBJson($context.arguments.id)
      },
      "update" : {
          "expression" : "SET author = :author, title = :title, content = :content, #url = :url ADD version :one",
          "expressionNames": {
              "#url" : "url"
          },
          "expressionValues": {
              ":author" : $util.dynamodb.toDynamoDBJson($context.arguments.author),
              ":title" : $util.dynamodb.toDynamoDBJson($context.arguments.title),
              ":content" : $util.dynamodb.toDynamoDBJson($context.arguments.content),
              ":url" : $util.dynamodb.toDynamoDBJson($context.arguments.url),
              ":one" : { "N": 1 }
          }
      }
  }
  ```

   **Note:** This resolver uses the DynamoDB `UpdateItem` operation, which is significantly different from the `PutItem` operation. Instead of writing the entire item, you’re asking DynamoDB to update only certain attributes. This is done using DynamoDB update expressions. The expression itself is specified in the `expression` field of the `update` section. It says to set the `author`, `title`, `content`, and `url` attributes, and then increment the `version` field. The values to use do not appear in the expression itself; instead, the expression contains placeholders whose names start with a colon, and these are defined in the `expressionValues` field. Finally, DynamoDB has reserved words that cannot appear in the `expression`. For example, `url` is a reserved word, so to update the `url` field you use a name placeholder and define it in the `expressionNames` field.

  For more info about `UpdateItem` request mapping, see the [UpdateItem](aws-appsync-resolver-mapping-template-reference-dynamodb-updateitem.md) reference documentation. For more information about how to write update expressions, see the [DynamoDB UpdateExpressions documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html).
+ In **Configure the response mapping template**, paste the following:

  ```
  $utils.toJson($context.result)
  ```
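
The effect of this update expression can be sketched locally. The following Python snippet is an illustration of the `SET` and `ADD` semantics only (a hypothetical helper, not a DynamoDB client call): `SET` replaces the named attributes, while `ADD version :one` increments the counter.

```python
def apply_update(item, new_values, increment_version_by=1):
    """Simulate what the UpdateItem expression does to the stored item."""
    updated = dict(item)                 # DynamoDB updates in place; we copy
    updated.update(new_values)           # SET author = :author, title = :title, ...
    updated["version"] = item.get("version", 0) + increment_version_by  # ADD version :one
    return updated

post = {"id": "123", "author": "AUTHORNAME", "title": "Our first post!",
        "content": "This is our first post.", "ups": 1, "downs": 0, "version": 1}

updated = apply_update(post, {"author": "A new author",
                              "title": "An updated author!"})
print(updated["version"])  # 2
```

Note that attributes not named in the expression (`ups`, `downs`) are left untouched, which matches the behavior described after the mutation below.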

### Call the API to Update a Post
<a name="call-the-api-to-update-a-post"></a>

Now that the resolver is set up, AWS AppSync knows how to translate an incoming `updatePost` mutation to a DynamoDB `UpdateItem` operation. You can now run a mutation to update the item you wrote earlier.
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following mutation. You’ll also need to update the `id` argument to the value of the post you created earlier.

  ```
  mutation updatePost {
    updatePost(
      id:"123"
      author: "A new author"
      title: "An updated author!"
      content: "Now with updated content!"
      url: "https://aws.amazon.com/appsync/"
    ) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The updated post in DynamoDB should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "updatePost": {
        "id": "123",
        "author": "A new author",
        "title": "An updated author!",
        "content": "Now with updated content!",
        "url": "https://aws.amazon.com/appsync/",
        "ups": 1,
        "downs": 0,
        "version": 2
      }
    }
  }
  ```

In this example, the `ups` and `downs` fields were not modified because the request mapping template did not ask AWS AppSync and DynamoDB to do anything with those fields. Also, the `version` field was incremented by 1 because you asked AWS AppSync and DynamoDB to add 1 to the `version` field.

## Modifying the updatePost Resolver (DynamoDB UpdateItem)
<a name="modifying-the-updatepost-resolver-dynamodb-updateitem"></a>

This is a good start to the `updatePost` mutation, but it has two main problems:
+ If you want to update just a single field, you have to update all of the fields.
+ If two people are modifying the object, you could potentially lose information.

To address these issues, you’re going to modify the `updatePost` mutation to only modify arguments that were specified in the request, and then add a condition to the `UpdateItem` operation.

1. Choose the **Schema** tab.

1. In the **Schema** pane, modify the `updatePost` field in the `Mutation` type to remove the exclamation marks from the `author`, `title`, `content`, and `url` arguments, making sure to leave the `id` argument as is. This makes those arguments optional. Also, add a new, required `expectedVersion` argument.

   ```
   type Mutation {
       updatePost(
           id: ID!,
           author: String,
           title: String,
           content: String,
           url: String,
           expectedVersion: Int!
       ): Post
        addPost(
            id: ID!
            author: String!
            title: String!
            content: String!
            url: String!
        ): Post!
   }
   ```

1. Choose **Save**.

1. In the **Data types** pane on the right, find the **updatePost** field on the **Mutation** type.

1. Choose **PostDynamoDBTable** to open the existing resolver.

1. In **Configure the request mapping template**, modify the request mapping template as follows:

   ```
   {
       "version" : "2017-02-28",
       "operation" : "UpdateItem",
       "key" : {
           "id" : $util.dynamodb.toDynamoDBJson($context.arguments.id)
       },
   
       ## Set up some space to keep track of things you're updating **
       #set( $expNames  = {} )
       #set( $expValues = {} )
       #set( $expSet = {} )
       #set( $expAdd = {} )
       #set( $expRemove = [] )
   
       ## Increment "version" by 1 **
       $!{expAdd.put("version", ":one")}
       $!{expValues.put(":one", { "N" : 1 })}
   
       ## Iterate through each argument, skipping "id" and "expectedVersion" **
       #foreach( $entry in $context.arguments.entrySet() )
           #if( $entry.key != "id" && $entry.key != "expectedVersion" )
               #if( (!$entry.value) && ("$!{entry.value}" == "") )
                   ## If the argument is set to "null", then remove that attribute from the item in DynamoDB **
   
                   #set( $discard = ${expRemove.add("#${entry.key}")} )
                   $!{expNames.put("#${entry.key}", "$entry.key")}
               #else
                   ## Otherwise set (or update) the attribute on the item in DynamoDB **
   
                   $!{expSet.put("#${entry.key}", ":${entry.key}")}
                   $!{expNames.put("#${entry.key}", "$entry.key")}
                   $!{expValues.put(":${entry.key}", { "S" : "${entry.value}" })}
               #end
           #end
       #end
   
       ## Start building the update expression, starting with attributes you're going to SET **
       #set( $expression = "" )
       #if( !${expSet.isEmpty()} )
           #set( $expression = "SET" )
           #foreach( $entry in $expSet.entrySet() )
               #set( $expression = "${expression} ${entry.key} = ${entry.value}" )
               #if ( $foreach.hasNext )
                   #set( $expression = "${expression}," )
               #end
           #end
       #end
   
       ## Continue building the update expression, adding attributes you're going to ADD **
       #if( !${expAdd.isEmpty()} )
           #set( $expression = "${expression} ADD" )
           #foreach( $entry in $expAdd.entrySet() )
               #set( $expression = "${expression} ${entry.key} ${entry.value}" )
               #if ( $foreach.hasNext )
                   #set( $expression = "${expression}," )
               #end
           #end
       #end
   
       ## Continue building the update expression, adding attributes you're going to REMOVE **
       #if( !${expRemove.isEmpty()} )
           #set( $expression = "${expression} REMOVE" )
   
           #foreach( $entry in $expRemove )
               #set( $expression = "${expression} ${entry}" )
               #if ( $foreach.hasNext )
                   #set( $expression = "${expression}," )
               #end
           #end
       #end
   
       ## Finally, write the update expression into the document, along with any expressionNames and expressionValues **
       "update" : {
           "expression" : "${expression}"
           #if( !${expNames.isEmpty()} )
               ,"expressionNames" : $utils.toJson($expNames)
           #end
           #if( !${expValues.isEmpty()} )
               ,"expressionValues" : $utils.toJson($expValues)
           #end
       },
   
       "condition" : {
           "expression"       : "version = :expectedVersion",
           "expressionValues" : {
               ":expectedVersion" : $util.dynamodb.toDynamoDBJson($context.arguments.expectedVersion)
           }
       }
   }
   ```

1. Choose **Save**.

This template is one of the more complex examples. It demonstrates the power and flexibility of mapping templates. It loops through all of the arguments, skipping over `id` and `expectedVersion`. If the argument is set to something, it asks AWS AppSync and DynamoDB to update that attribute on the object in DynamoDB. If the attribute is set to null, it asks AWS AppSync and DynamoDB to remove that attribute from the post object. If an argument wasn’t specified, it leaves the attribute alone. It also increments the `version` field.

Also, there is a new `condition` section. A condition expression enables you to tell AWS AppSync and DynamoDB whether or not the request should succeed based on the state of the object already in DynamoDB before the operation is performed. In this case, you only want the `UpdateItem` request to succeed if the `version` field of the item currently in DynamoDB exactly matches the `expectedVersion` argument.

For more information about condition expressions, see the [Condition Expressions](aws-appsync-resolver-mapping-template-reference-dynamodb-condition-expressions.md) reference documentation.

### Call the API to Update a Post
<a name="id1"></a>

Let’s try updating the `Post` object with the new resolver:
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier.

  ```
  mutation updatePost {
    updatePost(
      id:123
      title: "An empty story"
      content: null
      expectedVersion: 2
    ) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The updated post in DynamoDB should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "updatePost": {
        "id": "123",
        "author": "A new author",
        "title": "An empty story",
        "content": null,
        "url": "https://aws.amazon.com/appsync/",
        "ups": 1,
        "downs": 0,
        "version": 3
      }
    }
  }
  ```

In this request, you asked AWS AppSync and DynamoDB to update only the `title` and `content` fields. All other fields were left alone (other than incrementing the `version` field). You set the `title` attribute to a new value and removed the `content` attribute from the post. The `author`, `url`, `ups`, and `downs` fields were left untouched.

Try executing the mutation request again, leaving the request exactly as is. You should see a response similar to the following:

```
{
  "data": {
    "updatePost": null
  },
  "errors": [
    {
      "path": [
        "updatePost"
      ],
      "data": {
        "id": "123",
        "author": "A new author",
        "title": "An empty story",
        "content": null,
        "url": "https://aws.amazon.com/appsync/",
        "ups": 1,
        "downs": 0,
        "version": 3
      },
      "errorType": "DynamoDB:ConditionalCheckFailedException",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "message": "The conditional request failed (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ConditionalCheckFailedException; Request ID: ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ)"
    }
  ]
}
```

The request fails because the condition expression evaluates to false:
+ The first time you ran the request, the value of the `version` field of the post in DynamoDB was `2`, which matched the `expectedVersion` argument. The request succeeded, which meant the `version` field was incremented in DynamoDB to `3`.
+ The second time you ran the request, the value of the `version` field of the post in DynamoDB was `3`, which did not match the `expectedVersion` argument.

This pattern is typically called *optimistic locking*.

A feature of an AWS AppSync DynamoDB resolver is that it returns the current value of the post object in DynamoDB. You can find this in the `data` field in the `errors` section of the GraphQL response. Your application can use this information to decide how it should proceed. In this case, you can see the `version` field of the object in DynamoDB is set to `3`, so you could just update the `expectedVersion` argument to `3` and the request would succeed again.
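One way a client can react to the conflict is to retry with the `version` reported back in the error. The following in-memory sketch (hypothetical helpers, not AWS SDK calls) shows the pattern:

```python
class ConditionalCheckFailed(Exception):
    """Stands in for DynamoDB:ConditionalCheckFailedException; carries
    the current item the way the GraphQL error carries it in data."""
    def __init__(self, current_item):
        super().__init__("The conditional request failed")
        self.current_item = current_item

def update_post(store, post_id, expected_version, **changes):
    # Toy in-memory stand-in for the conditional UpdateItem call.
    item = store[post_id]
    if item["version"] != expected_version:
        raise ConditionalCheckFailed(dict(item))
    item.update(changes)
    item["version"] += 1
    return dict(item)

def update_with_retry(store, post_id, expected_version, **changes):
    # Retry once, using the version reported back in the error.
    try:
        return update_post(store, post_id, expected_version, **changes)
    except ConditionalCheckFailed as err:
        return update_post(store, post_id,
                           err.current_item["version"], **changes)
```

A real application would usually surface the conflicting values to the user or merge the changes rather than retry blindly, because a blind retry overwrites the other writer's update.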

For more information about handling condition check failures, see the [Condition Expressions](aws-appsync-resolver-mapping-template-reference-dynamodb-condition-expressions.md) mapping template reference documentation.

## Create upvotePost and downvotePost Mutations (DynamoDB UpdateItem)
<a name="create-upvotepost-and-downvotepost-mutations-ddb-updateitem"></a>

The `Post` type has `ups` and `downs` fields to record upvotes and downvotes, but so far the API doesn’t let us do anything with them. Let’s add some mutations to let us upvote and downvote the posts.
+ Choose the **Schema** tab.
+ In the **Schema** pane, modify the `Mutation` type to add new `upvotePost` and `downvotePost` mutations as follows:

  ```
  type Mutation {
      upvotePost(id: ID!): Post
      downvotePost(id: ID!): Post
      updatePost(
          id: ID!,
          author: String,
          title: String,
          content: String,
          url: String,
          expectedVersion: Int!
      ): Post
      addPost(
          id: ID!,
          author: String!,
          title: String!,
          content: String!,
          url: String!
      ): Post!
  }
  ```
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created **upvotePost** field on the **Mutation** type, and then choose **Attach**.
+ In the **Action menu**, choose **Update runtime**, then choose **Unit Resolver (VTL only)**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "UpdateItem",
      "key" : {
          "id" : $util.dynamodb.toDynamoDBJson($context.arguments.id)
      },
      "update" : {
          "expression" : "ADD ups :plusOne, version :plusOne",
          "expressionValues" : {
              ":plusOne" : { "N" : 1 }
          }
      }
  }
  ```
+ In **Configure the response mapping template**, paste the following:

  ```
  $utils.toJson($context.result)
  ```
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created `downvotePost` field on the **Mutation** type, and then choose **Attach**.
+ In the **Action menu**, choose **Update runtime**, then choose **Unit Resolver (VTL only)**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "UpdateItem",
      "key" : {
          "id" : $util.dynamodb.toDynamoDBJson($context.arguments.id)
      },
      "update" : {
          "expression" : "ADD downs :plusOne, version :plusOne",
          "expressionValues" : {
              ":plusOne" : { "N" : 1 }
          }
      }
  }
  ```
+ In **Configure the response mapping template**, paste the following:

  ```
  $utils.toJson($context.result)
  ```
+ Choose **Save**.
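Both templates rely on the `ADD` clause, which for number attributes means create-or-increment. A minimal sketch of that semantics (illustrative only, not part of the tutorial):

```python
def apply_add(item, deltas):
    # Semantics of the ADD clause for number attributes: create the
    # attribute if it is missing, otherwise add the delta to it.
    for attr, delta in deltas.items():
        item[attr] = item.get(attr, 0) + delta
    return item
```

Because the increment happens inside DynamoDB, concurrent upvotes don't need the optimistic-locking check used by `updatePost`.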

### Call the API to upvote and downvote a Post
<a name="call-the-api-to-upvote-and-downvote-a-post"></a>

Now that the new resolvers have been set up, AWS AppSync knows how to translate an incoming `upvotePost` or `downvotePost` mutation to a DynamoDB `UpdateItem` operation. You can now run mutations to upvote or downvote the post you created earlier.
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier.

  ```
  mutation votePost {
    upvotePost(id:123) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The post is updated in DynamoDB and should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "upvotePost": {
        "id": "123",
        "author": "A new author",
        "title": "An empty story",
        "content": null,
        "url": "https://aws.amazon.com/appsync/",
        "ups": 6,
        "downs": 0,
        "version": 4
      }
    }
  }
  ```
+ Choose **Execute query** a few more times. You should see the `ups` and `version` fields incrementing by 1 each time you execute the query.
+ Change the query to call the `downvotePost` mutation as follows:

  ```
  mutation votePost {
    downvotePost(id:123) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button). This time, you should see the `downs` and `version` fields incrementing by 1 each time you execute the query.

  ```
  {
    "data": {
      "downvotePost": {
        "id": "123",
        "author": "A new author",
        "title": "An empty story",
        "content": null,
        "url": "https://aws.amazon.com/appsync/",
        "ups": 6,
        "downs": 4,
        "version": 12
      }
    }
  }
  ```

## Setting Up the deletePost Resolver (DynamoDB DeleteItem)
<a name="setting-up-the-deletepost-resolver-ddb-deletepost"></a>

The next mutation to set up lets you delete a post. You’ll do this using the `DeleteItem` DynamoDB operation.
+ Choose the **Schema** tab.
+ In the **Schema** pane, modify the `Mutation` type to add a new `deletePost` mutation as follows:

  ```
  type Mutation {
      deletePost(id: ID!, expectedVersion: Int): Post
      upvotePost(id: ID!): Post
      downvotePost(id: ID!): Post
      updatePost(
          id: ID!,
          author: String,
          title: String,
          content: String,
          url: String,
          expectedVersion: Int!
      ): Post
      addPost(
          id: ID!,
          author: String!,
          title: String!,
          content: String!,
          url: String!
      ): Post!
  }
  ```

  This time you made the `expectedVersion` argument optional, which is explained later when you add the request mapping template.
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created **deletePost** field on the **Mutation** type, and then choose **Attach**.
+ In the **Action menu**, choose **Update runtime**, then choose **Unit Resolver (VTL only)**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "DeleteItem",
      "key": {
          "id": $util.dynamodb.toDynamoDBJson($context.arguments.id)
      }
      #if( $context.arguments.containsKey("expectedVersion") )
          ,"condition" : {
              "expression"       : "attribute_not_exists(id) OR version = :expectedVersion",
              "expressionValues" : {
                  ":expectedVersion" : $util.dynamodb.toDynamoDBJson($context.arguments.expectedVersion)
              }
          }
      #end
  }
  ```

   **Note:** The `expectedVersion` argument is optional. If the caller sets an `expectedVersion` argument in the request, the template adds a condition that only allows the `DeleteItem` request to succeed if the item is already deleted or if the `version` attribute of the post in DynamoDB exactly matches the `expectedVersion`. If the argument is left out, no condition expression is specified on the `DeleteItem` request, and it succeeds regardless of the value of `version` or whether the item exists in DynamoDB.
+ In **Configure the response mapping template**, paste the following:

  ```
  $utils.toJson($context.result)
  ```

   **Note:** Even though you’re deleting an item, you can return the item that was deleted, if it was not already deleted.
+ Choose **Save**.

For more information about `DeleteItem` request mapping, see the [DeleteItem](aws-appsync-resolver-mapping-template-reference-dynamodb-deleteitem.md) reference documentation.
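The semantics of the optional condition can be sketched in Python. This is a toy in-memory stand-in, not an AWS SDK call:

```python
def conditional_delete(store, key, expected_version=None):
    # Semantics of the DeleteItem template: no expectedVersion means an
    # unconditional delete; otherwise the condition
    # attribute_not_exists(id) OR version = :expectedVersion must hold.
    item = store.get(key)
    if (expected_version is not None and item is not None
            and item["version"] != expected_version):
        raise RuntimeError("ConditionalCheckFailedException")
    return store.pop(key, None)  # the deleted item, or None if absent
```

Note that deleting an item that is already gone succeeds and simply returns nothing, matching the behavior you'll see when you call the API below.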

### Call the API to Delete a Post
<a name="call-the-api-to-delete-a-post"></a>

Now that the resolver has been set up, AWS AppSync knows how to translate an incoming `deletePost` mutation to a DynamoDB `DeleteItem` operation. You can now run a mutation to delete something in the table.
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier.

  ```
  mutation deletePost {
    deletePost(id:123) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The post is deleted from DynamoDB. Note that AWS AppSync returns the value of the item that was deleted from DynamoDB, which should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "deletePost": {
        "id": "123",
        "author": "A new author",
        "title": "An empty story",
        "content": null,
        "url": "https://aws.amazon.com/appsync/",
        "ups": 6,
        "downs": 4,
        "version": 12
      }
    }
  }
  ```

The value is only returned if this call to `deletePost` was the one that actually deleted it from DynamoDB.
+ Choose **Execute query** again.
+ The call still succeeds, but no value is returned.

  ```
  {
    "data": {
      "deletePost": null
    }
  }
  ```

Now let’s try deleting a post, but this time specifying an `expectedVersion`. First though, you’ll need to create a new post because you’ve just deleted the one you’ve been working with so far.
+ In the **Queries** pane, paste the following mutation:

  ```
  mutation addPost {
    addPost(
      id:123
      author: "AUTHORNAME"
      title: "Our second post!"
      content: "A new post."
      url: "https://aws.amazon.com/appsync/"
    ) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The results of the newly created post should appear in the results pane to the right of the query pane. Note down the `id` of the newly created object because you need it in just a moment. It should look similar to the following:

  ```
  {
    "data": {
      "addPost": {
        "id": "123",
        "author": "AUTHORNAME",
        "title": "Our second post!",
        "content": "A new post.",
        "url": "https://aws.amazon.com/appsync/",
        "ups": 1,
        "downs": 0,
        "version": 1
      }
    }
  }
  ```

Now let’s try to delete that post, but put in the wrong value for `expectedVersion`:
+ In the **Queries** pane, paste the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier.

  ```
  mutation deletePost {
    deletePost(
      id:123
      expectedVersion: 9999
    ) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button).

  ```
  {
    "data": {
      "deletePost": null
    },
    "errors": [
      {
        "path": [
          "deletePost"
        ],
        "data": {
          "id": "123",
          "author": "AUTHORNAME",
          "title": "Our second post!",
          "content": "A new post.",
          "url": "https://aws.amazon.com/appsync/",
          "ups": 1,
          "downs": 0,
          "version": 1
        },
        "errorType": "DynamoDB:ConditionalCheckFailedException",
        "locations": [
          {
            "line": 2,
            "column": 3
          }
        ],
        "message": "The conditional request failed (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ConditionalCheckFailedException; Request ID: ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ)"
      }
    ]
  }
  ```

  The request failed because the condition expression evaluates to false: the value for `version` of the post in DynamoDB does not match the `expectedVersion` specified in the arguments. The current value of the object is returned in the `data` field in the `errors` section of the GraphQL response.
+ Retry the request, but correct the `expectedVersion`:

  ```
  mutation deletePost {
    deletePost(
      id:123
      expectedVersion: 1
    ) {
      id
      author
      title
      content
      url
      ups
      downs
      version
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ This time the request succeeds, and the value that was deleted from DynamoDB is returned:

  ```
  {
    "data": {
      "deletePost": {
        "id": "123",
        "author": "AUTHORNAME",
        "title": "Our second post!",
        "content": "A new post.",
        "url": "https://aws.amazon.com/appsync/",
        "ups": 1,
        "downs": 0,
        "version": 1
      }
    }
  }
  ```
+ Choose **Execute query** again.
+ The call still succeeds, but this time no value is returned because the post was already deleted in DynamoDB.

```
{
  "data": {
    "deletePost": null
  }
}
```

## Setting Up the allPost Resolver (DynamoDB Scan)
<a name="setting-up-the-allpost-resolver-dynamodb-scan"></a>

So far the API is only useful if you know the `id` of each post you want to look at. Let’s add a new resolver that returns all the posts in the table.
+ Choose the **Schema** tab.
+ In the **Schema** pane, modify the `Query` type to add a new `allPost` query as follows:

  ```
  type Query {
      allPost(count: Int, nextToken: String): PaginatedPosts!
      getPost(id: ID): Post
  }
  ```
+ Add a new `PaginatedPosts` type:

  ```
  type PaginatedPosts {
      posts: [Post!]!
      nextToken: String
  }
  ```
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created **allPost** field on the **Query** type, and then choose **Attach**.
+ In the **Action menu**, choose **Update runtime**, then choose **Unit Resolver (VTL only)**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "Scan"
      #if( ${context.arguments.count} )
          ,"limit": $util.toJson($context.arguments.count)
      #end
      #if( ${context.arguments.nextToken} )
          ,"nextToken": $util.toJson($context.arguments.nextToken)
      #end
  }
  ```

  This resolver has two optional arguments: `count`, which specifies the maximum number of items to return in a single call, and `nextToken`, which can be used to retrieve the next set of results (where the value for `nextToken` comes from is shown later).
+ In **Configure the response mapping template**, paste the following:

  ```
  {
      "posts": $utils.toJson($context.result.items)
      #if( ${context.result.nextToken} )
          ,"nextToken": $util.toJson($context.result.nextToken)
      #end
  }
  ```

   **Note:** This response mapping template is different from all the others so far. The result of the `allPost` query is a `PaginatedPosts` object, which contains a list of posts and a pagination token. The shape of this object is different from what is returned from the AWS AppSync DynamoDB resolver: the list of posts is called `items` in the AWS AppSync DynamoDB resolver results, but is called `posts` in `PaginatedPosts`.
+ Choose **Save**.

For more information about `Scan` request mapping, see the [Scan](aws-appsync-resolver-mapping-template-reference-dynamodb-scan.md) reference documentation.
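The client-side loop implied by `count` and `nextToken` can be sketched as follows. Both helpers are hypothetical; in particular, a real `nextToken` is an opaque string produced by the service, not an offset like the one this toy pager uses:

```python
def fetch_all(fetch_page, count=5):
    # Follow nextToken until the service stops returning one.
    posts, token = [], None
    while True:
        page = fetch_page(count=count, next_token=token)
        posts.extend(page["posts"])
        token = page.get("nextToken")
        if token is None:
            return posts

def make_fake_api(items):
    # Toy pager over an in-memory list, standing in for the GraphQL call.
    def fetch_page(count, next_token):
        start = int(next_token) if next_token else 0
        end = start + count
        return {"posts": items[start:end],
                "nextToken": str(end) if end < len(items) else None}
    return fetch_page
```

The same loop works for the `Query`-backed `allPostsByAuthor` resolver added later, since it returns the same `PaginatedPosts` shape.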

### Call the API to Scan All Posts
<a name="call-the-api-to-scan-all-posts"></a>

Now that the resolver has been set up, AWS AppSync knows how to translate an incoming `allPost` query to a DynamoDB `Scan` operation. You can now scan the table to retrieve all the posts.

Before you can try it out though, you need to populate the table with some data because you’ve deleted everything you’ve worked with so far.
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following mutation:

  ```
  mutation addPost {
    post1: addPost(id:1 author: "AUTHORNAME" title: "A series of posts, Volume 1" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
    post2: addPost(id:2 author: "AUTHORNAME" title: "A series of posts, Volume 2" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
    post3: addPost(id:3 author: "AUTHORNAME" title: "A series of posts, Volume 3" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
    post4: addPost(id:4 author: "AUTHORNAME" title: "A series of posts, Volume 4" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
    post5: addPost(id:5 author: "AUTHORNAME" title: "A series of posts, Volume 5" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
    post6: addPost(id:6 author: "AUTHORNAME" title: "A series of posts, Volume 6" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
    post7: addPost(id:7 author: "AUTHORNAME" title: "A series of posts, Volume 7" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
    post8: addPost(id:8 author: "AUTHORNAME" title: "A series of posts, Volume 8" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
    post9: addPost(id:9 author: "AUTHORNAME" title: "A series of posts, Volume 9" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
  }
  ```
+ Choose **Execute query** (the orange play button).

Now, let’s scan the table, returning five results at a time.
+ In the **Queries** pane, paste the following query:

  ```
  query allPost {
    allPost(count: 5) {
      posts {
        id
        title
      }
      nextToken
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The first five posts should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "allPost": {
        "posts": [
          {
            "id": "5",
            "title": "A series of posts, Volume 5"
          },
          {
            "id": "1",
            "title": "A series of posts, Volume 1"
          },
          {
            "id": "6",
            "title": "A series of posts, Volume 6"
          },
          {
            "id": "9",
            "title": "A series of posts, Volume 9"
          },
          {
            "id": "7",
            "title": "A series of posts, Volume 7"
          }
        ],
        "nextToken": "eyJ2ZXJzaW9uIjoxLCJ0b2tlbiI6IkFRSUNBSGo4eHR0RG0xWXhUa1F0cEhXMEp1R3B0M1B3eThOSmRvcG9ad2RHYjI3Z0lnRkJEdXdUK09hcnovRGhNTGxLTGdMUEFBQUI1akNDQWVJR0NTcUdTSWIzRFFFSEJxQ0NBZE13Z2dIUEFnRUFNSUlCeUFZSktvWklodmNOQVFjQk1CNEdDV0NHU0FGbEF3UUJMakFSQkF6ajFodkhKU1paT1pncTRaUUNBUkNBZ2dHWnJiR1dQWGxkMDB1N0xEdGY4Z2JsbktzRjRua1VCcks3TFJLcjZBTFRMeGFwVGJZMDRqOTdKVFQyYVRwSzdzbVdtNlhWWFVCTnFIOThZTzBWZHVkdDI2RlkxMHRqMDJ2QTlyNWJTUWpTbWh6NE5UclhUMG9KZWJSQ2JJbXBlaDRSVlg0Tis0WTVCN1IwNmJQWWQzOVhsbTlUTjBkZkFYMVErVCthaXZoNE5jMk50RitxVmU3SlJ5WmpzMEFkSGduM3FWd2VrOW5oeFVVd3JlK1loUks5QkRzemdiMDlmZmFPVXpzaFZ4cVJRbC93RURlOTcrRmVJdXZNby9NZ1F6dUdNbFRyalpNR3FuYzZBRnhwa0VlZTFtR0FwVDFISElUZlluakptYklmMGUzUmcxbVlnVHVSbDh4S0trNmR0QVoraEhLVDhuNUI3VnF4bHRtSnlNUXBrZGl6KzkyL3VzNDl4OWhrMnVxSW01ZFFwMjRLNnF0dm9ZK1BpdERuQTc5djhzb0grVytYT3VuQ2NVVDY4TVZ1Wk5KYkRuSEFSSEVlaTlVNVBTelU5RGZ6d2pPdmhqWDNJMWhwdWUrWi83MDVHVjlPQUxSTGlwZWZPeTFOZFhwZTdHRDZnQW00bUJUK2c1eC9Ec3ZDbWVnSDFDVXRTdHVuU1ZFa2JpZytQRC9oMUwyRTNqSHhVQldaa28yU256WUc0cG0vV1RSWkFVZHZuQT09In0="
      }
    }
  }
  ```

You got five results and a `nextToken` that you can use to get the next set of results.
+ Update the `allPost` query to include the `nextToken` from the previous set of results:

  ```
  query allPost {
    allPost(
      count: 5
      nextToken: "eyJ2ZXJzaW9uIjoxLCJ0b2tlbiI6IkFRSUNBSGo4eHR0RG0xWXhUa1F0cEhXMEp1R3B0M1B3eThOSmRvcG9ad2RHYjI3Z0lnRlluNktJRWl6V0ZlR3hJOVJkaStrZUFBQUI1akNDQWVJR0NTcUdTSWIzRFFFSEJxQ0NBZE13Z2dIUEFnRUFNSUlCeUFZSktvWklodmNOQVFjQk1CNEdDV0NHU0FGbEF3UUJMakFSQkF5cW8yUGFSZThnalFpemRCTUNBUkNBZ2dHWk1JODhUNzhIOFVUZGtpdFM2ZFluSWRyVDg4c2lkN1RjZzB2d1k3VGJTTWpSQ2U3WjY3TkUvU2I1dWNETUdDMmdmMHErSGJSL0pteGRzYzVEYnE1K3BmWEtBdU5jSENJdWNIUkJ0UHBPWVdWdCtsS2U5L1pNcWdocXhrem1RaXI1YnIvQkt6dU5hZmJCdE93NmtoM2Jna1BKM0RjWWhpMFBGbmhMVGg4TUVGSjBCcXg3RTlHR1V5N0tUS0JLZlV3RjFQZ0JRREdrNzFYQnFMK2R1S2IrVGtZZzVYMjFrc3NyQmFVTmNXZmhTeXE0ZUJHSWhqZWQ5c3VKWjBSSTc2ZnVQdlZkR3FLNENjQmxHYXhpekZnK2pKK1FneEU1SXduRTNYYU5TR0I4QUpmamR2bU1wbUk1SEdvWjlMUUswclczbG14RDRtMlBsaTNLaEVlcm9pem5zcmdINFpvcXIrN2ltRDN3QkJNd3BLbGQzNjV5Nnc4ZnMrK2FnbTFVOUlKOFFrOGd2bEgySHFROHZrZXBrMWlLdWRIQ25LaS9USnBlMk9JeEVPazVnRFlzRTRUU09HUlVJTkxYY2MvdW1WVEpBMUthV2hWTlAvdjNlSnlZQUszbWV6N2h5WHVXZ1BkTVBNWERQdTdjVnVRa3EwK3NhbGZOd2wvSUx4bHNyNDVwTEhuVFpyRWZvVlV1bXZ5S2VKY1RUU1lET05hM1NwWEd2UT09In0="
    ) {
      posts {
        id
        author
      }
      nextToken
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The remaining four posts should appear in the results pane to the right of the query pane. There is no `nextToken` in this set of results because you’ve paged through all nine posts, with none remaining. It should look similar to the following:

  ```
  {
    "data": {
      "allPost": {
        "posts": [
          {
            "id": "2",
            "title": "A series of posts, Volume 2"
          },
          {
            "id": "3",
            "title": "A series of posts, Volume 3"
          },
          {
            "id": "4",
            "title": "A series of posts, Volume 4"
          },
          {
            "id": "8",
            "title": "A series of posts, Volume 8"
          }
        ],
        "nextToken": null
      }
    }
  }
  ```

## Setting Up the allPostsByAuthor Resolver (DynamoDB Query)
<a name="setting-up-the-allpostsbyauthor-resolver-ddb-query"></a>

In addition to scanning DynamoDB for all posts, you can also query DynamoDB to retrieve posts created by a specific author. The DynamoDB table you created earlier already has a `GlobalSecondaryIndex` called `author-index` that you can use with a DynamoDB `Query` operation to retrieve all posts created by a specific author.
+ Choose the **Schema** tab.
+ In the **Schema** pane, modify the `Query` type to add a new `allPostsByAuthor` query as follows:

  ```
  type Query {
      allPostsByAuthor(author: String!, count: Int, nextToken: String): PaginatedPosts!
      allPost(count: Int, nextToken: String): PaginatedPosts!
      getPost(id: ID): Post
  }
  ```

   **Note:** This uses the same `PaginatedPosts` type that you used with the `allPost` query.
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created **allPostsByAuthor** field on the **Query** type, and then choose **Attach**.
+ In the **Action menu**, choose **Update runtime**, then choose **Unit Resolver (VTL only)**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "Query",
      "index" : "author-index",
      "query" : {
        "expression": "author = :author",
          "expressionValues" : {
            ":author" : $util.dynamodb.toDynamoDBJson($context.arguments.author)
          }
      }
      #if( ${context.arguments.count} )
          ,"limit": $util.toJson($context.arguments.count)
      #end
      #if( ${context.arguments.nextToken} )
          ,"nextToken": "${context.arguments.nextToken}"
      #end
  }
  ```

  Like the `allPost` resolver, this resolver has two optional arguments: `count`, which specifies the maximum number of items to return in a single call, and `nextToken`, which can be used to retrieve the next set of results (the value for `nextToken` can be obtained from a previous call).
+ In **Configure the response mapping template**, paste the following:

  ```
  {
      "posts": $utils.toJson($context.result.items)
      #if( ${context.result.nextToken} )
          ,"nextToken": $util.toJson($context.result.nextToken)
      #end
  }
  ```

   **Note:** This is the same response mapping template that you used in the `allPost` resolver.
+ Choose **Save**.

For more information about `Query` request mapping, see the [Query](aws-appsync-resolver-mapping-template-reference-dynamodb-query.md) reference documentation.
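Conceptually, the `Query` against `author-index` returns the same posts a filter on `author` would select. The sketch below models only the result shape; a real `Query` reads just the matching index entries instead of walking the whole table like this toy version does:

```python
def query_author_index(posts, author):
    # What the Query against author-index returns, modeled as a filter.
    # A real DynamoDB Query reads only the matching index entries; it
    # does not examine every item the way this toy version does.
    return [p for p in posts if p["author"] == author]
```

This is why a `Query` against an index stays fast as the table grows, while a `Scan` with an equivalent filter would still read every item.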

### Call the API to Query All Posts by an Author
<a name="call-the-api-to-query-all-posts-by-an-author"></a>

Now that the resolver has been set up, AWS AppSync knows how to translate an incoming `allPostsByAuthor` query to a DynamoDB `Query` operation against the `author-index` index. You can now query the table to retrieve all the posts by a specific author.

Before you do that, however, let’s populate the table with some more posts, because every post so far has the same author.
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following mutation:

  ```
  mutation addPost {
    post1: addPost(id:10 author: "Nadia" title: "The cutest dog in the world" content: "So cute. So very, very cute." url: "https://aws.amazon.com/appsync/" ) { author, title }
    post2: addPost(id:11 author: "Nadia" title: "Did you know...?" content: "AppSync works offline?" url: "https://aws.amazon.com/appsync/" ) { author, title }
    post3: addPost(id:12 author: "Steve" title: "I like GraphQL" content: "It's great" url: "https://aws.amazon.com/appsync/" ) { author, title }
  }
  ```
+ Choose **Execute query** (the orange play button).

Now, let’s query the table, returning all posts authored by `Nadia`.
+ In the **Queries** pane, paste the following query:

  ```
  query allPostsByAuthor {
    allPostsByAuthor(author: "Nadia") {
      posts {
        id
        title
      }
      nextToken
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ All the posts authored by `Nadia` should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "allPostsByAuthor": {
        "posts": [
          {
            "id": "10",
            "title": "The cutest dog in the world"
          },
          {
            "id": "11",
            "title": "Did you know...?"
          }
        ],
        "nextToken": null
      }
    }
  }
  ```

Pagination works for `Query` just the same as it does for `Scan`. For example, let’s look for all posts by `AUTHORNAME`, getting five at a time.
+ In the **Queries** pane, paste the following query:

  ```
  query allPostsByAuthor {
    allPostsByAuthor(
      author: "AUTHORNAME"
      count: 5
    ) {
      posts {
        id
        title
      }
      nextToken
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ All the posts authored by `AUTHORNAME` should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "allPostsByAuthor": {
        "posts": [
          {
            "id": "6",
            "title": "A series of posts, Volume 6"
          },
          {
            "id": "4",
            "title": "A series of posts, Volume 4"
          },
          {
            "id": "2",
            "title": "A series of posts, Volume 2"
          },
          {
            "id": "7",
            "title": "A series of posts, Volume 7"
          },
          {
            "id": "1",
            "title": "A series of posts, Volume 1"
          }
        ],
        "nextToken": "eyJ2ZXJzaW9uIjoxLCJ0b2tlbiI6IkFRSUNBSGo4eHR0RG0xWXhUa1F0cEhXMEp1R3B0M1B3eThOSmRvcG9ad2RHYjI3Z0lnSExqRnVhVUR3ZUhEZ2QzNGJ2QlFuY0FBQUNqekNDQW9zR0NTcUdTSWIzRFFFSEJxQ0NBbnd3Z2dKNEFnRUFNSUlDY1FZSktvWklodmNOQVFjQk1CNEdDV0NHU0FGbEF3UUJMakFSQkF5Qkg4Yk1obW9LVEFTZHM3SUNBUkNBZ2dKQ3dISzZKNlJuN3pyYUVKY1pWNWxhSkNtZW1KZ0F5N1dhZkc2UEdTNHpNQzJycTkwZHFJTFV6Z25wck9Gd3pMS3VOQ2JvUXc3VDI5eCtnVExIbGg4S3BqbzB1YjZHQ3FwcDhvNDVmMG9JbDlmdS9JdjNXcFNNSXFKTXZ1MEVGVWs1VzJQaW5jZGlUaVRtZFdYWlU1bkV2NkgyRFBRQWZYYlNnSmlHSHFLbmJZTUZZM0FTdmRIL0hQaVZBb1RCMk1YZkg0eGJOVTdEbjZtRFNhb2QwbzdHZHJEWDNtODQ1UXBQUVNyUFhHemY0WDkyajhIdlBCSWE4Smcrb0RxbHozUVQ5N2FXUXdYWWU2S0h4emI1ejRITXdEdXEyRDRkYzhoMi9CbW10MzRMelVGUVIyaExSZGRaZ0xkdzF5cHJZdFZwY3dEc1d4UURBTzdOcjV2ZEp4VVR2TVhmODBRSnp1REhXREpTVlJLdDJwWmlpaXhXeGRwRmNod1BzQ3d2aVBqMGwrcWFFWU1jMXNQbENkVkFGem43VXJrSThWbS8wWHlwR2xZb3BSL2FkV0xVekgrbGMrYno1ZEM2SnVLVXdtY1EyRXlZeDZiS0Izbi9YdUViWGdFeU5PMWZTdE1rRlhyWmpvMVpzdlYyUFRjMzMrdEs0ZDhkNkZrdjh5VVR6WHhJRkxIaVNsOUx6VVdtT3BCaWhrTFBCT09jcXkyOHh1UmkzOEM3UFRqMmN6c3RkOUo1VUY0azBJdUdEbVZzM2xjdWg1SEJjYThIeXM2aEpvOG1HbFpMNWN6R2s5bi8vRE1EbDY3RlJraG5QNFNhSDBpZGI5VFEvMERLeFRBTUdhcWpPaEl5ekVqd2ZDQVJleFdlbldyOGlPVkhScDhGM25WZVdvbFRGK002N0xpdi9XNGJXdDk0VEg3b0laUU5lYmZYKzVOKy9Td25Hb1dyMTlWK0pEb2lIRVFLZ1cwMWVuYjZKUXo5Slh2Tm95ZzF3RnJPVmxGc2xwNlRHa1BlN2Rnd2IrWT0ifQ=="
      }
    }
  }
  ```
+ Update the `nextToken` argument with the value returned from the previous query as follows:

  ```
  query allPostsByAuthor {
    allPostsByAuthor(
      author: "AUTHORNAME"
      count: 5
      nextToken: "eyJ2ZXJzaW9uIjoxLCJ0b2tlbiI6IkFRSUNBSGo4eHR0RG0xWXhUa1F0cEhXMEp1R3B0M1B3eThOSmRvcG9ad2RHYjI3Z0lnSExqRnVhVUR3ZUhEZ2QzNGJ2QlFuY0FBQUNqekNDQW9zR0NTcUdTSWIzRFFFSEJxQ0NBbnd3Z2dKNEFnRUFNSUlDY1FZSktvWklodmNOQVFjQk1CNEdDV0NHU0FGbEF3UUJMakFSQkF5Qkg4Yk1obW9LVEFTZHM3SUNBUkNBZ2dKQ3dISzZKNlJuN3pyYUVKY1pWNWxhSkNtZW1KZ0F5N1dhZkc2UEdTNHpNQzJycTkwZHFJTFV6Z25wck9Gd3pMS3VOQ2JvUXc3VDI5eCtnVExIbGg4S3BqbzB1YjZHQ3FwcDhvNDVmMG9JbDlmdS9JdjNXcFNNSXFKTXZ1MEVGVWs1VzJQaW5jZGlUaVRtZFdYWlU1bkV2NkgyRFBRQWZYYlNnSmlHSHFLbmJZTUZZM0FTdmRIL0hQaVZBb1RCMk1YZkg0eGJOVTdEbjZtRFNhb2QwbzdHZHJEWDNtODQ1UXBQUVNyUFhHemY0WDkyajhIdlBCSWE4Smcrb0RxbHozUVQ5N2FXUXdYWWU2S0h4emI1ejRITXdEdXEyRDRkYzhoMi9CbW10MzRMelVGUVIyaExSZGRaZ0xkdzF5cHJZdFZwY3dEc1d4UURBTzdOcjV2ZEp4VVR2TVhmODBRSnp1REhXREpTVlJLdDJwWmlpaXhXeGRwRmNod1BzQ3d2aVBqMGwrcWFFWU1jMXNQbENkVkFGem43VXJrSThWbS8wWHlwR2xZb3BSL2FkV0xVekgrbGMrYno1ZEM2SnVLVXdtY1EyRXlZeDZiS0Izbi9YdUViWGdFeU5PMWZTdE1rRlhyWmpvMVpzdlYyUFRjMzMrdEs0ZDhkNkZrdjh5VVR6WHhJRkxIaVNsOUx6VVdtT3BCaWhrTFBCT09jcXkyOHh1UmkzOEM3UFRqMmN6c3RkOUo1VUY0azBJdUdEbVZzM2xjdWg1SEJjYThIeXM2aEpvOG1HbFpMNWN6R2s5bi8vRE1EbDY3RlJraG5QNFNhSDBpZGI5VFEvMERLeFRBTUdhcWpPaEl5ekVqd2ZDQVJleFdlbldyOGlPVkhScDhGM25WZVdvbFRGK002N0xpdi9XNGJXdDk0VEg3b0laUU5lYmZYKzVOKy9Td25Hb1dyMTlWK0pEb2lIRVFLZ1cwMWVuYjZKUXo5Slh2Tm95ZzF3RnJPVmxGc2xwNlRHa1BlN2Rnd2IrWT0ifQ=="
    ) {
      posts {
        id
        title
      }
      nextToken
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The remaining posts authored by `AUTHORNAME` should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "allPostsByAuthor": {
        "posts": [
          {
            "id": "8",
            "title": "A series of posts, Volume 8"
          },
          {
            "id": "5",
            "title": "A series of posts, Volume 5"
          },
          {
            "id": "3",
            "title": "A series of posts, Volume 3"
          },
          {
            "id": "9",
            "title": "A series of posts, Volume 9"
          }
        ],
        "nextToken": null
      }
    }
  }
  ```
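
The pattern above — pass `nextToken` back to the server until it returns `null` — is the standard way to drain a paginated result set. The following sketch models that loop with a stubbed `executeQuery` function standing in for a real GraphQL client; the page contents and function name are illustrative:

```javascript
// Two stubbed pages mimicking the allPostsByAuthor responses above.
const pages = [
    { posts: [{ id: "6" }, { id: "4" }], nextToken: "TOKEN_1" },
    { posts: [{ id: "8" }, { id: "5" }], nextToken: null }
];
let call = 0;

// A real client would send the allPostsByAuthor query here.
function executeQuery(author, count, nextToken) {
    return pages[call++];
}

function fetchAllPosts(author) {
    const all = [];
    let nextToken = null;
    do {
        const page = executeQuery(author, 5, nextToken);
        all.push(...page.posts);
        nextToken = page.nextToken;
    } while (nextToken); // stop once the server returns a null token
    return all;
}

const result = fetchAllPosts("AUTHORNAME");
console.log(result.length); // 4 posts collected across both pages
```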

## Using Sets
<a name="using-sets"></a>

Up to this point, the `Post` type has been a flat key/value object. You can also model complex objects, such as sets, lists, and maps, with the AWS AppSync DynamoDB resolver.

Let’s update the `Post` type to include tags. A post can have 0 or more tags, which are stored in DynamoDB as a String Set. You’ll also set up some mutations to add and remove tags, and a new query to scan for posts with a specific tag.
+ Choose the **Schema** tab.
+ In the **Schema** pane, modify the `Post` type to add a new `tags` field as follows:

  ```
  type Post {
    id: ID!
    author: String
    title: String
    content: String
    url: String
    ups: Int!
    downs: Int!
    version: Int!
    tags: [String!]
  }
  ```
+ In the **Schema** pane, modify the `Query` type to add a new `allPostsByTag` query as follows:

  ```
  type Query {
    allPostsByTag(tag: String!, count: Int, nextToken: String): PaginatedPosts!
    allPostsByAuthor(author: String!, count: Int, nextToken: String): PaginatedPosts!
    allPost(count: Int, nextToken: String): PaginatedPosts!
    getPost(id: ID): Post
  }
  ```
+ In the **Schema** pane, modify the `Mutation` type to add new `addTag` and `removeTag` mutations as follows:

  ```
  type Mutation {
    addTag(id: ID!, tag: String!): Post
    removeTag(id: ID!, tag: String!): Post
    deletePost(id: ID!, expectedVersion: Int): Post
    upvotePost(id: ID!): Post
    downvotePost(id: ID!): Post
    updatePost(
      id: ID!,
      author: String,
      title: String,
      content: String,
      url: String,
      expectedVersion: Int!
    ): Post
    addPost(
      author: String!,
      title: String!,
      content: String!,
      url: String!
    ): Post!
  }
  ```
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created **allPostsByTag** field on the **Query** type, and then choose **Attach**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "Scan",
      "filter": {
        "expression": "contains (tags, :tag)",
          "expressionValues": {
            ":tag": $util.dynamodb.toDynamoDBJson($context.arguments.tag)
          }
      }
      #if( ${context.arguments.count} )
          ,"limit": $util.toJson($context.arguments.count)
      #end
      #if( ${context.arguments.nextToken} )
          ,"nextToken": $util.toJson($context.arguments.nextToken)
      #end
  }
  ```
+ In **Configure the response mapping template**, paste the following:

  ```
  {
      "posts": $utils.toJson($context.result.items)
      #if( ${context.result.nextToken} )
          ,"nextToken": $util.toJson($context.result.nextToken)
      #end
  }
  ```
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created **addTag** field on the **Mutation** type, and then choose **Attach**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "UpdateItem",
      "key" : {
          "id" : $util.dynamodb.toDynamoDBJson($context.arguments.id)
      },
      "update" : {
          "expression" : "ADD tags :tags, version :plusOne",
          "expressionValues" : {
              ":tags" : { "SS": [ $util.toJson($context.arguments.tag) ] },
              ":plusOne" : { "N" : 1 }
          }
      }
  }
  ```
+ In **Configure the response mapping template**, paste the following:

  ```
  $utils.toJson($context.result)
  ```
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created **removeTag** field on the **Mutation** type, and then choose **Attach**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
      "version" : "2017-02-28",
      "operation" : "UpdateItem",
      "key" : {
          "id" : $util.dynamodb.toDynamoDBJson($context.arguments.id)
      },
      "update" : {
          "expression" : "DELETE tags :tags ADD version :plusOne",
          "expressionValues" : {
              ":tags" : { "SS": [ $util.toJson($context.arguments.tag) ] },
              ":plusOne" : { "N" : 1 }
          }
      }
  }
  ```
+ In **Configure the response mapping template**, paste the following:

  ```
  $utils.toJson($context.result)
  ```
+ Choose **Save**.
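
DynamoDB string sets behave like mathematical sets: `ADD` performs a union (adding an existing member is a no-op), and `DELETE` removes members if present. The following is a minimal local model of those semantics using a JavaScript `Set` — it only illustrates the behavior and makes no DynamoDB calls:

```javascript
// Model a post whose "tags" attribute is a DynamoDB string set (SS).
const post = { id: "10", tags: new Set() };

function addTag(item, tag) {
    item.tags.add(tag);    // like "ADD tags :tags" — union, no duplicates
    return [...item.tags];
}

function removeTag(item, tag) {
    item.tags.delete(tag); // like "DELETE tags :tags" — removes if present
    return [...item.tags];
}

addTag(post, "dog");
addTag(post, "puppy");
addTag(post, "dog");       // adding an existing member changes nothing
console.log(removeTag(post, "puppy")); // [ 'dog' ]
```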

### Call the API to Work with Tags
<a name="call-the-api-to-work-with-tags"></a>

Now that you’ve set up the resolvers, AWS AppSync knows how to translate incoming `addTag`, `removeTag`, and `allPostsByTag` requests into DynamoDB `UpdateItem` and `Scan` operations.

To try it out, let’s select one of the posts you created earlier. For example, let’s use a post authored by `Nadia`.
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following query:

  ```
  query allPostsByAuthor {
    allPostsByAuthor(
      author: "Nadia"
    ) {
      posts {
        id
        title
      }
      nextToken
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ All of Nadia’s posts should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "allPostsByAuthor": {
        "posts": [
          {
            "id": "10",
            "title": "The cutest dog in the world"
          },
          {
            "id": "11",
            "title": "Did you known...?"
          }
        ],
        "nextToken": null
      }
    }
  }
  ```
+ Let’s use the one with the title `"The cutest dog in the world"`. Note down its `id` because you’ll use it later.

Now let’s try adding a `dog` tag.
+ In the **Queries** pane, paste the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier.

  ```
  mutation addTag {
    addTag(id:10 tag: "dog") {
      id
      title
      tags
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The post is updated with the new tag.

  ```
  {
    "data": {
      "addTag": {
        "id": "10",
        "title": "The cutest dog in the world",
        "tags": [
          "dog"
        ]
      }
    }
  }
  ```

You can add more tags as follows:
+ Update the mutation to change the `tag` argument to `puppy`.

  ```
  mutation addTag {
    addTag(id:10 tag: "puppy") {
      id
      title
      tags
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The post is updated with the new tag.

  ```
  {
    "data": {
      "addTag": {
        "id": "10",
        "title": "The cutest dog in the world",
        "tags": [
          "dog",
          "puppy"
        ]
      }
    }
  }
  ```

You can also delete tags:
+ In the **Queries** pane, paste the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier.

  ```
  mutation removeTag {
    removeTag(id:10 tag: "puppy") {
      id
      title
      tags
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The post is updated and the `puppy` tag is deleted.

  ```
  {
    "data": {
      "removeTag": {
        "id": "10",
        "title": "The cutest dog in the world",
        "tags": [
          "dog"
        ]
      }
    }
  }
  ```

You can also search for all posts that have a tag:
+ In the **Queries** pane, paste the following query:

  ```
  query allPostsByTag {
    allPostsByTag(tag: "dog") {
      posts {
        id
        title
        tags
      }
      nextToken
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ All posts that have the `dog` tag are returned as follows:

  ```
  {
    "data": {
      "allPostsByTag": {
        "posts": [
          {
            "id": "10",
            "title": "The cutest dog in the world",
            "tags": [
              "dog"
            ]
          }
        ],
        "nextToken": null
      }
    }
  }
  ```

## Using Lists and Maps
<a name="using-lists-and-maps"></a>

In addition to using DynamoDB sets, you can also use DynamoDB lists and maps to model complex data in a single object.

Let’s add the ability to add comments to posts. This will be modeled as a list of map objects on the `Post` object in DynamoDB.

 **Note:** In a real application, you would model comments in their own table. For this tutorial, you’ll just add them to the `Post` table.
+ Choose the **Schema** tab.
+ In the **Schema** pane, add a new `Comment` type as follows:

  ```
  type Comment {
      author: String!
      comment: String!
  }
  ```
+ In the **Schema** pane, modify the `Post` type to add a new `comments` field as follows:

  ```
  type Post {
    id: ID!
    author: String
    title: String
    content: String
    url: String
    ups: Int!
    downs: Int!
    version: Int!
    tags: [String!]
    comments: [Comment!]
  }
  ```
+ In the **Schema** pane, modify the `Mutation` type to add a new `addComment` mutation as follows:

  ```
  type Mutation {
    addComment(id: ID!, author: String!, comment: String!): Post
    addTag(id: ID!, tag: String!): Post
    removeTag(id: ID!, tag: String!): Post
    deletePost(id: ID!, expectedVersion: Int): Post
    upvotePost(id: ID!): Post
    downvotePost(id: ID!): Post
    updatePost(
      id: ID!,
      author: String,
      title: String,
      content: String,
      url: String,
      expectedVersion: Int!
    ): Post
    addPost(
      author: String!,
      title: String!,
      content: String!,
      url: String!
    ): Post!
  }
  ```
+ Choose **Save**.
+ In the **Data types** pane on the right, find the newly created **addComment** field on the **Mutation** type, and then choose **Attach**.
+ In **Data source name**, choose **PostDynamoDBTable**.
+ In **Configure the request mapping template**, paste the following:

  ```
  {
    "version" : "2017-02-28",
    "operation" : "UpdateItem",
    "key" : {
      "id" : $util.dynamodb.toDynamoDBJson($context.arguments.id)
    },
    "update" : {
      "expression" : "SET comments = list_append(if_not_exists(comments, :emptyList), :newComment) ADD version :plusOne",
      "expressionValues" : {
        ":emptyList": { "L" : [] },
        ":newComment" : { "L" : [
          { "M": {
            "author": $util.dynamodb.toDynamoDBJson($context.arguments.author),
            "comment": $util.dynamodb.toDynamoDBJson($context.arguments.comment)
            }
          }
        ] },
        ":plusOne" : $util.dynamodb.toDynamoDBJson(1)
      }
    }
  }
  ```

  This update expression will append a list containing our new comment to the existing `comments` list. If the list doesn’t already exist, it will be created.
+ In **Configure the response mapping template**, paste the following:

  ```
  $utils.toJson($context.result)
  ```
+ Choose **Save**.
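
The `list_append(if_not_exists(comments, :emptyList), :newComment)` expression seeds the attribute with an empty list on first use and concatenates on every call after that. Its behavior can be modeled locally as follows (a sketch only; no DynamoDB calls are made):

```javascript
// Local models of the DynamoDB functions used in the update expression.
function ifNotExists(value, fallback) {
    return value === undefined ? fallback : value;
}

function listAppend(a, b) {
    return a.concat(b); // list_append concatenates two lists
}

const item = { id: "10" }; // no "comments" attribute yet

function addComment(it, author, comment) {
    it.comments = listAppend(ifNotExists(it.comments, []), [{ author, comment }]);
    return it.comments;
}

addComment(item, "Steve", "Such a cute dog.");
addComment(item, "Nadia", "Thanks!");          // second call keeps appending
console.log(item.comments.length); // 2
```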

### Call the API to Add a Comment
<a name="call-the-api-to-add-a-comment"></a>

Now that you’ve set up the resolvers, AWS AppSync knows how to translate incoming `addComment` requests into DynamoDB `UpdateItem` operations.

Let’s try it out by adding a comment to the same post you added the tags to.
+ Choose the **Queries** tab.
+ In the **Queries** pane, paste the following mutation:

  ```
  mutation addComment {
    addComment(
      id:10
      author: "Steve"
      comment: "Such a cute dog."
    ) {
      id
      comments {
        author
        comment
      }
    }
  }
  ```
+ Choose **Execute query** (the orange play button).
+ The new comment should appear in the results pane to the right of the query pane. It should look similar to the following:

  ```
  {
    "data": {
      "addComment": {
        "id": "10",
        "comments": [
          {
            "author": "Steve",
            "comment": "Such a cute dog."
          }
        ]
      }
    }
  }
  ```

If you execute the request multiple times, multiple comments will be appended to the list.

## Conclusion
<a name="conclusion"></a>

In this tutorial, you’ve built an API that lets you manipulate `Post` objects in DynamoDB using AWS AppSync and GraphQL. For more information, see the [Resolver Mapping Template Reference](resolver-mapping-template-reference.md#aws-appsync-resolver-mapping-template-reference).

To clean up, you can delete the AppSync GraphQL API from the console.

To delete the DynamoDB table and the IAM role you created for this tutorial, run the following command to delete the `AWSAppSyncTutorialForAmazonDynamoDB` stack, or delete the stack from the AWS CloudFormation console:

```
aws cloudformation delete-stack \
    --stack-name AWSAppSyncTutorialForAmazonDynamoDB
```

# Using AWS Lambda resolvers in AWS AppSync
<a name="tutorial-lambda-resolvers"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

You can use AWS Lambda with AWS AppSync to resolve any GraphQL field. For example, a GraphQL query might send a call to an Amazon Relational Database Service (Amazon RDS) instance, and a GraphQL mutation might write to an Amazon Kinesis stream. In this section, we'll show you how to write a Lambda function that performs business logic based on the invocation of a GraphQL field operation.

## Create a Lambda function
<a name="create-a-lam-function"></a>

The following example shows a Lambda function written in `Node.js` that performs different operations on blog posts as part of a blog post application.

```
exports.handler = (event, context, callback) => {
    console.log("Received event", JSON.stringify(event, null, 3));
    var posts = {
         "1": {"id": "1", "title": "First book", "author": "Author1", "url": "https://amazon.com/", "content": "SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1", "ups": "100", "downs": "10"},
         "2": {"id": "2", "title": "Second book", "author": "Author2", "url": "https://amazon.com", "content": "SAMPLE TEXT AUTHOR 2 SAMPLE TEXT AUTHOR 2 SAMPLE TEXT", "ups": "100", "downs": "10"},
         "3": {"id": "3", "title": "Third book", "author": "Author3", "url": null, "content": null, "ups": null, "downs": null },
         "4": {"id": "4", "title": "Fourth book", "author": "Author4", "url": "https://www.amazon.com/", "content": "SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4", "ups": "1000", "downs": "0"},
         "5": {"id": "5", "title": "Fifth book", "author": "Author5", "url": "https://www.amazon.com/", "content": "SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT", "ups": "50", "downs": "0"} };

    var relatedPosts = {
        "1": [posts['4']],
        "2": [posts['3'], posts['5']],
        "3": [posts['2'], posts['1']],
        "4": [posts['2'], posts['1']],
        "5": []
    };

    console.log("Got an Invoke Request.");
    switch(event.field) {
        case "getPost":
            var id = event.arguments.id;
            callback(null, posts[id]);
            break;
        case "allPosts":
            var values = [];
            for(var d in posts){
                values.push(posts[d]);
            }
            callback(null, values);
            break;
        case "addPost":
            // return the arguments back
            callback(null, event.arguments);
            break;
        case "addPostErrorWithData":
            var id = event.arguments.id;
            var result = posts[id];
            // attach additional error information to the post
            result.errorMessage = 'Error with the mutation, data has changed';
            result.errorType = 'MUTATION_ERROR';
            callback(null, result);
            break;
        case "relatedPosts":
            var id = event.source.id;
            callback(null, relatedPosts[id]);
            break;
        default:
            callback("Unknown field, unable to resolve " + event.field, null);
            break;
    }
};
```

This Lambda function retrieves a post by ID, adds a post, retrieves a list of posts, and fetches related posts for a given post.

 **Note:** The Lambda function uses the `switch` statement on `event.field` to determine which field is currently being resolved.
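
Before wiring the function to AWS AppSync, you can exercise the dispatch pattern locally. The following reduced harness (trimmed to two hypothetical posts) invokes a handler the same way a resolver would, with `field` and `arguments` in the event:

```javascript
// Reduced local harness for the event.field dispatch pattern above.
const posts = {
    "1": { id: "1", title: "First book", author: "Author1" },
    "2": { id: "2", title: "Second book", author: "Author2" }
};

const handler = (event, context, callback) => {
    switch (event.field) {
        case "getPost":
            callback(null, posts[event.arguments.id]);
            break;
        case "allPosts":
            callback(null, Object.values(posts));
            break;
        default:
            callback("Unknown field, unable to resolve " + event.field, null);
            break;
    }
};

// Invoke it the way AWS AppSync would for getPost(id: "2").
let fetched;
handler({ field: "getPost", arguments: { id: "2" } }, null, (err, result) => {
    fetched = result;
    console.log(result.title); // Second book
});
```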

Create this Lambda function using the AWS Management Console or an AWS CloudFormation stack. To create the function from a CloudFormation stack, you can use the following AWS Command Line Interface (AWS CLI) command:

```
aws cloudformation create-stack --stack-name AppSyncLambdaExample \
--template-url https://s3.us-west-2.amazonaws.com/awsappsync/resources/lambda/LambdaCFTemplate.yaml \
--capabilities CAPABILITY_NAMED_IAM
```

You can also launch the CloudFormation stack in the US West (Oregon) AWS Region in your AWS account from here:

[https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/lambda/LambdaCFTemplate.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/lambda/LambdaCFTemplate.yaml)

## Configure a data source for Lambda
<a name="configure-data-source-for-lamlong"></a>

After you create the Lambda function, navigate to your GraphQL API in the AWS AppSync console, and then choose the **Data Sources** tab.

Choose **Create data source**, enter a friendly **Data source name** (for example, **Lambda**), and then for **Data source type**, choose **AWS Lambda function**. For **Region**, choose the same Region as your function. (If you created the function from the provided CloudFormation stack, the function is probably in **US-WEST-2**.) For **Function ARN**, choose the Amazon Resource Name (ARN) of your Lambda function.

After choosing your Lambda function, you can either create a new AWS Identity and Access Management (IAM) role (for which AWS AppSync assigns the appropriate permissions) or choose an existing role that has the following inline policy:


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:LAMBDA_FUNCTION"
        }
    ]
}
```


You must also set up a trust relationship with AWS AppSync for the IAM role as follows:


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "appsync.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```


## Create a GraphQL schema
<a name="creating-a-graphql-schema"></a>

Now that the data source is connected to your Lambda function, create a GraphQL schema.

From the schema editor in the AWS AppSync console, make sure that your schema matches the following schema:

```
schema {
    query: Query
    mutation: Mutation
}

type Query {
    getPost(id:ID!): Post
    allPosts: [Post]
}

type Mutation {
    addPost(id: ID!, author: String!, title: String, content: String, url: String): Post!
}

type Post {
    id: ID!
    author: String!
    title: String
    content: String
    url: String
    ups: Int
    downs: Int
    relatedPosts: [Post]
}
```

## Configure resolvers
<a name="configuring-resolvers"></a>

Now that you've registered a Lambda data source and a valid GraphQL schema, you can connect your GraphQL fields to your Lambda data source using resolvers.

To create a resolver, you'll need mapping templates. To learn more about mapping templates, see [Resolver Mapping Template Overview](resolver-mapping-template-reference-overview.md#aws-appsync-resolver-mapping-template-reference-overview).

For more information about Lambda mapping templates, see [Resolver mapping template reference for Lambda](resolver-mapping-template-reference-lambda.md#aws-appsync-resolver-mapping-template-reference-lambda).

In this step, you attach a resolver to the Lambda function for the following fields: `getPost(id:ID!): Post`, `allPosts: [Post]`, `addPost(id: ID!, author: String!, title: String, content: String, url: String): Post!`, and `Post.relatedPosts: [Post]`.

From the schema editor in the AWS AppSync console, on the right side, choose **Attach Resolver** for `getPost(id:ID!): Post`.

Then, in the **Action menu**, choose **Update runtime**, then choose **Unit Resolver (VTL only)**.

Afterward, choose your Lambda data source. In the **request mapping template** section, choose **Invoke And Forward Arguments**.

Modify the `payload` object to add the field name. Your template should look like the following:

```
{
    "version": "2017-02-28",
    "operation": "Invoke",
    "payload": {
        "field": "getPost",
        "arguments":  $utils.toJson($context.arguments)
    }
}
```

In the **response mapping template** section, choose **Return Lambda Result**.

In this case, use the base template as-is. It should look like the following:

```
$utils.toJson($context.result)
```

Choose **Save**. You have successfully attached your first resolver. Repeat this operation for the remaining fields as follows:

For `addPost(id: ID!, author: String!, title: String, content: String, url: String): Post!` request mapping template:

```
{
    "version": "2017-02-28",
    "operation": "Invoke",
    "payload": {
        "field": "addPost",
        "arguments":  $utils.toJson($context.arguments)
    }
}
```

For `addPost(id: ID!, author: String!, title: String, content: String, url: String): Post!` response mapping template:

```
$utils.toJson($context.result)
```

For `allPosts: [Post]` request mapping template:

```
{
    "version": "2017-02-28",
    "operation": "Invoke",
    "payload": {
        "field": "allPosts"
    }
}
```

For `allPosts: [Post]` response mapping template:

```
$utils.toJson($context.result)
```

For `Post.relatedPosts: [Post]` request mapping template:

```
{
    "version": "2017-02-28",
    "operation": "Invoke",
    "payload": {
        "field": "relatedPosts",
        "source":  $utils.toJson($context.source)
    }
}
```

For `Post.relatedPosts: [Post]` response mapping template:

```
$utils.toJson($context.result)
```

## Test your GraphQL API
<a name="testing-your-graphql-api"></a>

Now that your Lambda function is connected to GraphQL resolvers, you can run some mutations and queries using the console or a client application.

On the left side of the AWS AppSync console, choose **Queries**, and then paste in the following code:

### addPost Mutation
<a name="addpost-mutation"></a>

```
mutation addPost {
    addPost(
        id: 6
        author: "Author6"
        title: "Sixth book"
        url: "https://www.amazon.com/"
        content: "This book is a tutorial for using GraphQL with AWS AppSync."
    ) {
        id
        author
        title
        content
        url
        ups
        downs
    }
}
```

### getPost Query
<a name="getpost-query"></a>

```
query getPost {
    getPost(id: "2") {
        id
        author
        title
        content
        url
        ups
        downs
    }
}
```

### allPosts Query
<a name="allposts-query"></a>

```
query allPosts {
    allPosts {
        id
        author
        title
        content
        url
        ups
        downs
        relatedPosts {
            id
            title
        }
    }
}
```

## Returning errors
<a name="returning-errors"></a>

Any given field resolution can result in an error. With AWS AppSync, you can raise errors from the following sources:
+ Request or response mapping template
+ Lambda function

### From the mapping template
<a name="from-the-mapping-template"></a>

To raise intentional errors, you can use the `$utils.error` helper method from the Velocity Template Language (VTL) template. It takes an `errorMessage`, an `errorType`, and an optional `data` value as arguments. The `data` value is useful for returning extra information to the client when an error occurs; the `data` object is added to the `errors` block of the final GraphQL response.

The following example shows how to use it in the `Post.relatedPosts: [Post]` response mapping template:

```
$utils.error("Failed to fetch relatedPosts", "LambdaFailure", $context.result)
```

This yields a GraphQL response similar to the following:

```
{
    "data": {
        "allPosts": [
            {
                "id": "2",
                "title": "Second book",
                "relatedPosts": null
            },
            ...
        ]
    },
    "errors": [
        {
            "path": [
                "allPosts",
                0,
                "relatedPosts"
            ],
            "errorType": "LambdaFailure",
            "locations": [
                {
                    "line": 5,
                    "column": 5
                }
            ],
            "message": "Failed to fetch relatedPosts",
            "data": [
                {
                  "id": "2",
                  "title": "Second book"
                },
                {
                  "id": "1",
                  "title": "First book"
                }
            ]
        }
    ]
}
```

Here, `allPosts[0].relatedPosts` is *null* because of the error, and the `errorMessage`, `errorType`, and `data` values are present in the `errors[0]` object.
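
A client consuming such a response has to handle the partial result: `data` is still populated, and each entry in `errors` carries a `path` that locates the failed field. The following is a minimal sketch of inspecting the response (the object below is abbreviated from the example above):

```javascript
// Abbreviated GraphQL response with a partial result and one error.
const response = {
    data: { allPosts: [{ id: "2", title: "Second book", relatedPosts: null }] },
    errors: [
        {
            path: ["allPosts", 0, "relatedPosts"],
            errorType: "LambdaFailure",
            message: "Failed to fetch relatedPosts",
            data: [{ id: "2", title: "Second book" }]
        }
    ]
};

// Walk each error's path to report which field failed.
for (const err of response.errors || []) {
    console.log(err.path.join(".") + ": " + err.errorType + " - " + err.message);
}
// allPosts.0.relatedPosts: LambdaFailure - Failed to fetch relatedPosts
```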

### From the Lambda function
<a name="from-the-lam-function"></a>

AWS AppSync also understands errors that the Lambda function throws. The Lambda programming model lets you raise *handled* errors. If the Lambda function throws an error, AWS AppSync fails to resolve the current field. Only the error message returned from Lambda is set in the response. Currently, you can't pass any additional data back to the client by raising an error from the Lambda function.

 **Note**: If your Lambda function raises an *unhandled* error, AWS AppSync uses the error message that Lambda set.

The following Lambda function raises an error:

```
exports.handler = (event, context, callback) => {
    console.log("Received event", JSON.stringify(event, null, 3));
    callback("I fail. Always.");
};
```

This returns a GraphQL response similar to the following:

```
{
    "data": {
        "allPosts": [
            {
                "id": "2",
                "title": "Second book",
                "relatedPosts": null
            },
            ...
        ]
    },
    "errors": [
        {
            "path": [
                "allPosts",
                0,
                "relatedPosts"
            ],
            "errorType": "Lambda:Handled",
            "locations": [
                {
                    "line": 5,
                    "column": 5
                }
            ],
            "message": "I fail. Always."
        }
    ]
}
```

## Advanced use case: Batching
<a name="advanced-use-case-batching"></a>

The Lambda function in this example has a `relatedPosts` field that returns a list of related posts for a given post. In the example queries, the `allPosts` field invokes the Lambda function once and returns five posts. Because we specified that we also want to resolve `relatedPosts` for each returned post, the `relatedPosts` resolver is invoked five times.

```
query allPosts {
    allPosts {   // 1 Lambda invocation - yields 5 Posts
        id
        author
        title
        content
        url
        ups
        downs
        relatedPosts {   // 5 Lambda invocations - each yields 5 posts
            id
            title
        }
    }
}
```

While this might not sound substantial in this specific example, this compounded over-fetching can quickly degrade application performance and increase cost.

If you were to fetch `relatedPosts` again on the returned related `Posts` in the same query, the number of invocations would increase dramatically.

```
query allPosts {
    allPosts {   // 1 Lambda invocation - yields 5 Posts
        id
        author
        title
        content
        url
        ups
        downs
        relatedPosts {   // 5 Lambda invocations - each yield 5 posts = 5 x 5 Posts
            id
            title
            relatedPosts {  // 5 x 5 Lambda invocations - each yield 5 posts = 25 x 5 Posts
                id
                title
                author
            }
        }
    }
}
```

In this relatively simple query, AWS AppSync would invoke the Lambda function 1 + 5 + 25 = 31 times.

This is a fairly common challenge, often called the N+1 problem (in this case, N = 5), and it can incur increased latency and cost for the application.
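
For N related posts per level, a query nested d levels deep costs 1 + N + N² + … + N^d resolver invocations. This quick check confirms the figures for N = 5:

```javascript
// Total Lambda invocations for a query nested `depth` levels deep,
// where each post yields `n` related posts.
function invocations(n, depth) {
    let total = 0;
    for (let level = 0; level <= depth; level++) {
        total += n ** level; // 1 root call, then n, then n * n, ...
    }
    return total;
}

console.log(invocations(5, 1)); // 6  -> allPosts plus one relatedPosts level
console.log(invocations(5, 2)); // 31 -> 1 + 5 + 25
```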

One approach to solving this issue is to batch similar field resolver requests together. In this example, instead of having the Lambda function resolve a list of related posts for a single given post, it could instead resolve a list of related posts for a given batch of posts.

To demonstrate this, let's switch the `Post.relatedPosts: [Post]` resolver to a batch-enabled resolver.

On the right side of the AWS AppSync console, choose the existing `Post.relatedPosts: [Post]` resolver. Change the request mapping template to the following:

```
{
    "version": "2017-02-28",
    "operation": "BatchInvoke",
    "payload": {
        "field": "relatedPosts",
        "source":  $utils.toJson($context.source)
    }
}
```

Only the `operation` field has changed from `Invoke` to `BatchInvoke`. The payload field now becomes an array of whatever is specified in the template. In this example, the Lambda function receives the following as input:

```
[
    {
        "field": "relatedPosts",
        "source": {
            "id": 1
        }
    },
    {
        "field": "relatedPosts",
        "source": {
            "id": 2
        }
    },
    ...
]
```

When `BatchInvoke` is specified in the request mapping template, the Lambda function receives a list of requests and returns a list of results.

Specifically, the list of results must match the size and order of the request payload entries so that AWS AppSync can match the results accordingly.

In this batching example, the Lambda function returns a batch of results as follows:

```
[
    [{"id":"2","title":"Second book"}, {"id":"3","title":"Third book"}],   // relatedPosts for id=1
    [{"id":"3","title":"Third book"}]                                      // relatedPosts for id=2
]
```

The following Node.js Lambda function demonstrates this batching functionality for the `Post.relatedPosts` field:

```
exports.handler = (event, context, callback) => {
    console.log("Received event {}", JSON.stringify(event, null, 3));
    var posts = {
         "1": {"id": "1", "title": "First book", "author": "Author1", "url": "https://amazon.com/", "content": "SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1", "ups": "100", "downs": "10"},
         "2": {"id": "2", "title": "Second book", "author": "Author2", "url": "https://amazon.com", "content": "SAMPLE TEXT AUTHOR 2 SAMPLE TEXT AUTHOR 2 SAMPLE TEXT", "ups": "100", "downs": "10"},
         "3": {"id": "3", "title": "Third book", "author": "Author3", "url": null, "content": null, "ups": null, "downs": null },
         "4": {"id": "4", "title": "Fourth book", "author": "Author4", "url": "https://www.amazon.com/", "content": "SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4", "ups": "1000", "downs": "0"},
         "5": {"id": "5", "title": "Fifth book", "author": "Author5", "url": "https://www.amazon.com/", "content": "SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT", "ups": "50", "downs": "0"} };

    var relatedPosts = {
        "1": [posts['4']],
        "2": [posts['3'], posts['5']],
        "3": [posts['2'], posts['1']],
        "4": [posts['2'], posts['1']],
        "5": []
    };

    console.log("Got a BatchInvoke Request. The payload has %d items to resolve.", event.length);
    // event is now an array
    var field = event[0].field;
    switch(field) {
        case "relatedPosts":
            var results = [];
            // the response MUST contain the same number
            // of entries as the payload array
            for (var i=0; i< event.length; i++) {
                console.log("post {}", JSON.stringify(event[i].source));
                results.push(relatedPosts[event[i].source.id]);
            }
            console.log("results {}", JSON.stringify(results));
            callback(null, results);
            break;
        default:
            callback("Unknown field, unable to resolve: " + field, null);
            break;
    }
};
```
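To sanity-check the ordering contract locally, you can invoke a handler like the one above with a sample batch event and confirm that it returns exactly one result per request, in the same order. The following is a minimal sketch, not the full function; the abridged `relatedPosts` lookup table mirrors the example data:

```javascript
// Minimal local harness for a BatchInvoke-style handler (sketch; data abridged).
const relatedPosts = {
    "1": [{ id: "4", title: "Fourth book" }],
    "2": [{ id: "3", title: "Third book" }, { id: "5", title: "Fifth book" }],
};

const handler = (event, context, callback) => {
    // event is an array: one entry per source object in the batch
    const results = event.map((entry) => relatedPosts[entry.source.id]);
    callback(null, results);
};

// Simulate what AWS AppSync would send for a two-post batch
const batchEvent = [
    { field: "relatedPosts", source: { id: "1" } },
    { field: "relatedPosts", source: { id: "2" } },
];

handler(batchEvent, {}, (err, results) => {
    // The response MUST contain one entry per request, in the same order
    console.log(results.length === batchEvent.length); // true
    console.log(results[0][0].id); // "4"
});
```

Because the batch event and the results array are index-aligned, AWS AppSync can match each result back to the post that requested it.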

### Returning individual errors
<a name="returning-individual-errors"></a>

The previous examples show that it's possible to return a single error from the Lambda function or raise an error from the mapping templates. For batched invocations, raising an error from the Lambda function flags an entire batch as failed. This might be acceptable for specific scenarios where an irrecoverable error occurs, such as a failed connection to a data store. However, in cases where some items in the batch succeed and others fail, it's possible to return both errors and valid data. Because AWS AppSync requires the batch response to list elements matching the original size of the batch, you must define a data structure that can differentiate valid data from an error.

For example, if the Lambda function is expected to return a batch of related posts, you could choose to return a list of `Response` objects where each object has optional *data*, *errorMessage*, and *errorType* fields. If the *errorMessage* field is present, it means that an error occurred.

The following code shows how you could update the Lambda function:

```
exports.handler = (event, context, callback) => {
    console.log("Received event {}", JSON.stringify(event, null, 3));
    var posts = {
         "1": {"id": "1", "title": "First book", "author": "Author1", "url": "https://amazon.com/", "content": "SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1", "ups": "100", "downs": "10"},
         "2": {"id": "2", "title": "Second book", "author": "Author2", "url": "https://amazon.com", "content": "SAMPLE TEXT AUTHOR 2 SAMPLE TEXT AUTHOR 2 SAMPLE TEXT", "ups": "100", "downs": "10"},
         "3": {"id": "3", "title": "Third book", "author": "Author3", "url": null, "content": null, "ups": null, "downs": null },
         "4": {"id": "4", "title": "Fourth book", "author": "Author4", "url": "https://www.amazon.com/", "content": "SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4", "ups": "1000", "downs": "0"},
         "5": {"id": "5", "title": "Fifth book", "author": "Author5", "url": "https://www.amazon.com/", "content": "SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT", "ups": "50", "downs": "0"} };

    var relatedPosts = {
        "1": [posts['4']],
        "2": [posts['3'], posts['5']],
        "3": [posts['2'], posts['1']],
        "4": [posts['2'], posts['1']],
        "5": []
    };

    console.log("Got a BatchInvoke Request. The payload has %d items to resolve.", event.length);
    // event is now an array
    var field = event[0].field;
    switch(field) {
        case "relatedPosts":
            var results = [];
            results.push({ 'data': relatedPosts['1'] });
            results.push({ 'data': relatedPosts['2'] });
            results.push({ 'data': null, 'errorMessage': 'Error Happened', 'errorType': 'ERROR' });
            results.push(null);
            results.push({ 'data': relatedPosts['3'], 'errorMessage': 'Error Happened with last result', 'errorType': 'ERROR' });
            callback(null, results);
            break;
        default:
            callback("Unknown field, unable to resolve: " + field, null);
            break;
    }
};
```
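The example above hardcodes which entries fail for demonstration purposes. In practice, a handler would typically wrap each batch entry in its own try/catch so that one bad item cannot fail the whole batch. The sketch below uses a hypothetical per-entry `resolveOne` function (not part of the example above) to show the shape:

```javascript
// Sketch: per-item error isolation for a batched handler.
// resolveOne is a hypothetical per-entry resolver that may throw.
const resolveBatch = (event, resolveOne) =>
    event.map((entry) => {
        try {
            return { data: resolveOne(entry) };
        } catch (e) {
            // Mark only this entry as failed; the batch response still
            // contains one element per request, in the original order.
            return { data: null, errorMessage: e.message, errorType: "ERROR" };
        }
    });
```

With this shape, the response mapping template can raise an error for the failed entries while the successful entries resolve normally.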

For this example, the following response mapping template parses each item returned by the Lambda function and raises any errors that occur:

```
#if( $context.result && $context.result.errorMessage )
    $utils.error($context.result.errorMessage, $context.result.errorType, $context.result.data)
#else
    $utils.toJson($context.result.data)
#end
```

This example returns a GraphQL response similar to the following:

```
{
  "data": {
    "allPosts": [
      {
        "id": "1",
        "relatedPostsPartialErrors": [
          {
            "id": "4",
            "title": "Fourth book"
          }
        ]
      },
      {
        "id": "2",
        "relatedPostsPartialErrors": [
          {
            "id": "3",
            "title": "Third book"
          },
          {
            "id": "5",
            "title": "Fifth book"
          }
        ]
      },
      {
        "id": "3",
        "relatedPostsPartialErrors": null
      },
      {
        "id": "4",
        "relatedPostsPartialErrors": null
      },
      {
        "id": "5",
        "relatedPostsPartialErrors": null
      }
    ]
  },
  "errors": [
    {
      "path": [
        "allPosts",
        2,
        "relatedPostsPartialErrors"
      ],
      "errorType": "ERROR",
      "locations": [
        {
          "line": 4,
          "column": 9
        }
      ],
      "message": "Error Happened"
    },
    {
      "path": [
        "allPosts",
        4,
        "relatedPostsPartialErrors"
      ],
      "data": [
        {
          "id": "2",
          "title": "Second book"
        },
        {
          "id": "1",
          "title": "First book"
        }
      ],
      "errorType": "ERROR",
      "locations": [
        {
          "line": 4,
          "column": 9
        }
      ],
      "message": "Error Happened with last result"
    }
  ]
}
```

### Configuring the maximum batching size
<a name="configure-max-batch-size"></a>

By default, when using `BatchInvoke`, AWS AppSync sends requests to your Lambda function in batches of up to five items. You can configure the maximum batch size of your Lambda resolvers.

To configure the maximum batching size on a resolver, use the following command in the AWS Command Line Interface (AWS CLI):

```
```
$ aws appsync create-resolver --api-id <api-id> --type-name Query --field-name relatedPosts \
 --request-mapping-template "<template>" --response-mapping-template "<template>" \
 --data-source-name "<lambda-datasource>" \
 --max-batch-size X
```

**Note**  
When providing a request mapping template, you must use the `BatchInvoke` operation to use batching.

You can also use the following command to enable and configure batching on Direct Lambda Resolvers:

```
$ aws appsync create-resolver --api-id <api-id> --type-name Query --field-name relatedPosts \
 --data-source-name "<lambda-datasource>" \
 --max-batch-size X
```

### Maximum batching size configuration with VTL templates
<a name="configure-max-batch-size-vtl"></a>

For Lambda resolvers that use VTL request mapping templates, the maximum batch size has no effect unless the template explicitly specifies the `BatchInvoke` operation. Similarly, batching is never performed for top-level mutations, because the GraphQL specification requires a mutation's top-level fields to be executed sequentially.

For example, take the following mutations:

```
type Mutation {
    putItem(input: Item): Item
    putItems(inputs: [Item]): [Item]
}
```

Using the first mutation, we can create 10 `Items` as shown in the snippet below:

```
mutation MyMutation {
    v1: putItem(input: $someItem1) {
        id,
        name
    }
    v2: putItem(input: $someItem2) {
        id,
        name
    }
    v3: putItem(input: $someItem3) {
        id,
        name
    }
    v4: putItem(input: $someItem4) {
        id,
        name
    }
    v5: putItem(input: $someItem5) {
        id,
        name
    }
    v6: putItem(input: $someItem6) {
        id,
        name
    }
    v7: putItem(input: $someItem7) {
        id,
        name
    }
    v8: putItem(input: $someItem8) {
        id,
        name
    }
    v9: putItem(input: $someItem9) {
        id,
        name
    }
    v10: putItem(input: $someItem10) {
        id,
        name
    }
}
```

In this example, the `Items` will not be batched in a group of 10 even if the maximum batch size is set to 10 in the Lambda Resolver. Instead, they will execute sequentially according to the GraphQL specification.

To perform an actual batch mutation, you may follow the example below using the second mutation:

```
mutation MyMutation {
    putItems(inputs: [$someItem1, $someItem2, $someItem3, $someItem4, $someItem5, $someItem6,
        $someItem7, $someItem8, $someItem9, $someItem10]) {
        id,
        name
    }
}
```
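On the Lambda side, a handler for a batched `putItems` mutation receives the whole `inputs` list in a single invocation and can write the items together. The following is a minimal sketch under the assumption that the request mapping template forwards `{ "inputs": [...] }` as the payload; the in-memory `Map` stands in for a real data store:

```javascript
// Sketch: handling the putItems batch mutation in one invocation.
// Assumes the request mapping template forwards { "inputs": [...] } as the payload.
const store = new Map(); // stand-in for a real data store such as DynamoDB

const handler = (event, context, callback) => {
    const items = event.inputs || [];
    // Persist every item from the single invocation
    items.forEach((item) => store.set(item.id, item));
    // Return the created items so GraphQL can resolve the [Item] list
    callback(null, items);
};
```

A single `putItems` call with ten inputs now results in one Lambda invocation instead of ten sequential ones.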

For more information about using batching with Direct Lambda Resolvers, see [Direct Lambda Resolvers](resolver-mapping-template-reference-lambda.md#direct-lambda-resolvers).

# Using Amazon OpenSearch Service resolvers in AWS AppSync
<a name="tutorial-elasticsearch-resolvers"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

AWS AppSync supports using Amazon OpenSearch Service from domains that you have provisioned in your own AWS account, provided they don’t exist inside a VPC. After your domains are provisioned, you can connect to them using a data source, at which point you can configure a resolver in the schema to perform GraphQL operations such as queries, mutations, and subscriptions. This tutorial will take you through some common examples.

For more information, see the [Resolver Mapping Template Reference for OpenSearch](resolver-mapping-template-reference-elasticsearch.md#aws-appsync-resolver-mapping-template-reference-elasticsearch).

## One-Click Setup
<a name="one-click-setup"></a>

To automatically set up a GraphQL endpoint in AWS AppSync with Amazon OpenSearch Service configured, you can use this AWS CloudFormation template:

[https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/elasticsearch/appsynces.yml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/elasticsearch/appsynces.yml)

After the AWS CloudFormation deployment completes you can skip directly to [running GraphQL queries and mutations](#tutorial-elasticsearch-resolvers-perform-queries-mutations).

## Create a New OpenSearch Service Domain
<a name="create-a-new-es-domain"></a>

To get started with this tutorial, you need an existing OpenSearch Service domain. If you don’t have one, you can use the following sample. Note that it can take up to 15 minutes for an OpenSearch Service domain to be created before you can move on to integrating it with an AWS AppSync data source.

```
aws cloudformation create-stack --stack-name AppSyncOpenSearch \
--template-url https://s3.us-west-2.amazonaws.com/awsappsync/resources/elasticsearch/ESResolverCFTemplate.yaml \
--parameters ParameterKey=OSDomainName,ParameterValue=ddtestdomain ParameterKey=Tier,ParameterValue=development \
--capabilities CAPABILITY_NAMED_IAM
```

You can launch the following AWS CloudFormation stack in the US West 2 (Oregon) region in your AWS account:

 [https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/elasticsearch/ESResolverCFTemplate.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/elasticsearch/ESResolverCFTemplate.yaml)

## Configure Data Source for OpenSearch Service
<a name="configure-data-source-for-es"></a>

After the OpenSearch Service domain is created, navigate to your AWS AppSync GraphQL API and choose the **Data Sources** tab. Choose **New** and enter a friendly name for the data source, such as “oss”. Then choose **Amazon OpenSearch domain** for **Data source type**, choose the appropriate Region, and you should see your OpenSearch Service domain listed. After selecting it, you can either create a new role (AWS AppSync assigns the appropriate permissions to it) or choose an existing role that has the following inline policy:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1234234",
            "Effect": "Allow",
            "Action": [
                "es:ESHttpDelete",
                "es:ESHttpHead",
                "es:ESHttpGet",
                "es:ESHttpPost",
                "es:ESHttpPut"
            ],
            "Resource": [
                "arn:aws:es:us-east-1:111122223333:domain/democluster/*"
            ]
        }
    ]
}
```

------

You’ll also need to set up a trust relationship with AWS AppSync for that role:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "appsync.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

------

Additionally, the OpenSearch Service domain has its own **access policy**, which you can modify through the Amazon OpenSearch Service console. You need to add a policy similar to the following, with the appropriate actions and resource for your OpenSearch Service domain. Note that the **Principal** is the AWS AppSync data source role; if you let the console create the role, you can find it in the IAM console.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/service-role/APPSYNC_DATASOURCE_ROLE"
            },
            "Action": [
                "es:ESHttpDelete",
                "es:ESHttpHead",
                "es:ESHttpGet",
                "es:ESHttpPost",
                "es:ESHttpPut"
            ],
            "Resource": "arn:aws:es:us-east-1:111122223333:domain/DOMAIN_NAME/*"
        }
    ]
}
```

------

## Connecting a Resolver
<a name="connecting-a-resolver"></a>

Now that the data source is connected to your OpenSearch Service domain, you can connect it to your GraphQL schema with a resolver, as shown in the following example:

```
 schema {
   query: Query
   mutation: Mutation
 }

 type Query {
   getPost(id: ID!): Post
   allPosts: [Post]
 }

 type Mutation {
   addPost(id: ID!, author: String, title: String, url: String, ups: Int, downs: Int, content: String): AWSJSON
 }

type Post {
  id: ID!
  author: String
  title: String
  url: String
  ups: Int
  downs: Int
  content: String
}
...
```

Note that there is a user-defined `Post` type with a field of `id`. In the following examples, we assume there is a process (which can be automated) for putting this type into your OpenSearch Service domain, which would map to a path root of `/post/_doc`, where `post` is the index. From this root path, you can perform individual document searches, wildcard searches with `/id/post*`, or multi-document searches with a path of `/post/_search`. For example, if you have another type called `User`, you can index documents under a new index called `user`, then perform searches with a **path** of `/user/_search`. 
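The index-to-path convention described above can be made concrete with a small helper. The function names below are hypothetical illustrations, not part of AWS AppSync or OpenSearch Service:

```javascript
// Sketch: building OpenSearch request paths for a given index.
const searchPath = (index) => `/${index}/_search`;
const docPath = (index, id) => `/${index}/_doc/${id}`;

console.log(searchPath("post"));     // "/post/_search"
console.log(docPath("user", "123")); // "/user/_doc/123"
```

These are the `path` values you would place in the request mapping templates that follow.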

From the schema editor in the AWS AppSync console, modify the preceding `Posts` schema to include a `searchPosts` query:

```
type Query {
  getPost(id: ID!): Post
  allPosts: [Post]
  searchPosts: [Post]
}
```

Save the schema. On the right side, for `searchPosts`, choose **Attach resolver**. In the **Action** menu, choose **Update runtime**, then choose **Unit Resolver (VTL only)**. Then choose your OpenSearch Service data source. Under the **request mapping template** section, select **Query posts** from the drop-down to get a base template. Modify the `path` to be `/post/_search`. It should look like the following:

```
{
    "version":"2017-02-28",
    "operation":"GET",
    "path":"/post/_search",
    "params":{
        "headers":{},
        "queryString":{},
        "body":{
            "from":0,
            "size":50
        }
    }
}
```

This assumes that the preceding schema has documents that have been indexed in OpenSearch Service under the `post` index. If you structure your data differently, you'll need to update accordingly.

Under the **response mapping template** section, you need to specify the appropriate `_source` filter if you want to get back the data results from an OpenSearch Service query and translate to GraphQL. Use the following template:

```
[
    #foreach($entry in $context.result.hits.hits)
    #if( $velocityCount > 1 ) , #end
    $utils.toJson($entry.get("_source"))
    #end
]
```
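The response template above does in VTL what the following JavaScript does to a raw OpenSearch response: collect the `_source` of every hit into a JSON array. This is a sketch with a mocked response body, shown only to illustrate the shape of the data:

```javascript
// Sketch: what the VTL response template extracts from an OpenSearch result.
const result = {
    hits: {
        hits: [
            { _index: "post", _id: "1", _source: { id: "1", title: "First book" } },
            { _index: "post", _id: "2", _source: { id: "2", title: "Second book" } },
        ],
    },
};

// Equivalent of the #foreach over $context.result.hits.hits
const posts = result.hits.hits.map((hit) => hit._source);
console.log(JSON.stringify(posts.map((p) => p.id))); // ["1","2"]
```

Only the `_source` documents reach the GraphQL layer; the OpenSearch metadata (`_index`, `_id`, scores) is discarded.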

## Modifying Your Searches
<a name="modifying-your-searches"></a>

The preceding request mapping template performs a simple query for all records. Suppose you want to search by a specific author. Further, suppose you want that author to be an argument defined in your GraphQL query. In the schema editor of the AWS AppSync console, add an `allPostsByAuthor` query:

```
type Query {
  getPost(id: ID!): Post
  allPosts: [Post]
  allPostsByAuthor(author: String!): [Post]
  searchPosts: [Post]
}
```

Now choose **Attach resolver** and select the OpenSearch Service data source, but use the following example in the **request mapping template**:

```
{
    "version":"2017-02-28",
    "operation":"GET",
    "path":"/post/_search",
    "params":{
        "headers":{},
        "queryString":{},
        "body":{
            "from":0,
            "size":50,
            "query":{
                "match" :{
                    "author": $util.toJson($context.arguments.author)
                }
            }
        }
    }
}
```

Note that the `body` is populated with a `match` query on the `author` field, which is passed through from the client as an argument. You could optionally prepopulate information, such as standard text, or even use other [utilities](resolver-context-reference.md#aws-appsync-resolver-mapping-template-context-reference).

If you’re using this resolver, fill in the **response mapping template** with the same information as the previous example.

## Adding Data to OpenSearch Service
<a name="adding-data-to-es"></a>

You may want to add data to your OpenSearch Service domain as the result of a GraphQL mutation. This is a powerful mechanism for searching and other purposes. Because you can use GraphQL subscriptions to [make your data real-time](aws-appsync-real-time-data.md), it serves as a mechanism for notifying clients of updates to data in your OpenSearch Service domain.

Return to the **Schema** page in the AWS AppSync console and select **Attach resolver** for the `addPost()` mutation. Select the OpenSearch Service data source again and use the following **request mapping template** for the `Posts` schema:

```
{
    "version":"2017-02-28",
    "operation":"PUT",
    "path": $util.toJson("/post/_doc/$context.arguments.id"),
    "params":{
        "headers":{},
        "queryString":{},
        "body":{
            "id": $util.toJson($context.arguments.id),
            "author": $util.toJson($context.arguments.author),
            "ups": $util.toJson($context.arguments.ups),
            "downs": $util.toJson($context.arguments.downs),
            "url": $util.toJson($context.arguments.url),
            "content": $util.toJson($context.arguments.content),
            "title": $util.toJson($context.arguments.title)
        }
    }
}
```

As before, this is an example of how your data might be structured. If you have different field names or indexes, you need to update the `path` and `body` as appropriate. This example also shows how to use `$context.arguments` to populate the template from your GraphQL mutation arguments.

Before moving on, use the following response mapping template, which will return the result of the mutation operation or error information as output:

```
#if($context.error)
    $util.toJson($ctx.error)
#else
    $util.toJson($context.result)
#end
```

## Retrieving a Single Document
<a name="retrieving-a-single-document"></a>

Finally, if you want to use the `getPost(id:ID)` query in your schema to return an individual document, find this query in the schema editor of the AWS AppSync console and choose **Attach resolver**. Select the OpenSearch Service data source again and use the following mapping template:

```
{
    "version":"2017-02-28",
    "operation":"GET",
    "path": $util.toJson("/post/_doc/$context.arguments.id"),
    "params":{
        "headers":{},
        "queryString":{},
        "body":{}
    }
}
```

Because the `path` above uses the `id` argument with an empty body, this returns the single document. However, you need to use the following response mapping template, because now you’re returning a single item and not a list:

```
$utils.toJson($context.result.get("_source"))
```

## Perform Queries and Mutations
<a name="tutorial-elasticsearch-resolvers-perform-queries-mutations"></a>

You should now be able to perform GraphQL operations against your OpenSearch Service domain. Navigate to the **Queries** tab of the AWS AppSync console and add a new record:

```
mutation addPost {
    addPost (
        id:"12345"
        author: "Fred"
        title: "My first book"
        content: "This will be fun to write!"
        url: "publisher website",
        ups: 100,
        downs:20 
       )
}
```

You’ll see the result of the mutation on the right. Similarly, you can now run a `searchPosts` query against your OpenSearch Service domain:

```
query searchPosts {
    searchPosts {
        id
        title
        author
        content
    }
}
```

## Best Practices
<a name="best-practices"></a>
+ OpenSearch Service should be used for querying data, not as your primary database. You may want to use OpenSearch Service in conjunction with Amazon DynamoDB as outlined in [Combining GraphQL Resolvers](tutorial-combining-graphql-resolvers.md#aws-appsync-tutorial-combining-graphql-resolvers).
+ Grant access to your domain only through the AWS AppSync service role.
+ You can start small in development, with the lowest-cost cluster, and then move to a larger cluster with high availability (HA) as you move into production.

# Using local resolvers in AWS AppSync
<a name="tutorial-local-resolvers"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

AWS AppSync allows you to use supported data sources (AWS Lambda, Amazon DynamoDB, or Amazon OpenSearch Service) to perform various operations. However, in certain scenarios, a call to a supported data source might not be necessary.

This is where the local resolver comes in handy. Instead of calling a remote data source, the local resolver will just **forward** the result of the request mapping template to the response mapping template. The field resolution will not leave AWS AppSync.

Local resolvers are useful for several use cases. The most popular use case is to publish notifications without triggering a data source call. To demonstrate this use case, let’s build a paging application where users can page each other. This example leverages *Subscriptions*, so if you aren’t familiar with *Subscriptions*, you can follow the [Real-Time Data](aws-appsync-real-time-data.md) tutorial.

## Create the Paging Application
<a name="create-the-paging-application"></a>

In our paging application, clients can subscribe to an inbox, and send pages to other clients. Each page includes a message. Here is the schema:

```
schema {
    query: Query
    mutation: Mutation
    subscription: Subscription
}

type Subscription {
    inbox(to: String!): Page
    @aws_subscribe(mutations: ["page"])
}

type Mutation {
    page(body: String!, to: String!): Page!
}

type Page {
    from: String
    to: String!
    body: String!
    sentAt: String!
}

type Query {
    me: String
}
```

Let’s attach a resolver on the `Mutation.page` field. In the **Schema** pane, choose *Attach Resolver* next to the field definition in the right panel. Create a new data source of type *None* and name it *PageDataSource*.

For the request mapping template, enter:

```
{
  "version": "2017-02-28",
  "payload": {
    "body": $util.toJson($context.arguments.body),
    "from": $util.toJson($context.identity.username),
    "to":  $util.toJson($context.arguments.to),
    "sentAt": "$util.time.nowISO8601()"
  }
}
```

For the response mapping template, select the default *Forward the result*. Save your resolver. Your application is now ready. Let’s page!

## Send and subscribe to pages
<a name="send-and-subscribe-to-pages"></a>

For clients to receive pages, they must first be subscribed to an inbox.

In the **Queries** pane let’s execute the `inbox` subscription:

```
subscription Inbox {
    inbox(to: "Nadia") {
        body
        to
        from
        sentAt
    }
}
```

*Nadia* will receive pages whenever the `Mutation.page` mutation is invoked. Let’s invoke it by executing the following mutation:

```
mutation Page {
    page(to: "Nadia", body: "Hello, World!") {
        body
        to
        from
        sentAt
    }
}
```

We just demonstrated the use of local resolvers by sending a `Page` and receiving it without leaving AWS AppSync.

# Combining GraphQL resolvers in AWS AppSync
<a name="tutorial-combining-graphql-resolvers"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

Resolvers and fields in a GraphQL schema have 1:1 relationships with a large degree of flexibility. Because a data source is configured on a resolver independently of a schema, you have the ability for GraphQL types to be resolved or manipulated through different data sources, mixing and matching on a schema to best meet your needs.

The following example scenarios demonstrate how to mix and match data sources in your schema. Before you begin, we recommend that you are familiar with setting up data sources and resolvers for AWS Lambda, Amazon DynamoDB, and Amazon OpenSearch Service as described in the previous tutorials.

## Example schema
<a name="example-schema"></a>

The following schema has a type of `Post` with 3 `Query` operations and 3 `Mutation` operations defined:

```
type Post {
    id: ID!
    author: String!
    title: String
    content: String
    url: String
    ups: Int
    downs: Int
    version: Int!
}

type Query {
    allPost: [Post]
    getPost(id: ID!): Post
    searchPosts: [Post]
}

type Mutation {
    addPost(
        id: ID!,
        author: String!,
        title: String,
        content: String,
        url: String
    ): Post
    updatePost(
        id: ID!,
        author: String!,
        title: String,
        content: String,
        url: String,
        ups: Int!,
        downs: Int!,
        expectedVersion: Int!
    ): Post
    deletePost(id: ID!): Post
}
```

In this example, you would have a total of 6 resolvers to attach. One possible way would be to have all of these come from an Amazon DynamoDB table called `Posts`, where `allPost` runs a scan and `searchPosts` runs a query, as outlined in the [DynamoDB Resolver Mapping Template Reference](resolver-mapping-template-reference-dynamodb.md#aws-appsync-resolver-mapping-template-reference-dynamodb). However, there are alternatives to meet your business needs, such as having these GraphQL queries resolve from Lambda or OpenSearch Service.

## Alter data through resolvers
<a name="alter-data-through-resolvers"></a>

You might have the need to return results from a database such as DynamoDB (or Amazon Aurora) to clients with some of the attributes changed. This might be due to formatting of the data types, such as timestamp differences on clients, or to handle backwards compatibility issues. For illustrative purposes, in the following example, an AWS Lambda function manipulates the up-votes and down-votes for blog posts by assigning them random numbers each time the GraphQL resolver is invoked:

```
'use strict';
// 'dynamodb-doc' is the legacy document client bundled with older
// Lambda Node.js runtimes
const doc = require('dynamodb-doc');
const dynamo = new doc.DynamoDB();

exports.handler = (event, context, callback) => {
    const payload = {
        TableName: 'Posts',
        Limit: 50,
        Select: 'ALL_ATTRIBUTES',
    };

    dynamo.scan(payload, (err, data) => {
        if (err) {
            return callback(err);
        }
        // Overwrite ups/downs with random values before returning the items
        const result = data.Items.map(item => {
            item.ups = Math.floor(Math.random() * (50 - 10) + 10);
            item.downs = Math.floor(Math.random() * 20);
            return item;
        });
        callback(null, result);
    });
};
```

This is a perfectly valid Lambda function and could be attached to the `allPost` field in the GraphQL schema so that any query returning all the results gets random numbers for the ups/downs.

## DynamoDB and OpenSearch Service
<a name="ddb-and-es"></a>

For some applications, you might perform mutations or simple lookup queries against DynamoDB and have a background process transfer documents to OpenSearch Service. You can then attach the `searchPosts` resolver to the OpenSearch Service data source and return search results (from data that originated in DynamoDB) using a GraphQL query. This can be extremely powerful when adding advanced search operations to your applications, such as keyword searches, fuzzy word matching, or even geospatial lookups. Transferring data from DynamoDB can be done through an ETL process, or alternatively you can stream from DynamoDB using Lambda. You can launch a complete example of this using the following AWS CloudFormation stack in the US West (Oregon) Region in your AWS account:

[https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/multipledatasource/appsyncesdbstream.yml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/multipledatasource/appsyncesdbstream.yml) 

The schema in this example lets you add posts using a DynamoDB resolver as follows:

```
mutation add {
    putPost(author:"Nadia"
        title:"My first post"
        content:"This is some test content"
        url:"https://aws.amazon.com/appsync/"
    ){
        id
        title
    }
}
```

This writes data to DynamoDB, which then streams the data via Lambda to Amazon OpenSearch Service, where you can search for posts by different fields. For example, because the data is in Amazon OpenSearch Service, you can search either the author or content fields with free-form text, even with spaces, as follows:

```
query searchName{
    searchAuthor(name:"   Nadia   "){
        id
        title
        content
    }
}

query searchContent{
    searchContent(text:"test"){
        id
        title
        content
    }
}
```

Because the data is written directly to DynamoDB, you can still perform efficient list or item lookup operations against the table with the `allPosts{...}` and `singlePost{...}` queries. This stack uses the following example code for DynamoDB streams:

 **Note:** This code is for example only.

```
var AWS = require('aws-sdk');

var esDomain = {
    endpoint: 'https://opensearch-domain-name.REGION.es.amazonaws.com',
    region: 'REGION',
    index: 'id',
    doctype: 'post'
};

var endpoint = new AWS.Endpoint(esDomain.endpoint)
var creds = new AWS.EnvironmentCredentials('AWS');

function postDocumentToES(doc, context) {
    var req = new AWS.HttpRequest(endpoint);

    req.method = 'POST';
    req.path = '/_bulk';
    req.region = esDomain.region;
    req.body = doc;
    req.headers['presigned-expires'] = false;
    req.headers['Host'] = endpoint.host;

    // Sign the request (Sigv4)
    var signer = new AWS.Signers.V4(req, 'es');
    signer.addAuthorization(creds, new Date());

    // Post document to ES
    var send = new AWS.NodeHttpClient();
    send.handleRequest(req, null, function (httpResp) {
        var body = '';
        httpResp.on('data', function (chunk) {
            body += chunk;
        });
        httpResp.on('end', function (chunk) {
            console.log('Successful', body);
            context.succeed();
        });
    }, function (err) {
        console.log('Error: ' + err);
        context.fail();
    });
}

exports.handler = (event, context, callback) => {
    console.log("event => " + JSON.stringify(event));
    var posts = '';

    for (var i = 0; i < event.Records.length; i++) {
        var eventName = event.Records[i].eventName;
        var actionType = '';
        var image;
        var noDoc = false;
        switch (eventName) {
            case 'INSERT':
                actionType = 'create';
                image = event.Records[i].dynamodb.NewImage;
                break;
            case 'MODIFY':
                actionType = 'update';
                image = event.Records[i].dynamodb.NewImage;
                break;
            case 'REMOVE':
                actionType = 'delete';
                image = event.Records[i].dynamodb.OldImage;
                noDoc = true;
                break;
        }

        if (typeof image !== "undefined") {
            var postData = {};
            for (var key in image) {
                if (image.hasOwnProperty(key)) {
                    if (key === 'postId') {
                        postData['id'] = image[key].S;
                    } else {
                        var val = image[key];
                        if (val.hasOwnProperty('S')) {
                            postData[key] = val.S;
                        } else if (val.hasOwnProperty('N')) {
                            postData[key] = val.N;
                        }
                    }
                }
            }

            var action = {};
            action[actionType] = {};
            action[actionType]._index = 'id';
            action[actionType]._type = 'post';
            action[actionType]._id = postData['id'];
            posts += [
                JSON.stringify(action),
            ].concat(noDoc?[]:[JSON.stringify(postData)]).join('\n') + '\n';
        }
    }
    console.log('posts:',posts);
    postDocumentToES(posts, context);
};
```

You can then use DynamoDB streams to attach this to a DynamoDB table with a primary key of `id`, and any changes to the source of DynamoDB would stream into your OpenSearch Service domain. For more information about configuring this, see the [DynamoDB Streams documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html).

# Using DynamoDB batch operations in AWS AppSync
<a name="tutorial-dynamodb-batch"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

AWS AppSync supports using Amazon DynamoDB batch operations across one or more tables in a single region. Supported operations are `BatchGetItem`, `BatchPutItem`, and `BatchDeleteItem`. By using these features in AWS AppSync, you can perform tasks such as:
+ Pass a list of keys in a single query and return the results from a table
+ Read records from one or more tables in a single query
+ Write records in bulk to one or more tables
+ Conditionally write or delete records in multiple tables that might have a relation

Using batch operations with DynamoDB in AWS AppSync is an advanced technique that takes a little extra thought and knowledge of your backend operations and table structures. Additionally, batch operations in AWS AppSync have two key differences from non-batched operations:
+ The data source role must have permissions to all tables that the resolver accesses.
+ The table specification for a resolver is part of the mapping template.

## Permissions
<a name="permissions"></a>

Like other resolvers, you need to create a data source in AWS AppSync and either create a role or use an existing one. Because batch operations require different permissions on DynamoDB tables, you need to grant the configured role permissions for read or write actions:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Action": [
                "dynamodb:BatchGetItem",
                "dynamodb:BatchWriteItem"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/TABLENAME",
                "arn:aws:dynamodb:us-east-1:111122223333:table/TABLENAME/*"
            ]
        }
    ]
}
```

------

 **Note**: Roles are tied to data sources in AWS AppSync, and resolvers on fields are invoked against a data source. Data sources configured to fetch from DynamoDB have only one table specified, to keep configuration simple. Therefore, when performing a batch operation against multiple tables in a single resolver (a more advanced task), you must grant the role on that data source access to any tables the resolver interacts with. This is done in the **Resource** field of the IAM policy above. Configuration of the tables to make batch calls against is done in the resolver template, as described below.

## Data Source
<a name="data-source"></a>

For the sake of simplicity, we’ll use the same data source for all the resolvers in this tutorial. On the **Data sources** tab, create a new DynamoDB data source and name it **BatchTutorial**. The table name can be anything because table names are specified as part of the request mapping template for batch operations; here, enter `empty` as a placeholder table name.

For this tutorial, any role with the inline policy shown in the Permissions section will work.

## Single Table Batch
<a name="single-table-batch"></a>

**Warning**  
`BatchPutItem` and `BatchDeleteItem` are not supported when used with conflict detection and resolution. These settings must be disabled to prevent possible errors.

For this example, suppose you have a single table named **Posts** to which you want to add and remove items with batch operations. Use the following schema, noting that for the query, we’ll pass in a list of IDs:

```
type Post {
    id: ID!
    title: String
}

input PostInput {
    id: ID!
    title: String
}

type Query {
    batchGet(ids: [ID]): [Post]
}

type Mutation {
    batchAdd(posts: [PostInput]): [Post]
    batchDelete(ids: [ID]): [Post]
}

schema {
    query: Query
    mutation: Mutation
}
```

Attach a resolver to the `batchAdd()` field with the following **Request Mapping Template**. This automatically takes each item in the GraphQL `posts: [PostInput]` argument and builds a map, which is needed for the `BatchPutItem` operation:

```
#set($postsdata = [])
#foreach($item in ${ctx.args.posts})
    $util.qr($postsdata.add($util.dynamodb.toMapValues($item)))
#end

{
    "version" : "2018-05-29",
    "operation" : "BatchPutItem",
    "tables" : {
        "Posts": $utils.toJson($postsdata)
    }
}
```
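To visualize what `$util.dynamodb.toMapValues` builds for each item, here is a simplified, illustrative JavaScript equivalent. It handles only strings, numbers, and booleans; the real helper supports more DynamoDB types:

```javascript
// Illustrative only: a simplified JavaScript equivalent of what
// $util.dynamodb.toMapValues produces for a PostInput item.
// Only strings, numbers, and booleans are handled in this sketch.
function toMapValues(item) {
    const out = {};
    for (const [key, value] of Object.entries(item)) {
        if (typeof value === 'number') {
            out[key] = { N: String(value) }; // DynamoDB serializes numbers as strings
        } else if (typeof value === 'boolean') {
            out[key] = { BOOL: value };
        } else {
            out[key] = { S: String(value) };
        }
    }
    return out;
}

// A PostInput item such as { id: "1", title: "Running in the Park" }
// becomes { id: { S: "1" }, title: { S: "Running in the Park" } }
```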

In this case, the **Response Mapping Template** is a simple passthrough, but the results for the table are available on the context object at `$ctx.result.data.Posts`:

```
$util.toJson($ctx.result.data.Posts)
```

Now navigate to the **Queries** page of the AWS AppSync console and run the following **batchAdd** mutation:

```
mutation add {
    batchAdd(posts: [
        { id: 1, title: "Running in the Park" },
        { id: 2, title: "Playing fetch" }
    ]) {
        id
        title
    }
}
```

You should see the results printed to the screen, and you can independently validate through the DynamoDB console that both values were written to the **Posts** table.

Next, attach a resolver to the `batchGet()` field with the following **Request Mapping Template**. This automatically takes each item in the GraphQL `ids: [ID]` argument and builds a map that is needed for the `BatchGetItem` operation:

```
#set($ids = [])
#foreach($id in ${ctx.args.ids})
    #set($map = {})
    $util.qr($map.put("id", $util.dynamodb.toString($id)))
    $util.qr($ids.add($map))
#end

{
    "version" : "2018-05-29",
    "operation" : "BatchGetItem",
    "tables" : {
        "Posts": {
            "keys": $util.toJson($ids),
            "consistentRead": true,
            "projection" : {
                "expression" : "#id, title",
                "expressionNames" : { "#id" : "id"}
                }
        }
    }
}
```

The **Response Mapping Template** is again a simple passthrough, with the table results available on the context object at `$ctx.result.data.Posts`:

```
$util.toJson($ctx.result.data.Posts)
```

Now go back to the **Queries** page of the AWS AppSync console, and run the following **batchGet Query**:

```
query get {
    batchGet(ids:[1,2,3]){
        id
        title
    }
}
```

This should return the results for the two `id` values that you added earlier. Note that a `null` value is returned for the `id` with a value of `3` because there was no record in your **Posts** table with that value yet. Also note that AWS AppSync returns the results in the same order as the keys passed in to the query, which AWS AppSync does on your behalf. For example, if you switch to `batchGet(ids:[1,3,2])`, you’ll see the order change, and you’ll also know which `id` returned a `null` value.
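The ordering behavior can be sketched as follows. This is illustrative only; AWS AppSync performs this reordering for you:

```javascript
// Illustrative only: re-order BatchGetItem results so they line up with
// the requested keys, inserting null for records that were not found.
// AWS AppSync does this on your behalf.
function orderResults(requestedIds, items) {
    const byId = new Map(items.map(item => [item.id, item]));
    return requestedIds.map(id => byId.get(id) ?? null);
}
```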

Finally, attach a resolver to the `batchDelete()` field with the following **Request Mapping Template**. This automatically takes each item in the GraphQL `ids: [ID]` argument and builds a map that is needed for the `BatchDeleteItem` operation:

```
#set($ids = [])
#foreach($id in ${ctx.args.ids})
    #set($map = {})
    $util.qr($map.put("id", $util.dynamodb.toString($id)))
    $util.qr($ids.add($map))
#end

{
    "version" : "2018-05-29",
    "operation" : "BatchDeleteItem",
    "tables" : {
        "Posts": $util.toJson($ids)
    }
}
```

The **Response Mapping Template** is again a simple passthrough, with the table results available on the context object at `$ctx.result.data.Posts`:

```
$util.toJson($ctx.result.data.Posts)
```

Now go back to the **Queries** page of the AWS AppSync console, and run the following **batchDelete** mutation:

```
mutation delete {
    batchDelete(ids:[1,2]){ id }
}
```

The records with `id` `1` and `2` should now be deleted. If you re-run the `batchGet()` query from earlier, these should return `null`.

## Multi-Table Batch
<a name="multi-table-batch"></a>

**Warning**  
`BatchPutItem` and `BatchDeleteItem` are not supported when used with conflict detection and resolution. These settings must be disabled to prevent possible errors.

AWS AppSync also enables you to perform batch operations across tables. Let’s build a more complex application. Imagine we are building a Pet Health app, where sensors report the pet location and body temperature. The sensors are battery powered and attempt to connect to the network every few minutes. When a sensor establishes connection, it sends its readings to our AWS AppSync API. Triggers then analyze the data so a dashboard can be presented to the pet owner. Let’s focus on representing the interactions between the sensor and the backend data store.

As a prerequisite, let’s first create two DynamoDB tables; **locationReadings** will store sensor location readings and **temperatureReadings** will store sensor temperature readings. Both tables happen to share the same primary key structure: `sensorId (String)` being the partition key, and `timestamp (String)` the sort key.
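If you prefer to script the prerequisite tables, the following sketch builds the `CreateTable` parameters that both tables share. The `PAY_PER_REQUEST` billing mode and the use of an AWS SDK DynamoDB client are assumptions, not part of the tutorial:

```javascript
// Sketch of the CreateTable parameters shared by both tutorial tables.
// Passing these to an AWS SDK DynamoDB client is assumed, not shown here.
function readingsTableParams(tableName) {
    return {
        TableName: tableName,
        KeySchema: [
            { AttributeName: 'sensorId', KeyType: 'HASH' },   // partition key
            { AttributeName: 'timestamp', KeyType: 'RANGE' }, // sort key
        ],
        AttributeDefinitions: [
            { AttributeName: 'sensorId', AttributeType: 'S' },
            { AttributeName: 'timestamp', AttributeType: 'S' },
        ],
        BillingMode: 'PAY_PER_REQUEST', // assumption; provisioned capacity also works
    };
}

// e.g. dynamodb.createTable(readingsTableParams('locationReadings'), callback)
// and  dynamodb.createTable(readingsTableParams('temperatureReadings'), callback)
```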

Let’s use the following GraphQL schema:

```
type Mutation {
    # Register a batch of readings
    recordReadings(tempReadings: [TemperatureReadingInput], locReadings: [LocationReadingInput]): RecordResult
    # Delete a batch of readings
    deleteReadings(tempReadings: [TemperatureReadingInput], locReadings: [LocationReadingInput]): RecordResult
}

type Query {
    # Retrieve all possible readings recorded by a sensor at a specific time
    getReadings(sensorId: ID!, timestamp: String!): [SensorReading]
}

type RecordResult {
    temperatureReadings: [TemperatureReading]
    locationReadings: [LocationReading]
}

interface SensorReading {
    sensorId: ID!
    timestamp: String!
}

# Sensor reading representing the sensor temperature (in Fahrenheit)
type TemperatureReading implements SensorReading {
    sensorId: ID!
    timestamp: String!
    value: Float
}

# Sensor reading representing the sensor location (lat,long)
type LocationReading implements SensorReading {
    sensorId: ID!
    timestamp: String!
    lat: Float
    long: Float
}

input TemperatureReadingInput {
    sensorId: ID!
    timestamp: String
    value: Float
}

input LocationReadingInput {
    sensorId: ID!
    timestamp: String
    lat: Float
    long: Float
}
```

### BatchPutItem - Recording Sensor Readings
<a name="batchputitem-recording-sensor-readings"></a>

Our sensors need to be able to send their readings once they connect to the internet. The GraphQL field `Mutation.recordReadings` is the API they will use to do so. Let’s attach a resolver to bring our API to life.

Select **Attach** next to the `Mutation.recordReadings` field. On the next screen, pick the same `BatchTutorial` data source created at the beginning of the tutorial.

Let’s add the following request mapping template:

 **Request Mapping Template** 

```
## Convert tempReadings arguments to DynamoDB objects
#set($tempReadings = [])
#foreach($reading in ${ctx.args.tempReadings})
    $util.qr($tempReadings.add($util.dynamodb.toMapValues($reading)))
#end

## Convert locReadings arguments to DynamoDB objects
#set($locReadings = [])
#foreach($reading in ${ctx.args.locReadings})
    $util.qr($locReadings.add($util.dynamodb.toMapValues($reading)))
#end

{
    "version" : "2018-05-29",
    "operation" : "BatchPutItem",
    "tables" : {
        "locationReadings": $utils.toJson($locReadings),
        "temperatureReadings": $utils.toJson($tempReadings)
    }
}
```

As you can see, the `BatchPutItem` operation allows us to specify multiple tables.

Let’s use the following response mapping template.

 **Response Mapping Template** 

```
## If there was an error with the invocation
## there might have been partial results
#if($ctx.error)
    ## Append a GraphQL error for that field in the GraphQL response
    $utils.appendError($ctx.error.message, $ctx.error.message)
#end
## Also returns data for the field in the GraphQL response
$utils.toJson($ctx.result.data)
```

With batch operations, there can be both errors and results returned from the invocation. In that case, we’re free to do some extra error handling.

 **Note**: The use of `$utils.appendError()` is similar to `$util.error()`, with the major distinction that it doesn’t interrupt the evaluation of the mapping template. Instead, it signals that there was an error with the field, but allows the template to be evaluated and consequently return data back to the caller. We recommend you use `$utils.appendError()` when your application needs to return partial results.

Save the resolver and navigate to the **Queries** page of the AWS AppSync console. Let’s send some sensor readings!

Execute the following mutation:

```
mutation sendReadings {
  recordReadings(
    tempReadings: [
      {sensorId: 1, value: 85.5, timestamp: "2018-02-01T17:21:05.000+08:00"},
      {sensorId: 1, value: 85.7, timestamp: "2018-02-01T17:21:06.000+08:00"},
      {sensorId: 1, value: 85.8, timestamp: "2018-02-01T17:21:07.000+08:00"},
      {sensorId: 1, value: 84.2, timestamp: "2018-02-01T17:21:08.000+08:00"},
      {sensorId: 1, value: 81.5, timestamp: "2018-02-01T17:21:09.000+08:00"}
    ]
    locReadings: [
      {sensorId: 1, lat: 47.615063, long: -122.333551, timestamp: "2018-02-01T17:21:05.000+08:00"},
      {sensorId: 1, lat: 47.615163, long: -122.333552, timestamp: "2018-02-01T17:21:06.000+08:00"}
      {sensorId: 1, lat: 47.615263, long: -122.333553, timestamp: "2018-02-01T17:21:07.000+08:00"}
      {sensorId: 1, lat: 47.615363, long: -122.333554, timestamp: "2018-02-01T17:21:08.000+08:00"}
      {sensorId: 1, lat: 47.615463, long: -122.333555, timestamp: "2018-02-01T17:21:09.000+08:00"}
    ]) {
    locationReadings {
      sensorId
      timestamp
      lat
      long
    }
    temperatureReadings {
      sensorId
      timestamp
      value
    }
  }
}
```

We sent 10 sensor readings in one mutation, with readings split up across two tables. Use the DynamoDB console to validate that data shows up in both the **locationReadings** and **temperatureReadings** tables.

### BatchDeleteItem - Deleting Sensor Readings
<a name="batchdeleteitem-deleting-sensor-readings"></a>

Similarly, we also need to delete batches of sensor readings. Let’s use the `Mutation.deleteReadings` GraphQL field for this purpose. Select **Attach** next to the `Mutation.deleteReadings` field. On the next screen, pick the same `BatchTutorial` data source created at the beginning of the tutorial.

Let’s use the following request mapping template.

 **Request Mapping Template** 

```
## Convert tempReadings arguments to DynamoDB primary keys
#set($tempReadings = [])
#foreach($reading in ${ctx.args.tempReadings})
    #set($pkey = {})
    $util.qr($pkey.put("sensorId", $reading.sensorId))
    $util.qr($pkey.put("timestamp", $reading.timestamp))
    $util.qr($tempReadings.add($util.dynamodb.toMapValues($pkey)))
#end

## Convert locReadings arguments to DynamoDB primary keys
#set($locReadings = [])
#foreach($reading in ${ctx.args.locReadings})
    #set($pkey = {})
    $util.qr($pkey.put("sensorId", $reading.sensorId))
    $util.qr($pkey.put("timestamp", $reading.timestamp))
    $util.qr($locReadings.add($util.dynamodb.toMapValues($pkey)))
#end

{
    "version" : "2018-05-29",
    "operation" : "BatchDeleteItem",
    "tables" : {
        "locationReadings": $utils.toJson($locReadings),
        "temperatureReadings": $utils.toJson($tempReadings)
    }
}
```

The response mapping template is the same as the one we used for `Mutation.recordReadings`.

 **Response Mapping Template** 

```
## If there was an error with the invocation
## there might have been partial results
#if($ctx.error)
    ## Append a GraphQL error for that field in the GraphQL response
    $utils.appendError($ctx.error.message, $ctx.error.message)
#end
## Also return data for the field in the GraphQL response
$utils.toJson($ctx.result.data)
```

Save the resolver and navigate to the **Queries** page of the AWS AppSync console. Now, let’s delete a couple of sensor readings!

Execute the following mutation:

```
mutation deleteReadings {
  # Let's delete the first two readings we recorded
  deleteReadings(
    tempReadings: [{sensorId: 1, timestamp: "2018-02-01T17:21:05.000+08:00"}]
    locReadings: [{sensorId: 1, timestamp: "2018-02-01T17:21:05.000+08:00"}]) {
    locationReadings {
      sensorId
      timestamp
      lat
      long
    }
    temperatureReadings {
      sensorId
      timestamp
      value
    }
  }
}
```

Validate through the DynamoDB console that these two readings have been deleted from the **locationReadings** and **temperatureReadings** tables.

### BatchGetItem - Retrieve Readings
<a name="batchgetitem-retrieve-readings"></a>

Another common operation for our Pet Health app would be to retrieve the readings for a sensor at a specific point in time. Let’s attach a resolver to the `Query.getReadings` GraphQL field on our schema. Select **Attach**, and on the next screen pick the same `BatchTutorial` data source created at the beginning of the tutorial.

Let’s add the following request mapping template.

 **Request Mapping Template** 

```
## Build a single DynamoDB primary key,
## as both locationReadings and tempReadings tables
## share the same primary key structure
#set($pkey = {})
$util.qr($pkey.put("sensorId", $ctx.args.sensorId))
$util.qr($pkey.put("timestamp", $ctx.args.timestamp))

{
    "version" : "2018-05-29",
    "operation" : "BatchGetItem",
    "tables" : {
        "locationReadings": {
            "keys": [$util.dynamodb.toMapValuesJson($pkey)],
            "consistentRead": true
        },
        "temperatureReadings": {
            "keys": [$util.dynamodb.toMapValuesJson($pkey)],
            "consistentRead": true
        }
    }
}
```

Note that we are now using the **BatchGetItem** operation.

Our response mapping template is going to be a little different because we chose to return a `SensorReading` list. Let’s map the invocation result to the desired shape.

 **Response Mapping Template** 

```
## Merge locationReadings and temperatureReadings
## into a single list
## __typename needed as schema uses an interface
#set($sensorReadings = [])

#foreach($locReading in $ctx.result.data.locationReadings)
    $util.qr($locReading.put("__typename", "LocationReading"))
    $util.qr($sensorReadings.add($locReading))
#end

#foreach($tempReading in $ctx.result.data.temperatureReadings)
    $util.qr($tempReading.put("__typename", "TemperatureReading"))
    $util.qr($sensorReadings.add($tempReading))
#end

$util.toJson($sensorReadings)
```

Save the resolver and navigate to the **Queries** page of the AWS AppSync console. Now, let’s retrieve sensor readings!

Execute the following query:

```
query getReadingsForSensorAndTime {
  # Retrieve both the temperature and location readings recorded at this time
  getReadings(sensorId: 1, timestamp: "2018-02-01T17:21:06.000+08:00") {
    sensorId
    timestamp
    ...on TemperatureReading {
      value
    }
    ...on LocationReading {
      lat
      long
    }
  }
}
```

We have successfully demonstrated the use of DynamoDB batch operations using AWS AppSync.

## Error Handling
<a name="error-handling"></a>

In AWS AppSync, data source operations can sometimes return partial results. *Partial results* is the term we use when the output of an operation comprises some data and an error. Because error handling is inherently application specific, AWS AppSync gives you the opportunity to handle errors in the response mapping template. The resolver invocation error, if present, is available from the context as `$ctx.error`. Invocation errors always include a message and a type, accessible as the properties `$ctx.error.message` and `$ctx.error.type`. During the response mapping template invocation, you can handle partial results in three ways:

1. Swallow the invocation error by just returning data.

1. Raise an error (using `$util.error(...)`), stopping the response mapping template evaluation so that no data is returned.

1. Append an error (using `$util.appendError(...)`) and also return data.

Let’s demonstrate each of the three points above with DynamoDB batch operations.

### DynamoDB Batch operations
<a name="dynamodb-batch-operations"></a>

With DynamoDB batch operations, it is possible that a batch partially completes. That is, it is possible that some of the requested items or keys are left unprocessed. If AWS AppSync is unable to complete a batch, unprocessed items and an invocation error will be set on the context.

We will implement error handling using the `Query.getReadings` field configuration from the `BatchGetItem` section earlier in this tutorial. This time, let’s pretend that while executing the `Query.getReadings` field, the `temperatureReadings` DynamoDB table ran out of provisioned throughput, and DynamoDB raised a **ProvisionedThroughputExceededException** on AWS AppSync’s second attempt to process the remaining elements in the batch.

The following JSON represents the serialized context after the DynamoDB batch invocation but before the response mapping template was evaluated.

```
{
  "arguments": {
    "sensorId": "1",
    "timestamp": "2018-02-01T17:21:05.000+08:00"
  },
  "source": null,
  "result": {
    "data": {
      "temperatureReadings": [
        null
      ],
      "locationReadings": [
        {
          "lat": 47.615063,
          "long": -122.333551,
          "sensorId": "1",
          "timestamp": "2018-02-01T17:21:05.000+08:00"
        }
      ]
    },
    "unprocessedKeys": {
      "temperatureReadings": [
        {
          "sensorId": "1",
          "timestamp": "2018-02-01T17:21:05.000+08:00"
        }
      ],
      "locationReadings": []
    }
  },
  "error": {
    "type": "DynamoDB:ProvisionedThroughputExceededException",
    "message": "You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. (...)"
  },
  "outErrors": []
}
```

A few things to note on the context:
+ The invocation error has been set on the context at `$ctx.error` by AWS AppSync, and the error type has been set to **DynamoDB:ProvisionedThroughputExceededException**.
+ Results are mapped per table under `$ctx.result.data`, even though an error is present.
+ Keys that were left unprocessed are available at `$ctx.result.data.unprocessedKeys`. Here, AWS AppSync was unable to retrieve the item with key (sensorId: 1, timestamp: 2018-02-01T17:21:05.000+08:00) because of insufficient table throughput.

 **Note**: For `BatchPutItem`, it is `$ctx.result.data.unprocessedItems`. For `BatchDeleteItem`, it is `$ctx.result.data.unprocessedKeys`.

Let’s handle this error in three different ways.

#### 1. Swallowing the invocation error
<a name="swallowing-the-invocation-error"></a>

Returning data without handling the invocation error effectively swallows the error, making the result for the given GraphQL field always successful.

The response mapping template we write is familiar and only focuses on the result data.

Response mapping template:

```
$util.toJson($ctx.result.data)
```

GraphQL response:

```
{
  "data": {
    "getReadings": [
      {
        "sensorId": "1",
        "timestamp": "2018-02-01T17:21:05.000+08:00",
        "lat": 47.615063,
        "long": -122.333551
      },
      {
        "sensorId": "1",
        "timestamp": "2018-02-01T17:21:05.000+08:00",
        "value": 85.5
      }
    ]
  }
}
```

No errors appear in the errors block of the response because only the data was returned.

#### 2. Raising an error to abort the template execution
<a name="raising-an-error-to-abort-the-template-execution"></a>

When partial failures should be treated as complete failures from the client’s perspective, you can abort the template execution to prevent returning data. The `$util.error(...)` utility method achieves exactly this behavior.

Response mapping template:

```
## there was an error let's mark the entire field
## as failed and do not return any data back in the response
#if ($ctx.error)
    $util.error($ctx.error.message, $ctx.error.type, null, $ctx.result.data.unprocessedKeys)
#end

$util.toJson($ctx.result.data)
```

GraphQL response:

```
{
  "data": {
    "getReadings": null
  },
  "errors": [
    {
      "path": [
        "getReadings"
      ],
      "data": null,
      "errorType": "DynamoDB:ProvisionedThroughputExceededException",
      "errorInfo": {
        "temperatureReadings": [
          {
            "sensorId": "1",
            "timestamp": "2018-02-01T17:21:05.000+08:00"
          }
        ],
        "locationReadings": []
      },
      "locations": [
        {
          "line": 58,
          "column": 3
        }
      ],
      "message": "You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. (...)"
    }
  ]
}
```

Even though some results might have been returned from the DynamoDB batch operation, we chose to raise an error such that the `getReadings` GraphQL field is null and the error has been added to the GraphQL response *errors* block.

#### 3. Appending an error to return both data and errors
<a name="appending-an-error-to-return-both-data-and-errors"></a>

In certain cases, to provide a better user experience, applications can return partial results and notify their clients of the unprocessed items. The clients can decide to either implement a retry or translate the error back to the end user. The `$util.appendError(...)` is the utility method that enables this behavior by letting the application designer append errors on the context without interfering with the evaluation of the template. After evaluating the template, AWS AppSync will process any context errors by appending them to the errors block of the GraphQL response.

Response mapping template:

```
#if ($ctx.error)
    ## pass the unprocessed keys back to the caller via the `errorInfo` field
    $util.appendError($ctx.error.message, $ctx.error.type, null, $ctx.result.data.unprocessedKeys)
#end

$util.toJson($ctx.result.data)
```

We forwarded both the invocation error and the `unprocessedKeys` element inside the *errors* block of the GraphQL response. The `getReadings` field also returns partial data from the **locationReadings** table, as you can see in the response below.

GraphQL response:

```
{
  "data": {
    "getReadings": [
      null,
      {
        "sensorId": "1",
        "timestamp": "2018-02-01T17:21:05.000+08:00",
        "value": 85.5
      }
    ]
  },
  "errors": [
    {
      "path": [
        "getReadings"
      ],
      "data": null,
      "errorType": "DynamoDB:ProvisionedThroughputExceededException",
      "errorInfo": {
        "temperatureReadings": [
          {
            "sensorId": "1",
            "timestamp": "2018-02-01T17:21:05.000+08:00"
          }
        ],
        "locationReadings": []
      },
      "locations": [
        {
          "line": 58,
          "column": 3
        }
      ],
      "message": "You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. (...)"
    }
  ]
}
```

# Performing DynamoDB transactions in AWS AppSync
<a name="tutorial-dynamodb-transact"></a>

**Note**  
We now primarily support the APPSYNC\$1JS runtime and its documentation. Please consider using the APPSYNC\$1JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

AWS AppSync supports using Amazon DynamoDB transaction operations across one or more tables in a single region. Supported operations are `TransactGetItems` and `TransactWriteItems`. By using these features in AWS AppSync, you can perform tasks such as:
+ Pass a list of keys in a single query and return the results from a table
+ Read records from one or more tables in a single query
+ Write records to one or more tables in a transaction, in an all-or-nothing way
+ Execute transactions when some conditions are satisfied
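The last capability relies on the `condition` block of a transaction item. The following is a minimal sketch, not part of the tutorial's resolvers: it reuses the `savingAccounts` table from this tutorial, but the account number and amount are illustrative. If the condition check fails, the entire transaction is canceled:

```
{
    "version": "2018-05-29",
    "operation": "TransactWriteItems",
    "transactItems": [
        {
            "table": "savingAccounts",
            "operation": "UpdateItem",
            "key": {
                "accountNumber": $util.dynamodb.toStringJson("1")
            },
            "update": {
                "expression": "SET balance = balance - :amount",
                "expressionValues": {
                    ":amount": $util.dynamodb.toNumberJson(50)
                }
            },
            ## only debit the account if it has sufficient funds
            "condition": {
                "expression": "balance >= :amount",
                "expressionValues": {
                    ":amount": $util.dynamodb.toNumberJson(50)
                }
            }
        }
    ]
}
```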

## Permissions
<a name="permissions"></a>

As with other resolvers, you need to create a data source in AWS AppSync and either create a role or use an existing one. Because transaction operations require different permissions on DynamoDB tables, you need to grant the configured role permissions for read or write actions:

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/TABLENAME",
                "arn:aws:dynamodb:us-east-1:111122223333:table/TABLENAME/*"
            ]
        }
    ]
}
```

------

 **Note**: Roles are tied to data sources in AWS AppSync, and resolvers on fields are invoked against a data source. Data sources configured to fetch against DynamoDB only have one table specified, to keep configuration simple. Therefore, when performing a transaction operation against multiple tables in a single resolver, which is a more advanced task, you must grant the role on that data source access to any tables the resolver will interact with. This would be done in the **Resource** field in the IAM policy above. Configuration of the transaction calls against the tables is done in the resolver template, which we describe below.

## Data Source
<a name="data-source"></a>

For the sake of simplicity, we’ll use the same data source for all the resolvers in this tutorial. On the **Data sources** tab, create a new DynamoDB data source and name it **TransactTutorial**. The table name can be anything, because table names are specified as part of the request mapping template for transaction operations. We will set the table name to `empty`.

We’ll have two tables called **savingAccounts** and **checkingAccounts**, both with `accountNumber` as partition key, and a **transactionHistory** table with `transactionId` as partition key.
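If these tables don’t exist yet, you can create them from the AWS CLI. This is a sketch assuming on-demand billing; adjust the region and capacity settings for your environment:

```
aws dynamodb create-table --table-name savingAccounts \
    --attribute-definitions AttributeName=accountNumber,AttributeType=S \
    --key-schema AttributeName=accountNumber,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

aws dynamodb create-table --table-name checkingAccounts \
    --attribute-definitions AttributeName=accountNumber,AttributeType=S \
    --key-schema AttributeName=accountNumber,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

aws dynamodb create-table --table-name transactionHistory \
    --attribute-definitions AttributeName=transactionId,AttributeType=S \
    --key-schema AttributeName=transactionId,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST
```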

For this tutorial, any role with the following inline policy will work. Replace `region` and `accountId` with your region and account ID:

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/savingAccounts",
                "arn:aws:dynamodb:us-east-1:111122223333:table/savingAccounts/*",
                "arn:aws:dynamodb:us-east-1:111122223333:table/checkingAccounts",
                "arn:aws:dynamodb:us-east-1:111122223333:table/checkingAccounts/*",
                "arn:aws:dynamodb:us-east-1:111122223333:table/transactionHistory",
                "arn:aws:dynamodb:us-east-1:111122223333:table/transactionHistory/*"
            ]
        }
    ]
}
```

------

## Transactions
<a name="transactions"></a>

For this example, the context is a classic banking transaction, where we’ll use `TransactWriteItems` to:
+ Transfer money from saving accounts to checking accounts
+ Generate new transaction records for each transaction

And then we’ll use `TransactGetItems` to retrieve details from saving accounts and checking accounts.

**Warning**  
`TransactWriteItems` is not supported when used with conflict detection and resolution. These settings must be disabled to prevent possible errors.

We define our GraphQL schema as follows:

```
type SavingAccount {
    accountNumber: String!
    username: String
    balance: Float
}

type CheckingAccount {
    accountNumber: String!
    username: String
    balance: Float
}

type TransactionHistory {
    transactionId: ID!
    from: String
    to: String
    amount: Float
}

type TransactionResult {
    savingAccounts: [SavingAccount]
    checkingAccounts: [CheckingAccount]
    transactionHistory: [TransactionHistory]
}

input SavingAccountInput {
    accountNumber: String!
    username: String
    balance: Float
}

input CheckingAccountInput {
    accountNumber: String!
    username: String
    balance: Float
}

input TransactionInput {
    savingAccountNumber: String!
    checkingAccountNumber: String!
    amount: Float!
}

type Query {
    getAccounts(savingAccountNumbers: [String], checkingAccountNumbers: [String]): TransactionResult
}

type Mutation {
    populateAccounts(savingAccounts: [SavingAccountInput], checkingAccounts: [CheckingAccountInput]): TransactionResult
    transferMoney(transactions: [TransactionInput]): TransactionResult
}

schema {
    query: Query
    mutation: Mutation
}
```

### TransactWriteItems - Populate Accounts
<a name="transactwriteitems-populate-accounts"></a>

In order to transfer money between accounts, we need to populate the table with the details. We’ll use the GraphQL operation `Mutation.populateAccounts` to do so.

In the **Schema** section, choose **Attach** next to the `Mutation.populateAccounts` field. Choose a VTL unit resolver, then select the same `TransactTutorial` data source.

Now use the following request mapping template:

 **Request Mapping Template** 

```
#set($savingAccountTransactPutItems = [])
#set($index = 0)
#foreach($savingAccount in ${ctx.args.savingAccounts})
    #set($keyMap = {})
    $util.qr($keyMap.put("accountNumber", $util.dynamodb.toString($savingAccount.accountNumber)))
    #set($attributeValues = {})
    $util.qr($attributeValues.put("username", $util.dynamodb.toString($savingAccount.username)))
    $util.qr($attributeValues.put("balance", $util.dynamodb.toNumber($savingAccount.balance)))
    #set($index = $index + 1)
    #set($savingAccountTransactPutItem = {"table": "savingAccounts",
        "operation": "PutItem",
        "key": $keyMap,
        "attributeValues": $attributeValues})
    $util.qr($savingAccountTransactPutItems.add($savingAccountTransactPutItem))
#end

#set($checkingAccountTransactPutItems = [])
#set($index = 0)
#foreach($checkingAccount in ${ctx.args.checkingAccounts})
    #set($keyMap = {})
    $util.qr($keyMap.put("accountNumber", $util.dynamodb.toString($checkingAccount.accountNumber)))
    #set($attributeValues = {})
    $util.qr($attributeValues.put("username", $util.dynamodb.toString($checkingAccount.username)))
    $util.qr($attributeValues.put("balance", $util.dynamodb.toNumber($checkingAccount.balance)))
    #set($index = $index + 1)
    #set($checkingAccountTransactPutItem = {"table": "checkingAccounts",
        "operation": "PutItem",
        "key": $keyMap,
        "attributeValues": $attributeValues})
    $util.qr($checkingAccountTransactPutItems.add($checkingAccountTransactPutItem))
#end

#set($transactItems = [])
$util.qr($transactItems.addAll($savingAccountTransactPutItems))
$util.qr($transactItems.addAll($checkingAccountTransactPutItems))

{
    "version" : "2018-05-29",
    "operation" : "TransactWriteItems",
    "transactItems" : $util.toJson($transactItems)
}
```

And the following response mapping template:

 **Response Mapping Template** 

```
#if ($ctx.error)
    $util.appendError($ctx.error.message, $ctx.error.type, null, $ctx.result.cancellationReasons)
#end

#set($savingAccounts = [])
#foreach($index in [0..2])
    $util.qr($savingAccounts.add(${ctx.result.keys[$index]}))
#end

#set($checkingAccounts = [])
#foreach($index in [3..5])
    $util.qr($checkingAccounts.add(${ctx.result.keys[$index]}))
#end

#set($transactionResult = {})
$util.qr($transactionResult.put('savingAccounts', $savingAccounts))
$util.qr($transactionResult.put('checkingAccounts', $checkingAccounts))

$util.toJson($transactionResult)
```

Save the resolver and navigate to the **Queries** section of the AWS AppSync console to populate the accounts.

Execute the following mutation:

```
mutation populateAccounts {
  populateAccounts (
    savingAccounts: [
      {accountNumber: "1", username: "Tom", balance: 100},
      {accountNumber: "2", username: "Amy", balance: 90},
      {accountNumber: "3", username: "Lily", balance: 80},
    ]
    checkingAccounts: [
      {accountNumber: "1", username: "Tom", balance: 70},
      {accountNumber: "2", username: "Amy", balance: 60},
      {accountNumber: "3", username: "Lily", balance: 50},
    ]) {
    savingAccounts {
      accountNumber
    }
    checkingAccounts {
      accountNumber
    }
  }
}
```

We populated 3 saving accounts and 3 checking accounts in one mutation.

Use the DynamoDB console to validate that data shows up in both the **savingAccounts** and **checkingAccounts** tables.

### TransactWriteItems - Transfer Money
<a name="transactwriteitems-transfer-money"></a>

Attach a resolver to the `transferMoney` mutation with the following **Request Mapping Template**. Note that the `amounts`, saving account, and checking account lists are all built from the same `transactions` argument, so they have the same size and order.

```
#set($amounts = [])
#foreach($transaction in ${ctx.args.transactions})
    #set($attributeValueMap = {})
    $util.qr($attributeValueMap.put(":amount", $util.dynamodb.toNumber($transaction.amount)))
    $util.qr($amounts.add($attributeValueMap))
#end

#set($savingAccountTransactUpdateItems = [])
#set($index = 0)
#foreach($transaction in ${ctx.args.transactions})
    #set($keyMap = {})
    $util.qr($keyMap.put("accountNumber", $util.dynamodb.toString($transaction.savingAccountNumber)))
    #set($update = {})
    $util.qr($update.put("expression", "SET balance = balance - :amount"))
    $util.qr($update.put("expressionValues", $amounts[$index]))
    #set($index = $index + 1)
    #set($savingAccountTransactUpdateItem = {"table": "savingAccounts",
        "operation": "UpdateItem",
        "key": $keyMap,
        "update": $update})
    $util.qr($savingAccountTransactUpdateItems.add($savingAccountTransactUpdateItem))
#end

#set($checkingAccountTransactUpdateItems = [])
#set($index = 0)
#foreach($transaction in ${ctx.args.transactions})
    #set($keyMap = {})
    $util.qr($keyMap.put("accountNumber", $util.dynamodb.toString($transaction.checkingAccountNumber)))
    #set($update = {})
    $util.qr($update.put("expression", "SET balance = balance + :amount"))
    $util.qr($update.put("expressionValues", $amounts[$index]))
    #set($index = $index + 1)
    #set($checkingAccountTransactUpdateItem = {"table": "checkingAccounts",
        "operation": "UpdateItem",
        "key": $keyMap,
        "update": $update})
    $util.qr($checkingAccountTransactUpdateItems.add($checkingAccountTransactUpdateItem))
#end

#set($transactionHistoryTransactPutItems = [])
#foreach($transaction in ${ctx.args.transactions})
    #set($keyMap = {})
    $util.qr($keyMap.put("transactionId", $util.dynamodb.toString($util.autoId())))
    #set($attributeValues = {})
    $util.qr($attributeValues.put("from", $util.dynamodb.toString($transaction.savingAccountNumber)))
    $util.qr($attributeValues.put("to", $util.dynamodb.toString($transaction.checkingAccountNumber)))
    $util.qr($attributeValues.put("amount", $util.dynamodb.toNumber($transaction.amount)))
    #set($transactionHistoryTransactPutItem = {"table": "transactionHistory",
        "operation": "PutItem",
        "key": $keyMap,
        "attributeValues": $attributeValues})
    $util.qr($transactionHistoryTransactPutItems.add($transactionHistoryTransactPutItem))
#end

#set($transactItems = [])
$util.qr($transactItems.addAll($savingAccountTransactUpdateItems))
$util.qr($transactItems.addAll($checkingAccountTransactUpdateItems))
$util.qr($transactItems.addAll($transactionHistoryTransactPutItems))

{
    "version" : "2018-05-29",
    "operation" : "TransactWriteItems",
    "transactItems" : $util.toJson($transactItems)
}
```

We will have 3 banking transactions in a single `TransactWriteItems` operation. Use the following **Response Mapping Template**:

```
#if ($ctx.error)
    $util.appendError($ctx.error.message, $ctx.error.type, null, $ctx.result.cancellationReasons)
#end

#set($savingAccounts = [])
#foreach($index in [0..2])
    $util.qr($savingAccounts.add(${ctx.result.keys[$index]}))
#end

#set($checkingAccounts = [])
#foreach($index in [3..5])
    $util.qr($checkingAccounts.add(${ctx.result.keys[$index]}))
#end

#set($transactionHistory = [])
#foreach($index in [6..8])
    $util.qr($transactionHistory.add(${ctx.result.keys[$index]}))
#end

#set($transactionResult = {})
$util.qr($transactionResult.put('savingAccounts', $savingAccounts))
$util.qr($transactionResult.put('checkingAccounts', $checkingAccounts))
$util.qr($transactionResult.put('transactionHistory', $transactionHistory))

$util.toJson($transactionResult)
```

Now navigate to the **Queries** section of the AWS AppSync console and execute the **transferMoney** mutation as follows:

```
mutation write {
  transferMoney(
    transactions: [
      {savingAccountNumber: "1", checkingAccountNumber: "1", amount: 7.5},
      {savingAccountNumber: "2", checkingAccountNumber: "2", amount: 6.0},
      {savingAccountNumber: "3", checkingAccountNumber: "3", amount: 3.3}
    ]) {
    savingAccounts {
      accountNumber
    }
    checkingAccounts {
      accountNumber
    }
    transactionHistory {
      transactionId
    }
  }
}
```

We sent 3 banking transactions in one mutation. Use the DynamoDB console to validate that data shows up in the **savingAccounts**, **checkingAccounts**, and **transactionHistory** tables.

### TransactGetItems - Retrieve Accounts
<a name="transactgetitems-retrieve-accounts"></a>

In order to retrieve the details from saving accounts and checking accounts in a single transactional request, we’ll attach a resolver to the `Query.getAccounts` GraphQL operation on our schema. Select **Attach**, choose a VTL unit resolver, then on the next screen pick the same `TransactTutorial` data source created at the beginning of the tutorial. Configure the templates as follows:

 **Request Mapping Template** 

```
#set($savingAccountsTransactGets = [])
#foreach($savingAccountNumber in ${ctx.args.savingAccountNumbers})
    #set($savingAccountKey = {})
    $util.qr($savingAccountKey.put("accountNumber", $util.dynamodb.toString($savingAccountNumber)))
    #set($savingAccountTransactGet = {"table": "savingAccounts", "key": $savingAccountKey})
    $util.qr($savingAccountsTransactGets.add($savingAccountTransactGet))
#end

#set($checkingAccountsTransactGets = [])
#foreach($checkingAccountNumber in ${ctx.args.checkingAccountNumbers})
    #set($checkingAccountKey = {})
    $util.qr($checkingAccountKey.put("accountNumber", $util.dynamodb.toString($checkingAccountNumber)))
    #set($checkingAccountTransactGet = {"table": "checkingAccounts", "key": $checkingAccountKey})
    $util.qr($checkingAccountsTransactGets.add($checkingAccountTransactGet))
#end

#set($transactItems = [])
$util.qr($transactItems.addAll($savingAccountsTransactGets))
$util.qr($transactItems.addAll($checkingAccountsTransactGets))

{
    "version" : "2018-05-29",
    "operation" : "TransactGetItems",
    "transactItems" : $util.toJson($transactItems)
}
```

 **Response Mapping Template** 

```
#if ($ctx.error)
    $util.appendError($ctx.error.message, $ctx.error.type, null, $ctx.result.cancellationReasons)
#end

#set($savingAccounts = [])
#foreach($index in [0..2])
    $util.qr($savingAccounts.add(${ctx.result.items[$index]}))
#end

#set($checkingAccounts = [])
#foreach($index in [3..4])
    $util.qr($checkingAccounts.add($ctx.result.items[$index]))
#end

#set($transactionResult = {})
$util.qr($transactionResult.put('savingAccounts', $savingAccounts))
$util.qr($transactionResult.put('checkingAccounts', $checkingAccounts))

$util.toJson($transactionResult)
```

Save the resolver and navigate to the **Queries** section of the AWS AppSync console. In order to retrieve the saving accounts and checking accounts, execute the following query:

```
query getAccounts {
  getAccounts(
    savingAccountNumbers: ["1", "2", "3"],
    checkingAccountNumbers: ["1", "2"]
  ) {
    savingAccounts {
      accountNumber
      username
      balance
    }
    checkingAccounts {
      accountNumber
      username
      balance
    }
  }
}
```

We have successfully demonstrated the use of DynamoDB transactions using AWS AppSync.

# Using HTTP resolvers in AWS AppSync
<a name="tutorial-http-resolvers"></a>

**Note**  
We now primarily support the APPSYNC\$1JS runtime and its documentation. Please consider using the APPSYNC\$1JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

AWS AppSync enables you to use supported data sources (that is, AWS Lambda, Amazon DynamoDB, Amazon OpenSearch Service, or Amazon Aurora) to perform various operations. It also supports arbitrary HTTP endpoints as data sources to resolve GraphQL fields. After your HTTP endpoints are available, you can connect to them using a data source. Then, you can configure a resolver in the schema to perform GraphQL operations such as queries, mutations, and subscriptions. This tutorial walks you through some common examples.

In this tutorial you use a REST API (created using Amazon API Gateway and Lambda) with an AWS AppSync GraphQL endpoint.

## One-Click Setup
<a name="one-click-setup"></a>

If you want to automatically set up a GraphQL endpoint in AWS AppSync with an HTTP endpoint configured (using Amazon API Gateway and Lambda), you can use the following AWS CloudFormation template:

[https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/http/http-full.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/http/http-full.yaml)

## Creating a REST API
<a name="creating-a-rest-api"></a>

You can use the following AWS CloudFormation template to set up a REST endpoint that works for this tutorial:

[https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/http/http-api-gw.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/http/http-api-gw.yaml)

The AWS CloudFormation stack performs the following steps:

1. Sets up a Lambda function that contains your business logic for your microservice.

1. Sets up an API Gateway REST API with the following endpoint/method/content type combination:


| API Resource Path | HTTP Method | Supported Content Type | 
| --- | --- | --- | 
|  /v1/users  |  POST  |  application/json  | 
|  /v1/users  |  GET  |  application/json  | 
|  /v1/users/1  |  GET  |  application/json  | 
|  /v1/users/1  |  PUT  |  application/json  | 
|  /v1/users/1  |  DELETE  |  application/json  | 
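Before wiring the REST API into GraphQL, you can smoke-test it directly. This is a hedged sketch: the invoke URL below is a placeholder, so substitute the API Gateway URL and stage output by the CloudFormation stack:

```
# Replace with your API Gateway invoke URL (including the stage)
API="https://abc123.execute-api.us-west-2.amazonaws.com/<stage>"

# Create a user
curl -X POST "$API/v1/users" \
    -H "Content-Type: application/json" \
    -d '{"id": "1", "username": "nadia"}'

# Fetch a single user
curl "$API/v1/users/1"
```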

## Creating Your GraphQL API
<a name="creating-your-graphql-api"></a>

To create the GraphQL API in AWS AppSync:
+ Open the AWS AppSync console and choose **Create API**.
+ For the API name, type `UserData`.
+ Choose **Custom schema**.
+ Choose **Create**.

The AWS AppSync console creates a new GraphQL API for you using the API key authentication mode. You can use the console to set up the rest of the GraphQL API and run queries on it for the remainder of this tutorial.

## Creating a GraphQL Schema
<a name="creating-a-graphql-schema"></a>

Now that you have a GraphQL API, let’s create a GraphQL schema. From the schema editor in the AWS AppSync console, make sure your schema matches the following:

```
schema {
    query: Query
    mutation: Mutation
}

type Mutation {
    addUser(userInput: UserInput!): User
    deleteUser(id: ID!): User
}

type Query {
    getUser(id: ID): User
    listUser: [User!]!
}

type User {
    id: ID!
    username: String!
    firstname: String
    lastname: String
    phone: String
    email: String
}

input UserInput {
    id: ID!
    username: String!
    firstname: String
    lastname: String
    phone: String
    email: String
}
```

## Configure Your HTTP Data Source
<a name="configure-your-http-data-source"></a>

To configure your HTTP data source, do the following:
+ On the **DataSources** tab, choose **New**, and then type a friendly name for the data source (for example, `HTTP`).
+ In **Data source type**, choose **HTTP**.
+ Set the endpoint to the API Gateway endpoint that is created. Make sure that you don’t include the stage name as part of the endpoint.

 **Note:** At this time only public endpoints are supported by AWS AppSync.

 **Note:** For more information about the certifying authorities that are recognized by the AWS AppSync service, see [Certificate Authorities (CA) Recognized by AWS AppSync for HTTPS Endpoints](http-cert-authorities.md#aws-appsync-http-certificate-authorities).

## Configuring Resolvers
<a name="configuring-resolvers"></a>

In this step, you connect the HTTP data source to the **getUser** query.

To set up the resolver:
+ Choose the **Schema** tab.
+ In the **Data types** pane on the right under the **Query** type, find the **getUser** field and choose **Attach**.
+ In **Data source name**, choose **HTTP**.
+ In **Configure the request mapping template**, paste the following code:

```
{
    "version": "2018-05-29",
    "method": "GET",
    "params": {
        "headers": {
            "Content-Type": "application/json"
        }
    },
    "resourcePath": $util.toJson("/v1/users/${ctx.args.id}")
}
```
+ In **Configure the response mapping template**, paste the following code:

```
## return the body
#if($ctx.result.statusCode == 200)
    ##if response is 200
    $ctx.result.body
#else
    ##if response is not 200, append the response to error block.
    $utils.appendError($ctx.result.body, "$ctx.result.statusCode")
#end
```
+ Choose the **Query** tab, and then run the following query:

```
query GetUser{
    getUser(id:1){
        id
        username
    }
}
```

This should return the following response:

```
{
    "data": {
        "getUser": {
            "id": "1",
            "username": "nadia"
        }
    }
}
```
+ Choose the **Schema** tab.
+ In the **Data types** pane on the right under **Mutation**, find the **addUser** field and choose **Attach**.
+ In **Data source name**, choose **HTTP**.
+ In **Configure the request mapping template**, paste the following code:

```
{
    "version": "2018-05-29",
    "method": "POST",
    "resourcePath": "/v1/users",
    "params":{
      "headers":{
        "Content-Type": "application/json"
      },
      "body": $util.toJson($ctx.args.userInput)
    }
}
```
+ In **Configure the response mapping template**, paste the following code:

```
## Raise a GraphQL field error in case of a datasource invocation error
#if($ctx.error)
    $util.error($ctx.error.message, $ctx.error.type)
#end
## If the response status code is not 200, append an error. Otherwise return the body.
#if($ctx.result.statusCode == 200)
    ## If response is 200, return the body.
    $ctx.result.body
#else
    ## If response is not 200, append the response to error block.
    $utils.appendError($ctx.result.body, "$ctx.result.statusCode")
#end
```
+ Choose the **Query** tab, and then run the following query:

```
mutation addUser{
    addUser(userInput:{
        id:"2",
        username:"shaggy"
    }){
        id
        username
    }
}
```

This should return the following response:

```
{
    "data": {
        "addUser": {
            "id": "2",
            "username": "shaggy"
        }
    }
}
```

## Invoking AWS Services
<a name="invoking-aws-services"></a>

You can use HTTP resolvers to set up a GraphQL API interface for AWS services. HTTP requests to AWS must be signed with the [Signature Version 4 process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) so that AWS can identify who sent them. AWS AppSync calculates the signature on your behalf when you associate an IAM role with the HTTP data source.

You provide two additional components to invoke AWS services with HTTP resolvers:
+ An IAM role with permissions to call the AWS service APIs
+ Signing configuration in the data source

For example, if you want to call the [ListGraphqlApis operation](https://docs.aws.amazon.com/appsync/latest/APIReference/API_ListGraphqlApis.html) with HTTP resolvers, you first [create an IAM role](attaching-a-data-source.md#aws-appsync-getting-started-build-a-schema-from-scratch) that AWS AppSync assumes with the following policy attached:

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "appsync:ListGraphqlApis"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
```

------

Next, create the HTTP data source for AWS AppSync. In this example, you call AWS AppSync in the US West (Oregon) Region. Set up the following HTTP configuration in a file named `http.json`, which includes the signing region and service name:

```
{
    "endpoint": "https://appsync.us-west-2.amazonaws.com/",
    "authorizationConfig": {
        "authorizationType": "AWS_IAM",
        "awsIamConfig": {
            "signingRegion": "us-west-2",
            "signingServiceName": "appsync"
        }
    }
}
```

Then, use the AWS CLI to create the data source with an associated role as follows:

```
aws appsync create-data-source --api-id <API-ID> \
                               --name AWSAppSync \
                               --type HTTP \
                               --http-config file:///http.json \
                               --service-role-arn <ROLE-ARN>
```

When you attach a resolver to the field in the schema, use the following request mapping template to call AWS AppSync:

```
{
    "version": "2018-05-29",
    "method": "GET",
    "resourcePath": "/v1/apis"
}
```

When you run a GraphQL query for this data source, AWS AppSync signs the request using the role you provided and includes the signature in the request. The query returns a list of AWS AppSync GraphQL APIs in your account in that AWS Region.
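The tutorial shows only the request mapping template for this call. A matching response mapping template, following the same pattern used for the REST API earlier in this tutorial, could look like the following sketch:

```
## Raise a GraphQL field error in case of a datasource invocation error
#if($ctx.error)
    $util.error($ctx.error.message, $ctx.error.type)
#end
#if($ctx.result.statusCode == 200)
    ## ListGraphqlApis returns a JSON body; pass it through
    $ctx.result.body
#else
    ## If response is not 200, append the response to the error block.
    $utils.appendError($ctx.result.body, "$ctx.result.statusCode")
#end
```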

# Using Aurora Serverless v2 with AWS AppSync
<a name="tutorial-rds-resolvers"></a>

Connect your GraphQL API to Aurora Serverless databases using AWS AppSync. This integration lets you execute SQL statements through GraphQL queries, mutations, and subscriptions, giving you a flexible way to interact with your relational data.

**Note**  
This tutorial uses the `US-EAST-1` Region.

**Benefits**
+ Seamless integration between GraphQL and relational databases
+ Ability to perform SQL operations through GraphQL interfaces
+ Serverless scalability with Aurora Serverless v2
+ Secure data access through AWS Secrets Manager
+ Protection against SQL injection through input sanitization
+ Flexible query capabilities including filtering and range operations

**Common Use Cases**
+ Building scalable applications with relational data requirements
+ Creating APIs that need both GraphQL flexibility and SQL database capabilities
+ Managing data operations through GraphQL mutations and queries
+ Implementing secure database access patterns

In this tutorial, you will do the following:
+ Set up an Aurora Serverless v2 cluster
+ Enable Data API functionality
+ Create and configure database structures
+ Define GraphQL schemas for database operations
+ Implement resolvers for queries and mutations
+ Secure your data access through proper input sanitization
+ Execute various database operations through GraphQL interfaces

**Topics**
+ [Setting up your database cluster](#create-cluster)
+ [Enable Data API](#enable-data-api)
+ [Create database and table](#create-database-and-table)
+ [GraphQL schema](#graphql-schema)
+ [Connect Your API to Database Operations](#configuring-resolvers)
+ [Modify Your Data Through the API](#run-mutations)
+ [Retrieve Your Data](#run-queries)
+ [Secure Your Data Access](#input-sanitization)

## Setting up your database cluster
<a name="create-cluster"></a>

Before adding an Amazon RDS data source to AWS AppSync, you must first enable a Data API on an Aurora Serverless v2 cluster and **configure a secret** using *AWS Secrets Manager*. You can create an Aurora Serverless v2 cluster using the AWS CLI:

```
aws rds create-db-cluster \
    --db-cluster-identifier appsync-tutorial \
    --engine aurora-mysql \
    --engine-version 8.0 \
    --serverless-v2-scaling-configuration MinCapacity=0,MaxCapacity=1 \
    --master-username USERNAME \
    --master-user-password COMPLEX_PASSWORD \
    --enable-http-endpoint
```

This will return an ARN for the cluster.

After creating the cluster, you must add an Aurora Serverless v2 instance using the following command.

```
aws rds create-db-instance \
    --db-cluster-identifier appsync-tutorial \
    --db-instance-identifier appsync-tutorial-instance-1 \
    --db-instance-class db.serverless \
    --engine aurora-mysql
```

**Note**  
These endpoints take time to activate. You can check their status in the Amazon RDS console in the **Connectivity & security** tab for the cluster. You can also check the status of your cluster with the following AWS CLI command.   

```
aws rds describe-db-clusters \
    --db-cluster-identifier appsync-tutorial \
    --query "DBClusters[0].Status"
```

You can create a *Secret* using the AWS Secrets Manager Console or the AWS CLI with an input file such as the following using the `USERNAME` and `COMPLEX_PASSWORD` from the previous step.

```
{
    "username": "USERNAME",
    "password": "COMPLEX_PASSWORD"
}
```

Pass this as a parameter to the AWS CLI:

```
aws secretsmanager create-secret --name HttpRDSSecret --secret-string file://creds.json --region us-east-1
```

This will return an ARN for the secret.

**Note the ARNs** of your Aurora Serverless cluster and Secret for later use in the AWS AppSync console when creating a data source.

## Enable Data API
<a name="enable-data-api"></a>

You can enable the Data API on your cluster by [following the instructions in the RDS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html). The Data API must be enabled before adding as an AppSync data source.

## Create database and table
<a name="create-database-and-table"></a>

Once you have enabled your Data API, you can test it with the `aws rds-data execute-statement` command in the AWS CLI. This verifies that your Aurora Serverless cluster is configured correctly before you add it to your AppSync API. First, create a database called *TESTDB* with the `--sql` parameter like so:

```
aws rds-data execute-statement --resource-arn "arn:aws:rds:us-east-1:123456789000:cluster:http-endpoint-test" \
--schema "mysql"  --secret-arn "arn:aws:secretsmanager:us-east-1:123456789000:secret:testHttp2-AmNvc1"  \
--region us-east-1 --sql "create DATABASE TESTDB"
```

If this runs without error, add a table with the *create table* command:

```
aws rds-data execute-statement --resource-arn "arn:aws:rds:us-east-1:123456789000:cluster:http-endpoint-test" \
 --schema "mysql"  --secret-arn "arn:aws:secretsmanager:us-east-1:123456789000:secret:testHttp2-AmNvc1" \
 --region us-east-1 \
 --sql "create table Pets(id varchar(200), type varchar(200), price float)" --database "TESTDB"
```

If everything has run without issue you can move forward to adding the cluster as a data source in your AppSync API.

## GraphQL schema
<a name="graphql-schema"></a>

Now that your Aurora Serverless Data API is up and running with a table, we will create a GraphQL schema and attach resolvers for performing mutations and subscriptions. Create a new API in the AWS AppSync console and navigate to the **Schema** page, and enter the following:

```
type Mutation {
    createPet(input: CreatePetInput!): Pet
    updatePet(input: UpdatePetInput!): Pet
    deletePet(input: DeletePetInput!): Pet
}

input CreatePetInput {
    type: PetType
    price: Float!
}

input UpdatePetInput {
id: ID!
    type: PetType
    price: Float!
}

input DeletePetInput {
    id: ID!
}

type Pet {
    id: ID!
    type: PetType
    price: Float
}

enum PetType {
    dog
    cat
    fish
    bird
    gecko
}

type Query {
    getPet(id: ID!): Pet
    listPets: [Pet]
    listPetsByPriceRange(min: Float, max: Float): [Pet]
}

schema {
    query: Query
    mutation: Mutation
}
```

 **Save** your schema and navigate to the **Data Sources** page and create a new data source. Select **Relational database** for the Data source type, and provide a friendly name. Use the database name that you created in the previous step, as well as the **Cluster ARN** of the cluster in which you created it. For the **Role** you can either have AppSync create a new role or create one with a policy similar to the following:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-data:BatchExecuteStatement",
                "rds-data:BeginTransaction",
                "rds-data:CommitTransaction",
                "rds-data:ExecuteStatement",
                "rds-data:RollbackTransaction"
            ],
            "Resource": [
                "arn:aws:rds:us-east-1:111122223333:cluster:mydbcluster",
                "arn:aws:rds:us-east-1:111122223333:cluster:mydbcluster:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue"
            ],
            "Resource": [
                "arn:aws:secretsmanager:us-east-1:111122223333:secret:mysecret",
                "arn:aws:secretsmanager:us-east-1:111122223333:secret:mysecret:*"
            ]
        }
    ]
}
```

------

Note that there are two **Statements** in this policy granting the role access. The first **Resource** is your Aurora Serverless cluster and the second is your AWS Secrets Manager ARN. You will need to provide **BOTH** ARNs in the AppSync data source configuration before clicking **Create**.


## Connect Your API to Database Operations
<a name="configuring-resolvers"></a>

Now that you have a valid GraphQL schema and an RDS data source, you can attach resolvers to the GraphQL fields in your schema. The API offers the following capabilities:

1. create a pet using the *Mutation.createPet* field

1. update a pet using the *Mutation.updatePet* field

1. delete a pet using the *Mutation.deletePet* field

1. get a single pet using the *Query.getPet* field

1. list all pets using the *Query.listPets* field

1. list pets in a price range using the *Query.listPetsByPriceRange* field

### Mutation.createPet
<a name="mutation-createpet"></a>

From the schema editor in the AWS AppSync console, on the right side choose **Attach Resolver** for `createPet(input: CreatePetInput!): Pet`. Choose your RDS data source. In the **request mapping template** section, add the following template:

```
#set($id = $util.autoId())
{
    "version": "2018-05-29",
    "statements": [
        "insert into Pets VALUES (:ID, :TYPE, :PRICE)",
        "select * from Pets WHERE id = :ID"
    ],
    "variableMap": {
        ":ID": "$id",
        ":TYPE": $util.toJson($ctx.args.input.type),
        ":PRICE": $util.toJson($ctx.args.input.price)
    }
}
```

The system executes SQL statements sequentially, based on the order in the **statements** array. The results will come back in the same order. Since this is a mutation, you will run a *select* statement after the *insert* to retrieve the committed values in order to populate the GraphQL response mapping template.

In the **response mapping template** section, add the following template:

```
$utils.toJson($utils.rds.toJsonObject($ctx.result)[1][0])
```

Because *statements* contains two SQL queries, we need to select the second result in the matrix that comes back from the database with `$utils.rds.toJsonObject($ctx.result)[1][0]`.
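The result shape can be illustrated in Python (this is not AppSync code, and the sample row is hypothetical): the deserialized result is a list with one entry per SQL statement, each entry holding that statement's rows in order.

```python
# One entry per SQL statement, in the order they appear in "statements".
result_matrix = [
    [],                                            # statement 0: the insert returns no rows
    [{"id": "1", "type": "fish", "price": 10.0}],  # statement 1: the follow-up select
]

# "[1][0]" picks the first row returned by the second statement,
# which is the record just committed by the insert.
created_pet = result_matrix[1][0]
```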

### Mutation.updatePet
<a name="mutation-updatepet"></a>

From the schema editor in the AWS AppSync console, choose **Attach Resolver** for `updatePet(input: UpdatePetInput!): Pet`. Choose your **RDS data source**. In the **request mapping template** section, add the following template.

```
{
    "version": "2018-05-29",
    "statements": [
        $util.toJson("update Pets set type=:TYPE, price=:PRICE WHERE id=:ID"),
        $util.toJson("select * from Pets WHERE id = :ID")
    ],
    "variableMap": {
        ":ID": "$ctx.args.input.id",
        ":TYPE": $util.toJson($ctx.args.input.type),
        ":PRICE": $util.toJson($ctx.args.input.price)
    }
}
```

In the **response mapping template** section, add the following template.

```
$utils.toJson($utils.rds.toJsonObject($ctx.result)[1][0])
```

### Mutation.deletePet
<a name="mutation-deletepet"></a>

From the schema editor in the AWS AppSync console, choose **Attach Resolver** for `deletePet(input: DeletePetInput!): Pet`. Choose your **RDS data source**. In the **request mapping template** section, add the following template.

```
{
    "version": "2018-05-29",
    "statements": [
        $util.toJson("select * from Pets WHERE id=:ID"),
        $util.toJson("delete from Pets WHERE id=:ID")
    ],
    "variableMap": {
        ":ID": "$ctx.args.input.id"
    }
}
```

In the **response mapping template** section, add the following template.

```
$utils.toJson($utils.rds.toJsonObject($ctx.result)[0][0])
```

### Query.getPet
<a name="query-getpet"></a>

Now that the mutations are created for your schema, connect the three queries to showcase how to get individual items, lists, and apply SQL filtering. From the **schema editor** in the AWS AppSync console, choose **Attach Resolver** for `getPet(id: ID!): Pet`. Choose your **RDS data source**. In the **request mapping template** section, add the following template.

```
{
    "version": "2018-05-29",
    "statements": [
        $util.toJson("select * from Pets WHERE id=:ID")
    ],
    "variableMap": {
        ":ID": "$ctx.args.id"
    }
}
```

In the **response mapping template** section, add the following template:

```
$utils.toJson($utils.rds.toJsonObject($ctx.result)[0][0])
```

### Query.listPets
<a name="query-listpets"></a>

From the schema editor in the AWS AppSync console, on the right side choose **Attach Resolver** for `listPets: [Pet]`. Choose your **RDS data source**. In the **request mapping template** section, add the following template.

```
{
    "version": "2018-05-29",
    "statements": [
        "select * from Pets"
    ]
}
```

In the **response mapping template** section, add the following template.

```
$utils.toJson($utils.rds.toJsonObject($ctx.result)[0])
```

### Query.listPetsByPriceRange
<a name="query-listpetsbypricerange"></a>

From the schema editor in the AWS AppSync console, on the right side choose **Attach Resolver** for `listPetsByPriceRange(min: Float, max: Float): [Pet]`. Choose your **RDS data source**. In the **request mapping template** section, add the following template.

```
{
    "version": "2018-05-29",
    "statements": [
            "select * from Pets where price > :MIN and price < :MAX"
    ],

    "variableMap": {
        ":MAX": $util.toJson($ctx.args.max),
        ":MIN": $util.toJson($ctx.args.min)
    }
}
```

In the **response mapping template** section, add the following template:

```
$utils.toJson($utils.rds.toJsonObject($ctx.result)[0])
```

## Modify Your Data Through the API
<a name="run-mutations"></a>

Now that you have configured all of your resolvers with SQL statements and connected your GraphQL API to your Aurora Serverless Data API, you can begin performing mutations and queries. In the AWS AppSync console, choose the **Queries** tab and enter the following to create a Pet:

```
mutation add {
    createPet(input : { type:fish, price:10.0 }){
        id
        type
        price
    }
}
```

The response should contain the *id*, *type*, and *price* like so:

```
{
  "data": {
    "createPet": {
      "id": "c6fedbbe-57ad-4da3-860a-ffe8d039882a",
      "type": "fish",
      "price": 10.0
    }
  }
}
```

You can modify this item by running the *updatePet* mutation:

```
mutation update {
    updatePet(input : {
        id: ID_PLACEHOLDER,
        type:bird,
        price:50.0
    }){
        id
        type
        price
    }
}
```

Note that we used the *id* which was returned from the *createPet* operation earlier. This will be a unique value for your record as the resolver leveraged `$util.autoId()`. You could delete a record in a similar manner:

```
mutation delete {
    deletePet(input : {id:ID_PLACEHOLDER}){
        id
        type
        price
    }
}
```
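The *id* values above come from `$util.autoId()`, which returns a randomly generated UUID. A rough Python equivalent, useful for experimenting with the ID format locally (an illustration, not AppSync code):

```python
import uuid

# $util.autoId() produces a random UUID string such as
# "c6fedbbe-57ad-4da3-860a-ffe8d039882a"; uuid.uuid4() is comparable.
pet_id = str(uuid.uuid4())
another_id = str(uuid.uuid4())
```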

Create a few records with the first mutation with different values for *price* and then run some queries.

## Retrieve Your Data
<a name="run-queries"></a>

Still in the **Queries** tab of the console, use the following statement to list all of the records you’ve created.

```
query allpets {
    listPets {
        id
        type
        price
    }
}
```

Exercise the SQL *WHERE* predicate (`where price > :MIN and price < :MAX`) from the *Query.listPetsByPriceRange* mapping template with the following GraphQL query:

```
query petsByPriceRange {
    listPetsByPriceRange(min:1, max:11) {
        id
        type
        price
    }
}
```

You should only see records with a *price* greater than \$1 and less than \$11. Finally, you can perform queries to retrieve individual records as follows:

```
query onePet {
    getPet(id:ID_PLACEHOLDER){
        id
        type
        price
    }
}
```

## Secure Your Data Access
<a name="input-sanitization"></a>

SQL injection is a security vulnerability in database applications. It occurs when attackers insert malicious SQL code through user input fields, which can allow unauthorized access to database data. We recommend that you carefully validate and sanitize all user inputs before processing, and that you use `variableMap` for protection against SQL injection attacks. If variable maps are not used, you are responsible for sanitizing the arguments of your GraphQL operations. One way to do this is to add input-specific validation steps to the request mapping template before executing a SQL statement against your Data API. Let’s see how we can modify the request mapping template of the `listPetsByPriceRange` example. Instead of relying solely on the user input, you can do the following:

```
#set($validMaxPrice = $util.matches("\\d{1,3}[,\\.]?(\\d{1,2})?", $ctx.args.max))
#set($validMinPrice = $util.matches("\\d{1,3}[,\\.]?(\\d{1,2})?", $ctx.args.min))

#if (!$validMaxPrice || !$validMinPrice)
    $util.error("Provided price input is not valid.")
#end
{
    "version": "2018-05-29",
    "statements": [
        "select * from Pets where price > :MIN and price < :MAX"
    ],
    "variableMap": {
        ":MAX": $util.toJson($ctx.args.max),
        ":MIN": $util.toJson($ctx.args.min)
    }
}
```
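The same price validation can be sketched in Python (an illustration, not AppSync code; the helper name is hypothetical). `re.fullmatch` corresponds to matching the whole input, which rejects values that merely start with a valid number:

```python
import re

# Same pattern as in the mapping template: up to three digits, an optional
# decimal separator, and up to two fractional digits.
PRICE_PATTERN = re.compile(r"\d{1,3}[,.]?(\d{1,2})?")

def is_valid_price(value: str) -> bool:
    # fullmatch rejects inputs that only contain a valid prefix, such as
    # an attempted SQL injection appended to a number.
    return PRICE_PATTERN.fullmatch(value) is not None
```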

Another way to protect against rogue input when executing resolvers against your Data API is to use prepared statements inside a stored procedure, with parameterized inputs. For example, in the resolver for `listPets`, define the following procedure that executes the *select* as a prepared statement:

```
CREATE PROCEDURE listPets (IN type_param VARCHAR(200))
  BEGIN
     PREPARE stmt FROM 'SELECT * FROM Pets where type=?';
     SET @type = type_param;
     EXECUTE stmt USING @type;
     DEALLOCATE PREPARE stmt;
  END
```

Create this in your Aurora Serverless v2 Instance.

```
aws rds-data execute-statement --resource-arn "arn:aws:rds:us-east-1:xxxxxxxxxxxx:cluster:http-endpoint-test" \
--schema "mysql"  --secret-arn "arn:aws:secretsmanager:us-east-1:xxxxxxxxxxxx:secret:httpendpoint-xxxxxx"  \
--region us-east-1  --database "DB_NAME" \
--sql "CREATE PROCEDURE listPets (IN type_param VARCHAR(200)) BEGIN PREPARE stmt FROM 'SELECT * FROM Pets where type=?'; SET @type = type_param; EXECUTE stmt USING @type; DEALLOCATE PREPARE stmt; END"
```

The resulting resolver code for `listPets` is simplified because we now simply call the stored procedure. At a minimum, any string input should have single quotes [escaped](#escaped).

```
#set ($validType = $util.isString($ctx.args.type) && !$util.isNullOrBlank($ctx.args.type))
#if (!$validType)
    $util.error("Input for 'type' is not valid.", "ValidationError")
#end

{
    "version": "2018-05-29",
    "statements": [
        "CALL listPets(:type)"
    ],
    "variableMap": {
        ":type": $util.toJson($ctx.args.type.replace("'", "''"))
    }
}
```

### Using escape strings
<a name="escaped"></a>

Use single quotes to mark the start and end of string literals in a SQL statement, for example `'some string value'`. To use a string value containing one or more single quote characters (`'`) within a string literal, each must be replaced with two single quotes (`''`). For example, if the input string is `Nadia's dog`, you would escape it for the SQL statement as follows:

```
update Pets set type='Nadia''s dog' WHERE id='1'
```
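The escaping rule above can be sketched in Python (an illustration only; the helper name is hypothetical):

```python
def escape_sql_string(value: str) -> str:
    # Double every single quote so it is treated as a literal character
    # rather than a string terminator.
    return value.replace("'", "''")
```

Applying this to user input before it is interpolated into a string literal prevents a stray quote from terminating the literal early.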

# Using pipeline resolvers in AWS AppSync
<a name="tutorial-pipeline-resolvers"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

AWS AppSync provides a simple way to wire a GraphQL field to a single data source through unit resolvers. However, executing a single operation might not be enough. Pipeline resolvers offer the ability to serially execute operations against data sources. Create functions in your API and attach them to a pipeline resolver. Each function execution result is piped to the next until no function is left to execute. With pipeline resolvers you can now build more complex workflows directly in AWS AppSync. In this tutorial, you build a simple pictures viewing app, where users can post and view pictures posted by their friends.

## One-Click Setup
<a name="one-click-setup"></a>

If you want to automatically set up the GraphQL endpoint in AWS AppSync with all the resolvers configured and the necessary AWS resources, you can use the following AWS CloudFormation template:

[https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/pipeline/pipeline-resolvers-full.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/pipeline/pipeline-resolvers-full.yaml)

This stack creates the following resources in your account:
+ IAM Role for AWS AppSync to access the resources in your account
+ 2 DynamoDB tables
+ 1 Amazon Cognito user pool
+ 2 Amazon Cognito user pool groups
+ 3 Amazon Cognito user pool users
+ 1 AWS AppSync API

At the end of the AWS CloudFormation stack creation process you receive one email for each of the three Amazon Cognito users that were created. Each email contains a temporary password that you use to log in as an Amazon Cognito user to the AWS AppSync console. Save the passwords for the remainder of the tutorial.

## Manual Setup
<a name="manual-setup"></a>

If you prefer to go through a step-by-step setup process in the AWS AppSync console, follow the setup process below.

### Setting Up Your Non AWS AppSync Resources
<a name="setting-up-your-non-aws-appsync-resources"></a>

The API communicates with two DynamoDB tables: a **pictures** table that stores pictures and a **friends** table that stores relationships between users. The API is configured to use an Amazon Cognito user pool as its authorization type. The following CloudFormation stack sets up these resources in the account.

[https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/pipeline/pipeline-resolvers-resources-only.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/pipeline/pipeline-resolvers-resources-only.yaml)

At the end of the AWS CloudFormation stack creation process you receive one email for each of the three Amazon Cognito users that were created. Each email contains a temporary password that you use to log in as an Amazon Cognito user to the AWS AppSync console. Save the passwords for the remainder of the tutorial.

### Creating Your GraphQL API
<a name="creating-your-graphql-api"></a>

To create the GraphQL API in AWS AppSync:

1. Open the AWS AppSync console, choose **Build From Scratch**, and then choose **Start**.

1. Set the name of the API to `AppSyncTutorial-PicturesViewer`.

1. Choose **Create**.

The AWS AppSync console creates a new GraphQL API for you using the API key authentication mode. You can use the console to set up the rest of the GraphQL API and run queries against it for the rest of this tutorial.

### Configuring The GraphQL API
<a name="configuring-the-graphql-api"></a>

You need to configure the AWS AppSync API with the Amazon Cognito user pool that you just created.

1. Choose the **Settings** tab.

1. Under the **Authorization Type** section, choose *Amazon Cognito User Pool*.

1. Under **User Pool Configuration**, choose **US-WEST-2** for the *AWS Region*.

1. Choose the **AppSyncTutorial-UserPool** user pool.

1. Choose **DENY** as *Default Action*.

1. Leave the **AppId client regex** field blank.

1. Choose **Save**.

The API is now set up to use Amazon Cognito user pool as its authorization type.

### Configuring Data Sources for the DynamoDB Tables
<a name="configuring-data-sources-for-the-ddb-tables"></a>

After the DynamoDB tables have been created, navigate to your AWS AppSync GraphQL API in the console and choose the **Data Sources** tab. Now, you’re going to create a data source in AWS AppSync for each of the DynamoDB tables that you just created.

1. Choose the **Data source** tab.

1. Choose **New** to create a new data source.

1. For the data source name, enter `PicturesDynamoDBTable`.

1. For data source type, choose **Amazon DynamoDB table**.

1. For region, choose **US-WEST-2**.

1. From the list of tables, choose the **AppSyncTutorial-Pictures** DynamoDB table.

1. In the **Create or use an existing role** section, choose **Existing role**.

1. Choose the role that was just created from the CloudFormation template. If you did not change the *ResourceNamePrefix*, the name of the role should be **AppSyncTutorial-DynamoDBRole**.

1. Choose **Create**.

Repeat the same process for the **friends** table. The name of the DynamoDB table should be **AppSyncTutorial-Friends** if you did not change the *ResourceNamePrefix* parameter when creating the CloudFormation stack.

### Creating the GraphQL Schema
<a name="creating-the-graphql-schema"></a>

Now that the data sources are connected to your DynamoDB tables, let’s create a GraphQL schema. From the schema editor in the AWS AppSync console, make sure your schema matches the following schema:

```
schema {
    query: Query
    mutation: Mutation
}

type Mutation {
    createPicture(input: CreatePictureInput!): Picture!
    @aws_auth(cognito_groups: ["Admins"])
    createFriendship(id: ID!, target: ID!): Boolean
    @aws_auth(cognito_groups: ["Admins"])
}

type Query {
    getPicturesByOwner(id: ID!): [Picture]
    @aws_auth(cognito_groups: ["Admins", "Viewers"])
}

type Picture {
    id: ID!
    owner: ID!
    src: String
}

input CreatePictureInput {
    owner: ID!
    src: String!
}
```

Choose **Save Schema** to save your schema.

Some of the schema fields have been annotated with the *@aws_auth* directive. Since the API default action configuration is set to *DENY*, the API rejects all users that are not members of the groups mentioned inside the *@aws_auth* directive. For more information about how to secure your API, you can read the [Security](security-authz.md#aws-appsync-security) page. In this case, only admin users have access to the *Mutation.createPicture* and *Mutation.createFriendship* fields, while users that are members of either the *Admins* or *Viewers* groups can access the *Query.getPicturesByOwner* field. All other users don’t have access.
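The DENY-default group check described above can be sketched as follows (a minimal illustration, not AppSync code; the function name is hypothetical):

```python
def is_allowed(user_groups, allowed_groups):
    # Default action is DENY: access is granted only when the caller
    # belongs to at least one group named in the @aws_auth directive.
    return bool(set(user_groups) & set(allowed_groups))
```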

### Configuring Resolvers
<a name="configuring-resolvers"></a>

Now that you have a valid GraphQL schema and two data sources, you can attach resolvers to the GraphQL fields on the schema. The API offers the following capabilities:
+ Create a picture via the *Mutation.createPicture* field
+ Create friendship via the *Mutation.createFriendship* field
+ Retrieve pictures via the *Query.getPicturesByOwner* field

#### Mutation.createPicture
<a name="mutation-createpicture"></a>

From the schema editor in the AWS AppSync console, on the right side choose **Attach Resolver** for `createPicture(input: CreatePictureInput!): Picture!`. Choose the *PicturesDynamoDBTable* DynamoDB data source. In the **request mapping template** section, add the following template:

```
#set($id = $util.autoId())

{
    "version" : "2018-05-29",

    "operation" : "PutItem",

    "key" : {
        "id" : $util.dynamodb.toDynamoDBJson($id),
        "owner": $util.dynamodb.toDynamoDBJson($ctx.args.input.owner)
    },

    "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args.input)
}
```

In the **response mapping template** section, add the following template:

```
#if($ctx.error)
    $util.error($ctx.error.message, $ctx.error.type)
#end
$util.toJson($ctx.result)
```

The create picture functionality is done. You are saving a picture in the **Pictures** table, using a randomly generated UUID as the ID of the picture and the `owner` from the input as the owner of the picture.

#### Mutation.createFriendship
<a name="mutation-createfriendship"></a>

From the schema editor in the AWS AppSync console, on the right side choose **Attach Resolver** for `createFriendship(id: ID!, target: ID!): Boolean`. Choose the **FriendsDynamoDBTable** DynamoDB data source. In the **request mapping template** section, add the following template:

```
#set($userToFriendFriendship = { "userId" : "$ctx.args.id", "friendId": "$ctx.args.target" })
#set($friendToUserFriendship = { "userId" : "$ctx.args.target", "friendId": "$ctx.args.id" })
#set($friendsItems = [$util.dynamodb.toMapValues($userToFriendFriendship), $util.dynamodb.toMapValues($friendToUserFriendship)])

{
    "version" : "2018-05-29",
    "operation" : "BatchPutItem",
    "tables" : {
        ## Replace 'AppSyncTutorial-' default below with the ResourceNamePrefix you provided in the CloudFormation template
        "AppSyncTutorial-Friends": $util.toJson($friendsItems)
    }
}
```

Important: The exact name of the DynamoDB table must be present in the **BatchPutItem** request template. The default table name is *AppSyncTutorial-Friends*. If you use the wrong table name, you get an error when AppSync tries to assume the provided role.

For the sake of simplicity in this tutorial, proceed as if the friendship request has been approved and save the relationship entry directly into the **AppSyncTutorial-Friends** table.

Effectively, you’re storing two items for each friendship as the relationship is bi-directional. For more details about Amazon DynamoDB best practices to represent many-to-many relationships, see [DynamoDB Best Practices](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-adjacency-graphs.html).
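The bi-directional storage can be sketched in Python (an illustration only; the helper name is hypothetical) — the template builds two mirrored items so the friendship can be looked up efficiently from either side:

```python
def friendship_items(user_id: str, friend_id: str):
    # Two mirrored items, one per direction of the relationship,
    # written together with a single BatchPutItem.
    return [
        {"userId": user_id, "friendId": friend_id},
        {"userId": friend_id, "friendId": user_id},
    ]
```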

In the **response mapping template** section, add the following template:

```
#if($ctx.error)
    $util.error($ctx.error.message, $ctx.error.type)
#end
true
```

Note: Make sure your request template contains the right table name. The default name is *AppSyncTutorial-Friends*, but your table name might differ if you changed the CloudFormation **ResourceNamePrefix** parameter.

#### Query.getPicturesByOwner
<a name="query-getpicturesbyowner"></a>

Now that you have friendships and pictures, you need to provide the ability for users to view their friends’ pictures. To satisfy this requirement, you first need to check that the requester is a friend of the owner, and then query for the pictures.

Because this functionality requires two data source operations, you’re going to create two functions. The first function, **isFriend**, checks whether the requester and the owner are friends. The second function, **getPicturesByOwner**, retrieves the requested pictures given an owner ID. Let’s look at the execution flow below for the proposed resolver on the *Query.getPicturesByOwner* field:

1. Before mapping template: Prepare the context and field input arguments.

1. isFriend function: Checks whether the requester is the owner of the picture. If not, it checks whether the requester and owner users are friends by doing a DynamoDB GetItem operation on the friends table.

1. getPicturesByOwner function: Retrieves pictures from the Pictures table using a DynamoDB Query operation on the *owner-index* Global Secondary Index.

1. After mapping template: Maps picture result so DynamoDB attributes map correctly to the expected GraphQL type fields.
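As a rough mental model (a toy sketch, not AWS AppSync’s actual execution engine), the flow above can be expressed as:

```python
def run_pipeline(args, identity, before, functions, after):
    """Toy model of a pipeline resolver: the before template prepares the
    initial value, each function runs in order (seeing the previous result
    as $ctx.prev.result), and the after template maps the final result."""
    prev = before(args, identity)
    for fn in functions:
        prev = fn(prev)
    return after(prev)
```

For *Query.getPicturesByOwner*, `before` builds `{"owner": id, "callerId": username}`, and the functions are **isFriend** followed by **getPicturesByOwner**.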

Let’s first create the functions.

##### isFriend Function
<a name="isfriend-function"></a>

1. Choose the **Functions** tab.

1. Choose **Create Function** to create a function.

1. For the data source name, enter `FriendsDynamoDBTable`.

1. For the function name, enter *isFriend*.

1. Inside the request mapping template text area, paste the following template:

   ```
   #set($ownerId = $ctx.prev.result.owner)
   #set($callerId = $ctx.prev.result.callerId)
   
   ## if the owner is the caller, no need to make the check
   #if($ownerId == $callerId)
       #return($ctx.prev.result)
   #end
   
   {
       "version" : "2018-05-29",
   
       "operation" : "GetItem",
   
       "key" : {
           "userId" : $util.dynamodb.toDynamoDBJson($callerId),
           "friendId" : $util.dynamodb.toDynamoDBJson($ownerId)
       }
   }
   ```

1. Inside the response mapping template text area, paste the following template:

   ```
   #if($ctx.error)
       $util.error("Unable to retrieve friend mapping message: ${ctx.error.message}", $ctx.error.type)
   #end
   
   ## if the users aren't friends
   #if(!$ctx.result)
       $util.unauthorized()
   #end
   
   $util.toJson($ctx.prev.result)
   ```

1. Choose **Create Function**.

Result: You’ve created the **isFriend** function.
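The template pair above implements the following logic; here is a hedged Python sketch, where `get_item` stands in for the DynamoDB GetItem call and is an assumption of this sketch:

```python
def is_friend(prev: dict, get_item) -> dict:
    """Sketch of the isFriend function: skip the lookup when the caller
    owns the pictures; otherwise require a friendship record to exist."""
    owner, caller = prev["owner"], prev["callerId"]
    if owner == caller:
        return prev  # mirrors #return($ctx.prev.result)
    if get_item(caller, owner) is None:
        raise PermissionError("Unauthorized")  # mirrors $util.unauthorized()
    return prev
```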

##### getPicturesByOwner function
<a name="getpicturesbyowner-function"></a>

1. Choose the **Functions** tab.

1. Choose **Create Function** to create a function.

1. For the data source name, enter `PicturesDynamoDBTable`.

1. For the function name, enter `getPicturesByOwner`.

1. Inside the request mapping template text area, paste the following template:

   ```
   {
       "version" : "2018-05-29",
   
       "operation" : "Query",
   
       "query" : {
           "expression": "#owner = :owner",
           "expressionNames": {
               "#owner" : "owner"
           },
           "expressionValues" : {
               ":owner" : $util.dynamodb.toDynamoDBJson($ctx.prev.result.owner)
           }
       },
   
       "index": "owner-index"
   }
   ```

1. Inside the response mapping template text area, paste the following template:

   ```
   #if($ctx.error)
       $util.error($ctx.error.message, $ctx.error.type)
   #end
   
   $util.toJson($ctx.result)
   ```

1. Choose **Create Function**.

Result: You’ve created the **getPicturesByOwner** function. Now that the functions have been created, attach a pipeline resolver to the *Query.getPicturesByOwner* field.

From the schema editor in the AWS AppSync console, on the right side choose **Attach Resolver** for `Query.getPicturesByOwner(id: ID!): [Picture]`. On the following page, choose the **Convert to pipeline resolver** link that appears underneath the data source drop-down list. Use the following for the before mapping template:

```
#set($result = { "owner": $ctx.args.id, "callerId": $ctx.identity.username })
$util.toJson($result)
```

In the **after mapping template** section, use the following:

```
#foreach($picture in $ctx.result.items)
    ## prepend "src://" to picture.src property
    #set($picture['src'] = "src://${picture['src']}")
#end
$util.toJson($ctx.result.items)
```
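The after template’s loop is equivalent to the following sketch in plain Python:

```python
def map_pictures(items: list[dict]) -> list[dict]:
    """Prepend "src://" to each picture's src, as the after template does."""
    return [{**picture, "src": f"src://{picture['src']}"} for picture in items]
```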

Choose **Create Resolver**. You have successfully attached your first pipeline resolver. On the same page, add the two functions you created previously. In the functions section, choose **Add A Function** and then choose or type the name of the first function, **isFriend**. Add the second function by following the same process for the **getPicturesByOwner** function. Make sure the **isFriend** function appears first in the list, followed by the **getPicturesByOwner** function. You can use the up and down arrows to rearrange the order of execution of the functions in the pipeline.

Now that the pipeline resolver is created and you’ve attached the functions, let’s test the newly created GraphQL API.

## Testing Your GraphQL API
<a name="testing-your-graphql-api"></a>

First, you need to populate pictures and friendships by executing a few mutations using the admin user you created. On the left side of the AWS AppSync console, choose the **Queries** tab.

### createPicture Mutation
<a name="createpicture-mutation"></a>

1. In AWS AppSync console, choose the **Queries** tab.

1. Choose **Login With User Pools**.

1. On the modal, enter the Amazon Cognito sample client ID that was created by the CloudFormation stack (for example, 37solo6mmhh7k4v63cqdfgdg5d).

1. Enter the user name you passed as a parameter to the CloudFormation stack. The default is **nadia**.

1. Use the temporary password that was sent to the email address you provided as a parameter (*UserPoolUserEmail*) to the CloudFormation stack.

1. Choose **Login**. The button should now read **Logout nadia**, or whichever user name you chose when creating the CloudFormation stack (the *UserPoolUsername* parameter).

Let’s send a few *createPicture* mutations to populate the pictures table. Execute the following GraphQL query inside the console:

```
mutation {
  createPicture(input:{
    owner: "nadia"
    src: "nadia.jpg"
  }) {
    id
    owner
    src
  }
}
```

The response should look like the following:

```
{
  "data": {
    "createPicture": {
      "id": "c6fedbbe-57ad-4da3-860a-ffe8d039882a",
      "owner": "nadia",
      "src": "nadia.jpg"
    }
  }
}
```

Let’s add a few more pictures:

```
mutation {
  createPicture(input:{
    owner: "shaggy"
    src: "shaggy.jpg"
  }) {
    id
    owner
    src
  }
}
```

```
mutation {
  createPicture(input:{
    owner: "rex"
    src: "rex.jpg"
  }) {
    id
    owner
    src
  }
}
```

You’ve added three pictures using **nadia** as the admin user.

### createFriendship Mutation
<a name="createfriendship-mutation"></a>

Let’s add a friendship entry. Execute the following mutations in the console.

Note: You must still be logged in as the admin user (the default admin user is **nadia**).

```
mutation {
  createFriendship(id: "nadia", target: "shaggy")
}
```

The response should look like:

```
{
  "data": {
    "createFriendship": true
  }
}
```

**nadia** and **shaggy** are now friends. **rex** isn’t friends with anybody.

### getPicturesByOwner Query
<a name="getpicturesbyowner-query"></a>

For this step, log in as the **nadia** user with Amazon Cognito user pools, using the credentials set up at the beginning of this tutorial. As **nadia**, retrieve the pictures owned by **shaggy**.

```
query {
    getPicturesByOwner(id: "shaggy") {
        id
        owner
        src
    }
}
```

Since **nadia** and **shaggy** are friends, the query should return the corresponding picture.

```
{
  "data": {
    "getPicturesByOwner": [
      {
        "id": "05a16fba-cc29-41ee-a8d5-4e791f4f1079",
        "owner": "shaggy",
        "src": "src://shaggy.jpg"
      }
    ]
  }
}
```

Similarly, if **nadia** attempts to retrieve her own pictures, it also succeeds. The pipeline resolver has been optimized to avoid running the **isFriend** GetItem operation in that case. Try the following query:

```
query {
    getPicturesByOwner(id: "nadia") {
        id
        owner
        src
    }
}
```

If you enable logging on your API (in the **Settings** pane), set the debug level to **ALL**, and run the same query again, it returns logs for the field execution. By looking at the logs, you can determine whether the **isFriend** function returned early at the **Request Mapping Template** stage:

```
{
  "errors": [],
  "mappingTemplateType": "Request Mapping",
  "path": "[getPicturesByOwner]",
  "resolverArn": "arn:aws:appsync:us-west-2:XXXX:apis/XXXX/types/Query/fields/getPicturesByOwner",
  "functionArn": "arn:aws:appsync:us-west-2:XXXX:apis/XXXX/functions/o2f42p2jrfdl3dw7s6xub2csdfs",
  "functionName": "isFriend",
  "earlyReturnedValue": {
    "owner": "nadia",
    "callerId": "nadia"
  },
  "context": {
    "arguments": {
      "id": "nadia"
    },
    "prev": {
      "result": {
        "owner": "nadia",
        "callerId": "nadia"
      }
    },
    "stash": {},
    "outErrors": []
  },
  "fieldInError": false
}
```

The *earlyReturnedValue* key represents the data that was returned by the `#return` directive.

Finally, even though **rex** is a member of the **Viewers** Amazon Cognito user pool group, because **rex** isn’t friends with anybody, he can’t access any of the pictures owned by **shaggy** or **nadia**. If you log in as **rex** in the console and execute the following query:

```
query {
    getPicturesByOwner(id: "nadia") {
        id
        owner
        src
    }
}
```

You get the following unauthorized error:

```
{
  "data": {
    "getPicturesByOwner": null
  },
  "errors": [
    {
      "path": [
        "getPicturesByOwner"
      ],
      "data": null,
      "errorType": "Unauthorized",
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 9,
          "sourceName": null
        }
      ],
      "message": "Not Authorized to access getPicturesByOwner on type Query"
    }
  ]
}
```

You have successfully implemented complex authorization using pipeline resolvers.

# Using Delta Sync operations on versioned data sources in AWS AppSync
<a name="tutorial-delta-sync"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html).

AWS AppSync client applications store data by caching GraphQL responses locally to disk in a mobile or web application. Versioned data sources and `Sync` operations give customers the ability to perform the sync process using a single resolver. This allows clients to hydrate their local cache with the results of one base query that might have many records, and then receive only the data altered since their last query (the *delta updates*). By allowing clients to perform the base hydration of the cache with an initial request and incremental updates in another, you can move that computation from your client application to the backend. This is substantially more efficient for client applications that frequently switch between online and offline states.

To implement Delta Sync, the `Sync` query uses the `Sync` operation on a versioned data source. When an AWS AppSync mutation changes an item in a versioned data source, a record of that change will be stored in the *Delta* table as well. You can choose to use different *Delta* tables (e.g. one per type, one per domain area) for other versioned data sources or a single *Delta* table for your API. AWS AppSync recommends against using a single *Delta* table for multiple APIs to avoid the collision of primary keys.

In addition, Delta Sync clients can also receive a subscription as an argument, in which case the client coordinates subscription reconnects and writes across offline-to-online transitions. Delta Sync does this by automatically resuming subscriptions (including exponential backoff and retry with jitter through different network error scenarios) and storing events in a queue. The appropriate delta or base query is then run before merging any events from the queue, and finally subscriptions are processed as normal.

Documentation for client configuration options, including the Amplify DataStore, is available on the [Amplify Framework website](https://aws-amplify.github.io/). This documentation outlines how to set up versioned DynamoDB data sources and `Sync` operations to work with the Delta Sync client for optimal data access.

## One-Click Setup
<a name="one-click-setup"></a>

To automatically set up the GraphQL endpoint in AWS AppSync with all the resolvers configured and the necessary AWS resources, use this AWS CloudFormation template:

[https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/deltasync/deltasync-v2-full.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/deltasync/deltasync-v2-full.yaml) 

This stack creates the following resources in your account:
+ 2 DynamoDB tables (Base and Delta)
+ 1 AWS AppSync API with API key
+ 1 IAM Role with policy for DynamoDB tables

Two tables are used to partition your sync queries, with a second table that acts as a journal of events missed while clients were offline. To keep queries on the Delta table efficient, [Amazon DynamoDB TTLs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html) automatically groom old events. The TTL time is configurable on the data source to suit your needs (for example, 1 hour, 1 day, and so on).
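DynamoDB TTL attributes are epoch seconds. As a sketch, assuming a 60-minute Delta table TTL (which matches the sample records shown later in this tutorial), a Delta record’s `_ttl` could be derived from its `_lastChangedAt` millisecond timestamp like this:

```python
def delta_ttl_epoch(changed_at_ms: int, ttl_minutes: int) -> int:
    """TTL attribute in epoch seconds: change time (ms) plus the
    configured retention window."""
    return changed_at_ms // 1000 + ttl_minutes * 60
```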

## Schema
<a name="schema"></a>

To demonstrate Delta Sync, the sample application creates a *Posts* schema backed by a *Base* and *Delta* table in DynamoDB. AWS AppSync automatically writes the mutations to both tables. The sync query pulls records from the *Base* or *Delta* table as appropriate, and a single subscription is defined to show how clients can leverage this in their reconnection logic.

```
input CreatePostInput {
    author: String!
    title: String!
    content: String!
    url: String
    ups: Int
    downs: Int
    _version: Int
}

interface Connection {
  nextToken: String
  startedAt: AWSTimestamp!
}

type Mutation {
    createPost(input: CreatePostInput!): Post
    updatePost(input: UpdatePostInput!): Post
    deletePost(input: DeletePostInput!): Post
}

type Post {
    id: ID!
    author: String!
    title: String!
    content: String!
    url: AWSURL
    ups: Int
    downs: Int
    _version: Int
    _deleted: Boolean
    _lastChangedAt: AWSTimestamp!
}

type PostConnection implements Connection {
    items: [Post!]!
    nextToken: String
    startedAt: AWSTimestamp!
}

type Query {
    getPost(id: ID!): Post
    syncPosts(limit: Int, nextToken: String, lastSync: AWSTimestamp): PostConnection!
}

type Subscription {
    onCreatePost: Post
        @aws_subscribe(mutations: ["createPost"])
    onUpdatePost: Post
        @aws_subscribe(mutations: ["updatePost"])
    onDeletePost: Post
        @aws_subscribe(mutations: ["deletePost"])
}

input DeletePostInput {
    id: ID!
    _version: Int!
}

input UpdatePostInput {
    id: ID!
    author: String
    title: String
    content: String
    url: String
    ups: Int
    downs: Int
    _version: Int!
}

schema {
    query: Query
    mutation: Mutation
    subscription: Subscription
}
```

The GraphQL schema is standard, but a couple things are worth calling out before moving forward. First, all of the mutations automatically first write to the *Base* table and then to the *Delta* table. The *Base* table is the central source of truth for state while the *Delta* table is your journal. If you don’t pass in the `lastSync: AWSTimestamp`, the `syncPosts` query runs against the *Base* table and hydrates the cache as well as running at periodic times as a *global catchup process* for edge cases when clients are offline longer than your configured TTL time in the *Delta* table. If you do pass in the `lastSync: AWSTimestamp`, the `syncPosts` query runs against your *Delta* table and is used by clients to retrieve changed events since they were last offline. Amplify clients automatically pass the `lastSync: AWSTimestamp` value, and persist to disk appropriately.
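The base-versus-delta choice described above can be sketched as follows (a simplified model; Amplify clients implement this logic internally):

```python
def choose_sync_source(last_sync_ms, now_ms, delta_ttl_minutes):
    """Run the base query when there is no lastSync value, or when the
    client has been offline longer than the Delta table's TTL window;
    otherwise run the delta query."""
    if last_sync_ms is None:
        return "BASE"
    if now_ms - last_sync_ms > delta_ttl_minutes * 60 * 1000:
        return "BASE"  # offline longer than the journal retains events
    return "DELTA"
```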

The `_deleted` field on *Post* is used for **DELETE** operations. When clients are offline and records are removed from the *Base* table, this attribute notifies clients performing synchronization to evict items from their local cache. In cases where clients are offline for longer periods of time and the item has been removed before the client can retrieve this value with a Delta Sync query, the global catch-up event in the base query (configurable in the client) runs and removes the item from the cache. This field is marked optional because it only returns a value when running a sync query that has deleted items present.
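A client-side cache merge honoring `_deleted` tombstones could be sketched as follows (illustrative only; the Amplify DataStore handles this for you):

```python
def apply_sync_page(cache: dict, items: list[dict]) -> dict:
    """Merge one page of sync results into a local cache keyed by id,
    evicting items flagged with _deleted."""
    for item in items:
        if item.get("_deleted"):
            cache.pop(item["id"], None)
        else:
            cache[item["id"]] = item
    return cache
```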

## Mutations
<a name="mutations"></a>

For all of the mutations, AWS AppSync does a standard Create/Update/Delete operation in the *Base* table and also records the change in the *Delta* table automatically. You can reduce or extend the time to keep records by modifying the `DeltaSyncTableTTL` value on the data source. For organizations with a high velocity of data, it may make sense to keep this short. Alternatively, if your clients are offline for longer periods of time, it might be prudent to keep this longer.
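Conceptually, each mutation performs a dual write. A toy sketch, with in-memory structures standing in for the two tables:

```python
def record_mutation(base: dict, delta: list, item: dict,
                    now_ms: int, ttl_minutes: int) -> None:
    """Write the item to the Base table, then journal the change in the
    Delta table with a TTL so old journal entries expire automatically."""
    base[item["id"]] = item
    delta.append({**item, "_ttl": now_ms // 1000 + ttl_minutes * 60})
```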

## Sync Queries
<a name="sync-queries"></a>

The *base query* is a DynamoDB Sync operation without a `lastSync` value specified. For many organizations, this works because the base query only runs on startup and at a periodic basis thereafter.

The *delta query* is a DynamoDB Sync operation with a `lastSync` value specified. The delta query executes whenever the client comes back online from an offline state (unless the base query’s periodic run has triggered in the meantime). Clients automatically track the last time they successfully ran a query to sync data.

When a delta query is run, the query’s resolver uses the `ds_pk` and `ds_sk` keys to query only for the records that have changed since the last time the client performed a sync. The client stores the appropriate GraphQL response.
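The exact key format is an AppSync implementation detail, but the sample Delta records shown later in this tutorial suggest a date-partitioned `ds_pk` and a UTC-time/id/version `ds_sk`; a hedged reconstruction:

```python
from datetime import datetime, timezone

def delta_keys(prefix: str, changed_at_ms: int, item_id: str, version: int):
    """Reconstruct ds_pk/ds_sk in the format seen in the sample Delta
    records (illustrative; AppSync writes these keys itself)."""
    sec, ms = divmod(changed_at_ms, 1000)
    dt = datetime.fromtimestamp(sec, tz=timezone.utc)
    ds_pk = f"{prefix}:{dt:%Y-%m-%d}"          # partition: one bucket per day
    ds_sk = f"{dt:%H:%M:%S}.{ms:03d}:{item_id}:{version}"  # sort: time, id, version
    return ds_pk, ds_sk
```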

For more information on executing Sync Queries, see the [Sync Operation documentation](aws-appsync-conflict-detection-and-sync-sync-operations.md).

## Example
<a name="example"></a>

Let’s start first by calling a `createPost` mutation to create an item:

```
mutation create {
  createPost(input: {author: "Nadia", title: "My First Post", content: "Hello World"}) {
    id
    author
    title
    content
    _version
    _lastChangedAt
    _deleted
  }
}
```

The return value of this mutation will look as follows:

```
{
  "data": {
    "createPost": {
      "id": "81d36bbb-1579-4efe-92b8-2e3f679f628b",
      "author": "Nadia",
      "title": "My First Post",
      "content": "Hello World",
      "_version": 1,
      "_lastChangedAt": 1574469356331,
      "_deleted": null
    }
  }
}
```

If you examine the contents of the *Base* table, you will see a record that looks like:

```
{
  "_lastChangedAt": {
    "N": "1574469356331"
  },
  "_version": {
    "N": "1"
  },
  "author": {
    "S": "Nadia"
  },
  "content": {
    "S": "Hello World"
  },
  "id": {
    "S": "81d36bbb-1579-4efe-92b8-2e3f679f628b"
  },
  "title": {
    "S": "My First Post"
  }
}
```

If you examine the contents of the *Delta* table, you will see a record that looks like:

```
{
  "_lastChangedAt": {
    "N": "1574469356331"
  },
  "_ttl": {
    "N": "1574472956"
  },
  "_version": {
    "N": "1"
  },
  "author": {
    "S": "Nadia"
  },
  "content": {
    "S": "Hello World"
  },
  "ds_pk": {
    "S": "AppSync-delta-sync-post:2019-11-23"
  },
  "ds_sk": {
    "S": "00:35:56.331:81d36bbb-1579-4efe-92b8-2e3f679f628b:1"
  },
  "id": {
    "S": "81d36bbb-1579-4efe-92b8-2e3f679f628b"
  },
  "title": {
    "S": "My First Post"
  }
}
```

Now we can simulate a *Base* query that a client will run to hydrate its local data store using a `syncPosts` query like:

```
query baseQuery {
  syncPosts(limit: 100, lastSync: null, nextToken: null) {
    items {
      id
      author
      title
      content
      _version
      _lastChangedAt
    }
    startedAt
    nextToken
  }
}
```

The return value of this *Base* query will look as follows:

```
{
  "data": {
    "syncPosts": {
      "items": [
        {
          "id": "81d36bbb-1579-4efe-92b8-2e3f679f628b",
          "author": "Nadia",
          "title": "My First Post",
          "content": "Hello World",
          "_version": 1,
          "_lastChangedAt": 1574469356331
        }
      ],
      "startedAt": 1574469602238,
      "nextToken": null
    }
  }
}
```

We’ll use the `startedAt` value later to simulate a *Delta* query, but first we need to make a change to our table. Let’s use the `updatePost` mutation to modify our existing *Post*:

```
mutation updatePost {
  updatePost(input: {id: "81d36bbb-1579-4efe-92b8-2e3f679f628b", _version: 1, title: "Actually this is my Second Post"}) {
    id
    author
    title
    content
    _version
    _lastChangedAt
    _deleted
  }
}
```

The return value of this mutation will look as follows:

```
{
  "data": {
    "updatePost": {
      "id": "81d36bbb-1579-4efe-92b8-2e3f679f628b",
      "author": "Nadia",
      "title": "Actually this is my Second Post",
      "content": "Hello World",
      "_version": 2,
      "_lastChangedAt": 1574469851417,
      "_deleted": null
    }
  }
}
```

If you examine the contents of the *Base* table now, you should see the updated item:

```
{
  "_lastChangedAt": {
    "N": "1574469851417"
  },
  "_version": {
    "N": "2"
  },
  "author": {
    "S": "Nadia"
  },
  "content": {
    "S": "Hello World"
  },
  "id": {
    "S": "81d36bbb-1579-4efe-92b8-2e3f679f628b"
  },
  "title": {
    "S": "Actually this is my Second Post"
  }
}
```

If you examine the contents of the *Delta* table now, you should see two records:

1. A record from when the item was created

1. A record from when the item was updated

The new item will look like:

```
{
  "_lastChangedAt": {
    "N": "1574469851417"
  },
  "_ttl": {
    "N": "1574473451"
  },
  "_version": {
    "N": "2"
  },
  "author": {
    "S": "Nadia"
  },
  "content": {
    "S": "Hello World"
  },
  "ds_pk": {
    "S": "AppSync-delta-sync-post:2019-11-23"
  },
  "ds_sk": {
    "S": "00:44:11.417:81d36bbb-1579-4efe-92b8-2e3f679f628b:2"
  },
  "id": {
    "S": "81d36bbb-1579-4efe-92b8-2e3f679f628b"
  },
  "title": {
    "S": "Actually this is my Second Post"
  }
}
```

Now we can simulate a *Delta* query to retrieve modifications that occurred when a client was offline. We will use the `startedAt` value returned from our *Base* query to make the request:

```
query delta {
  syncPosts(limit: 100, lastSync: 1574469602238, nextToken: null) {
    items {
      id
      author
      title
      content
      _version
    }
    startedAt
    nextToken
  }
}
```

The return value of this *Delta* query will look as follows:

```
{
  "data": {
    "syncPosts": {
      "items": [
        {
          "id": "81d36bbb-1579-4efe-92b8-2e3f679f628b",
          "author": "Nadia",
          "title": "Actually this is my Second Post",
          "content": "Hello World",
          "_version": 2
        }
      ],
      "startedAt": 1574470400808,
      "nextToken": null
    }
  }
}
```