

# JavaScript resolver tutorials for AWS AppSync
<a name="tutorials-js"></a>

AWS AppSync uses data sources and resolvers to translate GraphQL requests and fetch information from your AWS resources. AWS AppSync supports automatic provisioning and connections for certain data source types. It supports AWS Lambda, Amazon DynamoDB, relational databases (Amazon Aurora Serverless), Amazon OpenSearch Service, and HTTP endpoints as data sources. You can use a GraphQL API with your existing AWS resources or build data sources and resolvers from scratch. The following tutorials walk through some of the most common GraphQL use cases.

**Topics**
+ [Creating a simple post application using DynamoDB JavaScript resolvers](tutorial-dynamodb-resolvers-js.md)
+ [Using AWS Lambda resolvers](tutorial-lambda-resolvers-js.md)
+ [Using local resolvers](tutorial-local-resolvers-js.md)
+ [Combining GraphQL resolvers](tutorial-combining-graphql-resolvers-js.md)
+ [Using OpenSearch Service resolvers](tutorial-elasticsearch-resolvers-js.md)
+ [Performing DynamoDB transactions](tutorial-dynamodb-transact-js.md)
+ [Using DynamoDB batch operations](tutorial-dynamodb-batch-js.md)
+ [Using HTTP resolvers](tutorial-http-resolvers-js.md)
+ [Using Aurora PostgreSQL with Data API](aurora-serverless-tutorial-js.md)

# Creating a simple post application using DynamoDB JavaScript resolvers
<a name="tutorial-dynamodb-resolvers-js"></a>

In this tutorial, you will import your Amazon DynamoDB tables into AWS AppSync and connect them with JavaScript pipeline resolvers to build a fully functional GraphQL API that you can use in your own application.

You will use the AWS AppSync console to provision your Amazon DynamoDB resources, create your resolvers, and connect them to your data sources. You will also be able to read and write to your Amazon DynamoDB database through GraphQL statements and subscribe to real-time data.

There are specific steps that must be completed in order for GraphQL statements to be translated to Amazon DynamoDB operations and for responses to be translated back into GraphQL. This tutorial outlines the configuration process through several real-world scenarios and data access patterns.

## Creating your GraphQL API
<a name="create-graphql-api"></a>

**To create a GraphQL API in AWS AppSync**

1. Open the AppSync console and choose **Create API**.

1. Select **Design from scratch** and choose **Next**.

1. Name your API `PostTutorialAPI`, then choose **Next**. Keep the rest of the options set to their default values, skip to the review page, and choose **Create**.

The AWS AppSync console creates a new GraphQL API for you. By default, it uses the API key authentication mode. You can use the console to set up the rest of the GraphQL API and run queries against it for the rest of this tutorial.

## Defining a basic post API
<a name="define-post-api"></a>

Now that you have your GraphQL API, you can set up a basic schema that supports creating, retrieving, and deleting post data.

**To add data to your schema**

1. In your API, choose the **Schema** tab.

1. We will create a schema that defines a `Post` type and an operation `addPost` to add and get `Post` objects. In the **Schema** pane, replace the contents with the following code:

   ```
   schema {
       query: Query
       mutation: Mutation
   }
   
   type Query {
       getPost(id: ID): Post
   }
   
   type Mutation {
       addPost(
           id: ID!
           author: String!
           title: String!
           content: String!
           url: String!
       ): Post!
   }
   
   type Post {
       id: ID!
       author: String
       title: String
       content: String
       url: String
       ups: Int!
       downs: Int!
       version: Int!
   }
   ```

1. Choose **Save Schema**.

## Setting up your Amazon DynamoDB table
<a name="configure-dynamodb"></a>

The AWS AppSync console can help provision the AWS resources needed to store your data in an Amazon DynamoDB table. In this step, you’ll create an Amazon DynamoDB table to store your posts. You’ll also set up a [secondary index](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html) that you’ll use later.

**To create your Amazon DynamoDB table**

1. On the **Schema** page, choose **Create Resources**.

1. Choose **Use existing type**, then choose the `Post` type.

1. In the **Additional Indexes** section, choose **Add Index**.

1. Name the index `author-index`.

1. Set the **Primary key** to `author` and the **Sort key** to `None`.

1. Disable **Automatically generate GraphQL**. In this example, we'll create the resolver ourselves.

1. Choose **Create**.

You now have a new data source called `PostTable`, which you can see by visiting **Data sources** in the side tab. You will use this data source to link your queries and mutations to your Amazon DynamoDB table. 

## Setting up an addPost resolver (Amazon DynamoDB PutItem)
<a name="configure-addpost"></a>

Now that AWS AppSync is aware of the Amazon DynamoDB table, you can link it to individual queries and mutations by defining resolvers. The first resolver you create is the `addPost` pipeline resolver using JavaScript, which enables you to create a post in your Amazon DynamoDB table. A pipeline resolver has the following components: 
+ The location in the GraphQL schema to attach the resolver. In this case, you are setting up a resolver on the `addPost` field on the `Mutation` type. This resolver is invoked when the caller runs the mutation `{ addPost(...){...} }`. 
+ The data source to use for this resolver. In this case, you want to use the DynamoDB data source you defined earlier (`PostTable`), so you can add entries into its DynamoDB table.
+ The request handler. The request handler is a function that handles the incoming request from the caller and translates it into instructions for AWS AppSync to perform against DynamoDB.
+ The response handler. The response handler translates the DynamoDB response back into the shape that GraphQL expects. This is useful if the shape of the data in DynamoDB is different from the `Post` type in GraphQL, but in this case they have the same shape, so you just pass the data through. 

**To set up your resolver**

1. In your API, choose the **Schema** tab.

1. In the **Resolvers** pane, find the `addPost` field under the `Mutation` type, then choose **Attach**.

1. Choose your data source, then choose **Create**.

1. In your code editor, replace the code with this snippet:

   ```
   import { util } from '@aws-appsync/utils'
   import * as ddb from '@aws-appsync/utils/dynamodb'
   
   export function request(ctx) {
   	const item = { ...ctx.arguments, ups: 1, downs: 0, version: 1 }
   	const key = { id: ctx.args.id ?? util.autoId() }
   	return ddb.put({ key, item })
   }
   
   export function response(ctx) {
   	return ctx.result
   }
   ```

1. Choose **Save**.

**Note**  
In this code, you use utilities from the DynamoDB module (`@aws-appsync/utils/dynamodb`), which make it easy to build DynamoDB requests.

AWS AppSync comes with a utility for automatic ID generation called `util.autoId()`, which is used to generate an ID for your new post. If you do not specify an ID, the utility will automatically generate it for you.

```
const key = { id: ctx.args.id ?? util.autoId() }
```

For more information about the utilities available for JavaScript, see [JavaScript runtime features for resolvers and functions](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference-js.html). 

### Call the API to add a post
<a name="call-api-addpost"></a>

Now that the resolver has been configured, AWS AppSync can translate an incoming `addPost` mutation to an Amazon DynamoDB `PutItem` operation. You can now run a mutation to put something in the table.

**To run the operation**

1. In your API, choose the **Queries** tab.

1. In the **Queries** pane, add the following mutation:

   ```
   mutation addPost {
     addPost(
       id: 123,
       author: "AUTHORNAME"
       title: "Our first post!"
       content: "This is our first post."
       url: "https://aws.amazon.com/appsync/"
     ) {
       id
       author
       title
       content
       url
       ups
       downs
       version
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `addPost`. The results of the newly created post should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "addPost": {
         "id": "123",
         "author": "AUTHORNAME",
         "title": "Our first post!",
         "content": "This is our first post.",
         "url": "https://aws.amazon.com/appsync/",
         "ups": 1,
         "downs": 0,
         "version": 1
       }
     }
   }
   ```

The following explanation shows what occurred:

1. AWS AppSync received an `addPost` mutation request.

1. AWS AppSync executed the resolver's request handler. The `ddb.put` function created a `PutItem` request that looks like this:

   ```
   {
     operation: 'PutItem',
     key: { id: { S: '123' } },
     attributeValues: {
       downs: { N: 0 },
       author: { S: 'AUTHORNAME' },
       ups: { N: 1 },
       title: { S: 'Our first post!' },
       version: { N: 1 },
       content: { S: 'This is our first post.' },
       url: { S: 'https://aws.amazon.com/appsync/' }
     }
   }
   ```

1. AWS AppSync used this value to generate and execute an Amazon DynamoDB `PutItem` request.

1. AWS AppSync took the results of the `PutItem` request and converted them back to GraphQL types.

   ```
   {
       "id" : "123",
       "author": "AUTHORNAME",
       "title": "Our first post!",
       "content": "This is our first post.",
       "url": "https://aws.amazon.com/appsync/",
       "ups" : 1,
       "downs" : 0,
       "version" : 1
   }
   ```

1. The response handler returned the result unchanged (`return ctx.result`).

1. The final result is visible in the GraphQL response.

## Setting up the getPost resolver (Amazon DynamoDB GetItem)
<a name="configure-getpost"></a>

Now that you’re able to add data to the Amazon DynamoDB table, you need to set up the `getPost` query so it can retrieve that data from the table. To do this, you set up another resolver.

**To add your resolver**

1. In your API, choose the **Schema** tab.

1. In the **Resolvers** pane on the right, find the `getPost` field on the `Query` type and then choose **Attach**.

1. Choose your data source, then choose **Create**.

1. In the code editor, replace the code with this snippet:

   ```
   import * as ddb from '@aws-appsync/utils/dynamodb'
   	
   export function request(ctx) {
   	return ddb.get({ key: { id: ctx.args.id } })
   }
   
   export const response = (ctx) => ctx.result
   ```

1. Save your resolver.

**Note**  
In this resolver, we use an arrow function expression for the response handler.

### Call the API to get a post
<a name="call-api-getpost"></a>

Now that the resolver has been set up, AWS AppSync knows how to translate an incoming `getPost` query to an Amazon DynamoDB `GetItem` operation. You can now run a query to retrieve the post you created earlier.

**To run your query**

1. In your API, choose the **Queries** tab. 

1. In the **Queries** pane, add the following code, using the `id` of the post you created earlier:

   ```
   query getPost {
     getPost(id: "123") {
       id
       author
       title
       content
       url
       ups
       downs
       version
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `getPost`.

1. The post retrieved from Amazon DynamoDB should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "getPost": {
         "id": "123",
         "author": "AUTHORNAME",
         "title": "Our first post!",
         "content": "This is our first post.",
         "url": "https://aws.amazon.com/appsync/",
         "ups": 1,
         "downs": 0,
         "version": 1
       }
     }
   }
   ```

Alternatively, take the following example:

```
query getPost {
  getPost(id: "123") {
    id
    author
    title
  }
}
```

If your `getPost` query only needs `id`, `author`, and `title`, you can change your request function to use a projection expression that specifies only the attributes you want from your DynamoDB table, avoiding unnecessary data transfer from DynamoDB to AWS AppSync. For example, the request function might look like the following snippet:

```
import * as ddb from '@aws-appsync/utils/dynamodb'

export function request(ctx) {
	return ddb.get({
		key: { id: ctx.args.id },
		projection: ['author', 'id', 'title'],
	})
}

export const response = (ctx) => ctx.result
```

You can also use [selectionSetList](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference-js.html#aws-appsync-resolver-context-reference-info-js) with `getPost` to build the `projection` automatically from the fields requested in the query:

```
import * as ddb from '@aws-appsync/utils/dynamodb'

export function request(ctx) {
	const projection = ctx.info.selectionSetList.map((field) => field.replace('/', '.'))
	return ddb.get({ key: { id: ctx.args.id }, projection })
}

export const response = (ctx) => ctx.result
```
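As a rough illustration of that mapping outside the AppSync runtime: nested selections in `selectionSetList` use `/` as a separator, while DynamoDB projection paths use `.`. The `comments/author` entry below is a hypothetical nested selection, not part of this tutorial's schema:

```javascript
// Hypothetical selection set; in a real resolver this comes from
// ctx.info.selectionSetList.
const selectionSetList = ['id', 'author', 'comments/author'];

// Same transformation as the request handler above: replace the '/'
// separator used for nested selections with DynamoDB's '.' path separator.
const projection = selectionSetList.map((field) => field.replace('/', '.'));

console.log(projection); // [ 'id', 'author', 'comments.author' ]
```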

## Create an updatePost mutation (Amazon DynamoDB UpdateItem)
<a name="configure-updatepost"></a>

So far, you can create and retrieve `Post` objects in Amazon DynamoDB. Next, you’ll set up a new mutation to update an object. Unlike the `addPost` mutation, which requires all fields to be specified, this mutation allows you to specify only the fields that you want to change. It also introduces a new `expectedVersion` argument that allows you to specify the version that you want to modify. You’ll set up a condition that makes sure that you are modifying the latest version of the object. You’ll do this using the `UpdateItem` Amazon DynamoDB operation.

**To update your resolver**

1. In your API, choose the **Schema** tab.

1. In the **Schema** pane, modify the `Mutation` type to add a new `updatePost` mutation as follows:

   ```
   type Mutation {
       updatePost(
           id: ID!,
           author: String,
           title: String,
           content: String,
           url: String,
           expectedVersion: Int!
       ): Post
       
       addPost(
           id: ID
           author: String!
           title: String!
           content: String!
           url: String!
       ): Post!
   }
   ```

1. Choose **Save Schema**.

1. In the **Resolvers** pane on the right, find the newly created `updatePost` field on the `Mutation` type, then choose **Attach**. Create your new resolver using the snippet below:

   ```
   import { util } from '@aws-appsync/utils';
   import * as ddb from '@aws-appsync/utils/dynamodb';
   
   export function request(ctx) {
     const { id, expectedVersion, ...rest } = ctx.args;
     const values = Object.entries(rest).reduce((obj, [key, value]) => {
       obj[key] = value ?? ddb.operations.remove();
       return obj;
     }, {});
   
     return ddb.update({
       key: { id },
       condition: { version: { eq: expectedVersion } },
       update: { ...values, version: ddb.operations.increment(1) },
     });
   }
   
   export function response(ctx) {
     const { error, result } = ctx;
     if (error) {
       util.appendError(error.message, error.type);
     }
     return result;
    }
    ```

1. Save any changes you made.

This resolver uses `ddb.update` to create an Amazon DynamoDB `UpdateItem` request. Instead of writing the entire item, you’re just asking Amazon DynamoDB to update certain attributes. This is done using Amazon DynamoDB update expressions.

The `ddb.update` function takes a key and an update object as arguments. The handler checks each of the incoming argument values: when a value is `null`, it uses the DynamoDB `remove` operation to signal that the attribute should be removed from the DynamoDB item.
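A minimal plain-JavaScript sketch of that value-mapping step, runnable outside the AppSync runtime (the `REMOVE` placeholder object stands in for `ddb.operations.remove()`, which only exists inside AppSync):

```javascript
// Stand-in for ddb.operations.remove(); purely illustrative.
const REMOVE = { type: 'remove' };

function buildUpdateValues(args) {
  // Strip the key and the concurrency-control argument; everything else
  // is a candidate attribute update.
  const { id, expectedVersion, ...rest } = args;
  return Object.entries(rest).reduce((obj, [key, value]) => {
    obj[key] = value ?? REMOVE; // null/undefined → remove the attribute
    return obj;
  }, {});
}

const values = buildUpdateValues({
  id: '123',
  expectedVersion: 1,
  title: 'An empty story',
  content: null,
});
console.log(values); // { title: 'An empty story', content: { type: 'remove' } }
```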

There is also a new `condition` section. A condition expression lets you tell AWS AppSync and Amazon DynamoDB whether the request should succeed, based on the state of the object already in Amazon DynamoDB before the operation is performed. In this case, you only want the `UpdateItem` request to succeed if the `version` field of the item currently in Amazon DynamoDB exactly matches the `expectedVersion` argument. When the item is updated, you want to increment the value of `version`. This is easy to do with the `increment` operation function.

For more information about condition expressions, see the [Condition expressions](https://docs.aws.amazon.com/appsync/latest/devguide/js-resolver-reference-dynamodb.html#js-aws-appsync-resolver-reference-dynamodb-condition-expressions) documentation.

For more info about the `UpdateItem` request, see the [UpdateItem](https://docs.aws.amazon.com/appsync/latest/devguide/js-resolver-reference-dynamodb.html#js-aws-appsync-resolver-reference-dynamodb-updateitem) documentation and the [DynamoDB module](https://docs.aws.amazon.com/appsync/latest/devguide/built-in-modules-js.html) documentation. 

For more information about how to write update expressions, see the [DynamoDB UpdateExpressions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html) documentation.

### Call the API to update a post
<a name="call-api-updatepost"></a>

Let’s try updating the `Post` object with the new resolver.

**To update your object**

1. In your API, choose the **Queries** tab.

1. In the **Queries** pane, add the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier:

   ```
   mutation updatePost {
     updatePost(
       id:123
       title: "An empty story"
       content: null
       expectedVersion: 1
     ) {
       id
       author
       title
       content
       url
       ups
       downs
       version
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `updatePost`.

1. The updated post in Amazon DynamoDB should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "updatePost": {
         "id": "123",
         "author": "A new author",
         "title": "An empty story",
         "content": null,
         "url": "https://aws.amazon.com/appsync/",
         "ups": 1,
         "downs": 0,
         "version": 2
       }
     }
   }
   ```

In this request, you asked AWS AppSync and Amazon DynamoDB to update only the `title` and `content` fields: you set the `title` attribute to a new value and removed the `content` attribute from the post. The `author`, `url`, `ups`, and `downs` fields were left untouched, and the `version` field was incremented. Try executing the mutation request again while leaving the request exactly as is. You should see a response similar to the following:

```
{
  "data": {
    "updatePost": null
  },
  "errors": [
    {
      "path": [
        "updatePost"
      ],
      "data": null,
      "errorType": "DynamoDB:ConditionalCheckFailedException",
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 3,
          "sourceName": null
        }
      ],
      "message": "The conditional request failed (Service: DynamoDb, Status Code: 400, Request ID: 1RR3QN5F35CS8IV5VR4OQO9NNBVV4KQNSO5AEMVJF66Q9ASUAAJG)"
    }
  ]
}
```

The request fails because the condition expression evaluates to `false`: 

1. The first time you ran the request, the value of the `version` field of the post in Amazon DynamoDB was `1`, which matched the `expectedVersion` argument. The request succeeded, which meant the `version` field was incremented in Amazon DynamoDB to `2`.

1. The second time you ran the request, the value of the `version` field of the post in Amazon DynamoDB was `2`, which did not match the `expectedVersion` argument.

This pattern is typically called *optimistic locking*.
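A minimal in-memory sketch of the pattern, in plain JavaScript outside AppSync (the `store` object and `conditionalUpdate` helper are hypothetical, mirroring what the `updatePost` resolver's condition expression does):

```javascript
// Item with a version attribute, as stored by addPost.
const store = { id: '123', title: 'Our first post!', version: 1 };

// Apply changes only if the stored version matches the expected version,
// then bump the version — the same check the condition expression performs.
function conditionalUpdate(item, changes, expectedVersion) {
  if (item.version !== expectedVersion) {
    throw new Error('ConditionalCheckFailedException');
  }
  return Object.assign(item, changes, { version: item.version + 1 });
}

conditionalUpdate(store, { title: 'An empty story' }, 1); // succeeds; version → 2
try {
  conditionalUpdate(store, { title: 'Stale write' }, 1); // stale version → rejected
} catch (e) {
  console.log(e.message); // ConditionalCheckFailedException
}
```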

## Create vote mutations (Amazon DynamoDB UpdateItem)
<a name="configure-vote-mutations"></a>

The `Post` type contains `ups` and `downs` fields for recording upvotes and downvotes. However, the API doesn’t yet let us do anything with them. Let’s add a mutation for upvoting and downvoting posts.

**To add your mutation**

1. In your API, choose the **Schema** tab.

1. In the **Schema** pane, modify the `Mutation` type and add the `DIRECTION` enum to add new vote mutations:

   ```
   type Mutation {
       vote(id: ID!, direction: DIRECTION!): Post
       updatePost(
           id: ID!,
           author: String,
           title: String,
           content: String,
           url: String,
           expectedVersion: Int!
       ): Post
       addPost(
           id: ID,
           author: String!,
           title: String!,
           content: String!,
           url: String!
       ): Post!
   }
   
   enum DIRECTION {
     UP
     DOWN
   }
   ```

1. Choose **Save Schema**.

1. In the **Resolvers** pane on the right, find the newly created `vote` field on the `Mutation` type, and then choose **Attach**. Create a new resolver using the following snippet:

   ```
   import * as ddb from '@aws-appsync/utils/dynamodb';
   
   export function request(ctx) {
     const field = ctx.args.direction === 'UP' ? 'ups' : 'downs';
     return ddb.update({
       key: { id: ctx.args.id },
       update: {
         [field]: ddb.operations.increment(1),
         version: ddb.operations.increment(1),
       },
     });
   }
   
   export const response = (ctx) => ctx.result;
   ```

1. Save any changes you made.

### Call the API to upvote or downvote a post
<a name="call-api-vote"></a>

Now that the new resolver has been set up, AWS AppSync knows how to translate an incoming `vote` mutation to an Amazon DynamoDB `UpdateItem` operation. You can now run mutations to upvote or downvote the post you created earlier.

**To run your mutation**

1. In your API, choose the **Queries** tab.

1. In the **Queries** pane, add the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier:

   ```
   mutation votePost {
     vote(id:123, direction: UP) {
       id
       author
       title
       content
       url
       ups
       downs
       version
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `votePost`.

1. The updated post in Amazon DynamoDB should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "vote": {
         "id": "123",
         "author": "A new author",
         "title": "An empty story",
         "content": null,
         "url": "https://aws.amazon.com/appsync/",
         "ups": 6,
         "downs": 0,
         "version": 4
       }
     }
   }
   ```

1. Choose **Run** a few more times. You should see the `ups` and `version` fields incrementing by `1` each time you run the mutation.

1. Change the query to call it with a different `DIRECTION`.

   ```
   mutation votePost {
     vote(id:123, direction: DOWN) {
       id
       author
       title
       content
       url
       ups
       downs
       version
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `votePost`.

   This time, you should see the `downs` and `version` fields incrementing by `1` each time you run the mutation.

## Setting up a deletePost resolver (Amazon DynamoDB DeleteItem)
<a name="configure-deletepost"></a>

Next, you'll want to create a mutation to delete a post. You’ll do this using the `DeleteItem` Amazon DynamoDB operation.

**To add your mutation**

1. In your API, choose the **Schema** tab.

1. In the **Schema** pane, modify the `Mutation` type to add a new `deletePost` mutation:

   ```
   type Mutation {
       deletePost(id: ID!, expectedVersion: Int): Post
       vote(id: ID!, direction: DIRECTION!): Post
       updatePost(
           id: ID!,
           author: String,
           title: String,
           content: String,
           url: String,
           expectedVersion: Int!
       ): Post
       addPost(
           id: ID
           author: String!,
           title: String!,
           content: String!,
           url: String!
       ): Post!
   }
   ```

1. This time, you made the `expectedVersion` argument optional. Next, choose **Save Schema**.

1. In the **Resolvers** pane on the right, find the newly created `deletePost` field in the `Mutation` type, then choose **Attach**. Create a new resolver using the following code:

   ```
    import { util } from '@aws-appsync/utils';
    import * as ddb from '@aws-appsync/utils/dynamodb';
   
   export function request(ctx) {
     let condition = null;
     if (ctx.args.expectedVersion) {
       condition = {
         or: [
           { id: { attributeExists: false } },
           { version: { eq: ctx.args.expectedVersion } },
         ],
       };
     }
     return ddb.remove({ key: { id: ctx.args.id }, condition });
   }
   
   export function response(ctx) {
     const { error, result } = ctx;
     if (error) {
       util.appendError(error.message, error.type);
     }
     return result;
   }
   ```
**Note**  
The `expectedVersion` argument is optional. If the caller sets an `expectedVersion` argument in the request, the request handler adds a condition that only allows the `DeleteItem` request to succeed if the item is already deleted or if the `version` attribute of the post in Amazon DynamoDB exactly matches the `expectedVersion`. If left out, no condition expression is specified on the `DeleteItem` request, and it succeeds regardless of the value of `version` or whether the item exists in Amazon DynamoDB.  
Even though you’re deleting an item, you can return the item that was deleted, if it was not already deleted.
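A plain-JavaScript sketch of how the request handler derives that optional condition object (the `buildDeleteCondition` helper is hypothetical; it just isolates the logic from the resolver above so it can run outside AppSync):

```javascript
// Build the DynamoDB condition the deletePost resolver attaches when the
// caller supplies expectedVersion; with no expectedVersion, the delete is
// unconditional (condition stays null).
function buildDeleteCondition(expectedVersion) {
  if (!expectedVersion) return null;
  return {
    or: [
      { id: { attributeExists: false } },   // item already deleted
      { version: { eq: expectedVersion } }, // or the version matches
    ],
  };
}

console.log(buildDeleteCondition(undefined)); // null
console.log(JSON.stringify(buildDeleteCondition(2)));
```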

For more info about the `DeleteItem` request, see the [DeleteItem](https://docs.aws.amazon.com/appsync/latest/devguide/js-resolver-reference-dynamodb.html#js-aws-appsync-resolver-reference-dynamodb-deleteitem) documentation.

### Call the API to delete a post
<a name="call-api-delete"></a>

Now that the resolver has been set up, AWS AppSync knows how to translate an incoming `deletePost` mutation to an Amazon DynamoDB `DeleteItem` operation. You can now run a mutation to delete something in the table.

**To run your mutation**

1. In your API, choose the **Queries** tab.

1. In the **Queries** pane, add the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier:

   ```
   mutation deletePost {
     deletePost(id:123) {
       id
       author
       title
       content
       url
       ups
       downs
       version
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `deletePost`.

1. The post is deleted from Amazon DynamoDB. Note that AWS AppSync returns the value of the item that was deleted from Amazon DynamoDB, which should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "deletePost": {
         "id": "123",
         "author": "A new author",
         "title": "An empty story",
         "content": null,
         "url": "https://aws.amazon.com/appsync/",
         "ups": 6,
         "downs": 4,
         "version": 12
       }
     }
   }
   ```

1. The value is only returned if this call to `deletePost` is the one that actually deletes it from Amazon DynamoDB. Choose **Run** again.

1. The call still succeeds, but no value is returned:

   ```
   {
     "data": {
       "deletePost": null
     }
   }
   ```

1. Now, let’s try deleting a post, but this time specifying an `expectedVersion`. First, you’ll need to create a new post because you’ve just deleted the one you’ve been working with so far.

1. In the **Queries** pane, add the following mutation:

   ```
   mutation addPost {
     addPost(
       id:123
       author: "AUTHORNAME"
       title: "Our second post!"
       content: "A new post."
       url: "https://aws.amazon.com/appsync/"
     ) {
       id
       author
       title
       content
       url
       ups
       downs
       version
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `addPost`.

1. The results of the newly created post should appear in the **Results** pane to the right of the **Queries** pane. Record the `id` of the newly created object because you'll need it in just a moment. It should look similar to the following:

   ```
   {
     "data": {
       "addPost": {
         "id": "123",
         "author": "AUTHORNAME",
         "title": "Our second post!",
         "content": "A new post.",
         "url": "https://aws.amazon.com/appsync/",
         "ups": 1,
         "downs": 0,
         "version": 1
       }
     }
   }
   ```

1. Now, let’s try to delete that post with an incorrect value for `expectedVersion`. In the **Queries** pane, add the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier:

   ```
   mutation deletePost {
     deletePost(
       id:123
       expectedVersion: 9999
     ) {
       id
       author
       title
       content
       url
       ups
       downs
       version
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `deletePost`. The following result is returned:

   ```
   {
     "data": {
       "deletePost": null
     },
     "errors": [
       {
         "path": [
           "deletePost"
         ],
         "data": null,
         "errorType": "DynamoDB:ConditionalCheckFailedException",
         "errorInfo": null,
         "locations": [
           {
             "line": 2,
             "column": 3,
             "sourceName": null
           }
         ],
         "message": "The conditional request failed (Service: DynamoDb, Status Code: 400, Request ID: 7083O037M1FTFRK038A4CI9H43VV4KQNSO5AEMVJF66Q9ASUAAJG)"
       }
     ]
   }
   ```

1. The request failed because the condition expression evaluates to `false`: the `version` of the post in Amazon DynamoDB doesn't match the `expectedVersion` specified in the arguments. Retry the request, but correct the `expectedVersion`: 

   ```
   mutation deletePost {
     deletePost(
       id:123
       expectedVersion: 1
     ) {
       id
       author
       title
       content
       url
       ups
       downs
       version
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `deletePost`. 

   This time the request succeeds, and the value that was deleted from Amazon DynamoDB is returned:

   ```
   {
     "data": {
       "deletePost": {
         "id": "123",
         "author": "AUTHORNAME",
         "title": "Our second post!",
         "content": "A new post.",
         "url": "https://aws.amazon.com/appsync/",
         "ups": 1,
         "downs": 0,
         "version": 1
       }
     }
   }
   ```

1. Choose **Run** again. The call still succeeds, but this time no value is returned because the post was already deleted in Amazon DynamoDB.

   ```
   { "data": { "deletePost": null } }
   ```
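The conditional delete you just exercised is a form of optimistic locking: the delete only succeeds when the stored `version` matches the caller's `expectedVersion`. A minimal plain-JavaScript sketch of that check, using a hypothetical in-memory store rather than DynamoDB itself:

```javascript
// In-memory stand-in for the DynamoDB table used in this tutorial.
const table = new Map([['123', { id: '123', title: 'Our second post!', version: 1 }]]);

// Mimics DeleteItem with a `version = :expectedVersion` condition expression:
// the delete succeeds only when the stored version matches the expectation.
function conditionalDelete(id, expectedVersion) {
  const item = table.get(id);
  if (!item) return null; // nothing to delete; no value is returned
  if (item.version !== expectedVersion) {
    throw new Error('ConditionalCheckFailedException');
  }
  table.delete(id);
  return item; // the deleted item is returned, as in the tutorial
}
```

Deleting with the wrong `expectedVersion` throws, a matching version succeeds and returns the item, and a second delete returns nothing, mirroring the three responses above.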

## Setting up an allPost resolver (Amazon DynamoDB Scan)
<a name="configure-allpost"></a>

So far, the API is only useful if you know the `id` of each post you want to look at. Let’s add a new resolver that returns all the posts in the table.

**To add your query**

1. In your API, choose the **Schema** tab.

1. In the **Schema** pane, modify the `Query` type to add a new `allPost` query as follows:

   ```
   type Query {
       allPost(limit: Int, nextToken: String): PaginatedPosts!
       getPost(id: ID): Post
   }
   ```

1. Add a new `PaginatedPosts` type:

   ```
   type PaginatedPosts {
       posts: [Post!]!
       nextToken: String
   }
   ```

1. Choose **Save Schema**.

1. In the **Resolvers** pane on the right, find the newly created `allPost` field in the `Query` type, then choose **Attach**. Create a new resolver with the following code:

   ```
   import * as ddb from '@aws-appsync/utils/dynamodb';
   
   export function request(ctx) {
     const { limit = 20, nextToken } = ctx.arguments;
     return ddb.scan({ limit, nextToken });
   }
   
   export function response(ctx) {
     const { items: posts = [], nextToken } = ctx.result;
     return { posts, nextToken };
   }
   ```

   This resolver's request handler expects two optional arguments: 
   + `limit` - Specifies the maximum number of items to return in a single call.
   + `nextToken` - Used to retrieve the next set of results (we’ll show where the value for `nextToken` comes from later).

1. Save any changes made to your resolver.

For more information about the `Scan` request, see the [Scan](https://docs.aws.amazon.com/appsync/latest/devguide/js-resolver-reference-dynamodb.html#js-aws-appsync-resolver-reference-dynamodb-scan) reference documentation.
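On the client side, paging through a `Scan` is a loop that feeds each response's `nextToken` back into the next request until the token comes back `null`. A sketch against a hypothetical `allPost` client call (a stand-in for the real GraphQL request, not the AppSync SDK):

```javascript
// Hypothetical paged API: returns up to `limit` items plus a nextToken
// when more remain, mirroring the PaginatedPosts shape in the schema.
const allItems = Array.from({ length: 9 }, (_, i) => ({ id: String(i + 1) }));
function allPost({ limit, nextToken }) {
  const start = nextToken ? Number(nextToken) : 0;
  const posts = allItems.slice(start, start + limit);
  const end = start + posts.length;
  return { posts, nextToken: end < allItems.length ? String(end) : null };
}

// Collect every page until nextToken comes back null.
function fetchAllPosts(limit) {
  const collected = [];
  let nextToken = null;
  do {
    const page = allPost({ limit, nextToken });
    collected.push(...page.posts);
    nextToken = page.nextToken;
  } while (nextToken !== null);
  return collected;
}
```

With `limit: 5` and nine stored posts, the first page carries a `nextToken` and the second page returns the remaining four with `nextToken: null`, exactly the flow you'll walk through below.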

### Call the API to scan all posts
<a name="call-api-scan"></a>

Now that the resolver has been set up, AWS AppSync knows how to translate an incoming `allPost` query to an Amazon DynamoDB `Scan` operation. You can now scan the table to retrieve all the posts. Before you can try it out though, you need to populate the table with some data because you’ve deleted everything you’ve worked with so far.

**To add and query data**

1. In your API, choose the **Queries** tab.

1. In the **Queries** pane, add the following mutation:

   ```
   mutation addPost {
     post1: addPost(id:1 author: "AUTHORNAME" title: "A series of posts, Volume 1" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
     post2: addPost(id:2 author: "AUTHORNAME" title: "A series of posts, Volume 2" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
     post3: addPost(id:3 author: "AUTHORNAME" title: "A series of posts, Volume 3" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
     post4: addPost(id:4 author: "AUTHORNAME" title: "A series of posts, Volume 4" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
     post5: addPost(id:5 author: "AUTHORNAME" title: "A series of posts, Volume 5" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
     post6: addPost(id:6 author: "AUTHORNAME" title: "A series of posts, Volume 6" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
     post7: addPost(id:7 author: "AUTHORNAME" title: "A series of posts, Volume 7" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
     post8: addPost(id:8 author: "AUTHORNAME" title: "A series of posts, Volume 8" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
     post9: addPost(id:9 author: "AUTHORNAME" title: "A series of posts, Volume 9" content: "Some content" url: "https://aws.amazon.com/appsync/" ) { title }
   }
   ```

1. Choose **Run** (the orange play button). 

1. Now, let’s scan the table, returning five results at a time. In the **Queries** pane, add the following query:

   ```
   query allPost {
     allPost(limit: 5) {
       posts {
         id
         title
       }
       nextToken
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `allPost`.

   The first five posts should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "allPost": {
         "posts": [
           {
             "id": "5",
             "title": "A series of posts, Volume 5"
           },
           {
             "id": "1",
             "title": "A series of posts, Volume 1"
           },
           {
             "id": "6",
             "title": "A series of posts, Volume 6"
           },
           {
             "id": "9",
             "title": "A series of posts, Volume 9"
           },
           {
             "id": "7",
             "title": "A series of posts, Volume 7"
           }
         ],
         "nextToken": "<token>"
       }
     }
   }
   ```

1. You received five results and a `nextToken` that you can use to get the next set of results. Update the `allPost` query to include the `nextToken` from the previous set of results: 

   ```
   query allPost {
     allPost(
       limit: 5
       nextToken: "<token>"
     ) {
       posts {
         id
         author
       }
       nextToken
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `allPost`.

   The remaining four posts should appear in the **Results** pane to the right of the **Queries** pane. There is no `nextToken` in this set of results because you’ve paged through all nine posts with none remaining. It should look similar to the following:

   ```
   {
     "data": {
       "allPost": {
         "posts": [
           {
             "id": "2",
             "title": "A series of posts, Volume 2"
           },
           {
             "id": "3",
             "title": "A series of posts, Volume 3"
           },
           {
             "id": "4",
             "title": "A series of posts, Volume 4"
           },
           {
             "id": "8",
             "title": "A series of posts, Volume 8"
           }
         ],
         "nextToken": null
       }
     }
   }
   ```

## Setting up an allPostsByAuthor resolver (Amazon DynamoDB Query)
<a name="configure-query"></a>

In addition to scanning Amazon DynamoDB for all posts, you can also query Amazon DynamoDB to retrieve posts created by a specific author. The Amazon DynamoDB table you created earlier already has a `GlobalSecondaryIndex` called `author-index` that you can use with an Amazon DynamoDB `Query` operation to retrieve all posts created by a specific author.
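The reason to prefer a `Query` on `author-index` over a filtered `Scan` is cost: a query reads only the items under the requested key, while a scan reads the entire table and filters afterwards. A plain-JavaScript sketch of that difference, using in-memory stand-ins rather than DynamoDB itself:

```javascript
const posts = [
  { id: '10', author: 'Nadia' },
  { id: '11', author: 'Nadia' },
  { id: '12', author: 'Steve' },
];

// Scan with a filter: examines every item, then keeps the matches.
function scanByAuthor(author) {
  let examined = 0;
  const items = posts.filter((p) => (examined++, p.author === author));
  return { items, examined };
}

// A GSI behaves like a map keyed by the index's partition key:
// only the matching partition is read.
const authorIndex = new Map();
for (const p of posts) {
  if (!authorIndex.has(p.author)) authorIndex.set(p.author, []);
  authorIndex.get(p.author).push(p);
}
function queryByAuthor(author) {
  const items = authorIndex.get(author) ?? [];
  return { items, examined: items.length };
}
```

Both return Nadia's two posts, but the scan examines all three items while the index lookup touches only the two that match — the gap grows with table size.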

**To add your query**

1. In your API, choose the **Schema** tab.

1. In the **Schema** pane, modify the `Query` type to add a new `allPostsByAuthor` query as follows:

   ```
   type Query {
       allPostsByAuthor(author: String!, limit: Int, nextToken: String): PaginatedPosts!
       allPost(limit: Int, nextToken: String): PaginatedPosts!
       getPost(id: ID): Post
   }
   ```

   Note that this uses the same `PaginatedPosts` type that you used with the `allPost` query.

1. Choose **Save Schema**.

1. In the **Resolvers** pane on the right, find the newly created `allPostsByAuthor` field on the `Query` type, and then choose **Attach**. Create a resolver using the snippet below:

   ```
   import * as ddb from '@aws-appsync/utils/dynamodb';
   
   export function request(ctx) {
     const { limit = 20, nextToken, author } = ctx.arguments;
     return ddb.query({
       index: 'author-index',
       query: { author: { eq: author } },
       limit,
       nextToken,
     });
   }
   
   export function response(ctx) {
     const { items: posts = [], nextToken } = ctx.result;
     return { posts, nextToken };
   }
   ```

   Like the `allPost` resolver, this resolver has two optional arguments:
   + `limit` - Specifies the maximum number of items to return in a single call.
   + `nextToken` - Retrieves the next set of results (the value for `nextToken` can be obtained from a previous call).

1. Save any changes made to your resolver.

For more information about the `Query` request, see the [Query](https://docs.aws.amazon.com/appsync/latest/devguide/js-resolver-reference-dynamodb.html#js-aws-appsync-resolver-reference-dynamodb-query) reference documentation.

### Call the API to query all posts by author
<a name="call-api-query"></a>

Now that the resolver has been set up, AWS AppSync knows how to translate an incoming `allPostsByAuthor` query to a DynamoDB `Query` operation against the `author-index` index. You can now query the table to retrieve all the posts by a specific author.

Before this, however, let’s populate the table with some more posts, because every post so far has the same author.

**To add data and query**

1. In your API, choose the **Queries** tab.

1. In the **Queries** pane, add the following mutation:

   ```
   mutation addPost {
     post1: addPost(id:10 author: "Nadia" title: "The cutest dog in the world" content: "So cute. So very, very cute." url: "https://aws.amazon.com/appsync/" ) { author, title }
     post2: addPost(id:11 author: "Nadia" title: "Did you know...?" content: "AppSync works offline?" url: "https://aws.amazon.com/appsync/" ) { author, title }
     post3: addPost(id:12 author: "Steve" title: "I like GraphQL" content: "It's great" url: "https://aws.amazon.com/appsync/" ) { author, title }
   }
   ```

1. Choose **Run** (the orange play button), then choose `addPost`.

1. Now, let’s query the table, returning all posts authored by `Nadia`. In the **Queries** pane, add the following query: 

   ```
   query allPostsByAuthor {
     allPostsByAuthor(author: "Nadia") {
       posts {
         id
         title
       }
       nextToken
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `allPostsByAuthor`. All posts authored by `Nadia` should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "allPostsByAuthor": {
         "posts": [
           {
             "id": "10",
             "title": "The cutest dog in the world"
           },
           {
             "id": "11",
             "title": "Did you know...?"
           }
         ],
         "nextToken": null
       }
     }
   }
   ```

1. Pagination works for `Query` just the same as it does for `Scan`. For example, let’s look for all posts by `AUTHORNAME`, getting five at a time.

1. In the **Queries** pane, add the following query: 

   ```
   query allPostsByAuthor {
     allPostsByAuthor(
       author: "AUTHORNAME"
       limit: 5
     ) {
       posts {
         id
         title
       }
       nextToken
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `allPostsByAuthor`. All posts authored by `AUTHORNAME` should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "allPostsByAuthor": {
         "posts": [
           {
             "id": "6",
             "title": "A series of posts, Volume 6"
           },
           {
             "id": "4",
             "title": "A series of posts, Volume 4"
           },
           {
             "id": "2",
             "title": "A series of posts, Volume 2"
           },
           {
             "id": "7",
             "title": "A series of posts, Volume 7"
           },
           {
             "id": "1",
             "title": "A series of posts, Volume 1"
           }
         ],
         "nextToken": "<token>"
       }
     }
   }
   ```

1. Update the `nextToken` argument with the value returned from the previous query as follows:

   ```
   query allPostsByAuthor {
     allPostsByAuthor(
       author: "AUTHORNAME"
       limit: 5
       nextToken: "<token>"
     ) {
       posts {
         id
         title
       }
       nextToken
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `allPostsByAuthor`. The remaining posts authored by `AUTHORNAME` should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "allPostsByAuthor": {
         "posts": [
           {
             "id": "8",
             "title": "A series of posts, Volume 8"
           },
           {
             "id": "5",
             "title": "A series of posts, Volume 5"
           },
           {
             "id": "3",
             "title": "A series of posts, Volume 3"
           },
           {
             "id": "9",
             "title": "A series of posts, Volume 9"
           }
         ],
         "nextToken": null
       }
     }
   }
   ```

## Using sets
<a name="using-sets"></a>

Up to this point, the `Post` type has been a flat key/value object. You can also model complex objects with your resolver, such as sets, lists, and maps. Let’s update the `Post` type to include tags. A post can have zero or more tags, which are stored in DynamoDB as a String Set. You’ll also set up some mutations to add and remove tags, and a new query to scan for posts with a specific tag.
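DynamoDB's `ADD` and `DELETE` update actions on a String Set behave like set union and set difference: adding a tag that already exists is a no-op, and the set never holds duplicates. A sketch of those semantics with a plain JavaScript `Set` (an illustration of the behavior only, not the DynamoDB client):

```javascript
// tags behaves like a DynamoDB String Set attribute on a post.
const tags = new Set();

// ADD tags :tags — union with the incoming set; duplicates are ignored.
function addTags(incoming) { for (const t of incoming) tags.add(t); }

// DELETE tags :tags — remove each member of the incoming set if present.
function deleteTags(incoming) { for (const t of incoming) tags.delete(t); }

addTags(['dog']);
addTags(['dog', 'puppy']); // 'dog' is already present: no duplicate is created
deleteTags(['puppy']);
```

After these three updates only `dog` remains, which is the same sequence of `addTag` and `removeTag` calls you'll run against the API below.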

**To set up your data**

1. In your API, choose the **Schema** tab. 

1. In the **Schema** pane, modify the `Post` type to add a new `tags` field as follows:

   ```
   type Post {
     id: ID!
     author: String
     title: String
     content: String
     url: String
     ups: Int!
     downs: Int!
     version: Int!
     tags: [String!]
   }
   ```

1. In the **Schema** pane, modify the `Query` type to add a new `allPostsByTag` query as follows:

   ```
   type Query {
     allPostsByTag(tag: String!, limit: Int, nextToken: String): PaginatedPosts!
     allPostsByAuthor(author: String!, limit: Int, nextToken: String): PaginatedPosts!
     allPost(limit: Int, nextToken: String): PaginatedPosts!
     getPost(id: ID): Post
   }
   ```

1. In the **Schema** pane, modify the `Mutation` type to add new `addTag` and `removeTag` mutations as follows:

   ```
   type Mutation {
     addTag(id: ID!, tag: String!): Post
     removeTag(id: ID!, tag: String!): Post
     deletePost(id: ID!, expectedVersion: Int): Post
     upvotePost(id: ID!): Post
     downvotePost(id: ID!): Post
     updatePost(
       id: ID!,
       author: String,
       title: String,
       content: String,
       url: String,
       expectedVersion: Int!
     ): Post
     addPost(
       author: String!,
       title: String!,
       content: String!,
       url: String!
     ): Post!
   }
   ```

1. Choose **Save Schema**.

1. In the **Resolvers** pane on the right, find the newly created `allPostsByTag` field on the `Query` type, and then choose **Attach**. Create your resolver using the snippet below:

   ```
   import * as ddb from '@aws-appsync/utils/dynamodb';
   
   export function request(ctx) {
     const { limit = 20, nextToken, tag } = ctx.arguments;
     return ddb.scan({ limit, nextToken, filter: { tags: { contains: tag } } });
   }
   
   export function response(ctx) {
     const { items: posts = [], nextToken } = ctx.result;
     return { posts, nextToken };
   }
   ```

1. Save any changes you've made to your resolver.

1. Now, do the same for the `Mutation` field `addTag` using the snippet below:
**Note**  
Though the DynamoDB utils currently don't support set operations, you can still interact with sets by building the request yourself.

   ```
   import { util } from '@aws-appsync/utils'

   export function request(ctx) {
     const { id, tag } = ctx.arguments
     const expressionValues = util.dynamodb.toMapValues({ ':plusOne': 1 })
     expressionValues[':tags'] = util.dynamodb.toStringSet([tag])

     return {
       operation: 'UpdateItem',
       key: util.dynamodb.toMapValues({ id }),
       update: {
         expression: `ADD tags :tags, version :plusOne`,
         expressionValues,
       },
     }
   }

   export const response = (ctx) => ctx.result
   ```

1. Save any changes made to your resolver.

1. Repeat this one more time for the `Mutation` field `removeTag` using the snippet below:

   ```
   import { util } from '@aws-appsync/utils';

   export function request(ctx) {
     const { id, tag } = ctx.arguments;
     const expressionValues = util.dynamodb.toMapValues({ ':plusOne': 1 });
     expressionValues[':tags'] = util.dynamodb.toStringSet([tag]);

     return {
       operation: 'UpdateItem',
       key: util.dynamodb.toMapValues({ id }),
       update: {
         expression: `DELETE tags :tags ADD version :plusOne`,
         expressionValues,
       },
     };
   }

   export const response = (ctx) => ctx.result;
   ```

1. Save any changes made to your resolver.

### Call the API to work with tags
<a name="call-api-tags"></a>

Now that you’ve set up the resolvers, AWS AppSync knows how to translate incoming `addTag`, `removeTag`, and `allPostsByTag` requests into DynamoDB `UpdateItem` and `Scan` operations. To try it out, let’s select one of the posts you created earlier. For example, let’s use a post authored by `Nadia`.

**To use tags**

1. In your API, choose the **Queries** tab.

1. In the **Queries** pane, add the following query:

   ```
   query allPostsByAuthor {
     allPostsByAuthor(
       author: "Nadia"
     ) {
       posts {
         id
         title
       }
       nextToken
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `allPostsByAuthor`.

1. All of Nadia’s posts should appear in the **Results** pane to the right of the **Queries** pane. It should look similar to the following:

   ```
   {
     "data": {
       "allPostsByAuthor": {
         "posts": [
           {
             "id": "10",
             "title": "The cutest dog in the world"
           },
           {
             "id": "11",
             "title": "Did you known...?"
           }
         ],
         "nextToken": null
       }
     }
   }
   ```

1. Let’s use the one with the title *The cutest dog in the world*. Record its `id` because you’ll use it later. Now, let’s try adding a `dog` tag.

1. In the **Queries** pane, add the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier.

   ```
   mutation addTag {
     addTag(id:10 tag: "dog") {
       id
       title
       tags
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `addTag`. The post is updated with the new tag:

   ```
   {
     "data": {
       "addTag": {
         "id": "10",
         "title": "The cutest dog in the world",
         "tags": [
           "dog"
         ]
       }
     }
   }
   ```

1. You can add more tags. Update the mutation to change the `tag` argument to `puppy`:

   ```
   mutation addTag {
     addTag(id:10 tag: "puppy") {
       id
       title
       tags
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `addTag`. The post is updated with the new tag:

   ```
   {
     "data": {
       "addTag": {
         "id": "10",
         "title": "The cutest dog in the world",
         "tags": [
           "dog",
           "puppy"
         ]
       }
     }
   }
   ```

1. You can also delete tags. In the **Queries** pane, add the following mutation. You’ll also need to update the `id` argument to the value you noted down earlier:

   ```
   mutation removeTag {
     removeTag(id:10 tag: "puppy") {
       id
       title
       tags
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `removeTag`. The post is updated and the `puppy` tag is deleted.

   ```
   {
     "data": {
       "addTag": {
         "id": "10",
         "title": "The cutest dog in the world",
         "tags": [
           "dog"
         ]
       }
     }
   }
   ```

1. You can also search for all posts that have a tag. In the **Queries** pane, add the following query: 

   ```
   query allPostsByTag {
     allPostsByTag(tag: "dog") {
       posts {
         id
         title
         tags
       }
       nextToken
     }
   }
   ```

1. Choose **Run** (the orange play button), then choose `allPostsByTag`. All posts that have the `dog` tag are returned as follows:

   ```
   {
     "data": {
       "allPostsByTag": {
         "posts": [
           {
             "id": "10",
             "title": "The cutest dog in the world",
             "tags": [
               "dog",
               "puppy"
             ]
           }
         ],
         "nextToken": null
       }
     }
   }
   ```

## Conclusion
<a name="conclusion-dynamodb-tutorial-js"></a>

In this tutorial, you’ve built an API that lets you manipulate `Post` objects in DynamoDB using AWS AppSync and GraphQL. 

To clean up, you can delete the AWS AppSync GraphQL API from the console. 

To delete the role associated with your DynamoDB table, choose your data source in the **Data Sources** list, and then choose **Edit**. Note the name of the role under **Create or use an existing role**, then go to the IAM console to delete the role.

To delete your DynamoDB table, choose the name of the table in the data sources list. This takes you to the DynamoDB console, where you can delete the table.

# Using AWS Lambda resolvers in AWS AppSync
<a name="tutorial-lambda-resolvers-js"></a>

You can use AWS Lambda with AWS AppSync to resolve any GraphQL field. For example, a GraphQL query might send a call to an Amazon Relational Database Service (Amazon RDS) instance, and a GraphQL mutation might write to an Amazon Kinesis stream. In this section, we'll show you how to write a Lambda function that performs business logic based on the invocation of a GraphQL field operation.

## Powertools for AWS Lambda
<a name="powertools-graphql"></a>

The Powertools for AWS Lambda GraphQL event handler simplifies the routing and processing of GraphQL events in Lambda functions. It is available for Python and TypeScript. To read more about the GraphQL event handler in the Powertools for AWS Lambda documentation, see the following references.
+ [Powertools for AWS Lambda GraphQL Event Handler (Python) ](https://docs.aws.amazon.com/powertools/python/latest/core/event_handler/appsync/)
+ [Powertools for AWS Lambda GraphQL Event Handler (Typescript)](https://docs.aws.amazon.com/powertools/typescript/latest/features/event-handler/appsync-graphql/) 

## Create a Lambda function
<a name="create-a-lam-function-js"></a>

The following example shows a Lambda function written in JavaScript (runtime: Node.js 18.x) that performs different operations on blog posts as part of a blog post application. Note that the code should be saved in a file with a `.mjs` extension.

```
export const handler = async (event) => {
  console.log('Received event', JSON.stringify(event, null, 3))

  const posts = {
    1: { id: '1', title: 'First book', author: 'Author1', url: 'https://amazon.com/', content: 'SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1', ups: '100', downs: '10', },
    2: { id: '2', title: 'Second book', author: 'Author2', url: 'https://amazon.com', content: 'SAMPLE TEXT AUTHOR 2 SAMPLE TEXT AUTHOR 2 SAMPLE TEXT', ups: '100', downs: '10', },
    3: { id: '3', title: 'Third book', author: 'Author3', url: null, content: null, ups: null, downs: null },
    4: { id: '4', title: 'Fourth book', author: 'Author4', url: 'https://www.amazon.com/', content: 'SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4', ups: '1000', downs: '0', },
    5: { id: '5', title: 'Fifth book', author: 'Author5', url: 'https://www.amazon.com/', content: 'SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT', ups: '50', downs: '0', },
  }

  const relatedPosts = {
    1: [posts['4']],
    2: [posts['3'], posts['5']],
    3: [posts['2'], posts['1']],
    4: [posts['2'], posts['1']],
    5: [],
  }

  console.log('Got an Invoke Request.')
  let result
  switch (event.field) {
    case 'getPost':
      return posts[event.arguments.id]
    case 'allPosts':
      return Object.values(posts)
    case 'addPost':
      // return the arguments back
      return event.arguments
    case 'addPostErrorWithData':
      result = posts[event.arguments.id]
      // attach additional error information to the post
      result.errorMessage = 'Error with the mutation, data has changed'
      result.errorType = 'MUTATION_ERROR'
      return result
    case 'relatedPosts':
      return relatedPosts[event.source.id]
    default:
      throw new Error('Unknown field, unable to resolve ' + event.field)
  }
}
```

This Lambda function retrieves a post by ID, adds a post, retrieves a list of posts, and fetches related posts for a given post. 

**Note**  
The Lambda function uses the `switch` statement on `event.field` to determine which field is currently being resolved.

Create this Lambda function using the AWS Management Console.
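An equivalent dispatch pattern that scales better than a growing `switch` is a map of field names to handler functions. A sketch of that idea with hypothetical handler bodies (the dispatch logic is the point, not the handlers):

```javascript
// Map each GraphQL field name to a handler function; unknown fields
// throw, matching the default branch of the switch statement above.
const resolvers = {
  getPost: (event) => ({ id: event.arguments.id }), // hypothetical handler body
  addPost: (event) => event.arguments,              // echo the arguments back
};

function route(event) {
  const resolve = resolvers[event.field];
  if (!resolve) throw new Error('Unknown field, unable to resolve ' + event.field);
  return resolve(event);
}
```

Inside the real Lambda handler you would `return route(event)`; adding a new field then means adding one entry to the map rather than another `case`.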

## Configure a data source for Lambda
<a name="configure-data-source-for-lamlong-js"></a>

After you create the Lambda function, navigate to your GraphQL API in the AWS AppSync console, and then choose the **Data Sources** tab.

Choose **Create data source**, enter a friendly **Data source name** (for example, **Lambda**), and then for **Data source type**, choose **AWS Lambda function**. For **Region**, choose the same Region as your function. For **Function ARN**, choose the Amazon Resource Name (ARN) of your Lambda function.

After choosing your Lambda function, you can either create a new AWS Identity and Access Management (IAM) role (for which AWS AppSync assigns the appropriate permissions) or choose an existing role that has the following inline policy:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:LAMBDA_FUNCTION"
        }
    ]
}
```

------

You must also set up a trust relationship with AWS AppSync for the IAM role as follows:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "appsync.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

------

## Create a GraphQL schema
<a name="creating-a-graphql-schema-js"></a>

Now that the data source is connected to your Lambda function, create a GraphQL schema.

From the schema editor in the AWS AppSync console, make sure that your schema matches the following schema:

```
schema {
    query: Query
    mutation: Mutation
}
type Query {
    getPost(id:ID!): Post
    allPosts: [Post]
}
type Mutation {
    addPost(id: ID!, author: String!, title: String, content: String, url: String): Post!
}
type Post {
    id: ID!
    author: String!
    title: String
    content: String
    url: String
    ups: Int
    downs: Int
    relatedPosts: [Post]
}
```

## Configure resolvers
<a name="configuring-resolvers-js"></a>

Now that you've registered a Lambda data source and a valid GraphQL schema, you can connect your GraphQL fields to your Lambda data source using resolvers.

You will create a resolver that uses the AWS AppSync JavaScript (`APPSYNC_JS`) runtime and interact with your Lambda functions. To learn more about writing AWS AppSync resolvers and functions with JavaScript, see [JavaScript runtime features for resolvers and functions](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference-js.html).

For more information about Lambda mapping templates, see [JavaScript resolver function reference for Lambda](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-lambda-js.html).

In this step, you attach a resolver to the Lambda function for the following fields: `getPost(id:ID!): Post`, `allPosts: [Post]`, `addPost(id: ID!, author: String!, title: String, content: String, url: String): Post!`, and `Post.relatedPosts: [Post]`. From the **Schema** editor in the AWS AppSync console, in the **Resolvers** pane, choose **Attach** next to the `getPost(id:ID!): Post` field. Choose your Lambda data source. Next, provide the following code:

```
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const {source, args} = ctx
  return {
    operation: 'Invoke',
    payload: { field: ctx.info.fieldName, arguments: args, source },
  };
}

export function response(ctx) {
  return ctx.result;
}
```

When this resolver invokes the Lambda function, it passes the field name, the arguments, and the source object in the payload. Choose **Save**.

You have successfully attached your first resolver. Repeat this operation for the remaining fields. 
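The request handler above builds the `Invoke` payload that your Lambda function receives as `event`. A pure-JavaScript version of that handler is useful for checking the payload shape; the `ctx` object here is a hand-built stand-in for the AppSync context, not the real runtime:

```javascript
// Same logic as the resolver's request handler, minus the AppSync runtime.
function buildInvokeRequest(ctx) {
  const { source, args } = ctx;
  return {
    operation: 'Invoke',
    payload: { field: ctx.info.fieldName, arguments: args, source },
  };
}

// A stand-in context for resolving getPost(id: "2").
const ctx = { info: { fieldName: 'getPost' }, args: { id: '2' }, source: null };
```

`buildInvokeRequest(ctx).payload.field` carries the field name, which is the value the Lambda function's `switch` statement dispatches on.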

## Test your GraphQL API
<a name="testing-your-graphql-api-js"></a>

Now that your Lambda function is connected to GraphQL resolvers, you can run some mutations and queries using the console or a client application.

On the left side of the AWS AppSync console, choose **Queries**, and then paste in the following code:

### addPost Mutation
<a name="addpost-mutation-js"></a>

```
mutation AddPost {
    addPost(
        id: 6
        author: "Author6"
        title: "Sixth book"
        url: "https://www.amazon.com/"
        content: "This is the book is a tutorial for using GraphQL with AWS AppSync."
    ) {
        id
        author
        title
        content
        url
        ups
        downs
    }
}
```

### getPost Query
<a name="getpost-query-js"></a>

```
query GetPost {
    getPost(id: "2") {
        id
        author
        title
        content
        url
        ups
        downs
    }
}
```

### allPosts Query
<a name="allposts-query-js"></a>

```
query AllPosts {
    allPosts {
        id
        author
        title
        content
        url
        ups
        downs
        relatedPosts {
            id
            title
        }
    }
}
```

## Returning errors
<a name="returning-errors-js"></a>

Any given field resolution can result in an error. With AWS AppSync, you can raise errors from the following sources:
+ Resolver response handler
+ Lambda function

### From the resolver response handler
<a name="from-the-resolver-response-handler-js"></a>

To raise intentional errors, you can use the `util.error` utility method. It takes an `errorMessage`, an `errorType`, and an optional `data` value as arguments. The `data` value is useful for returning extra information to the client when an error occurs; it is added to the corresponding error in the `errors` block of the final GraphQL response.

The following example shows how to use it in the `Post.relatedPosts: [Post]` resolver response handler.

```
// the Post.relatedPosts response handler
export function response(ctx) {
    util.error("Failed to fetch relatedPosts", "LambdaFailure", ctx.result)
    return ctx.result;
}
```

This yields a GraphQL response similar to the following:

```
{
    "data": {
        "allPosts": [
            {
                "id": "2",
                "title": "Second book",
                "relatedPosts": null
            },
            ...
        ]
    },
    "errors": [
        {
            "path": [
                "allPosts",
                0,
                "relatedPosts"
            ],
            "errorType": "LambdaFailure",
            "locations": [
                {
                    "line": 5,
                    "column": 5
                }
            ],
            "message": "Failed to fetch relatedPosts",
            "data": [
                {
                  "id": "2",
                  "title": "Second book"
                },
                {
                  "id": "1",
                  "title": "First book"
                }
            ]
        }
    ]
}
```

Here, `allPosts[0].relatedPosts` is *null* because of the error, and the `errorMessage`, `errorType`, and `data` values appear in the `errors[0]` object.

### From the Lambda function
<a name="from-the-lam-function-js"></a>

AWS AppSync also understands errors that the Lambda function throws. The Lambda programming model lets you raise *handled* errors. If the Lambda function throws an error, AWS AppSync fails to resolve the current field. Only the error message returned from Lambda is set in the response. Currently, you can't pass any extra data back to the client by raising an error from the Lambda function.

**Note**  
If your Lambda function raises an *unhandled* error, AWS AppSync uses the error message that Lambda set.

The following Lambda function raises an error:

```
export const handler = async (event) => {
  console.log('Received event', JSON.stringify(event, null, 3))
  throw new Error('I always fail.')
}
```

The error is received in your response handler. You can send it back in the GraphQL response by appending the error to the response with `util.appendError`. To do so, change your AWS AppSync function response handler to this:

```
// the lambdaInvoke response handler
export function response(ctx) {
  const { error, result } = ctx;
  if (error) {
    util.appendError(error.message, error.type, result);
  }
  return result;
}
```

This returns a GraphQL response similar to the following:

```
{
  "data": {
    "allPosts": null
  },
  "errors": [
    {
      "path": [
        "allPosts"
      ],
      "data": null,
      "errorType": "Lambda:Unhandled",
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 3,
          "sourceName": null
        }
      ],
      "message": "I always fail."
    }
  ]
}
```

## Advanced use case: Batching
<a name="advanced-use-case-batching-js"></a>

The schema in this example has a `relatedPosts` field that returns a list of related posts for a given post. In the example queries, invoking the `allPosts` field returns five posts from the Lambda function. Because we specified that we also want to resolve `relatedPosts` for each returned post, the `relatedPosts` field operation is invoked five times.

```
query {
    allPosts {   # 1 Lambda invocation - yields 5 Posts
        id
        author
        title
        content
        url
        ups
        downs
        relatedPosts {   # 5 Lambda invocations - each yields 5 posts
            id
            title
        }
    }
}
```

While this might not sound substantial in this specific example, this compounded over-fetching can quickly degrade an application's performance and increase its cost.

If you were to fetch `relatedPosts` again on the returned related `Posts` in the same query, the number of invocations would increase dramatically.

```
query {
    allPosts {   # 1 Lambda invocation - yields 5 Posts
        id
        author
        title
        content
        url
        ups
        downs
        relatedPosts {   # 5 Lambda invocations - each yields 5 posts = 5 x 5 Posts
            id
            title
            relatedPosts {  # 5 x 5 Lambda invocations - each yields 5 posts = 25 x 5 Posts
                id
                title
                author
            }
        }
    }
}
```

In this relatively simple query, AWS AppSync would invoke the Lambda function 1 + 5 + 25 = 31 times.

This is a fairly common challenge and is often called the N+1 problem (in this case, N = 5). It can incur increased latency and cost for the application.
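The growth can be sketched with a small helper that sums the invocations per nesting level (illustrative only; `invocations` isn't part of any AWS AppSync API):

```
// Total Lambda invocations for a query nested `depth` levels deep when each
// level fans out to `n` items: 1 + n + n^2 + ... + n^depth
const invocations = (n, depth) =>
  Array.from({ length: depth + 1 }, (_, i) => n ** i)
    .reduce((total, count) => total + count, 0);
// invocations(5, 2) is 1 + 5 + 25 = 31, matching the query above
```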

One approach to solving this issue is to batch similar field resolver requests together. In this example, instead of having the Lambda function resolve a list of related posts for a single given post, it could instead resolve a list of related posts for a given batch of posts.

To demonstrate this, let's update the resolver for `relatedPosts` to handle batching.

```
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const {source, args} = ctx
  return {
    operation: ctx.info.fieldName === 'relatedPosts' ? 'BatchInvoke' : 'Invoke',
    payload: { field: ctx.info.fieldName, arguments: args, source },
  };
}

export function response(ctx) {
  const { error, result } = ctx;
  if (error) {
    util.appendError(error.message, error.type, result);
  }
  return result;
}
```

The code now changes the operation from `Invoke` to `BatchInvoke` when the `fieldName` being resolved is `relatedPosts`. Now, enable batching on the function in the **Configure Batching** section. Set the maximum batching size to `5`, then choose **Save**.

With this change, when resolving `relatedPosts`, the Lambda function receives the following as input:

```
[
    {
        "field": "relatedPosts",
        "source": {
            "id": 1
        }
    },
    {
        "field": "relatedPosts",
        "source": {
            "id": 2
        }
    },
    ...
]
```

When `BatchInvoke` is specified in the request, the Lambda function receives a list of requests and returns a list of results.

Specifically, the list of results must match the size and order of the request payload entries so that AWS AppSync can match the results accordingly.
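The ordering contract can be sketched in plain JavaScript (the `batch` entries and the `relatedPosts` lookup table are illustrative):

```
// A BatchInvoke handler must return exactly one entry per request,
// in the same order AWS AppSync sent them.
const batch = [
  { field: 'relatedPosts', source: { id: '1' } },
  { field: 'relatedPosts', source: { id: '2' } },
];
// illustrative lookup table: related posts keyed by post id
const relatedPosts = {
  1: [{ id: '2', title: 'Second book' }, { id: '3', title: 'Third book' }],
  2: [{ id: '3', title: 'Third book' }],
};
// map() preserves order, so results[i] is the answer for batch[i]
const results = batch.map((request) => relatedPosts[request.source.id] ?? []);
```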

In this batching example, the Lambda function returns a batch of results as follows:

```
[
    [{"id":"2","title":"Second book"}, {"id":"3","title":"Third book"}],   // relatedPosts for id=1
    [{"id":"3","title":"Third book"}]                                     // relatedPosts for id=2
]
```

You can update your Lambda code to handle batching for `relatedPosts`:

```
export const handler = async (event) => {
  console.log('Received event', JSON.stringify(event, null, 3))
  // throw new Error('I always fail.')

  const posts = {
    1: { id: '1', title: 'First book', author: 'Author1', url: 'https://amazon.com/', content: 'SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1', ups: '100', downs: '10', },
    2: { id: '2', title: 'Second book', author: 'Author2', url: 'https://amazon.com', content: 'SAMPLE TEXT AUTHOR 2 SAMPLE TEXT AUTHOR 2 SAMPLE TEXT', ups: '100', downs: '10', },
    3: { id: '3', title: 'Third book', author: 'Author3', url: null, content: null, ups: null, downs: null },
    4: { id: '4', title: 'Fourth book', author: 'Author4', url: 'https://www.amazon.com/', content: 'SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4', ups: '1000', downs: '0', },
    5: { id: '5', title: 'Fifth book', author: 'Author5', url: 'https://www.amazon.com/', content: 'SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT', ups: '50', downs: '0', },
  }

  const relatedPosts = {
    1: [posts['4']],
    2: [posts['3'], posts['5']],
    3: [posts['2'], posts['1']],
    4: [posts['2'], posts['1']],
    5: [],
  }
  
  if (!event.field && event.length){
    console.log(`Got a BatchInvoke Request. The payload has ${event.length} items to resolve.`);
    return event.map(e => relatedPosts[e.source.id])
  }

  console.log('Got an Invoke Request.')
  let result
  switch (event.field) {
    case 'getPost':
      return posts[event.arguments.id]
    case 'allPosts':
      return Object.values(posts)
    case 'addPost':
      // return the arguments back
      return event.arguments
    case 'addPostErrorWithData':
      result = posts[event.arguments.id]
      // attach additional error information to the post
      result.errorMessage = 'Error with the mutation, data has changed'
      result.errorType = 'MUTATION_ERROR'
      return result
    case 'relatedPosts':
      return relatedPosts[event.source.id]
    default:
      throw new Error('Unknown field, unable to resolve ' + event.field)
  }
}
```

### Returning individual errors
<a name="returning-individual-errors-js"></a>

The previous examples show that it's possible to return a single error from the Lambda function or raise an error from your response handler. For batched invocations, raising an error from the Lambda function flags the entire batch as failed. This might be acceptable for specific scenarios where an irrecoverable error occurs, such as a failed connection to a data store. However, in cases where some items in the batch succeed and others fail, it's possible to return both errors and valid data. Because AWS AppSync requires the batch response to contain the same number of elements as the batch request, you must define a data structure that can differentiate valid data from an error.

For example, if the Lambda function is expected to return a batch of related posts, you could choose to return a list of `Response` objects where each object has optional *data*, *errorMessage*, and *errorType* fields. If the *errorMessage* field is present, it means that an error occurred.

The following code shows how you could update the Lambda function:

```
export const handler = async (event) => {
  console.log('Received event', JSON.stringify(event, null, 3))
  // throw new Error('I always fail.')

  const posts = {
    1: { id: '1', title: 'First book', author: 'Author1', url: 'https://amazon.com/', content: 'SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1 SAMPLE TEXT AUTHOR 1', ups: '100', downs: '10', },
    2: { id: '2', title: 'Second book', author: 'Author2', url: 'https://amazon.com', content: 'SAMPLE TEXT AUTHOR 2 SAMPLE TEXT AUTHOR 2 SAMPLE TEXT', ups: '100', downs: '10', },
    3: { id: '3', title: 'Third book', author: 'Author3', url: null, content: null, ups: null, downs: null },
    4: { id: '4', title: 'Fourth book', author: 'Author4', url: 'https://www.amazon.com/', content: 'SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4 SAMPLE TEXT AUTHOR 4', ups: '1000', downs: '0', },
    5: { id: '5', title: 'Fifth book', author: 'Author5', url: 'https://www.amazon.com/', content: 'SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT AUTHOR 5 SAMPLE TEXT', ups: '50', downs: '0', },
  }

  const relatedPosts = {
    1: [posts['4']],
    2: [posts['3'], posts['5']],
    3: [posts['2'], posts['1']],
    4: [posts['2'], posts['1']],
    5: [],
  }

  if (!event.field && event.length) {
    console.log(`Got a BatchInvoke Request. The payload has ${event.length} items to resolve.`);
    return event.map((e) => {
      // return an error for post 2
      if (e.source.id === '2') {
        return { data: null, errorMessage: 'Error Happened', errorType: 'ERROR' }
      }
      return { data: relatedPosts[e.source.id] }
    })
  }

  console.log('Got an Invoke Request.')
  let result
  switch (event.field) {
    case 'getPost':
      return posts[event.arguments.id]
    case 'allPosts':
      return Object.values(posts)
    case 'addPost':
      // return the arguments back
      return event.arguments
    case 'addPostErrorWithData':
      result = posts[event.arguments.id]
      // attach additional error information to the post
      result.errorMessage = 'Error with the mutation, data has changed'
      result.errorType = 'MUTATION_ERROR'
      return result
    case 'relatedPosts':
      return relatedPosts[event.source.id]
    default:
      throw new Error('Unknown field, unable to resolve ' + event.field)
  }
}
```

Update the `relatedPosts` resolver code:

```
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const {source, args} = ctx
  return {
    operation: ctx.info.fieldName === 'relatedPosts' ? 'BatchInvoke' : 'Invoke',
    payload: { field: ctx.info.fieldName, arguments: args, source },
  };
}

export function response(ctx) {
  const { error, result } = ctx;
  if (error) {
    util.appendError(error.message, error.type, result);
  } else if (result.errorMessage) {
    util.appendError(result.errorMessage, result.errorType, result.data)
  } else if (ctx.info.fieldName === 'relatedPosts') {
      return result.data
  } else {
      return result
  }
}
```

The response handler now checks for errors returned by the Lambda function on `Invoke` operations, checks for errors returned for individual items for `BatchInvoke` operations, and finally checks the `fieldName`. For `relatedPosts`, the function returns `result.data`. For all other fields, the function just returns `result`. For example, see the query below:

```
query AllPosts {
  allPosts {
    id
    title
    content
    url
    ups
    downs
    relatedPosts {
      id
    }
    author
  }
}
```

This query returns a GraphQL response similar to the following:

```
{
  "data": {
    "allPosts": [
      {
        "id": "1",
        "relatedPosts": [
          {
            "id": "4"
          }
        ]
      },
      {
        "id": "2",
        "relatedPosts": null
      },
      {
        "id": "3",
        "relatedPosts": [
          {
            "id": "2"
          },
          {
            "id": "1"
          }
        ]
      },
      {
        "id": "4",
        "relatedPosts": [
          {
            "id": "2"
          },
          {
            "id": "1"
          }
        ]
      },
      {
        "id": "5",
        "relatedPosts": []
      }
    ]
  },
  "errors": [
    {
      "path": [
        "allPosts",
        1,
        "relatedPosts"
      ],
      "data": null,
      "errorType": "ERROR",
      "errorInfo": null,
      "locations": [
        {
          "line": 4,
          "column": 5,
          "sourceName": null
        }
      ],
      "message": "Error Happened"
    }
  ]
}
```

### Configuring the maximum batching size
<a name="configure-max-batch-size-js"></a>

To configure the maximum batching size on a resolver, use the following command in the AWS Command Line Interface (AWS CLI):

```
$ aws appsync create-resolver --api-id <api-id> --type-name Query --field-name relatedPosts \
 --code "<code-goes-here>" \
 --runtime name=APPSYNC_JS,runtimeVersion=1.0.0 \
 --data-source-name "<lambda-datasource>" \ 
 --max-batch-size X
```

**Note**  
Your function's request handler must return the `BatchInvoke` operation for batching to take effect.

# Using local resolvers in AWS AppSync
<a name="tutorial-local-resolvers-js"></a>

AWS AppSync allows you to use supported data sources (AWS Lambda, Amazon DynamoDB, or Amazon OpenSearch Service) to perform various operations. However, in certain scenarios, a call to a supported data source might not be necessary.

This is where the local resolver comes in handy. Instead of calling a remote data source, the local resolver will just **forward** the result of the request handler to the response handler. The field resolution will not leave AWS AppSync.

Local resolvers are useful in many situations. The most popular use case is publishing notifications without triggering a data source call. To demonstrate this use case, let’s build a pub/sub application in which users can publish and subscribe to messages. This example uses *subscriptions*, so if you aren’t familiar with them, you can follow the [Real-Time Data](aws-appsync-real-time-data.md) tutorial.

## Creating the pub/sub app
<a name="create-the-pub-sub-application-js"></a>

First, create a blank GraphQL API by choosing the **Design from scratch** option and configuring the optional details when creating your GraphQL API.

In our pub/sub application, clients can subscribe to and publish messages. Each published message includes a name and data. Add this to the schema:

```
type Channel {
	name: String!
	data: AWSJSON!
}

type Mutation {
	publish(name: String!, data: AWSJSON!): Channel
}

type Query {
	getChannel: Channel
}

type Subscription {
	subscribe(name: String!): Channel
		@aws_subscribe(mutations: ["publish"])
}
```

Next, let’s attach a resolver to the `Mutation.publish` field. In the **Resolvers** pane next to the **Schema** pane, find the `Mutation` type, then the `publish(...): Channel` field, and choose **Attach**.

Create a *None* data source and name it *PageDataSource*. Attach it to your resolver.

Add your resolver implementation using the following snippet:

```
export function request(ctx) {
  return { payload: ctx.args };
}

export function response(ctx) {
  return ctx.result;
}
```

Make sure you create the resolver and save the changes you made.

## Send and subscribe to messages
<a name="send-and-subscribe-to-messages-js"></a>

For clients to receive messages, they must first be subscribed to an inbox.

In the **Queries** pane, execute the `SubscribeToData` subscription:

```
subscription SubscribeToData {
    subscribe(name:"channel") {
        name
        data
    }
}
```

The subscriber will receive messages whenever the `publish` mutation is invoked, but only for messages published with the name `channel`. Let’s try this in the **Queries** pane. While your subscription is still running in the console, open the console in another browser tab and run the following request in the **Queries** pane:

**Note**  
We're using valid JSON strings in this example.

```
mutation PublishData {
    publish(data: "{\"msg\": \"hello world!\"}", name: "channel") {
        data
        name
    }
}
```

The result will look like this:

```
{
  "data": {
    "publish": {
      "data": "{\"msg\":\"hello world!\"}",
      "name": "channel"
    }
  }
}
```
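On the client side, you don't need to hand-escape the `data` argument; `JSON.stringify` produces the string shown above. A minimal sketch:

```
// Build the AWSJSON `data` value for the publish mutation
const data = JSON.stringify({ msg: 'hello world!' });
// data is '{"msg":"hello world!"}'
```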

We just demonstrated the use of local resolvers by publishing a message and receiving it without leaving the AWS AppSync service.

# Combining GraphQL resolvers in AWS AppSync
<a name="tutorial-combining-graphql-resolvers-js"></a>

Resolvers and fields in a GraphQL schema have a 1:1 relationship, which gives you a large degree of flexibility. Because a data source is configured on a resolver independently of the schema, you can resolve or manipulate your GraphQL types through different data sources, mixing and matching within a schema to best meet your needs.

The following scenarios demonstrate how to mix and match data sources in your schema. Before you begin, you should be familiar with configuring data sources and resolvers for AWS Lambda, Amazon DynamoDB, and Amazon OpenSearch Service.

## Example schema
<a name="example-schema-js"></a>

The following schema has a type of `Post` with three `Query` and `Mutation` operations each:

```
type Post {
    id: ID!
    author: String!
    title: String
    content: String
    url: String
    ups: Int
    downs: Int
    version: Int!
}

type Query {
    allPost: [Post]
    getPost(id: ID!): Post
    searchPosts: [Post]
}

type Mutation {
    addPost(
        id: ID!,
        author: String!,
        title: String,
        content: String,
        url: String
    ): Post
    updatePost(
        id: ID!,
        author: String!,
        title: String,
        content: String,
        url: String,
        ups: Int!,
        downs: Int!,
        expectedVersion: Int!
    ): Post
    deletePost(id: ID!): Post
}
```

In this example, you would have a total of six resolvers, each needing a data source. One way to solve this would be to hook them all up to a single Amazon DynamoDB table, called `Posts`, in which the `allPost` field runs a scan and the `searchPosts` field runs a query (see [JavaScript resolver function reference for DynamoDB](https://docs.aws.amazon.com/appsync/latest/devguide/js-resolver-reference-dynamodb.html)). However, you aren't limited to Amazon DynamoDB; different data sources such as Lambda or OpenSearch Service exist to meet your business requirements.

## Altering data through resolvers
<a name="alter-data-through-resolvers-js"></a>

You may need to return results from a third-party database that isn't directly supported by AWS AppSync data sources, or to perform complex modifications on data before it's returned to API clients, for example reconciling data type formats such as timestamp differences across clients, or handling backward compatibility. In these cases, connecting AWS Lambda functions as a data source to your AWS AppSync API is the appropriate solution. For illustrative purposes, in the following example, an AWS Lambda function manipulates data fetched from a third-party data store:

```
export const handler = async (event) => {
    // fetch data from a third-party data store
    const result = await fetcher()

    // apply complex business logic
    const data = transform(result)

    // return to AppSync
    return data
```

This is a perfectly valid Lambda function that could be attached to the `allPost` field in the GraphQL schema so that any query returning all the results receives the transformed data.

## DynamoDB and OpenSearch Service
<a name="ddb-and-es-js"></a>

For some applications, you might perform mutations or simple lookup queries against DynamoDB and have a background process transfer documents to OpenSearch Service. You can then simply attach the `searchPosts` resolver to the OpenSearch Service data source and return search results (from data that originated in DynamoDB) using a GraphQL query. This can be extremely powerful when adding advanced search operations to your applications, such as keyword searches, fuzzy word matches, or even geospatial lookups. Transferring data from DynamoDB can be done through an ETL process, or alternatively, you can stream from DynamoDB using Lambda.

To get started with these particular data sources, see our [DynamoDB](https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-resolvers-js.html) and [Lambda](https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers-js.html) tutorials.

For example, using the schema from our previous tutorial, the following mutation adds an item to DynamoDB:

```
mutation addPost {
  addPost(
    id: 123
    author: "Nadia"
    title: "Our first post!"
    content: "This is our first post."
    url: "https://aws.amazon.com/appsync/"
  ) {
    id
    author
    title
    content
    url
    ups
    downs
    version
  }
}
```

This writes data to DynamoDB, which then streams data via Lambda to Amazon OpenSearch Service, which you then use to search for posts by different fields. For example, since the data is in Amazon OpenSearch Service, you can search either the author or content fields with free-form text, even with spaces, as follows:

```
query searchName{
    searchAuthor(name:"   Nadia   "){
        id
        title
        content
    }
}

---------- or ----------

query searchContent{
    searchContent(text:"test"){
        id
        title
        content
    }
}
```

Because the data is written directly to DynamoDB, you can still perform efficient list or item lookup operations against the table with the `allPost{...}` and `getPost{...}` queries. This stack uses the following example code for DynamoDB streams:

**Note**  
This Python code is an example and isn't meant to be used in production code.

```
import boto3
import requests
from requests_aws4auth import AWS4Auth

region = '' # e.g. us-east-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

host = '' # the OpenSearch Service domain, e.g. https://search-mydomain.us-west-1.es.amazonaws.com
index = 'lambda-index'
datatype = '_doc'
url = host + '/' + index + '/' + datatype + '/'

headers = { "Content-Type": "application/json" }

def handler(event, context):
    count = 0
    for record in event['Records']:
        # Get the primary key for use as the OpenSearch ID
        id = record['dynamodb']['Keys']['id']['S']

        if record['eventName'] == 'REMOVE':
            r = requests.delete(url + id, auth=awsauth)
        else:
            document = record['dynamodb']['NewImage']
            r = requests.put(url + id, auth=awsauth, json=document, headers=headers)
        count += 1
    return str(count) + ' records processed.'
```

You can then use DynamoDB streams to attach this to a DynamoDB table with a primary key of `id`, and any changes to the source of DynamoDB would stream into your OpenSearch Service domain. For more information about configuring this, see the [DynamoDB Streams documentation](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html).

# Using Amazon OpenSearch Service resolvers in AWS AppSync
<a name="tutorial-elasticsearch-resolvers-js"></a>

AWS AppSync supports using Amazon OpenSearch Service from domains that you have provisioned in your own AWS account, provided they don’t exist inside a VPC. After your domains are provisioned, you can connect to them using a data source, at which point you can configure a resolver in the schema to perform GraphQL operations such as queries, mutations, and subscriptions. This tutorial will take you through some common examples.

For more information, see our [JavaScript resolver function reference for OpenSearch](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-elasticsearch-js.html).

## Create a new OpenSearch Service domain
<a name="create-a-new-es-domain-js"></a>

To get started with this tutorial, you need an existing OpenSearch Service domain. If you don’t have one, you can use the following sample. Note that it can take up to 15 minutes for an OpenSearch Service domain to be created before you can move on to integrating it with an AWS AppSync data source.

```
aws cloudformation create-stack --stack-name AppSyncOpenSearch \
--template-url https://s3.us-west-2.amazonaws.com/awsappsync/resources/elasticsearch/ESResolverCFTemplate.yaml \
--parameters ParameterKey=OSDomainName,ParameterValue=ddtestdomain ParameterKey=Tier,ParameterValue=development \
--capabilities CAPABILITY_NAMED_IAM
```

You can launch the following AWS CloudFormation stack in the US-West-2 (Oregon) Region in your AWS account:

 [https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/elasticsearch/ESResolverCFTemplate.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/elasticsearch/ESResolverCFTemplate.yaml)

## Configure a data source for OpenSearch Service
<a name="configure-data-source-for-es-js"></a>

After the OpenSearch Service domain is created, navigate to your AWS AppSync GraphQL API and choose the **Data Sources** tab. Choose **Create data source** and enter a friendly name for the data source, such as “*oss*”. Then, choose **Amazon OpenSearch domain** for **Data source type**, choose the appropriate Region, and you should see your OpenSearch Service domain listed. After selecting it, you can either create a new role (AWS AppSync will assign the role the appropriate permissions) or choose an existing role that has the appropriate inline policy for your domain.

You’ll also need to set up a trust relationship with AWS AppSync for that role:

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "appsync.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

------

Additionally, the OpenSearch Service domain has its own **Access Policy** that you can modify through the Amazon OpenSearch Service console. You must add a policy similar to the one below with the appropriate actions and resources for the OpenSearch Service domain. Note that the **Principal** will be the AWS AppSync data source role, which you can find in the IAM console if you let AWS AppSync create the role for you.

## Connecting a resolver
<a name="connecting-a-resolver-js"></a>

Now that the data source is connected to your OpenSearch Service domain, you can connect it to your GraphQL schema with a resolver as shown in the following example:

```
type Query {
  getPost(id: ID!): Post
  allPosts: [Post]
}

type Mutation {
  addPost(id: ID!, author: String, title: String, url: String, ups: Int, downs: Int, content: String): AWSJSON
}

type Post {
  id: ID!
  author: String
  title: String
  url: String
  ups: Int
  downs: Int
  content: String
}
```

Note that there is a user-defined `Post` type with a field of `id`. In the following examples, we assume there is a process (which can be automated) for putting this type into your OpenSearch Service domain, which would map to a path root of `/post/_doc` where `post` is the index. From this root path, you can perform individual document searches, wildcard searches with `/id/post*`, or multi-document searches with a path of `/post/_search`. For example, if you have another type called `User`, you can index documents under a new index called `user`, then perform searches with a **path** of `/user/_search`. 
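The mapping from index name to request path can be sketched with a couple of hypothetical helpers (the `post` index name comes from the example above):

```
const index = 'post';
// a single document, e.g. documentPath('123') gives '/post/_doc/123'
const documentPath = (id) => `/${index}/_doc/${id}`;
// a multi-document search across the whole index
const searchPath = `/${index}/_search`;
```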

From the **Schema** editor in the AWS AppSync console, modify the preceding `Posts` schema to include a `searchPosts` query:

```
type Query {
  getPost(id: ID!): Post
  allPosts: [Post]
  searchPosts: [Post]
}
```

Save the schema. In the **Resolvers** pane, find `searchPosts` and choose **Attach**. Choose your OpenSearch Service data source and save the resolver. Update your resolver's code using the snippet below:

```
import { util } from '@aws-appsync/utils'

/**
 * Searches for documents by using an input term
 * @param {import('@aws-appsync/utils').Context} ctx the context
 * @returns {*} the request
 */
export function request(ctx) {
	return {
		operation: 'GET',
		path: `/post/_search`,
		params: { body: { from: 0, size: 50 } },
	}
}

/**
 * Returns the fetched items
 * @param {import('@aws-appsync/utils').Context} ctx the context
 * @returns {*} the result
 */
export function response(ctx) {
	if (ctx.error) {
		util.error(ctx.error.message, ctx.error.type)
	}
	return ctx.result.hits.hits.map((hit) => hit._source)
}
```

This assumes that the preceding schema has documents that have been indexed in OpenSearch Service under the `post` index. If you structure your data differently, you’ll need to update accordingly.

## Modifying your searches
<a name="modifying-your-searches-js"></a>

The preceding resolver request handler performs a simple query for all records. Suppose you want to search by a specific author. Furthermore, suppose you want that author to be an argument defined in your GraphQL query. In the **Schema** editor of the AWS AppSync console, add an `allPostsByAuthor` query:

```
type Query {
  getPost(id: ID!): Post
  allPosts: [Post]
  allPostsByAuthor(author: String!): [Post]
  searchPosts: [Post]
}
```

In the **Resolvers** pane, find `allPostsByAuthor` and choose **Attach**. Choose the OpenSearch Service data source and use the following code:

```
import { util } from '@aws-appsync/utils'

/**
 * Searches for documents by `author`
 * @param {import('@aws-appsync/utils').Context} ctx the context
 * @returns {*} the request
 */
export function request(ctx) {
	return {
		operation: 'GET',
		path: '/post/_search',
		params: {
			body: {
				from: 0,
				size: 50,
				query: { match: { author: ctx.args.author } },
			},
		},
	}
}

/**
 * Returns the fetched items
 * @param {import('@aws-appsync/utils').Context} ctx the context
 * @returns {*} the result
 */
export function response(ctx) {
	if (ctx.error) {
		util.error(ctx.error.message, ctx.error.type)
	}
	return ctx.result.hits.hits.map((hit) => hit._source)
}
```

Note that the `body` is populated with a `match` query on the `author` field, which is passed through from the client as an argument. Optionally, you could use prepopulated information, such as standard text.
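If you want to match against more than one field, the same request shape extends naturally. The following is a sketch of a hypothetical handler that uses OpenSearch’s `multi_match` query; the `fields` list is an assumption rather than part of the tutorial schema, and in a resolver module this function would be the exported `request` handler:

```javascript
// Sketch: a request handler variant that searches several fields at once.
// The fields list below is illustrative; adjust it to match your documents.
function request(ctx) {
  const { author } = ctx.args;
  return {
    operation: 'GET',
    path: '/post/_search',
    params: {
      body: {
        from: 0,
        size: 50,
        // multi_match runs the same text query against multiple fields
        query: { multi_match: { query: author, fields: ['author', 'title'] } },
      },
    },
  };
}
```

With this variant, `allPostsByAuthor(author: "Fred")` would also return posts whose title mentions the search term.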

## Adding data to OpenSearch Service
<a name="adding-data-to-es-js"></a>

You may want to add data to your OpenSearch Service domain as the result of a GraphQL mutation. This is a powerful mechanism for searching and other purposes. Because you can use GraphQL subscriptions to [make your data real-time](aws-appsync-real-time-data.md), it can serve as a mechanism for notifying clients of updates to data in your OpenSearch Service domain.

Return to the **Schema** page in the AWS AppSync console and select **Attach** for the `addPost()` mutation. Select the OpenSearch Service data source again and use the following code:

```
import { util } from '@aws-appsync/utils'

/**
 * Adds a post to the index
 * @param {import('@aws-appsync/utils').Context} ctx the context
 * @returns {*} the request
 */
export function request(ctx) {
	return {
		operation: 'PUT',
		path: `/post/_doc/${ctx.args.id}`,
		params: { body: ctx.args },
	}
}

/**
 * Returns the inserted post
 * @param {import('@aws-appsync/utils').Context} ctx the context
 * @returns {*} the result
 */
export function response(ctx) {
	if (ctx.error) {
		util.error(ctx.error.message, ctx.error.type)
	}
	return ctx.result
}
```

As before, this is an example of how your data might be structured. If you have different field names or indexes, you’ll need to update the `path` and `body`. This example also shows how to use `context.arguments` (which can also be written as `ctx.args`) in your request handler.

## Retrieving a single document
<a name="retrieving-a-single-document-js"></a>

Finally, if you want to use the `getPost(id:ID)` query in your schema to return an individual document, find this query in the **Schema** editor of the AWS AppSync console and choose **Attach**. Select the OpenSearch Service data source again and use the following code:

```
import { util } from '@aws-appsync/utils'

/**
 * Gets a single post by its `id`
 * @param {import('@aws-appsync/utils').Context} ctx the context
 * @returns {*} the request
 */
export function request(ctx) {
	return {
		operation: 'GET',
		path: `/post/_doc/${ctx.args.id}`,
	}
}

/**
 * Returns the post
 * @param {import('@aws-appsync/utils').Context} ctx the context
 * @returns {*} the result
 */
export function response(ctx) {
	if (ctx.error) {
		util.error(ctx.error.message, ctx.error.type)
	}
	return ctx.result._source
}
```

## Perform queries and mutations
<a name="tutorial-elasticsearch-resolvers-perform-queries-mutations-js"></a>

You should now be able to perform GraphQL operations against your OpenSearch Service domain. Navigate to the **Queries** tab of the AWS AppSync console and add a new record:

```
mutation AddPost {
    addPost(
        id: "12345"
        author: "Fred"
        title: "My first book"
        content: "This will be fun to write!"
        url: "publisher website"
        ups: 100
        downs: 20
    )
}
```

You’ll see the result of the mutation on the right. Similarly, you can now run a `searchPosts` query against your OpenSearch Service domain:

```
query search {
    searchPosts {
        id
        title
        author
        content
    }
}
```

## Best practices
<a name="best-practices-js"></a>
+ OpenSearch Service should be used for querying data, not as your primary database. You may want to use OpenSearch Service in conjunction with Amazon DynamoDB as outlined in [Combining GraphQL resolvers](tutorial-combining-graphql-resolvers-js.md).
+ Restrict access to your domain by allowing only the AWS AppSync service role to access the cluster.
+ You can start small in development, with the lowest-cost cluster, and then move to a larger cluster with high availability (HA) as you move into production.

# Performing DynamoDB transactions in AWS AppSync
<a name="tutorial-dynamodb-transact-js"></a>

AWS AppSync supports using Amazon DynamoDB transaction operations across one or more tables in a single Region. Supported operations are `TransactGetItems` and `TransactWriteItems`. By using these features in AWS AppSync, you can perform tasks such as:
+ Passing a list of keys in a single query and returning the results from a table
+ Reading records from one or more tables in a single query
+ Writing records in transactions to one or more tables in an all-or-nothing way
+ Running transactions when some conditions are satisfied
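The last bullet can be made concrete: each entry in `transactItems` may carry a `condition` that must hold for the entire transaction to commit. The following is a sketch of a hypothetical debit item that succeeds only when the balance covers the amount; `buildConditionalDebit` is an illustrative helper, and attribute values are shown pre-marshalled rather than built with `util.dynamodb.toMapValues`:

```javascript
// Sketch: a TransactWriteItems item guarded by a condition expression.
// If any item's condition fails, DynamoDB cancels the whole transaction.
function buildConditionalDebit(accountNumber, amount) {
  return {
    table: 'savingAccounts',
    operation: 'UpdateItem',
    key: { accountNumber: { S: accountNumber } },
    update: {
      expression: 'SET balance = balance - :amount',
      expressionValues: { ':amount': { N: `${amount}` } },
    },
    // Only commit if the account can cover the debit
    condition: {
      expression: 'balance >= :amount',
      expressionValues: { ':amount': { N: `${amount}` } },
    },
  };
}
```

An item shaped like this would be placed in the `transactItems` array of a `TransactWriteItems` request, alongside unconditional items.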

## Permissions
<a name="permissions-js"></a>

Like other resolvers, you need to create a data source in AWS AppSync and either create a role or use an existing one. Because transaction operations require different permissions on DynamoDB tables, you need to grant the configured role permissions for read or write actions:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/TABLENAME",
                "arn:aws:dynamodb:us-east-1:111122223333:table/TABLENAME/*"
            ]
        }
    ]
}
```

------

**Note**  
Roles are tied to data sources in AWS AppSync, and resolvers on fields are invoked against a data source. Data sources configured to fetch against DynamoDB only have one table specified to keep configurations simple. Therefore, when performing a transaction operation against multiple tables in a single resolver, which is a more advanced task, you must grant the role on that data source access to any tables the resolver will interact with. This would be done in the **Resource** field in the IAM policy above. Configuration of the transaction calls against the tables is done in the resolver code, which we describe below.

## Data source
<a name="data-source-js"></a>

For the sake of simplicity, we’ll use the same data source for all the resolvers used in this tutorial. 

We’ll have two tables called **savingAccounts** and **checkingAccounts**, both with the `accountNumber` as a partition key, and a **transactionHistory** table with `transactionId` as partition key. You can use the CLI commands below to create your tables. Make sure to replace `region` with your Region.

**With the CLI**

```
aws dynamodb create-table --table-name savingAccounts \
  --attribute-definitions AttributeName=accountNumber,AttributeType=S \
  --key-schema AttributeName=accountNumber,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
  --table-class STANDARD --region region

aws dynamodb create-table --table-name checkingAccounts \
  --attribute-definitions AttributeName=accountNumber,AttributeType=S \
  --key-schema AttributeName=accountNumber,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
  --table-class STANDARD --region region

aws dynamodb create-table --table-name transactionHistory \
  --attribute-definitions AttributeName=transactionId,AttributeType=S \
  --key-schema AttributeName=transactionId,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
  --table-class STANDARD --region region
```

In the AWS AppSync console, in **Data sources**, create a new DynamoDB data source and name it **TransactTutorial**. Select **savingAccounts** as the table (the specific table doesn't matter when using transactions). Choose to create a new role, then create the data source. You can review the data source configuration to see the name of the generated role. In the IAM console, you can then add an inline policy that allows the data source to interact with all three tables.

Replace `region` and `accountID` with your Region and account ID:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/savingAccounts",
                "arn:aws:dynamodb:us-east-1:111122223333:table/savingAccounts/*",
                "arn:aws:dynamodb:us-east-1:111122223333:table/checkingAccounts",
                "arn:aws:dynamodb:us-east-1:111122223333:table/checkingAccounts/*",
                "arn:aws:dynamodb:us-east-1:111122223333:table/transactionHistory",
                "arn:aws:dynamodb:us-east-1:111122223333:table/transactionHistory/*"
            ]
        }
    ]
}
```

------

## Transactions
<a name="transactions-js"></a>

For this example, the context is a classic banking transaction, where we’ll use `TransactWriteItems` to:
+ Transfer money from saving accounts to checking accounts
+ Generate new transaction records for each transaction

And then we’ll use `TransactGetItems` to retrieve details from saving accounts and checking accounts.

**Warning**  
`TransactWriteItems` is not supported when used with conflict detection and resolution. These settings must be disabled to prevent possible errors.

We define our GraphQL schema as follows:

```
type SavingAccount {
    accountNumber: String!
    username: String
    balance: Float
}

type CheckingAccount {
    accountNumber: String!
    username: String
    balance: Float
}

type TransactionHistory {
    transactionId: ID!
    from: String
    to: String
    amount: Float
}

type TransactionResult {
    savingAccounts: [SavingAccount]
    checkingAccounts: [CheckingAccount]
    transactionHistory: [TransactionHistory]
}

input SavingAccountInput {
    accountNumber: String!
    username: String
    balance: Float
}

input CheckingAccountInput {
    accountNumber: String!
    username: String
    balance: Float
}

input TransactionInput {
    savingAccountNumber: String!
    checkingAccountNumber: String!
    amount: Float!
}

type Query {
    getAccounts(savingAccountNumbers: [String], checkingAccountNumbers: [String]): TransactionResult
}

type Mutation {
    populateAccounts(savingAccounts: [SavingAccountInput], checkingAccounts: [CheckingAccountInput]): TransactionResult
    transferMoney(transactions: [TransactionInput]): TransactionResult
}
```

### TransactWriteItems - Populate accounts
<a name="transactwriteitems-populate-accounts-js"></a>

In order to transfer money between accounts, we need to populate the table with the details. We’ll use the GraphQL operation `Mutation.populateAccounts` to do so.

In the **Schema** section, choose **Attach** next to the `Mutation.populateAccounts` operation. Choose the **TransactTutorial** data source and choose **Create**.

Now use the following code:

```
import { util } from '@aws-appsync/utils'

export function request(ctx) {
	const { savingAccounts, checkingAccounts } = ctx.args

	const savings = savingAccounts.map(({ accountNumber, ...rest }) => {
		return {
			table: 'savingAccounts',
			operation: 'PutItem',
			key: util.dynamodb.toMapValues({ accountNumber }),
			attributeValues: util.dynamodb.toMapValues(rest),
		}
	})

	const checkings = checkingAccounts.map(({ accountNumber, ...rest }) => {
		return {
			table: 'checkingAccounts',
			operation: 'PutItem',
			key: util.dynamodb.toMapValues({ accountNumber }),
			attributeValues: util.dynamodb.toMapValues(rest),
		}
	})
	return {
		version: '2018-05-29',
		operation: 'TransactWriteItems',
		transactItems: [...savings, ...checkings],
	}
}

export function response(ctx) {
	if (ctx.error) {
		util.error(ctx.error.message, ctx.error.type, null, ctx.result.cancellationReasons)
	}
	const { savingAccounts: sInput, checkingAccounts: cInput } = ctx.args
	const keys = ctx.result.keys
	const savingAccounts = sInput.map((_, i) => keys[i])
	const sLength = sInput.length
	const checkingAccounts = cInput.map((_, i) => keys[sLength + i])
	return { savingAccounts, checkingAccounts }
}
```
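The response handler above relies on `TransactWriteItems` returning `keys` in the same order as the `transactItems` were sent: the first `sInput.length` keys belong to the saving accounts and the remainder to the checking accounts. A small sketch of that index arithmetic with plain data:

```javascript
// Sketch: splitting an ordered keys array back into its two input groups.
const sInput = [{ accountNumber: '1' }, { accountNumber: '2' }]; // savings inputs
const cInput = [{ accountNumber: '1' }];                         // checking inputs
const keys = [
  { accountNumber: '1' }, { accountNumber: '2' }, // savings keys come first
  { accountNumber: '1' },                         // then checking keys
];

const savingAccounts = sInput.map((_, i) => keys[i]);
const checkingAccounts = cInput.map((_, i) => keys[sInput.length + i]);
console.log(savingAccounts.length, checkingAccounts.length);
```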

Save the resolver and navigate to the **Queries** section of the AWS AppSync console to populate the accounts.

Execute the following mutation:

```
mutation populateAccounts {
  populateAccounts (
    savingAccounts: [
      {accountNumber: "1", username: "Tom", balance: 100},
      {accountNumber: "2", username: "Amy", balance: 90},
      {accountNumber: "3", username: "Lily", balance: 80},
    ]
    checkingAccounts: [
      {accountNumber: "1", username: "Tom", balance: 70},
      {accountNumber: "2", username: "Amy", balance: 60},
      {accountNumber: "3", username: "Lily", balance: 50},
    ]) {
    savingAccounts {
      accountNumber
    }
    checkingAccounts {
      accountNumber
    }
  }
}
```

We populated three saving accounts and three checking accounts in one mutation.

Use the DynamoDB console to validate that data shows up in both the **savingAccounts** and **checkingAccounts** tables.

### TransactWriteItems - Transfer money
<a name="transactwriteitems-transfer-money-js"></a>

Attach a resolver to the `transferMoney` mutation with the following code. For each transfer, we need to debit the saving account, credit the checking account, and record the transfer in the **transactionHistory** table.

```
import { util } from '@aws-appsync/utils'

export function request(ctx) {
	const transactions = ctx.args.transactions

	const savings = []
	const checkings = []
	const history = []
	transactions.forEach((t) => {
		const { savingAccountNumber, checkingAccountNumber, amount } = t
		savings.push({
			table: 'savingAccounts',
			operation: 'UpdateItem',
			key: util.dynamodb.toMapValues({ accountNumber: savingAccountNumber }),
			update: {
				expression: 'SET balance = balance - :amount',
				expressionValues: util.dynamodb.toMapValues({ ':amount': amount }),
			},
		})
		checkings.push({
			table: 'checkingAccounts',
			operation: 'UpdateItem',
			key: util.dynamodb.toMapValues({ accountNumber: checkingAccountNumber }),
			update: {
				expression: 'SET balance = balance + :amount',
				expressionValues: util.dynamodb.toMapValues({ ':amount': amount }),
			},
		})
		history.push({
			table: 'transactionHistory',
			operation: 'PutItem',
			key: util.dynamodb.toMapValues({ transactionId: util.autoId() }),
			attributeValues: util.dynamodb.toMapValues({
				from: savingAccountNumber,
				to: checkingAccountNumber,
				amount,
			}),
		})
	})

	return {
		version: '2018-05-29',
		operation: 'TransactWriteItems',
		transactItems: [...savings, ...checkings, ...history],
	}
}

export function response(ctx) {
	if (ctx.error) {
		util.error(ctx.error.message, ctx.error.type, null, ctx.result.cancellationReasons)
	}
	const tInput = ctx.args.transactions
	const tLength = tInput.length
	const keys = ctx.result.keys
	const savingAccounts = tInput.map((_, i) => keys[tLength * 0 + i])
	const checkingAccounts = tInput.map((_, i) => keys[tLength * 1 + i])
	const transactionHistory = tInput.map((_, i) => keys[tLength * 2 + i])
	return { savingAccounts, checkingAccounts, transactionHistory }
}
```

Now, navigate to the **Queries** section of the AWS AppSync console and execute the **transferMoney** mutation as follows:

```
mutation write {
  transferMoney(
    transactions: [
      {savingAccountNumber: "1", checkingAccountNumber: "1", amount: 7.5},
      {savingAccountNumber: "2", checkingAccountNumber: "2", amount: 6.0},
      {savingAccountNumber: "3", checkingAccountNumber: "3", amount: 3.3}
    ]) {
    savingAccounts {
      accountNumber
    }
    checkingAccounts {
      accountNumber
    }
    transactionHistory {
      transactionId
    }
  }
}
```

We sent three banking transactions in one mutation. Use the DynamoDB console to validate that data shows up in the **savingAccounts**, **checkingAccounts**, and **transactionHistory** tables.

### TransactGetItems - Retrieve accounts
<a name="transactgetitems-retrieve-accounts-js"></a>

In order to retrieve the details from saving and checking accounts in a single transactional request, we’ll attach a resolver to the `Query.getAccounts` GraphQL operation on our schema. Select **Attach**, then pick the same **TransactTutorial** data source created at the beginning of the tutorial. Use the following code:

```
import { util } from '@aws-appsync/utils'

export function request(ctx) {
	const { savingAccountNumbers, checkingAccountNumbers } = ctx.args

	const savings = savingAccountNumbers.map((accountNumber) => {
		return { table: 'savingAccounts', key: util.dynamodb.toMapValues({ accountNumber }) }
	})
	const checkings = checkingAccountNumbers.map((accountNumber) => {
		return { table: 'checkingAccounts', key: util.dynamodb.toMapValues({ accountNumber }) }
	})
	return {
		version: '2018-05-29',
		operation: 'TransactGetItems',
		transactItems: [...savings, ...checkings],
	}
}

export function response(ctx) {
	if (ctx.error) {
		util.error(ctx.error.message, ctx.error.type, null, ctx.result.cancellationReasons)
	}

	const { savingAccountNumbers: sInput, checkingAccountNumbers: cInput } = ctx.args
	const items = ctx.result.items
	const savingAccounts = sInput.map((_, i) => items[i])
	const sLength = sInput.length
	const checkingAccounts = cInput.map((_, i) => items[sLength + i])
	return { savingAccounts, checkingAccounts }
}
```

Save the resolver and navigate to the **Queries** section of the AWS AppSync console. In order to retrieve the saving and checking accounts, execute the following query:

```
query getAccounts {
  getAccounts(
    savingAccountNumbers: ["1", "2", "3"],
    checkingAccountNumbers: ["1", "2"]
  ) {
    savingAccounts {
      accountNumber
      username
      balance
    }
    checkingAccounts {
      accountNumber
      username
      balance
    }
  }
}
```

We have successfully demonstrated the use of DynamoDB transactions using AWS AppSync.

# Using DynamoDB batch operations in AWS AppSync
<a name="tutorial-dynamodb-batch-js"></a>

AWS AppSync supports using Amazon DynamoDB batch operations across one or more tables in a single Region. Supported operations are `BatchGetItem`, `BatchPutItem`, and `BatchDeleteItem`. By using these features in AWS AppSync, you can perform tasks such as:
+ Passing a list of keys in a single query and returning the results from a table
+ Reading records from one or more tables in a single query
+ Writing records in bulk to one or more tables
+ Conditionally writing or deleting records in multiple tables that might have a relation

Batch operations in AWS AppSync have two key differences from non-batched operations:
+ The data source role must have permissions to all tables that the resolver will access.
+ The table specification for a resolver is part of the request object.
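The second difference is visible in the request shape itself: the target table is named inside the request rather than fixed by the data source. The following is a sketch of a minimal `BatchGetItem` request builder; `buildBatchGet` is an illustrative helper (the `Posts` table and `id` key anticipate the example below), and key values are shown pre-marshalled rather than built with `util.dynamodb.toMapValues`:

```javascript
// Sketch: in batch operations, the target table(s) are named in the request.
function buildBatchGet(ids) {
  return {
    operation: 'BatchGetItem',
    tables: {
      Posts: {
        keys: ids.map((id) => ({ id: { S: `${id}` } })), // one key map per requested id
        consistentRead: true,
      },
    },
  };
}
```

Because the tables appear in the request, a single resolver can address several tables at once, which is why the data source role needs permissions on all of them.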

## Single table batches
<a name="single-table-batch-js"></a>

**Warning**  
`BatchPutItem` and `BatchDeleteItem` are not supported when used with conflict detection and resolution. These settings must be disabled to prevent possible errors.

To get started, let’s create a new GraphQL API. In the AWS AppSync console, choose **Create API**, **GraphQL APIs**, and **Design from scratch**. Name your API `BatchTutorial API`, choose **Next**, and on the **Specify GraphQL resources** step, choose **Create GraphQL resources later** and click **Next**. Review your details and create the API. Go to the **Schema** page and paste the following schema, noting that for the query, we’ll pass in a list of IDs:

```
type Post {
    id: ID!
    title: String
}

input PostInput {
    id: ID!
    title: String
}

type Query {
    batchGet(ids: [ID]): [Post]
}

type Mutation {
    batchAdd(posts: [PostInput]): [Post]
    batchDelete(ids: [ID]): [Post]
}
```

Save your schema and choose **Create Resources** at the top of the page. Choose **Use existing type** and select the `Post` type. Name your table `Posts`. Make sure the **Primary Key** is set to `id`, unselect **Automatically generate GraphQL** (you’ll provide your own code), and select **Create**. AWS AppSync creates a new DynamoDB table and a data source connected to the table with the appropriate roles.

However, there are still a couple of permissions you need to add to the role. Go to the **Data sources** page and choose the new data source. Under **Select an existing role**, you'll notice that a role was automatically created for the table. Take note of the role name (it should look something like `appsync-ds-ddb-aaabbbcccddd-Posts`), then go to the IAM console ([https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/)). In the IAM console, choose **Roles**, then choose your role from the table. Under **Permissions policies**, expand the policy (it should have a name similar to the role name) and choose **Edit**. You need to add batch permissions to the policy, specifically `dynamodb:BatchGetItem` and `dynamodb:BatchWriteItem`. It'll look something like this:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem",
                "dynamodb:BatchWriteItem",
                "dynamodb:BatchGetItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/Posts",
                "arn:aws:dynamodb:us-east-1:111122223333:table/Posts/*"
            ]
        }
    ]
}
```

------

Choose **Next**, then **Save changes**. Your policy should allow batch processing now.

Back in the AWS AppSync console, go to the **Schema** page and select **Attach** next to the `Mutation.batchAdd` field. Create your resolver using the `Posts` table as the data source. In the code editor, replace the handlers with the snippet below. This snippet automatically takes each item in the GraphQL `input PostInput` type and builds a map, which is needed for the `BatchPutItem` operation:

```
import { util } from "@aws-appsync/utils";

export function request(ctx) {
  return {
    operation: "BatchPutItem",
    tables: {
      Posts: ctx.args.posts.map((post) => util.dynamodb.toMapValues(post)),
    },
  };
}

export function response(ctx) {
  if (ctx.error) {
    util.error(ctx.error.message, ctx.error.type);
  }
  return ctx.result.data.Posts;
}
```

Navigate to the **Queries** page of the AWS AppSync console and run the following `batchAdd` mutation:

```
mutation add {
    batchAdd(posts:[{
            id: 1 title: "Running in the Park"},{
            id: 2 title: "Playing fetch"
        }]){
            id
            title
    }
}
```

You should see the results printed on the screen; you can validate this by scanning the `Posts` table in the DynamoDB console for the values you wrote.

Next, repeat the process of attaching a resolver, but for the `Query.batchGet` field using the `Posts` table as the data source. Replace the handlers with the code below. This automatically takes each item in the GraphQL `ids` list argument and builds the key map needed for the `BatchGetItem` operation:

```
import { util } from "@aws-appsync/utils";

export function request(ctx) {
  return {
    operation: "BatchGetItem",
    tables: {
      Posts: {
        keys: ctx.args.ids.map((id) => util.dynamodb.toMapValues({ id })),
        consistentRead: true,
      },
    },
  };
}

export function response(ctx) {
  if (ctx.error) {
    util.error(ctx.error.message, ctx.error.type);
  }
  return ctx.result.data.Posts;
}
```

Now, go back to the **Queries** page of the AWS AppSync console and run the following `batchGet` query:

```
query get {
    batchGet(ids:[1,2,3]){
        id
        title
    }
}
```

This should return the results for the two `id` values that you added earlier. Note that a `null` value was returned for the `id` with a value of `3`. This is because there was no record in your `Posts` table with that value yet. Also note that AWS AppSync returns the results in the same order as the keys passed to the query, which is an additional feature that AWS AppSync performs on your behalf. So, if you switch to `batchGet(ids:[1,3,2])`, you’ll see that the order changed. You’ll also know which `id` returned a `null` value.
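Because AWS AppSync preserves key order, pairing each requested `id` with its result, or `null`, is straightforward on the client side. A sketch, assuming results arrive in the order requested:

```javascript
// Sketch: pairing requested ids with an order-preserving batch result.
// A null entry means no record existed for that id.
const ids = ['1', '3', '2'];
const results = [
  { id: '1', title: 'Running in the Park' },
  null, // id 3 has no record yet
  { id: '2', title: 'Playing fetch' },
];

const byId = ids.map((id, i) => ({ id, post: results[i] }));
const missing = byId.filter((entry) => entry.post === null).map((entry) => entry.id);
console.log(missing);
```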

Finally, attach one more resolver to the `Mutation.batchDelete` field using the `Posts` table as the data source. Replace the handlers with the code below. This automatically takes each item in the GraphQL `ids` list argument and builds the key map needed for the `BatchDeleteItem` operation:

```
import { util } from "@aws-appsync/utils";

export function request(ctx) {
  return {
    operation: "BatchDeleteItem",
    tables: {
      Posts: ctx.args.ids.map((id) => util.dynamodb.toMapValues({ id })),
    },
  };
}

export function response(ctx) {
  if (ctx.error) {
    util.error(ctx.error.message, ctx.error.type);
  }
  return ctx.result.data.Posts;
}
```

Now, go back to the **Queries** page of the AWS AppSync console and run the following `batchDelete` mutation:

```
mutation delete {
    batchDelete(ids:[1,2]){ id }
}
```

The records with `id` `1` and `2` should now be deleted. If you re-run the `batchGet()` query from earlier, these should return `null`.

## Multi-table batch
<a name="multi-table-batch-js"></a>

**Warning**  
`BatchPutItem` and `BatchDeleteItem` are not supported when used with conflict detection and resolution. These settings must be disabled to prevent possible errors.

AWS AppSync also enables you to perform batch operations across tables. Let’s build a more complex application. Imagine we are building a pet health app wherein sensors report the pet's location and body temperature. The sensors are battery powered and attempt to connect to the network every few minutes. When a sensor establishes a connection, it sends its readings to our AWS AppSync API. Triggers then analyze the data so a dashboard can be presented to the pet owner. Let’s focus on representing the interactions between the sensor and the backend data store.

In the AWS AppSync console, choose **Create API**, **GraphQL APIs**, and **Design from scratch**. Name your API `MultiBatchTutorial API`, choose **Next**, and on the **Specify GraphQL resources** step, choose **Create GraphQL resources later** and click **Next**. Review your details and create the API. Go to the **Schema** page and paste and save the following schema:

```
type Mutation {
    # Register a batch of readings
    recordReadings(tempReadings: [TemperatureReadingInput], locReadings: [LocationReadingInput]): RecordResult
    # Delete a batch of readings
    deleteReadings(tempReadings: [TemperatureReadingInput], locReadings: [LocationReadingInput]): RecordResult
}

type Query {
    # Retrieve all possible readings recorded by a sensor at a specific time
    getReadings(sensorId: ID!, timestamp: String!): [SensorReading]
}

type RecordResult {
    temperatureReadings: [TemperatureReading]
    locationReadings: [LocationReading]
}

interface SensorReading {
    sensorId: ID!
    timestamp: String!
}

# Sensor reading representing the sensor temperature (in Fahrenheit)
type TemperatureReading implements SensorReading {
    sensorId: ID!
    timestamp: String!
    value: Float
}

# Sensor reading representing the sensor location (lat,long)
type LocationReading implements SensorReading {
    sensorId: ID!
    timestamp: String!
    lat: Float
    long: Float
}

input TemperatureReadingInput {
    sensorId: ID!
    timestamp: String
    value: Float
}

input LocationReadingInput {
    sensorId: ID!
    timestamp: String
    lat: Float
    long: Float
}
```

We need to create two DynamoDB tables:
+ `locationReadings` will store sensor location readings.
+ `temperatureReadings` will store sensor temperature readings.

Both tables will share the same primary key structure: `sensorId (String)` as the partition key and `timestamp (String)` as the sort key.

Choose **Create Resources** at the top of the page. Choose **Use existing type** and select the `LocationReading` type. Name your table `locationReadings`. Make sure the **Primary Key** is set to `sensorId` and the **Sort Key** to `timestamp`. Unselect **Automatically generate GraphQL** (you’ll provide your own code), and select **Create**. Repeat this process using `TemperatureReading` as the type and `temperatureReadings` as the table name. Use the same keys as above.

Your new tables come with automatically generated roles, and there are still a couple of permissions you need to add to those roles. Go to the **Data sources** page and choose `locationReadings`. Under **Select an existing role**, you can see the role. Take note of the role name (it should look something like `appsync-ds-ddb-aaabbbcccddd-locationReadings`), then go to the IAM console ([https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/)). In the IAM console, choose **Roles**, then choose your role from the table. Under **Permissions policies**, expand the policy (it should have a name similar to the role name) and choose **Edit**. Add the same batch permissions you used in the single-table section (`dynamodb:BatchGetItem` and `dynamodb:BatchWriteItem`), scoped to both the `locationReadings` and `temperatureReadings` tables.
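Concretely, the edited policy mirrors the single-table batch policy, scoped to the two readings tables (the Region and account ID below are placeholders):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem",
                "dynamodb:BatchWriteItem",
                "dynamodb:BatchGetItem"
            ],
            "Resource": [
                "arn:aws:dynamodb:us-east-1:111122223333:table/locationReadings",
                "arn:aws:dynamodb:us-east-1:111122223333:table/locationReadings/*",
                "arn:aws:dynamodb:us-east-1:111122223333:table/temperatureReadings",
                "arn:aws:dynamodb:us-east-1:111122223333:table/temperatureReadings/*"
            ]
        }
    ]
}
```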

Choose **Next**, then **Save changes**. Repeat this process for the `temperatureReadings` data source, using the same policy.

### BatchPutItem - Recording sensor readings
<a name="batchputitem-recording-sensor-readings-js"></a>

Our sensors need to be able to send their readings once they connect to the internet. The GraphQL field `Mutation.recordReadings` is the API they will use to do so. We'll need to add a resolver to this field.

In the AWS AppSync console's **Schema** page, select **Attach** next to the `Mutation.recordReadings` field. On the next screen, create your resolver using the `locationReadings` table as the data source.

After creating your resolver, replace the handlers with the following code in the editor. This `BatchPutItem` operation allows us to specify multiple tables: 

```
import { util } from '@aws-appsync/utils'

export function request(ctx) {
	const { locReadings, tempReadings } = ctx.args
	const locationReadings = locReadings.map((loc) => util.dynamodb.toMapValues(loc))
	const temperatureReadings = tempReadings.map((tmp) => util.dynamodb.toMapValues(tmp))

	return {
		operation: 'BatchPutItem',
		tables: {
			locationReadings,
			temperatureReadings,
		},
	}
}

export function response(ctx) {
	if (ctx.error) {
		util.appendError(ctx.error.message, ctx.error.type)
	}
	return ctx.result.data
}
```

With batch operations, an invocation can return both errors and results. In that case, we’re free to do some extra error handling.

**Note**  
The use of `util.appendError()` is similar to `util.error()`, with the major distinction that it doesn’t interrupt the evaluation of the request or response handler. Instead, it signals that there was an error with the field but allows the handler to finish evaluating and return data back to the caller. We recommend that you use `util.appendError()` when your application needs to return partial results.
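The partial-result pattern described in the note can be sketched outside the AppSync runtime. `appendFieldError` below is a hypothetical stand-in for `util.appendError()`, which records the error without aborting the handler:

```javascript
// Sketch: returning partial batch results while recording per-item errors.
// appendFieldError stands in for util.appendError(), which registers an
// error on the field without stopping handler evaluation.
const fieldErrors = [];
function appendFieldError(message, type) {
  fieldErrors.push({ message, type });
}

function response(ctx) {
  if (ctx.error) {
    appendFieldError(ctx.error.message, ctx.error.type);
  }
  return ctx.result.data; // still returned, even when an error was recorded
}

// Simulated invocation: one table succeeded, the other was throttled.
const data = response({
  error: { message: 'throttled', type: 'DynamoDB:ProvisionedThroughputExceededException' },
  result: { data: { locationReadings: [{ sensorId: '1' }], temperatureReadings: [] } },
});
console.log(fieldErrors.length, data.locationReadings.length);
```

The caller receives the successful `locationReadings` entries alongside an `errors` entry for the field, rather than an all-or-nothing failure.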

Save the resolver and navigate to the **Queries** page in the AWS AppSync console. We can now send some sensor readings.

Execute the following mutation:

```
mutation sendReadings {
  recordReadings(
    tempReadings: [
      {sensorId: 1, value: 85.5, timestamp: "2018-02-01T17:21:05.000+08:00"},
      {sensorId: 1, value: 85.7, timestamp: "2018-02-01T17:21:06.000+08:00"},
      {sensorId: 1, value: 85.8, timestamp: "2018-02-01T17:21:07.000+08:00"},
      {sensorId: 1, value: 84.2, timestamp: "2018-02-01T17:21:08.000+08:00"},
      {sensorId: 1, value: 81.5, timestamp: "2018-02-01T17:21:09.000+08:00"}
    ]
    locReadings: [
      {sensorId: 1, lat: 47.615063, long: -122.333551, timestamp: "2018-02-01T17:21:05.000+08:00"},
      {sensorId: 1, lat: 47.615163, long: -122.333552, timestamp: "2018-02-01T17:21:06.000+08:00"},
      {sensorId: 1, lat: 47.615263, long: -122.333553, timestamp: "2018-02-01T17:21:07.000+08:00"},
      {sensorId: 1, lat: 47.615363, long: -122.333554, timestamp: "2018-02-01T17:21:08.000+08:00"},
      {sensorId: 1, lat: 47.615463, long: -122.333555, timestamp: "2018-02-01T17:21:09.000+08:00"}
    ]) {
    locationReadings {
      sensorId
      timestamp
      lat
      long
    }
    temperatureReadings {
      sensorId
      timestamp
      value
    }
  }
}
```

We sent ten sensor readings in one mutation with readings split up across two tables. Use the DynamoDB console to validate that the data shows up in both the `locationReadings` and `temperatureReadings` tables.

### BatchDeleteItem - Deleting sensor readings
<a name="batchdeleteitem-deleting-sensor-readings-js"></a>

Similarly, we would also need to be able to delete batches of sensor readings. Let’s use the `Mutation.deleteReadings` GraphQL field for this purpose. In the AWS AppSync console's **Schema** page, select **Attach** next to the `Mutation.deleteReadings` field. On the next screen, create your resolver using the `locationReadings` table as the data source.

After creating your resolver, replace the handlers in the code editor with the snippet below. In this resolver, we use a helper function, `mapper`, that extracts the `sensorId` and the `timestamp` from the provided inputs.

```
import { util } from '@aws-appsync/utils'

export function request(ctx) {
	const { locReadings, tempReadings } = ctx.args
	const mapper = ({ sensorId, timestamp }) => util.dynamodb.toMapValues({ sensorId, timestamp })

	return {
		operation: 'BatchDeleteItem',
		tables: {
			locationReadings: locReadings.map(mapper),
			temperatureReadings: tempReadings.map(mapper),
		},
	}
}

export function response(ctx) {
	if (ctx.error) {
		util.appendError(ctx.error.message, ctx.error.type)
	}
	return ctx.result.data
}
```
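Outside of AppSync, the key-extraction pattern used by `mapper` is plain JavaScript destructuring. The sketch below stands in an identity function for `util.dynamodb.toMapValues` (which is only available inside the AppSync runtime) to show how non-key attributes such as `value` are dropped:

```javascript
// Stand-in for util.dynamodb.toMapValues, which only exists in the AppSync runtime.
const toMapValues = (obj) => obj

// Destructuring keeps only the table's key attributes (sensorId, timestamp).
const mapper = ({ sensorId, timestamp }) => toMapValues({ sensorId, timestamp })

const tempReadings = [
  { sensorId: 1, value: 85.5, timestamp: '2018-02-01T17:21:05.000+08:00' },
  { sensorId: 1, value: 85.7, timestamp: '2018-02-01T17:21:06.000+08:00' },
]

// Each entry now contains only the key attributes.
const keys = tempReadings.map(mapper)
console.log(keys)
```

Because `BatchDeleteItem` only needs the primary key of each item, dropping the extra attributes keeps the request minimal.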

Save the resolver and navigate to the **Queries** page in the AWS AppSync console. Now, let’s delete a couple of sensor readings.

Execute the following mutation:

```
mutation deleteReadings {
  # Let's delete the first two readings we recorded
  deleteReadings(
    tempReadings: [{sensorId: 1, timestamp: "2018-02-01T17:21:05.000+08:00"}]
    locReadings: [{sensorId: 1, timestamp: "2018-02-01T17:21:05.000+08:00"}]) {
    locationReadings {
      sensorId
      timestamp
      lat
      long
    }
    temperatureReadings {
      sensorId
      timestamp
      value
    }
  }
}
```

**Note**  
Contrary to the `DeleteItem` operation, the fully deleted item isn't returned in the response. Only the passed key is returned. To learn more, see the [BatchDeleteItem in JavaScript resolver function reference for DynamoDB](https://docs.aws.amazon.com/appsync/latest/devguide/js-resolver-reference-dynamodb.html#js-aws-appsync-resolver-reference-dynamodb-batch-delete-item).

Validate through the DynamoDB console that these two readings have been deleted from the `locationReadings` and `temperatureReadings` tables.

### BatchGetItem - Retrieve readings
<a name="batchgetitem-retrieve-readings-js"></a>

Another common operation for our app would be to retrieve the readings for a sensor at a specific point in time. Let’s attach a resolver to the `Query.getReadings` GraphQL field on our schema. In the AWS AppSync console's **Schema** page, select **Attach** next to the `Query.getReadings` field. On the next screen, create your resolver using the `locationReadings` table as the data source.

Let’s use the following code: 

```
import { util } from '@aws-appsync/utils'

export function request(ctx) {
	const keys = [util.dynamodb.toMapValues(ctx.args)]
	const consistentRead = true
	return {
		operation: 'BatchGetItem',
		tables: {
			locationReadings: { keys, consistentRead },
			temperatureReadings: { keys, consistentRead },
		},
	}
}

export function response(ctx) {
	if (ctx.error) {
		util.appendError(ctx.error.message, ctx.error.type)
	}
	const { locationReadings: locs, temperatureReadings: temps } = ctx.result.data

	return [
		...locs.map((l) => ({ ...l, __typename: 'LocationReading' })),
		...temps.map((t) => ({ ...t, __typename: 'TemperatureReading' })),
	]
}
```
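The response handler tags each item with `__typename` so GraphQL can tell the concrete types apart when a query uses inline fragments (`... on TemperatureReading`, `... on LocationReading`). As plain JavaScript, the merge looks like this (sample rows are shaped like the tutorial's tables):

```javascript
// Sample rows returned per table, as in ctx.result.data.
const locs = [{ sensorId: '1', lat: 47.615063, long: -122.333551 }]
const temps = [{ sensorId: '1', value: 85.5 }]

// Spread each row and add __typename so GraphQL can match inline fragments.
const merged = [
  ...locs.map((l) => ({ ...l, __typename: 'LocationReading' })),
  ...temps.map((t) => ({ ...t, __typename: 'TemperatureReading' })),
]
console.log(merged)
```

Without the `__typename` hint, AppSync would have no way to decide which concrete type each merged item belongs to.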

Save the resolver and navigate to the **Queries** page in the AWS AppSync console. Now, let’s retrieve our sensor readings.

Execute the following query:

```
query getReadingsForSensorAndTime {
  # Let's retrieve the very first two readings
  getReadings(sensorId: 1, timestamp: "2018-02-01T17:21:06.000+08:00") {
    sensorId
    timestamp
    ...on TemperatureReading {
      value
    }
    ...on LocationReading {
      lat
      long
    }
  }
}
```

We have successfully demonstrated the use of DynamoDB batch operations using AWS AppSync.

## Error handling
<a name="error-handling-js"></a>

In AWS AppSync, data source operations can sometimes return partial results. *Partial results* is the term we use when the output of an operation consists of some data and an error. Because error handling is inherently application specific, AWS AppSync gives you the opportunity to handle errors in the response handler. The resolver invocation error, if present, is available from the context as `ctx.error`. Invocation errors always include a message and a type, accessible as the properties `ctx.error.message` and `ctx.error.type`. In the response handler, you can handle partial results in three ways:

1. Swallow the invocation error by just returning data.

1. Raise an error (using `util.error(...)`) by stopping the handler evaluation, which won’t return any data.

1. Append an error (using `util.appendError(...)`) and also return data.
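Taken together, the three options can be sketched as plain JavaScript. The `util` object below is a local mock of the AppSync helpers (the real ones come from `@aws-appsync/utils` and only run inside AppSync):

```javascript
// Local mock of the AppSync util helpers, for illustration only.
const appendedErrors = []
const util = {
  appendError: (message, type) => appendedErrors.push({ message, type }),
  error: (message, type) => { throw Object.assign(new Error(message), { type }) },
}

// A context holding both data and an invocation error, like a partial batch result.
const ctx = {
  error: { message: 'Throughput exceeded', type: 'DynamoDB:ProvisionedThroughputExceededException' },
  result: { data: { locationReadings: [{ sensorId: '1' }], temperatureReadings: [null] } },
}

// 1. Swallow: ignore ctx.error and return data; the field always succeeds.
const swallow = (ctx) => ctx.result.data

// 2. Raise: util.error stops evaluation; no data is returned for the field.
const raise = (ctx) => {
  if (ctx.error) util.error(ctx.error.message, ctx.error.type)
  return ctx.result.data
}

// 3. Append: record the error but still return data (partial results).
const append = (ctx) => {
  if (ctx.error) util.appendError(ctx.error.message, ctx.error.type)
  return ctx.result.data
}
```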

Let’s demonstrate each of the three points above with DynamoDB batch operations.

### DynamoDB Batch operations
<a name="dynamodb-batch-operations-js"></a>

With DynamoDB batch operations, it is possible that a batch partially completes. That is, it is possible that some of the requested items or keys are left unprocessed. If AWS AppSync is unable to complete a batch, unprocessed items and an invocation error will be set on the context.

We will implement error handling using the `Query.getReadings` field configuration from the `BatchGetItem` operation from the previous section of this tutorial. This time, let’s pretend that while executing the `Query.getReadings` field, the `temperatureReadings` DynamoDB table ran out of provisioned throughput. DynamoDB raised a `ProvisionedThroughputExceededException` during the second attempt by AWS AppSync to process the remaining elements in the batch.

The following JSON represents the serialized context after the DynamoDB batch invocation but before the response handler was called:

```
{
  "arguments": {
    "sensorId": "1",
    "timestamp": "2018-02-01T17:21:05.000+08:00"
  },
  "source": null,
  "result": {
    "data": {
      "temperatureReadings": [
        null
      ],
      "locationReadings": [
        {
          "lat": 47.615063,
          "long": -122.333551,
          "sensorId": "1",
          "timestamp": "2018-02-01T17:21:05.000+08:00"
        }
      ]
    },
    "unprocessedKeys": {
      "temperatureReadings": [
        {
          "sensorId": "1",
          "timestamp": "2018-02-01T17:21:05.000+08:00"
        }
      ],
      "locationReadings": []
    }
  },
  "error": {
    "type": "DynamoDB:ProvisionedThroughputExceededException",
    "message": "You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. (...)"
  },
  "outErrors": []
}
```

A few things to note on the context:
+ The invocation error has been set on the context at `ctx.error` by AWS AppSync, and the error type has been set to `DynamoDB:ProvisionedThroughputExceededException`.
+ Results are mapped per table under `ctx.result.data` even though an error is present.
+ Keys that were left unprocessed are available at `ctx.result.data.unprocessedKeys`. Here, AWS AppSync was unable to retrieve the item with key (sensorId: 1, timestamp: 2018-02-01T17:21:05.000+08:00) because of insufficient table throughput.

**Note**  
For `BatchPutItem`, it is `ctx.result.data.unprocessedItems`. For `BatchDeleteItem`, it is `ctx.result.data.unprocessedKeys`.

Let’s handle this error in three different ways.

#### 1. Swallowing the invocation error
<a name="swallowing-the-invocation-error-js"></a>

Returning data without handling the invocation error effectively swallows the error, making the result for the given GraphQL field always successful.

The code we write is familiar and only focuses on the result data.

**Response handler**

```
export function response(ctx) {
  return ctx.result.data
}
```

**GraphQL response**

```
{
  "data": {
    "getReadings": [
      {
        "sensorId": "1",
        "timestamp": "2018-02-01T17:21:05.000+08:00",
        "lat": 47.615063,
        "long": -122.333551
      },
      {
        "sensorId": "1",
        "timestamp": "2018-02-01T17:21:05.000+08:00",
        "value": 85.5
      }
    ]
  }
}
```

No errors are added to the GraphQL response because the handler acted on the data alone.

#### 2. Raising an error to abort the response handler execution
<a name="raising-an-error-to-abort-the-response-execution-js"></a>

When partial failures should be treated as complete failures from the client’s perspective, you can abort the response handler execution to prevent returning data. The `util.error(...)` utility method achieves exactly this behavior.

**Response handler code**

```
export function response(ctx) {
  if (ctx.error) {
    util.error(ctx.error.message, ctx.error.type, null, ctx.result.data.unprocessedKeys);
  }
  return ctx.result.data;
}
```

**GraphQL response**

```
{
  "data": {
    "getReadings": null
  },
  "errors": [
    {
      "path": [
        "getReadings"
      ],
      "data": null,
      "errorType": "DynamoDB:ProvisionedThroughputExceededException",
      "errorInfo": {
        "temperatureReadings": [
          {
            "sensorId": "1",
            "timestamp": "2018-02-01T17:21:05.000+08:00"
          }
        ],
        "locationReadings": []
      },
      "locations": [
        {
          "line": 58,
          "column": 3
        }
      ],
      "message": "You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. (...)"
    }
  ]
}
```

Even though some results might have been returned from the DynamoDB batch operation, we chose to raise an error such that the `getReadings` GraphQL field is null and the error has been added to the GraphQL response *errors* block.

#### 3. Appending an error to return both data and errors
<a name="appending-an-error-to-return-both-data-and-errors-js"></a>

In certain cases, to provide a better user experience, applications can return partial results and notify their clients of the unprocessed items. The clients can decide to either implement a retry or translate the error back to the end user. The `util.appendError(...)` is the utility method that enables this behavior by letting the application designer append errors on the context without interfering with the evaluation of the response handler. After evaluating the response handler, AWS AppSync will process any context errors by appending them to the errors block of the GraphQL response.

**Response handler code**

```
export function response(ctx) {
  if (ctx.error) {
    util.appendError(ctx.error.message, ctx.error.type, null, ctx.result.data.unprocessedKeys);
  }
  return ctx.result.data;
}
```

We forwarded both the invocation error and the `unprocessedKeys` element inside the errors block of the GraphQL response. The `getReadings` field also returns partial data from the `locationReadings` table, as you can see in the response below.

**GraphQL response**

```
{
  "data": {
    "getReadings": [
      null,
      {
        "sensorId": "1",
        "timestamp": "2018-02-01T17:21:05.000+08:00",
        "value": 85.5
      }
    ]
  },
  "errors": [
    {
      "path": [
        "getReadings"
      ],
      "data": null,
      "errorType": "DynamoDB:ProvisionedThroughputExceededException",
      "errorInfo": {
        "temperatureReadings": [
          {
            "sensorId": "1",
            "timestamp": "2018-02-01T17:21:05.000+08:00"
          }
        ],
        "locationReadings": []
      },
      "locations": [
        {
          "line": 58,
          "column": 3
        }
      ],
      "message": "You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. (...)"
    }
  ]
}
```

# Using HTTP resolvers in AWS AppSync
<a name="tutorial-http-resolvers-js"></a>

AWS AppSync enables you to use supported data sources (that is, AWS Lambda, Amazon DynamoDB, Amazon OpenSearch Service, or Amazon Aurora) to perform various operations. In addition, you can use arbitrary HTTP endpoints to resolve GraphQL fields. After your HTTP endpoints are available, you can connect to them using a data source. Then, you can configure a resolver in the schema to perform GraphQL operations such as queries, mutations, and subscriptions. This tutorial walks you through some common examples.

In this tutorial, you use a REST API (created using Amazon API Gateway and Lambda) with an AWS AppSync GraphQL endpoint.

## Creating a REST API
<a name="creating-a-rest-api"></a>

You can use the following AWS CloudFormation template to set up a REST endpoint that works for this tutorial:

[https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/http/http-api-gw.yaml](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?templateURL=https://s3.us-west-2.amazonaws.com/awsappsync/resources/http/http-api-gw.yaml)

The AWS CloudFormation stack performs the following steps:

1. Sets up a Lambda function that contains your business logic for your microservice.

1. Sets up an API Gateway REST API with the following endpoint/method/content type combination:


****  

| API Resource Path | HTTP Method | Supported Content Type | 
| --- | --- | --- | 
|  /v1/users  |  POST  |  application/json  | 
|  /v1/users  |  GET  |  application/json  | 
|  /v1/users/1  |  GET  |  application/json  | 
|  /v1/users/1  |  PUT  |  application/json  | 
|  /v1/users/1  |  DELETE  |  application/json  | 

## Creating your GraphQL API
<a name="creating-your-graphql-api"></a>

To create the GraphQL API in AWS AppSync:

1. Open the AWS AppSync console and choose **Create API**.

1. Choose **GraphQL APIs** and then choose **Design from scratch**. Choose **Next**.

1. For the API name, type `UserData`. Choose **Next**.

1. Choose **Create GraphQL resources later**. Choose **Next**.

1. Review your inputs and choose **Create API**.

The AWS AppSync console creates a new GraphQL API for you using the API key authentication mode. You can use the console to further configure your GraphQL API and run requests.

## Creating a GraphQL schema
<a name="creating-a-graphql-schema"></a>

Now that you have a GraphQL API, let’s create a GraphQL schema. In the **Schema** editor in the AWS AppSync console, use the snippet below:

```
type Mutation {
    addUser(userInput: UserInput!): User
    deleteUser(id: ID!): User
}

type Query {
    getUser(id: ID): User
    listUser: [User!]!
}

type User {
    id: ID!
    username: String!
    firstname: String
    lastname: String
    phone: String
    email: String
}

input UserInput {
    id: ID!
    username: String!
    firstname: String
    lastname: String
    phone: String
    email: String
}
```

## Configure your HTTP data source
<a name="configure-your-http-data-source"></a>

To configure your HTTP data source, do the following:

1. In the **Data sources** page in your AWS AppSync GraphQL API, choose **Create data source**.

1. Enter a name for the data source like `HTTP_Example`.

1. In **Data source type**, choose **HTTP endpoint**.

1. Set the endpoint to the API Gateway endpoint that was created at the beginning of the tutorial. To find your stack-generated endpoint, navigate to the Lambda console and find your application under **Applications**. In your application's settings, you should see an **API endpoint**; this is your endpoint in AWS AppSync. Make sure you don't include the stage name as part of the endpoint. For instance, if your endpoint is `https://aaabbbcccd.execute-api.us-east-1.amazonaws.com/v1`, you would type in `https://aaabbbcccd.execute-api.us-east-1.amazonaws.com`.

**Note**  
At this time, only public endpoints are supported by AWS AppSync.  
For more information about the certifying authorities that are recognized by the AWS AppSync service, see [Certificate Authorities (CA) Recognized by AWS AppSync for HTTPS Endpoints](http-cert-authorities.md#aws-appsync-http-certificate-authorities).
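The stage name is excluded from the data source because each resolver supplies it in its `resourcePath`, and the two are joined to form the request URL. A quick sketch (the endpoint value is the placeholder host from the step above):

```javascript
// The HTTP data source holds only the host, with no stage.
const endpoint = 'https://aaabbbcccd.execute-api.us-east-1.amazonaws.com'

// Each resolver's resourcePath carries the stage ('v1') and the resource.
const resourcePath = '/v1/users/1'

// The effective request URL is the endpoint plus the resourcePath.
const url = `${endpoint}${resourcePath}`
console.log(url)
```

Keeping the stage out of the data source lets one data source serve every stage-prefixed path in your API.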

## Configuring resolvers
<a name="configuring-resolvers"></a>

In this step, you will connect the HTTP data source to the `getUser` query and the `addUser` mutation.

To set up the `getUser` resolver:

1. In your AWS AppSync GraphQL API, choose the **Schema** tab.

1. To the right of the **Schema** editor, in the **Resolvers** pane and under the **Query** type, find the `getUser` field and choose **Attach**.

1. Keep the resolver type set to `Unit` and the runtime set to `APPSYNC_JS`.

1. In **Data source name**, choose the HTTP endpoint you made earlier.

1. Choose **Create**.

1. In the **Resolver** code editor, add the following snippet as your request handler:

   ```
   import { util } from '@aws-appsync/utils'
   
   export function request(ctx) {
   	return {
   		version: '2018-05-29',
   		method: 'GET',
   		params: {
   			headers: {
   				'Content-Type': 'application/json',
   			},
   		},
   		resourcePath: `/v1/users/${ctx.args.id}`,
   	}
   }
   ```

1. Add the following snippet as your response handler:

   ```
   export function response(ctx) {
   	const { statusCode, body } = ctx.result
   	// if response is 200, return the response
   	if (statusCode === 200) {
   		return JSON.parse(body)
   	}
   	// if response is not 200, append the response to error block.
   	util.appendError(body, statusCode)
   }
   ```

1. Choose the **Query** tab, and then run the following query:

   ```
   query GetUser{
       getUser(id:1){
           id
           username
       }
   }
   ```

   This should return the following response:

   ```
   {
       "data": {
           "getUser": {
               "id": "1",
               "username": "nadia"
           }
       }
   }
   ```
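Outside of AppSync, the branching in the `getUser` response handler can be exercised as plain JavaScript (the sample result mirrors the shape the tutorial's endpoint returns: a status code plus a raw string body):

```javascript
// An HTTP data source result: statusCode plus a raw string body.
const result = { statusCode: 200, body: '{"id":"1","username":"nadia"}' }

// Parse the body only on success; otherwise the handler would surface it as an error.
const user = result.statusCode === 200 ? JSON.parse(result.body) : null
console.log(user)
```

The `JSON.parse` step matters: the HTTP data source delivers the body as a string, and the GraphQL runtime needs an object to resolve the selected fields.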

To set up the `addUser` resolver:

1. Choose the **Schema** tab.

1. To the right of the **Schema** editor, in the **Resolvers** pane and under the **Mutation** type, find the `addUser` field and choose **Attach**.

1. Keep the resolver type set to `Unit` and the runtime set to `APPSYNC_JS`.

1. In **Data source name**, choose the HTTP endpoint you made earlier.

1. Choose **Create**.

1. In the **Resolver** code editor, add the following snippet as your request handler:

   ```
   import { util } from '@aws-appsync/utils'
   
   export function request(ctx) {
   	return {
   		version: '2018-05-29',
   		method: 'POST',
   		resourcePath: '/v1/users',
   		params: {
   			headers: {
   				'Content-Type': 'application/json',
   			},
   			body: ctx.args.userInput,
   		},
   	}
   }
   ```

1. Add the following snippet as your response handler:

   ```
   export function response(ctx) {
   	if (ctx.error) {
   		return util.error(ctx.error.message, ctx.error.type)
   	}
   	if (ctx.result.statusCode === 200) {
   		return JSON.parse(ctx.result.body)
   	}
   	return util.appendError(ctx.result.body, ctx.result.statusCode)
   }
   ```

1. Choose the **Query** tab, and then run the following query:

   ```
   mutation addUser{
       addUser(userInput:{
           id:"2",
           username:"shaggy"
       }){
           id
           username
       }
   }
   ```

   If you run the `getUser` query again, it should return the following response:

   ```
   {
       "data": {
           "getUser": {
           "id": "2",
           "username": "shaggy"
           }
       }
   }
   ```

## Invoking AWS Services
<a name="invoking-aws-services-js"></a>

You can use HTTP resolvers to set up a GraphQL API interface for AWS services. HTTP requests to AWS must be signed with the [Signature Version 4 process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) so that AWS can identify who sent them. AWS AppSync calculates the signature on your behalf when you associate an IAM role with the HTTP data source.

You provide two additional components to invoke AWS services with HTTP resolvers:
+ An IAM role with permissions to call the AWS service APIs
+ Signing configuration in the data source

For example, if you want to call the [ListGraphqlApis operation](https://docs.aws.amazon.com/appsync/latest/APIReference/API_ListGraphqlApis.html) with HTTP resolvers, you first [create an IAM role](attaching-a-data-source.md#aws-appsync-getting-started-build-a-schema-from-scratch) that AWS AppSync assumes with the following policy attached:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "appsync:ListGraphqlApis"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
```

------

Next, create the HTTP data source for AWS AppSync. In this example, you call AWS AppSync in the US West (Oregon) Region. Set up the following HTTP configuration in a file named `http.json`, which includes the signing region and service name:

```
{
    "endpoint": "https://appsync.us-west-2.amazonaws.com/",
    "authorizationConfig": {
        "authorizationType": "AWS_IAM",
        "awsIamConfig": {
            "signingRegion": "us-west-2",
            "signingServiceName": "appsync"
        }
    }
}
```

Then, use the AWS CLI to create the data source with an associated role as follows:

```
aws appsync create-data-source --api-id <API-ID> \
                               --name AWSAppSync \
                               --type HTTP \
                               --http-config file:///http.json \
                               --service-role-arn <ROLE-ARN>
```

When you attach a resolver to the field in the schema, have your request handler return the following to call AWS AppSync:

```
{
    "version": "2018-05-29",
    "method": "GET",
    "resourcePath": "/v1/apis"
}
```

When you run a GraphQL query for this data source, AWS AppSync signs the request using the role you provided and includes the signature in the request. The query returns a list of AWS AppSync GraphQL APIs in your account in that AWS Region.

# Using Aurora PostgreSQL with Data API in AWS AppSync
<a name="aurora-serverless-tutorial-js"></a>

 

Learn how to connect your GraphQL API to Aurora PostgreSQL databases using AWS AppSync. This integration enables you to build scalable, data-driven applications by executing SQL queries and mutations through GraphQL operations. AWS AppSync provides a data source for executing SQL statements against Amazon Aurora clusters that are enabled with a Data API. You can use AWS AppSync resolvers to run SQL statements against the data API with GraphQL queries, mutations, and subscriptions.

Before starting this tutorial, you should have basic familiarity with AWS services and GraphQL concepts.

**Note**  
This tutorial uses the `US-EAST-1` Region. 

**Topics**
+ [Set up your Aurora PostgreSQL database](#creating-clusters)
+ [Creating the database and table](#creating-db-table)
+ [Creating a GraphQL schema](#rds-graphql-schema)
+ [Resolvers for RDS](#rds-resolvers)
+ [Deleting your cluster](#rds-delete-cluster)

## Set up your Aurora PostgreSQL database
<a name="creating-clusters"></a>

Before adding an Amazon RDS data source to AWS AppSync, do the following.

1. Enable a Data API on an Aurora Serverless v2 cluster.

1. Configure a secret using AWS Secrets Manager.

1. Create the cluster using the following AWS CLI command.

   ```
   aws rds create-db-cluster \
               --db-cluster-identifier appsync-tutorial \
               --engine aurora-postgresql \
               --engine-version 16.6 \
               --serverless-v2-scaling-configuration MinCapacity=0,MaxCapacity=1 \
               --master-username USERNAME \
               --master-user-password COMPLEX_PASSWORD \
               --enable-http-endpoint
   ```

This will return an ARN for the cluster. After creating a cluster, you must add a Serverless v2 instance with the following AWS CLI command.

```
aws rds create-db-instance \
    --db-cluster-identifier appsync-tutorial \
    --db-instance-identifier appsync-tutorial-instance-1 \
    --db-instance-class db.serverless \
    --engine aurora-postgresql
```

**Note**  
These endpoints take time to become active. You can check their status in the RDS console, in the **Connectivity & security** tab for the cluster.

Check the cluster status with the following AWS CLI command.

```
aws rds describe-db-clusters \
    --db-cluster-identifier appsync-tutorial \
    --query "DBClusters[0].Status"
```

Create a Secret via the AWS Secrets Manager Console or the AWS CLI with an input file such as the following using the `USERNAME` and `COMPLEX_PASSWORD` from the previous step:

```
{
    "username": "USERNAME",
    "password": "COMPLEX_PASSWORD"
}
```

Pass this as a parameter to the AWS CLI:

```
aws secretsmanager create-secret \
    --name appsync-tutorial-rds-secret \
    --secret-string file://creds.json
```

This will return an ARN for the secret. **Take note** of the ARN of your Aurora Serverless v2 cluster and Secret for later when creating a data source in the AWS AppSync console. 

## Creating the database and table
<a name="creating-db-table"></a>

Before adding your Aurora Serverless v2 cluster to your AWS AppSync API, validate that it is configured correctly. First, create a database named `testdb`. In PostgreSQL, a database is a container that holds tables and other SQL objects. Create the database with the `--sql` parameter as follows.

```
aws rds-data execute-statement \
    --resource-arn "arn:aws:rds:us-east-1:111122223333:cluster:appsync-tutorial" \
    --secret-arn "arn:aws:secretsmanager:us-east-1:111122223333:secret:appsync-tutorial-rds-secret" \
    --sql "create DATABASE \"testdb\"" \
    --database "postgres"
```

If this runs without any errors, add two tables with the `create table` command:

```
aws rds-data execute-statement \
    --resource-arn "arn:aws:rds:us-east-1:111122223333:cluster:appsync-tutorial" \
    --secret-arn "arn:aws:secretsmanager:us-east-1:111122223333:secret:appsync-tutorial-rds-secret" \
    --database "testdb" \
    --sql 'create table public.todos (id serial constraint todos_pk primary key, description text not null, due date not null, "createdAt" timestamp default now());'

aws rds-data execute-statement \
    --resource-arn "arn:aws:rds:us-east-1:111122223333:cluster:appsync-tutorial" \
    --secret-arn "arn:aws:secretsmanager:us-east-1:111122223333:secret:appsync-tutorial-rds-secret" \
    --database "testdb" \
    --sql 'create table public.tasks (id serial constraint tasks_pk primary key, description varchar, "todoId" integer not null constraint tasks_todos_id_fk references public.todos);'
```

If successful, add the cluster as a data source in your API.

## Creating a GraphQL schema
<a name="rds-graphql-schema"></a>

Now that your Aurora Serverless v2 Data API is running with configured tables, we'll create a GraphQL schema. You can quickly create your API by importing table configurations from an existing database using the API creation wizard.

To begin: 

1. In the AWS AppSync console, choose **Create API**, then **Start with an Amazon Aurora cluster**. 

1. Specify API details like **API name**, then select your database to generate the API.

1. Choose your database. If needed, update the Region, then choose your Aurora cluster and the *testdb* database.

1. Choose your Secret, then choose **Import**. 

1. Once tables have been discovered, update the type names. Change `Todos` to `Todo` and `Tasks` to `Task`. 

1. Preview the generated schema by choosing **Preview Schema**. Your schema will look something like this: 

   ```
   type Todo {
     id: Int!
     description: String!
     due: AWSDate!
     createdAt: String
   }
   
   type Task {
     id: Int!
     todoId: Int!
     description: String
   }
   ```

1. For the role, you can either have AWS AppSync create a new role or create one with a policy similar to the one below:

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "rds-data:ExecuteStatement"
               ],
               "Resource": [
                   "arn:aws:rds:us-east-1:111122223333:cluster:appsync-tutorial",
                   "arn:aws:rds:us-east-1:111122223333:cluster:appsync-tutorial:*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "secretsmanager:GetSecretValue"
               ],
               "Resource": [
               "arn:aws:secretsmanager:us-east-1:111122223333:secret:appsync-tutorial-rds-secret",
               "arn:aws:secretsmanager:us-east-1:111122223333:secret:appsync-tutorial-rds-secret:*"
               ]
           }
       ]
   }
   ```

------

   Note that this policy contains two statements granting the role access to two resources: the first is your Aurora cluster, and the second is your AWS Secrets Manager secret.

   Choose **Next**, review the configuration details, then choose **Create API**. You now have a fully operational API. You can review the full details of your API on the **Schema** page. 

## Resolvers for RDS
<a name="rds-resolvers"></a>

The API creation flow automatically created the resolvers needed to interact with our types. If you look at the **Schema** page, you will find some of the following resolvers:
+ Create a `todo` via the `Mutation.createTodo` field.
+ Update a `todo` via the `Mutation.updateTodo` field.
+ Delete a `todo` via the `Mutation.deleteTodo` field.
+ Get a single `todo` via the `Query.getTodo` field.
+ List all `todos` via the `Query.listTodos` field.

You will find similar fields and resolvers attached for the `Task` type. Let's take a closer look at some of the resolvers. 

### Mutation.createTodo
<a name="createtodo"></a>

From the schema editor in the AWS AppSync console, on the right side, choose `testdb` next to `createTodo(...): Todo`. The resolver code uses the `insert` function from the `rds` module to dynamically create an insert statement that adds data to the `todos` table. Because we are working with Postgres, we can leverage the `returning` clause to get the inserted data back.

Update the following resolver to properly specify the `DATE` type of the `due` field.

```
import { util } from '@aws-appsync/utils';
import { insert, createPgStatement, toJsonObject, typeHint } from '@aws-appsync/utils/rds';

export function request(ctx) {
    const { input } = ctx.args;
    // if a due date is provided, cast it as `DATE`
    if (input.due) {
        input.due = typeHint.DATE(input.due)
    }
    const insertStatement = insert({
        table: 'todos',
        values: input,
        returning: '*',
    });
    return createPgStatement(insertStatement)
}

export function response(ctx) {
    const { error, result } = ctx;
    if (error) {
        return util.appendError(
            error.message,
            error.type,
            result
        )
    }
    return toJsonObject(result)[0][0]
}
```

Save the resolver. The type hint marks the `due` property in our input object as a `DATE` type. This allows the Postgres engine to properly interpret the value. Next, update your schema to remove the `id` from the `CreateTodoInput` input. Because our Postgres database generates the `id`, you can rely on it to create the item and return the result in a single request as follows.

```
input CreateTodoInput {
    due: AWSDate!
    createdAt: String
    description: String!
}
```

Make the change and update your schema. Head to the **Queries** editor to add an item to the database as follows.

```
mutation CreateTodo {
  createTodo(input: {description: "Hello World!", due: "2023-12-31"}) {
    id
    due
    description
    createdAt
  }
}
```

You get the following result.

```
{
  "data": {
    "createTodo": {
      "id": 1,
      "due": "2023-12-31",
      "description": "Hello World!",
      "createdAt": "2023-11-14 20:47:11.875428"
    }
  }
}
```
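The `toJsonObject(result)[0][0]` indexing in the response handler reflects the shape of the parsed result: one array per SQL statement sent, each containing that statement's row objects. A minimal plain-Node sketch with a mocked result (the row data here is illustrative only):

```javascript
// Mocked shape of toJsonObject(ctx.result): one array per SQL statement sent,
// each holding the row objects that statement returned.
const mockResult = [
  // rows returned by the single INSERT ... RETURNING * statement
  [{ id: 1, due: '2023-12-31', description: 'Hello World!' }],
];

// createTodo sends one statement, so the created row sits at [0][0].
const created = mockResult[0][0];
console.log(created.id); // 1
```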

### Query.listTodos
<a name="listtodo"></a>

From the schema editor in the console, on the right side, choose `testdb` next to `listTodos(...): TodoConnection`. The request handler uses the `select` utility function to build a request dynamically at run time.

```
import { util } from '@aws-appsync/utils';
import { select, createPgStatement } from '@aws-appsync/utils/rds';

export function request(ctx) {
    const { filter = {}, limit = 100, nextToken } = ctx.args;
    const offset = nextToken ? +util.base64Decode(nextToken) : 0;
    const statement = select({
        table: 'todos',
        columns: '*',
        limit,
        offset,
        where: filter,
    });
    return createPgStatement(statement)
}
```

We want to filter `todos` based on the `due` date. Let's update the resolver to cast `due` values to `DATE`. Update the list of imports and the request handler as follows.

```
import { util } from '@aws-appsync/utils';
import * as rds from '@aws-appsync/utils/rds';

export function request(ctx) {
  const { filter: where = {}, limit = 100, nextToken } = ctx.args;
  const offset = nextToken ? +util.base64Decode(nextToken) : 0;

  // if `due` is used in a filter, CAST the values to DATE.
  if (where.due) {
    Object.entries(where.due).forEach(([k, v]) => {
      if (k === 'between') {
        where.due[k] = v.map((d) => rds.typeHint.DATE(d));
      } else {
        where.due[k] = rds.typeHint.DATE(v);
      }
    });
  }

  const statement = rds.select({
    table: 'todos',
    columns: '*',
    limit,
    offset,
    where,
  });
  return rds.createPgStatement(statement);
}

export function response(ctx) {
  const {
    args: { limit = 100, nextToken },
    error,
    result,
  } = ctx;
  if (error) {
    return util.appendError(error.message, error.type, result);
  }
  const offset = nextToken ? +util.base64Decode(nextToken) : 0;
  const items = rds.toJsonObject(result)[0];
  const endOfResults = items?.length < limit;
  const token = endOfResults ? null : util.base64Encode(`${offset + limit}`);
  return { items, nextToken: token };
}
```
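The `nextToken` in the handlers above is simply a base64-encoded row offset. A plain-Node sketch of that round trip, using `Buffer` as a stand-in for `util.base64Encode`/`util.base64Decode` from `@aws-appsync/utils` (an assumption for illustration only):

```javascript
// Sketch of the offset-based pagination used by the listTodos handlers.
// Buffer stands in for util.base64Encode/base64Decode here.
const encodeToken = (offset) => Buffer.from(String(offset)).toString('base64');
const decodeToken = (token) =>
  token ? +Buffer.from(token, 'base64').toString() : 0;

const limit = 100;
// First page: no token, so the offset is 0.
const offset = decodeToken(null);
// The response handler emits a token pointing at the next page of rows.
const nextToken = encodeToken(offset + limit);
// The next request decodes that token back into an offset.
console.log(decodeToken(nextToken)); // 100
```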

In the **Queries** editor, run the following query.

```
query LIST {
  listTodos(limit: 10, filter: {due: {between: ["2021-01-01", "2025-01-02"]}}) {
    items {
      id
      due
      description
    }
  }
}
```

### Mutation.updateTodo
<a name="updatetodo"></a>

You can also `update` a `Todo`. From the **Queries** editor, let's update our first `Todo` item, which has an `id` of `1`.

```
mutation UPDATE {
  updateTodo(input: {id: 1, description: "edits"}) {
    description
    due
    id
  }
}
```

Note that you must specify the `id` of the item you are updating. You can also specify a condition so that the update only applies to an item that meets specific criteria. For example, we may only want to edit the item if the description starts with `edits`, as follows.

```
mutation UPDATE {
  updateTodo(input: {id: 1, description: "edits: make a change"}, condition: {description: {beginsWith: "edits"}}) {
    description
    due
    id
  }
}
```
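Conceptually, the request handler merges your `condition` with the target `id` into a single `where` object, so the `UPDATE` only touches the row matching both. A plain-JavaScript sketch of that merge:

```javascript
// Sketch of how the updateTodo request handler combines a user-supplied
// condition with the target id into a single `where` object.
function buildWhere(id, condition = {}) {
  return { ...condition, id: { eq: id } };
}

const where = buildWhere(1, { description: { beginsWith: 'edits' } });
console.log(JSON.stringify(where));
// {"description":{"beginsWith":"edits"},"id":{"eq":1}}
```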

Just as we did for our `create` and `list` operations, we can update the resolver to cast the `due` field to a `DATE`. Save these changes to `updateTodo` as follows.

```
import { util } from '@aws-appsync/utils';
import * as rds from '@aws-appsync/utils/rds';

export function request(ctx) {
  const { input: { id, ...values }, condition = {}, } = ctx.args;
  const where = { ...condition, id: { eq: id } };

  // if `due` is used in a condition, CAST the values to DATE.
  if (condition.due) {
    Object.entries(condition.due).forEach(([k, v]) => {
      if (k === 'between') {
        condition.due[k] = v.map((d) => rds.typeHint.DATE(d));
      } else {
        condition.due[k] = rds.typeHint.DATE(v);
      }
    });
  }

  // if a due date is provided, cast it as `DATE`
  if (values.due) {
    values.due = rds.typeHint.DATE(values.due);
  }

  const updateStatement = rds.update({
    table: 'todos',
    values,
    where,
    returning: '*',
  });
  return rds.createPgStatement(updateStatement);
}

export function response(ctx) {
  const { error, result } = ctx;
  if (error) {
    return util.appendError(error.message, error.type, result);
  }
  return rds.toJsonObject(result)[0][0];
}
```

Now try an update with a condition:

```
mutation UPDATE {
  updateTodo(
    input: {id: 1, description: "edits: make a change", due: "2023-12-12"},
    condition: {description: {beginsWith: "edits"}, due: {ge: "2023-11-08"}}
  ) {
    description
    due
    id
  }
}
```

### Mutation.deleteTodo
<a name="deletetodo"></a>

You can `delete` a `Todo` with the `deleteTodo` mutation. This works like the `updateTodo` mutation, and you must specify the `id` of the item you want to delete as follows.

```
mutation DELETE {
  deleteTodo(input: {id: 1}) {
    description
    due
    id
  }
}
```

### Writing custom queries
<a name="writing-custom-queries"></a>

We've used the `rds` module utilities to create our SQL statements. We can also write our own custom static statements to interact with our database. First, update the schema to remove the `id` field from the `CreateTaskInput` input.

```
input CreateTaskInput {
    todoId: Int!
    description: String
}
```

Next, create a couple of tasks. A task has a foreign key relationship with `Todo`, as follows.

```
mutation TASKS {
  a: createTask(input: {todoId: 2, description: "my first sub task"}) { id }
  b: createTask(input: {todoId: 2, description: "another sub task"}) { id }
  c: createTask(input: {todoId: 2, description: "a final sub task"}) { id }
}
```

Create a new field in your `Query` type called `getTodosAndTasks`, as follows.

```
getTodosAndTasks(id: Int!): Todo
```

Add a `tasks` field to the `Todo` type as follows.

```
type Todo {
    due: AWSDate!
    id: Int!
    createdAt: String
    description: String!
    tasks: TaskConnection
}
```

Save the schema. From the schema editor in the console, on the right side, choose **Attach Resolver** for `getTodosAndTasks(id: Int!): Todo`. Choose your Amazon RDS data source. Update your resolver with the following code.

```
import { sql, createPgStatement, toJsonObject } from '@aws-appsync/utils/rds';

export function request(ctx) {
    return createPgStatement(
        sql`SELECT * from todos where id = ${ctx.args.id}`,
        sql`SELECT * from tasks where "todoId" = ${ctx.args.id}`);
}

export function response(ctx) {
    const result = toJsonObject(ctx.result);
    const todo = result[0][0];
    if (!todo) {
        return null;
    }
    todo.tasks = { items: result[1] };
    return todo;
}
```

In this code, we use the `sql` tagged template to write a SQL statement that we can safely pass a dynamic value to at run time. `createPgStatement` can take up to two SQL requests at a time. We use that to send one query for our `todo` and another for our `tasks`. You could also have done this with a `JOIN` statement or any other method. The point is that you can write your own SQL statements to implement your business logic. To use the query, in the **Queries** editor, run the following.

```
query TodoAndTasks {
  getTodosAndTasks(id: 2) {
    id
    due
    description
    tasks {
      items {
        id
        description
      }
    }
  }
}
```
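The response handler's stitching above can be sketched in plain Node with a mocked result: index `0` holds rows from the `todos` `SELECT` and index `1` holds rows from the `tasks` `SELECT` (the row data here is invented for illustration).

```javascript
// Mocked toJsonObject output for the two statements sent by getTodosAndTasks.
const result = [
  // rows from: SELECT * from todos where id = 2
  [{ id: 2, due: '2023-12-31', description: 'Hello World!' }],
  // rows from: SELECT * from tasks where "todoId" = 2
  [
    { id: 1, todoId: 2, description: 'my first sub task' },
    { id: 2, todoId: 2, description: 'another sub task' },
  ],
];

const todo = result[0][0];
todo.tasks = { items: result[1] }; // stitch the tasks onto the todo
console.log(todo.tasks.items.length); // 2
```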

## Deleting your cluster
<a name="rds-delete-cluster"></a>

**Important**  
Deleting a cluster is permanent. Review your project thoroughly before carrying out this action.

To delete your cluster:

```
$ aws rds delete-db-cluster \
    --db-cluster-identifier appsync-tutorial \
    --skip-final-snapshot
```