

# Designing GraphQL APIs with AWS AppSync
<a name="designing-a-graphql-api"></a>

AWS AppSync allows you to create GraphQL APIs using the console experience. You caught a glimpse of this in the [Launching a sample schema](https://docs.aws.amazon.com/appsync/latest/devguide/quickstart.html) section. However, that guide didn't show the entire catalog of options and configurations that you could leverage in AWS AppSync. 

When you choose to create a GraphQL API in the console, there are several options to explore. If you followed our [Launching a sample schema](https://docs.aws.amazon.com/appsync/latest/devguide/quickstart.html) guide, we showed you how to create an API from a predefined model. In the following sections, we will guide you through the rest of the options and configurations for creating GraphQL APIs in AWS AppSync.

In this section, you'll review the following concepts:

1. [Blank APIs or imports](blank-import-api.md#aws-appsync-blank-import-api): This guide walks through the entire process of creating a GraphQL API. You'll learn how to create a GraphQL API from a blank template with no model, configure data sources for your schema, and add your first resolver to a field.

1. [Real-time data](aws-appsync-real-time-data.md#aws-appsync-real-time-data-anchor): This guide will show you the options for creating an API using AWS AppSync's WebSocket engine.

1. [Merged APIs](merged-api.md#aws-appsync-merged-api): This guide will show you how to create new GraphQL APIs by associating and merging data from multiple existing GraphQL APIs.

1. [Building GraphQL APIs with RDS introspection](rds-introspection.md): This guide will show you how to integrate your Amazon RDS tables using a Data API.

# Structuring a GraphQL API (blank or imported APIs)
<a name="blank-import-api"></a>

Before you create your GraphQL API from a blank template, it would help to review the concepts surrounding GraphQL. There are three fundamental components of a GraphQL API:

1. The **schema** is the file containing the shape and definition of your data. When a request is made by a client to your GraphQL service, the data returned will follow the specification of the schema. For more information, see [GraphQL schemas](schema-components.md#aws-appsync-schema-components).

1. The **data source** is attached to your schema. When a request is made, this is where the data is retrieved and modified. For more information, see [Data sources](data-source-components.md#aws-appsync-data-source-components).

1. The **resolver** sits between the schema and the data source. When a request is made, the resolver performs the operation on the data from the source, then returns the result as a response. For more information, see [Resolvers](resolver-components.md#aws-appsync-resolver-components).

![\[GraphQL API architecture showing schema, resolvers, and data sources connected via AppSync.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/appsync-architecture-graphql-api.png)


AWS AppSync manages your APIs by allowing you to create, edit, and store the code for your schemas and resolvers. Your data sources will come from external resources such as databases, DynamoDB tables, and Lambda functions. If you're using an AWS service to store your data or are planning on doing so, AWS AppSync provides a near-seamless experience when associating data from your AWS accounts to your GraphQL APIs.

In the next section, you will learn how to create each of these components using the AWS AppSync service.

**Topics**
+ [Designing your GraphQL schema](designing-your-schema.md)
+ [Attaching a data source](attaching-a-data-source.md)
+ [Configuring AWS AppSync resolvers](resolver-config-overview.md)
+ [Using APIs with the CDK](using-your-api.md)

# Designing your GraphQL schema
<a name="designing-your-schema"></a>

The GraphQL schema is the foundation of any GraphQL server implementation. Each GraphQL API is defined by a **single** schema that contains types and fields describing how the data from requests will be populated. The data flowing through your API and the operations performed must be validated against the schema.

In general, the [GraphQL type system](https://graphql.org/learn/schema/#type-system) describes the capabilities of a GraphQL server and is used to determine if a query is valid. A server’s type system is often referred to as that server’s schema and can consist of different object types, scalar types, input types, and more. GraphQL is both declarative and strongly typed, meaning the types will be well-defined at runtime and will only return what was specified.

AWS AppSync allows you to define and configure GraphQL schemas. The following section describes how to create GraphQL schemas from scratch using AWS AppSync's services.

## Structuring a GraphQL Schema
<a name="schema-structure"></a>

**Tip**  
We recommend reviewing the [Schemas](https://docs.aws.amazon.com//appsync/latest/devguide/schema-components.html) section before continuing.

GraphQL is a powerful tool for implementing API services. According to [GraphQL's website](https://graphql.org/), GraphQL is the following:

"*GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.*"

This section covers the very first part of your GraphQL implementation, the schema. Using the quote above, a schema plays the role of "providing a complete and understandable description of the data in your API". In other words, a GraphQL schema is a textual representation of your service's data, operations, and the relations between them. The schema is considered the main entry point for your GraphQL service implementation. Unsurprisingly, it's often one of the first things you make in your project.

To quote the [Schemas](https://docs.aws.amazon.com//appsync/latest/devguide/schema-components.html) section, GraphQL schemas are written in the *Schema Definition Language* (SDL). SDL is composed of types and fields with an established structure:
+ **Types**: Types are how GraphQL defines the shape and behavior of the data. GraphQL supports a multitude of types that will be explained later in this section. Each type that's defined in your schema will contain its own scope. Inside the scope will be one or more fields that can contain a value or logic that will be used in your GraphQL service. Types fill many different roles, the most common being objects or scalars (primitive value types).
+ **Fields**: Fields exist within the scope of a type and hold the value that's requested from the GraphQL service. These are very similar to variables in other programming languages. The shape of the data you define in your fields will determine how the data is structured in a request/response operation. This allows developers to predict what will be returned without knowing how the backend of the service is implemented.

The simplest schemas will contain three different data categories:

1. **Schema roots**: Roots define the entry points of your schema. They point to the fields that will perform operations on the data, such as adding, deleting, or modifying something.

1. **Types**: These are base types that are used to represent the shape of the data. You can almost think of these as objects or abstract representations of something with defined characteristics. For example, you could make a `Person` object that represents a person in a database. Each person's characteristics will be defined inside the `Person` as fields. They can be anything like the person's name, age, job, address, etc.

1. **Special object types**: These are the types that define the behavior of the operations in your schema. Each special object type is defined once per schema. They are first placed in the schema root, then defined in the schema body. Each field in a special object type defines a single operation to be implemented by your resolver.
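For instance, the `Person` type described above might be sketched in SDL like this (the field names here are illustrative):

```
type Person {
  name: String
  age: Int
  job: String
  address: String
}
```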

To put this into perspective, imagine you're creating a service that stores authors and the books they've written. Each author has a name and an array of books they've authored. Each book has a name and a list of associated authors. We also want the ability to add or retrieve books and authors. A simple UML representation of this relationship may look like this:

![\[UML diagram showing Author and Book classes with attributes and methods, linked by association.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/GraphQL-UML-1.png)


In GraphQL, the entities `Author` and `Book` represent two different object types in your schema:

```
type Author {
}

type Book {
}
```

`Author` contains `authorName` and `Books`, while `Book` contains `bookName` and `Authors`. These can be represented as the fields within the scope of your types:

```
type Author {
  authorName: String
  Books: [Book]
}

type Book {
  bookName: String
  Authors: [Author]
}
```

As you can see, the type representations are very close to the diagram. However, the methods are where it gets a bit trickier. These will be placed in one of a few special object types as a field. Their special object categorization depends on their behavior. GraphQL contains three fundamental special object types: queries, mutations, and subscriptions. For more information, see [Special objects](https://docs.aws.amazon.com//appsync/latest/devguide/graphql-types.html#special-object-components).

Because `getAuthor` and `getBook` are both requesting data, they will be placed in a `Query` special object type:

```
type Author {
  authorName: String
  Books: [Book]
}

type Book {
  bookName: String
  Authors: [Author]
}

type Query {
  getAuthor(authorName: String): Author
  getBook(bookName: String): Book
}
```

The operations are linked to the query, which itself is linked to the schema. Adding a schema root will define the special object type (`Query` in this case) as one of your entry points. This can be done using the `schema` keyword:

```
schema {
  query: Query
}

type Author {
  authorName: String
  Books: [Book]
}

type Book {
  bookName: String
  Authors: [Author]
}

type Query {
  getAuthor(authorName: String): Author
  getBook(bookName: String): Book
}
```
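Once a resolver is attached to `getAuthor`, a client could call this entry point with a query such as the following (the author name is purely illustrative):

```
query {
  getAuthor(authorName: "Mark Twain") {
    authorName
    Books {
      bookName
    }
  }
}
```

The response mirrors the shape of the fields requested, which is how clients know in advance exactly what they'll receive.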

Looking at the final two methods, `addAuthor` and `addBook` are adding data to your database, so they will be defined in a `Mutation` special object type. However, from the [Types](https://docs.aws.amazon.com/appsync/latest/devguide/graphql-types.html#input-components) page, we also know that arguments can't directly reference object types because objects are strictly output types. In this case, we can't use `Author` or `Book` as parameters, so we need to make input types with the same fields. In this example, we added `AuthorInput` and `BookInput`, both of which accept the same fields as their respective types. Then, we create our mutations using the inputs as our parameters:

```
schema {
  query: Query
  mutation: Mutation
}

type Author {
  authorName: String
  Books: [Book]
}

input AuthorInput {
  authorName: String
  Books: [BookInput]
}

type Book {
  bookName: String
  Authors: [Author]
}

input BookInput {
  bookName: String
  Authors: [AuthorInput]
}

type Query {
  getAuthor(authorName: String): Author
  getBook(bookName: String): Book
}

type Mutation {
  addAuthor(input: AuthorInput): Author
  addBook(input: BookInput): Book
}
```
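Assuming `addAuthor` accepts a single `AuthorInput`, a client could invoke the mutation like this (the field values are purely illustrative):

```
mutation {
  addAuthor(input: { authorName: "Jane Doe", Books: [] }) {
    authorName
  }
}
```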

Let's review what we just did:

1. We created a schema with the `Book` and `Author` types to represent our entities.

1. We added the fields containing the characteristics of our entities.

1. We added a query to retrieve this information from the database.

1. We added a mutation to manipulate data in the database.

1. We added input types to replace our object parameters in the mutation to comply with GraphQL's rules.

1. We added the query and mutation to our root schema so that the GraphQL implementation understands the root type location.

As you can see, the process of creating a schema takes a lot of concepts from data modeling (especially database modeling) in general. You can think of the schema as fitting the shape of the data from the source. It also serves as the model that the resolver will implement. In the following sections, you'll learn how to make a schema using various AWS-backed tools and services.

**Note**  
The examples in the following sections are not meant to run in a real application. They are only there to showcase the commands so you can build your own applications.

## Creating schemas
<a name="creating-schema"></a>

Your schema will be in a file called `schema.graphql`. AWS AppSync allows users to create new schemas for their GraphQL APIs using various methods. In this example, we'll be creating a blank API along with a blank schema.

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **Dashboard**, choose **Create API**.

   1. Under **API options**, choose **GraphQL APIs**, **Design from scratch**, then **Next**.

      1. For **API name**, change the prepopulated name to what your application needs.

      1. For **contact details**, you can enter a point of contact to identify a manager for the API. This is an optional field.

      1. Under **Private API configuration**, you can enable private API features. A private API can only be accessed from a configured VPC endpoint (VPCE). For more information, see [Private APIs](https://docs.aws.amazon.com/appsync/latest/devguide/using-private-apis.html).

         We don't recommend enabling this feature for this example. Choose **Next** after reviewing your inputs.

   1. Under **Create a GraphQL type**, you can choose to create a DynamoDB table to use as a data source or skip this and do it later.

      For this example, choose **Create GraphQL resources later**. We will be creating a resource in a separate section.

   1. Review your inputs, then choose **Create API**.

1. You will be in the dashboard of your specific API. You can tell because the API's name will be at the top of the dashboard. If this isn't the case, you can select **APIs** in the **Sidebar**, then choose your API in the **APIs dashboard**.

   1. In the **Sidebar** underneath your API's name, choose **Schema**.

1. In the **Schema editor**, you can configure your `schema.graphql` file. It may be empty or filled with types generated from a model. On the right, you have the **Resolvers** section for attaching resolvers to your schema fields. We won't be looking at resolvers in this section.

------
#### [ CLI ]

**Note**  
When using the CLI, make sure you have the correct permissions to access and create resources in the service. You may want to set [least-privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) policies for non-admin users who need to access the service. For more information about AWS AppSync policies, see [Identity and access management for AWS AppSync](https://docs.aws.amazon.com//appsync/latest/devguide/security-iam.html).  
Additionally, we recommend reading the console version first if you haven't done so already.

1. If you haven't already done so, [install](https://docs.aws.amazon.com//cli/latest/userguide/cli-chap-getting-started.html) the AWS CLI, then add your [configuration](https://docs.aws.amazon.com//cli/latest/userguide/cli-configure-quickstart.html).

1. Create a GraphQL API object by running the [create-graphql-api](https://docs.aws.amazon.com/cli/latest/reference/appsync/create-graphql-api.html) command.

   You'll need to type in two parameters for this particular command:

   1. The `name` of your API.

   1. The `authentication-type`, or the type of credentials used to access the API (IAM, OIDC, etc.).
**Note**  
Other parameters such as `Region` must be configured but will usually default to your CLI configuration values.

   An example command may look like this:

   ```
   aws appsync create-graphql-api --name testAPI123 --authentication-type API_KEY
   ```

   An output will be returned in the CLI. Here's an example:

   ```
   {
       "graphqlApi": {
           "xrayEnabled": false,
           "name": "testAPI123",
           "authenticationType": "API_KEY",
           "tags": {},
           "apiId": "abcdefghijklmnopqrstuvwxyz",
           "uris": {
               "GRAPHQL": "https://zyxwvutsrqponmlkjihgfedcba.appsync-api.us-west-2.amazonaws.com/graphql",
               "REALTIME": "wss://zyxwvutsrqponmlkjihgfedcba.appsync-realtime-api.us-west-2.amazonaws.com/graphql"
           },
           "arn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz"
       }
   }
   ```

1. 
**Note**  
This is an optional command that takes an existing schema and uploads it to the AWS AppSync service using a base-64 blob. We will not be using this command for the sake of this example.

   Run the [start-schema-creation](https://docs.aws.amazon.com/cli/latest/reference/appsync/start-schema-creation.html) command.

   You'll need to type in two parameters for this particular command:

   1. Your `api-id` from the previous step.

   1. The schema `definition`, which is a base-64 encoded binary blob.

   An example command may look like this:

   ```
    aws appsync start-schema-creation --api-id abcdefghijklmnopqrstuvwxyz --definition "aa1111aa-123b-2bb2-c321-12hgg76cc33v"
   ```

   An output will be returned:

   ```
   {
       "status": "PROCESSING"
   }
   ```

   This command will not return the final output after processing. You must use a separate command, [get-schema-creation-status](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/get-schema-creation-status.html), to see the result. Note that these two commands are asynchronous, so you can check the output status even while the schema is still being created.

------
#### [ CDK ]

**Tip**  
Before you use the CDK, we recommend reviewing the CDK's [official documentation](https://docs.aws.amazon.com/cdk/v2/guide/home.html) along with AWS AppSync's [CDK reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync-readme.html).  
The steps listed below will only show a general example of the snippet used to add a particular resource. This is **not** meant to be a working solution in your production code. We also assume you already have a working app.

1. The starting point for the CDK is a bit different. Your `schema.graphql` file should already be created; if it isn't, create a new file with the `.graphql` file extension. This can be an empty file.

1. In general, you may have to add the import directive to the service you're using. For example, it may follow the forms:

   ```
   import * as x from 'x'; // import wildcard as the 'x' keyword from 'x-service'
   import {a, b, ...} from 'c'; // import {specific constructs} from 'c-service'
   ```

   To add a GraphQL API, your stack file needs to import the AWS AppSync service:

   ```
   import * as appsync from 'aws-cdk-lib/aws-appsync';
   ```
**Note**  
This means we're importing the entire service under the `appsync` keyword. To use this in your app, your AWS AppSync constructs will use the format `appsync.construct_name`. For instance, if we wanted to make a GraphQL API, we would say `new appsync.GraphqlApi(args_go_here)`. The following step depicts this.

1. The most basic GraphQL API will include a `name` for the API and the `schema` path.

   ```
   const add_api = new appsync.GraphqlApi(this, 'API_ID', {
     name: 'name_of_API_in_console',
     schema: appsync.SchemaFile.fromAsset(path.join(__dirname, 'schema_name.graphql')),
   });
   ```
**Note**  
Let's review what this snippet does. Inside the scope of `add_api`, we're creating a new GraphQL API by calling `appsync.GraphqlApi(scope: Construct, id: string, props: GraphqlApiProps)`. The scope is `this`, which refers to the current object. The id is *API\_ID*, which will be your GraphQL API's resource name in CloudFormation when it's created. The `GraphqlApiProps` contains the `name` of your GraphQL API and the `schema`. The `schema` will generate a schema (`SchemaFile.fromAsset`) by searching the absolute path (`__dirname`) for the `.graphql` file (*schema\_name.graphql*). In a real scenario, your schema file will probably be inside the CDK app.  
To use changes made to your GraphQL API, you'll have to redeploy the app.

------

## Adding types to schemas
<a name="adding-schema-types"></a>

Now that you've added your schema, you can start adding both your input and output types. Note that the types here shouldn't be used in real code; they're just examples to help you understand the process.

First, we'll create an object type. In real code, you don't have to start with these types. You can make any type you want at any time so long as you follow GraphQL's rules and syntax.

**Note**  
These next few sections will be using the **schema editor**, so keep this open.

------
#### [ Console ]
+ You can create an object type using the `type` keyword along with the type's name:

  ```
  type Type_Name_Goes_Here {}
  ```

  Inside the type's scope, you can add fields that represent the object's characteristics:

  ```
  type Type_Name_Goes_Here {
    # Add fields here
  }
  ```

  Here's an example:

  ```
  type Obj_Type_1 {
    id: ID!
    title: String
    date: AWSDateTime
  }
  ```
**Note**  
In this step, we added a generic object type with a required `id` field stored as `ID`, a `title` field stored as a `String`, and a `date` field stored as an `AWSDateTime`. To see a list of types and fields and what they do, see [Schemas](https://docs.aws.amazon.com//appsync/latest/devguide/schema-components.html). To see a list of scalars and what they do, see the [Type reference](https://docs.aws.amazon.com/appsync/latest/devguide/type-reference.html).

------
#### [ CLI ]

**Note**  
We recommend reading the console version first if you haven't done so already.
+ You can create an object type by running the [create-type](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-type.html) command.

  You'll need to enter a few parameters for this particular command:

  1. The `api-id` of your API.

  1. The `definition`, or the content of your type. In the console example, this was:

     ```
     type Obj_Type_1 {
       id: ID!
       title: String
       date: AWSDateTime
     }
     ```

  1. The `format` of your input. In this example, we're using `SDL`.

  An example command may look like this:

  ```
  aws appsync create-type --api-id abcdefghijklmnopqrstuvwxyz --definition "type Obj_Type_1{id: ID! title: String date: AWSDateTime}" --format SDL
  ```

  An output will be returned in the CLI. Here's an example:

  ```
  {
      "type": {
          "definition": "type Obj_Type_1{id: ID! title: String date: AWSDateTime}",
          "name": "Obj_Type_1",
          "arn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/types/Obj_Type_1",
          "format": "SDL"
      }
  }
  ```
**Note**  
In this step, we added a generic object type with a required `id` field stored as `ID`, a `title` field stored as a `String`, and a `date` field stored as an `AWSDateTime`. To see a list of types and fields and what they do, see [Schemas](https://docs.aws.amazon.com//appsync/latest/devguide/schema-components.html). To see a list of scalars and what they do, see [Type reference](https://docs.aws.amazon.com/appsync/latest/devguide/type-reference.html).  
On a further note, you may have realized that entering the definition directly works for smaller types but is infeasible for adding larger or multiple types. You can opt to add everything in a `.graphql` file and then [pass it as the input](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-file.html).

------
#### [ CDK ]

**Tip**  
Before you use the CDK, we recommend reviewing the CDK's [official documentation](https://docs.aws.amazon.com/cdk/v2/guide/home.html) along with AWS AppSync's [CDK reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync-readme.html).  
The steps listed below will only show a general example of the snippet used to add a particular resource. This is **not** meant to be a working solution in your production code. We also assume you already have a working app.

To add a type, you need to add it to your `.graphql` file. For instance, the console example was:

```
type Obj_Type_1 {
  id: ID!
  title: String
  date: AWSDateTime
}
```

You can add your types directly to the schema file like any other text.

**Note**  
To use changes made to your GraphQL API, you'll have to redeploy the app.

------

The [object type](https://graphql.org/learn/schema/#object-types-and-fields) has fields that are [scalar types](https://graphql.org/learn/schema/#scalar-types) such as strings and integers. AWS AppSync also allows you to use enhanced scalar types like `AWSDateTime` in addition to the base GraphQL scalars. Also, any field that ends in an exclamation point is required. 

The `ID` scalar type in particular is a unique identifier that can be either a `String` or an `Int`. You can control these in your resolver code for automatic assignment.
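As a sketch of how the non-null (`!`) and list (`[]`) modifiers combine, consider this hypothetical type:

```
type Post {
  id: ID!           # non-nullable; a value is always returned
  title: String     # nullable
  tags: [String!]   # the list itself may be null, but its items may not
}
```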

There are similarities between special object types like `Query` and "regular" object types like the example above in that they both use the `type` keyword and are considered objects. However, for the special object types (`Query`, `Mutation`, and `Subscription`), their behavior is vastly different because they are exposed as the entry points for your API. They're also more about shaping operations rather than data. For more information, see [The query and mutation types](https://graphql.org/learn/schema/#the-query-and-mutation-types).

On the topic of special object types, the next step could be to add one or more of them to perform operations on the shaped data. In a real scenario, every GraphQL schema must at least have a root query type for requesting data. You can think of the query as one of the entry points (or endpoints) for your GraphQL server. Let's add a query as an example.

------
#### [ Console ]
+ To create a query, you can simply add it to the schema file like any other type. A query would require a `Query` type and an entry in the root like this:

  ```
  schema {
    query: Name_of_Query
  }
  
  type Name_of_Query {
    # Add field operation here
  }
  ```

  Note that *Name\_of\_Query* in a production environment will simply be called `Query` in most cases, and we recommend keeping that name. Inside the query type, you can add fields. Each field will perform an operation in the request. As a result, most, if not all, of these fields will be attached to a resolver. However, we're not concerned with that in this section. Regarding the format of the field operation, it might look like this:

  ```
  Name_of_Query(params): Return_Type # version with params
  Name_of_Query: Return_Type # version without params
  ```

  Here's an example:

  ```
  schema {
    query: Query
  }
  
  type Query {
    getObj: [Obj_Type_1]
  }
  
  type Obj_Type_1 {
    id: ID!
    title: String
    date: AWSDateTime
  }
  ```
**Note**  
In this step, we added a `Query` type and defined it in our `schema` root. Our `Query` type defined a `getObj` field that returns a list of `Obj_Type_1` objects. Note that `Obj_Type_1` is the object of the previous step. In production code, your field operations will normally be working with data shaped by objects like `Obj_Type_1`. In addition, fields like `getObj` will normally have a resolver to perform the business logic. That will be covered in a different section.  
As an additional note, AWS AppSync automatically adds a schema root during exports, so technically you don't have to add it directly to the schema. Our service will automatically process duplicate schemas. We're adding it here as a best practice.
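With a resolver attached to `getObj`, a client could then request this data with a query like the following:

```
query {
  getObj {
    id
    title
    date
  }
}
```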

------
#### [ CLI ]

**Note**  
We recommend reading the console version first if you haven't done so already.

1. Create a `schema` root with a `query` definition by running the [create-type](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-type.html) command.

   You'll need to enter a few parameters for this particular command:

   1. The `api-id` of your API.

   1. The `definition`, or the content of your type. In the console example, this was:

      ```
      schema {
        query: Query
      }
      ```

   1. The `format` of your input. In this example, we're using `SDL`.

   An example command may look like this:

   ```
   aws appsync create-type --api-id abcdefghijklmnopqrstuvwxyz --definition "schema {query: Query}" --format SDL
   ```

   An output will be returned in the CLI. Here's an example:

   ```
   {
       "type": {
           "definition": "schema {query: Query}",
           "name": "schema",
           "arn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/types/schema",
           "format": "SDL"
       }
   }
   ```
**Note**  
Note that if you didn't input something correctly in the `create-type` command, you can update your schema root (or any type in the schema) by running the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/update-type.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/update-type.html) command. In this example, we'll be temporarily changing the schema root to contain a `subscription` definition.  
You'll need to enter a few parameters for this particular command:  
The `api-id` of your API.
The `type-name` of your type. In the console example, this was `schema`.
The `definition`, or the content of your type. In the console example, this was:  

      ```
      schema {
        query: Query
      }
      ```
The schema after adding a `subscription` will look like this:  

      ```
      schema {
        query: Query
        subscription: Subscription
      }
      ```
The `format` of your input. In this example, we're using `SDL`.
An example command may look like this:  

   ```
   aws appsync update-type --api-id abcdefghijklmnopqrstuvwxyz --type-name schema --definition "schema {query: Query subscription: Subscription}" --format SDL
   ```
An output will be returned in the CLI. Here's an example:  

   ```
   {
       "type": {
           "definition": "schema {query: Query subscription: Subscription}",
           "arn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/types/schema",
           "format": "SDL"
       }
   }
   ```
Passing a preformatted definition file will still work in this example.

1. Create a `Query` type by running the [create-type](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-type.html) command.

   You'll need to enter a few parameters for this particular command:

   1. The `api-id` of your API.

   1. The `definition`, or the content of your type. In the console example, this was:

      ```
      type Query {
        getObj: [Obj_Type_1]
      }
      ```

   1. The `format` of your input. In this example, we're using `SDL`.

   An example command may look like this:

   ```
   aws appsync create-type --api-id abcdefghijklmnopqrstuvwxyz --definition "type Query {getObj: [Obj_Type_1]}" --format SDL
   ```

   An output will be returned in the CLI. Here's an example:

   ```
   {
       "type": {
           "definition": "Query {getObj: [Obj_Type_1]}",
           "name": "Query",
           "arn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/types/Query",
           "format": "SDL"
       }
   }
   ```
**Note**  
In this step, we added a `Query` type and referenced it in your `schema` root. Our `Query` type defines a `getObj` field that returns a list of `Obj_Type_1` objects.  
In the `schema` root entry `query: Query`, the `query:` part declares that your schema supports queries, while the `Query` part names the special object type that implements them. 

------
#### [ CDK ]

**Tip**  
Before you use the CDK, we recommend reviewing the CDK's [official documentation](https://docs.aws.amazon.com/cdk/v2/guide/home.html) along with AWS AppSync's [CDK reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync-readme.html).  
The steps listed below will only show a general example of the snippet used to add a particular resource. This is **not** meant to be a working solution in your production code. We also assume you already have a working app.

You'll need to add your query and the schema root to the `.graphql` file. Our example is shown below, but you'll want to replace it with your actual schema code:

```
schema {
  query: Query
}

type Query {
  getObj: [Obj_Type_1]
}

type Obj_Type_1 {
  id: ID!
  title: String
  date: AWSDateTime
}
```

You can add your types directly to the schema like any other file.

**Note**  
Updating the schema root is optional. We added it to this example as a best practice.  
To use changes made to your GraphQL API, you'll have to redeploy the app.

------

You've now seen an example of creating both objects and special objects (queries). You've also seen how these can be interconnected to describe data and operations. You can have schemas with only the data description and one or more queries. However, we'd like to add another operation to add data to the data source. We'll add another special object type called `Mutation` that modifies data.

------
#### [ Console ]
+ The mutation type will be called `Mutation`. Like `Query`, the field operations inside `Mutation` will each describe an operation and will be attached to a resolver. Also, note that we need to define it in the `schema` root because it's a special object type. Here's an example of a mutation:

  ```
  schema {
    mutation: Name_of_Mutation
  }
  
  type Name_of_Mutation {
    # Add field operation here
  }
  ```

  A typical mutation will be listed in the root like a query. The mutation is defined using the `type` keyword along with the name. *Name_of_Mutation* will usually be called `Mutation`, so we recommend keeping it that way. Each field inside it performs an operation. The format of a field operation might look like this:

  ```
  Name_of_field_operation(params): Return_Type # version with params
  Name_of_field_operation: Return_Type # version without params
  ```

  Here's an example:

  ```
  schema {
    query: Query
    mutation: Mutation
  }
  
  type Obj_Type_1 {
    id: ID!
    title: String
    date: AWSDateTime
  }
  
  type Query {
    getObj: [Obj_Type_1]
  }
  
  type Mutation {
    addObj(id: ID!, title: String, date: AWSDateTime): Obj_Type_1
  }
  ```
**Note**  
In this step, we added a `Mutation` type with an `addObj` field. Let's summarize what this field does:  

  ```
  addObj(id: ID!, title: String, date: AWSDateTime): Obj_Type_1
  ```
`addObj` operates on the `Obj_Type_1` object, as the `: Obj_Type_1` return type shows. Inside `addObj`, the `id`, `title`, and `date` fields of `Obj_Type_1` are accepted as parameters. As you may see, it looks a lot like a method declaration. However, we haven't described the behavior of our method yet. As stated earlier, the schema only defines what the data and operations will be, not how they operate. Implementing the actual business logic will come later when we create our first resolvers.  
Once you're done with your schema, there's an option to export it as a `schema.graphql` file. In the **Schema editor**, you can choose **Export schema** to download the file in a supported format.  
As an additional note, AWS AppSync automatically adds a schema root during exports, so technically you don't have to add it directly to the schema; duplicate schema roots are handled automatically. We're adding it here as a best practice.
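To see how a client would exercise this operation, here's a minimal Python sketch that builds the JSON request body a GraphQL client would POST to the API endpoint. The operation and variable names come from the example schema above; the transport, endpoint URL, and authorization headers are outside the scope of this sketch:

```python
import json

def build_add_obj_request(id, title, date):
    """Build the JSON body a GraphQL client would POST to the API endpoint."""
    mutation = """
    mutation AddObj($id: ID!, $title: String, $date: AWSDateTime) {
      addObj(id: $id, title: $title, date: $date) {
        id
        title
        date
      }
    }
    """
    return json.dumps({
        "query": mutation,
        "variables": {"id": id, "title": title, "date": date},
    })
```

The server resolves the `addObj` field and returns the selected subfields (`id`, `title`, `date`) in the response.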

------
#### [ CLI ]

**Note**  
We recommend reading the console version first if you haven't done so already.

1. Update your root schema by running the [update-type](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/update-type.html) command.

   You'll need to enter a few parameters for this particular command:

   1. The `api-id` of your API.

   1. The `type-name` of your type. In the console example, this was `schema`.

   1. The `definition`, or the content of your type. In the console example, this was:

      ```
      schema {
        query: Query
        mutation: Mutation
      }
      ```

   1. The `format` of your input. In this example, we're using `SDL`.

   An example command may look like this:

   ```
   aws appsync update-type --api-id abcdefghijklmnopqrstuvwxyz --type-name schema --definition "schema {query: Query mutation: Mutation}" --format SDL
   ```

   An output will be returned in the CLI. Here's an example:

   ```
   {
       "type": {
           "definition": "schema {query: Query mutation: Mutation}",
           "arn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/types/schema",
           "format": "SDL"
       }
   }
   ```

1. Create a `Mutation` type by running the [create-type](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-type.html) command.

   You'll need to enter a few parameters for this particular command:

   1. The `api-id` of your API.

   1. The `definition`, or the content of your type. In the console example, this was:

      ```
      type Mutation {
        addObj(id: ID!, title: String, date: AWSDateTime): Obj_Type_1
      }
      ```

   1. The `format` of your input. In this example, we're using `SDL`.

   An example command may look like this:

   ```
   aws appsync create-type --api-id abcdefghijklmnopqrstuvwxyz --definition "type Mutation {addObj(id: ID! title: String date: AWSDateTime): Obj_Type_1}" --format SDL
   ```

   An output will be returned in the CLI. Here's an example:

   ```
   {
       "type": {
           "definition": "type Mutation {addObj(id: ID! title: String date: AWSDateTime): Obj_Type_1}",
           "name": "Mutation",
           "arn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/types/Mutation",
           "format": "SDL"
       }
   }
   ```

------
#### [ CDK ]

**Tip**  
Before you use the CDK, we recommend reviewing the CDK's [official documentation](https://docs.aws.amazon.com/cdk/v2/guide/home.html) along with AWS AppSync's [CDK reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync-readme.html).  
The steps listed below will only show a general example of the snippet used to add a particular resource. This is **not** meant to be a working solution in your production code. We also assume you already have a working app.

You'll need to add your query and the schema root to the `.graphql` file. Our example is shown below, but you'll want to replace it with your actual schema code:

```
schema {
  query: Query
  mutation: Mutation
}

type Obj_Type_1 {
  id: ID!
  title: String
  date: AWSDateTime
}

type Query {
  getObj: [Obj_Type_1]
}

type Mutation {
  addObj(id: ID!, title: String, date: AWSDateTime): Obj_Type_1
}
```

**Note**  
Updating the schema root is optional. We added it to this example as a best practice.  
To use changes made to your GraphQL API, you'll have to redeploy the app.

------

## Optional considerations - Using enums as statuses
<a name="optional-consideration-enums"></a>

At this point, you know how to make a basic schema. However, there are many things you could add to increase the schema's functionality. One common pattern in applications is the use of enums as statuses. An enum forces a specific value to be chosen from a fixed set of values when called. This is good for things that you know will not change drastically over long periods of time. Hypothetically speaking, we could add an enum whose value reports the status of an operation in the response. 

As an example, let's assume we're making a social media app that's storing a user's post data in the backend. Our schema contains a `Post` type that represents an individual post's data:

```
type Post {
  id: ID!
  title: String
  date: AWSDateTime
  poststatus: PostStatus
}
```

Our `Post` will contain a unique `id`, post `title`, `date` of posting, and an enum called `PostStatus` that represents the post's state as it's processed by the app. For our operations, we'll have a query that returns all post data:

```
type Query {
  getPosts: [Post]
}
```

We'll also have a mutation that adds posts to the data source:

```
type Mutation {
  addPost(id: ID!, title: String, date: AWSDateTime, poststatus: PostStatus): Post
}
```

Looking at our schema, the `PostStatus` enum could have several statuses. We might want the three basic states called `success` (post successfully processed), `pending` (post being processed), and `error` (post unable to be processed). To add the enum, we could do this:

```
enum PostStatus {
  success
  pending
  error
}
```

The full schema might look like this:

```
schema {
  query: Query
  mutation: Mutation
}

type Post {
  id: ID!
  title: String
  date: AWSDateTime
  poststatus: PostStatus
}

type Mutation {
  addPost(id: ID!, title: String, date: AWSDateTime, poststatus: PostStatus): Post
}

type Query {
  getPosts: [Post]
}

enum PostStatus {  
  success
  pending
  error
}
```

If a user adds a `Post` in the application, the `addPost` operation will be called to process that data. As the resolver attached to `addPost` processes the data, it will continually update the `poststatus` with the status of the operation. When queried, the `Post` will contain the final status of the data. Keep in mind, we're only describing how we want the data to work in the schema. We're assuming a lot about the implementation of our resolver(s), which will implement the actual business logic for handling the data to fulfill the request.
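The status lifecycle described above can be sketched in a few lines of Python. This is an illustration of the intent only, not an actual AppSync resolver; the in-memory `store` and the `add_post` helper are made up for the example:

```python
from enum import Enum

class PostStatus(Enum):
    success = "success"
    pending = "pending"
    error = "error"

def add_post(store, id, title, date):
    """Record a post as pending, attempt the write, then finalize its status."""
    post = {"id": id, "title": title, "date": date, "poststatus": PostStatus.pending}
    try:
        store[id] = post                       # the "write to the data source" step
        post["poststatus"] = PostStatus.success
    except Exception:
        post["poststatus"] = PostStatus.error
    return post
```

A query for the post afterward would see only the final status, exactly as described above.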

## Optional considerations - Subscriptions
<a name="optional-consideration-subscriptions"></a>

Subscriptions in AWS AppSync are invoked as a response to a mutation. You configure this with a `Subscription` type and `@aws_subscribe()` directive in the schema to denote which mutations invoke one or more subscriptions. For more information about configuring subscriptions, see [Real-time data](https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-data.html).

## Optional considerations - Relations and pagination
<a name="optional-consideration-relations-and-pagination"></a>

Suppose you had a million `Posts` stored in a DynamoDB table, and you wanted to return some of that data. However, the example query given above only returns all posts. You wouldn’t want to fetch all of these every time you made a request. Instead, you would want to [paginate](https://graphql.org/learn/pagination/) through them. Make the following changes to your schema:
+ In the `getPosts` field, add two input arguments: `nextToken` (iterator) and `limit` (iteration limit).
+ Add a new `PostIterator` type containing `posts` (the list of `Post` objects retrieved) and `nextToken` (iterator) fields.
+ Change `getPosts` so that it returns `PostIterator` and not a list of `Post` objects.

```
schema {
  query: Query
  mutation: Mutation
}

type Post {
  id: ID!
  title: String
  date: AWSDateTime
  poststatus: PostStatus
}

type Mutation {
  addPost(id: ID!, title: String, date: AWSDateTime, poststatus: PostStatus): Post
}

type Query {
  getPosts(limit: Int, nextToken: String): PostIterator
}

enum PostStatus {
  success
  pending
  error
}

type PostIterator {
  posts: [Post]
  nextToken: String
}
```

The `PostIterator` type allows you to return a portion of the list of `Post` objects along with a `nextToken` for getting the next portion. Inside `PostIterator`, there is a list of `Post` items (`[Post]`) that is returned with a pagination token (`nextToken`). In AWS AppSync, this would be connected to Amazon DynamoDB through a resolver, and the token would be automatically generated as an encrypted value. The resolver converts the value of the `limit` argument to the `maxResults` parameter and the `nextToken` argument to the `exclusiveStartKey` parameter. For examples and the built-in template samples in the AWS AppSync console, see [Resolver reference (JavaScript)](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-js-version.html).
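The `limit`/`nextToken` contract can be illustrated with an in-memory sketch. Real AppSync pagination tokens are opaque, encrypted values; here the token is just a stringified offset for demonstration:

```python
def get_posts(posts, limit=None, next_token=None):
    """Return one page of posts plus a token for the next page (None when done)."""
    start = int(next_token) if next_token else 0
    limit = limit if limit else len(posts)
    page = posts[start:start + limit]
    end = start + limit
    token = str(end) if end < len(posts) else None
    return {"posts": page, "nextToken": token}
```

A client keeps calling `get_posts` with the returned `nextToken` until it comes back as `None`, at which point every page has been consumed.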

# Attaching a data source in AWS AppSync
<a name="attaching-a-data-source"></a>

Data sources are resources in your AWS account that GraphQL APIs can interact with. AWS AppSync supports a multitude of data sources like AWS Lambda, Amazon DynamoDB, relational databases (Amazon Aurora Serverless), Amazon OpenSearch Service, and HTTP endpoints. An AWS AppSync API can be configured to interact with multiple data sources, enabling you to aggregate data in a single location. AWS AppSync can use existing AWS resources from your account or provision DynamoDB tables on your behalf from a schema definition.

The following section will show you how to attach a data source to your GraphQL API.

## Types of data sources
<a name="data-source-types"></a>

Now that you have created a schema in the AWS AppSync console, you can attach a data source to it. When you initially create an API, there's an option to provision an Amazon DynamoDB table during the creation of the predefined schema. However, we won't be covering that option in this section. You can see an example of this in the [Launching a schema](https://docs.aws.amazon.com//appsync/latest/devguide/schema-launch-start.html) section.

Instead, we'll be looking at all of the data sources AWS AppSync supports. There are many factors that go into picking the right solution for your application. The sections below will provide some additional context for each data source. For general information about data sources, see [Data sources](https://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html).

### Amazon DynamoDB
<a name="data-source-type-ddb"></a>

Amazon DynamoDB is one of AWS' main storage solutions for scalable applications. The core component of DynamoDB is the **table**, which is simply a collection of data. You will typically create tables based on entities like `Book` or `Author`. Table entry information is stored as **items**, which are groups of fields that are unique to each entry. A full item represents a row/record in the database. For example, an item for a `Book` entry might include `title` and `author` along with their values. The individual fields like the `title` and `author` are called **attributes**, which are akin to column values in relational databases. 
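To make the table/item/attribute vocabulary concrete, here's a plain-Python sketch. This models the concepts only, not the DynamoDB API; the table contents are invented for the `Book` example:

```python
# A table maps each item's key to that item's attributes.
book_table = {
    "book-1": {"title": "My First Book", "author": "Mary Major"},
    "book-2": {"title": "A Sequel", "author": "Mary Major"},
}

def get_item(table, key):
    """Analogous to a key lookup: returns the item's attributes, or None."""
    return table.get(key)
```

Each value in `book_table` is one item (a row/record), and each key inside it (`title`, `author`) is an attribute (a column value).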

As you can guess, tables will be used to store data from your application. AWS AppSync allows you to hook up your DynamoDB tables to your GraphQL API to manipulate data. Take this [use case](https://aws.amazon.com/blogs/mobile/new-real-time-multi-group-app-with-aws-amplify-graphql-build-a-twitter-community-clone/) from the *Front-end web and mobile blog*. This application lets users sign up for a social media app. Users can join groups and upload posts that are broadcasted to other users subscribed to the group. Their application stores user, post, and user group information in DynamoDB. The GraphQL API (managed by AWS AppSync) interfaces with the DynamoDB table. When a user makes a change in the system that will be reflected on the front-end, the GraphQL API retrieves these changes and broadcasts them to other users in real time.

### AWS Lambda
<a name="data-source-type-lam"></a>

Lambda is an event-driven service that automatically provisions the resources needed to run code in response to an event. Lambda uses **functions**, which bundle the code, dependencies, and configurations for executing a resource. Functions execute automatically when they detect a **trigger**, an activity that invokes your function. A trigger could be anything: an application making an API call, an AWS service in your account spinning up a resource, and so on. When triggered, functions will process **events**, which are JSON documents containing the data to modify.
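As a sketch of the function/event relationship, here's a minimal Python handler in the Lambda style. The event shape (`field`, `arguments`) is made up for illustration; it is not the actual payload AWS AppSync sends to a Lambda resolver:

```python
def handler(event, context=None):
    """Dispatch on the field named in the event and return a result."""
    if event.get("field") == "getPost":
        post_id = event["arguments"]["id"]
        # A real function would fetch the post from a database here.
        return {"id": post_id, "title": "placeholder title"}
    raise ValueError("unhandled field: %s" % event.get("field"))
```

The handler receives the event as a parsed JSON document, acts on it, and returns a result (or raises an error) back to the caller.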

Lambda is good for running code without having to provision the resources to run it. Take this [use case](https://aws.amazon.com/blogs/mobile/building-a-graphql-api-with-java-and-aws-lambda/) from the *Front-end web and mobile blog*. This use case is a bit similar to the one showcased in the DynamoDB section. In this application, the GraphQL API is responsible for defining the operations for things like adding posts (mutations) and fetching that data (queries). To implement the functionality of their operations (e.g., `getPost ( id: String ! ) : Post`, `getPostsByAuthor ( author: String ! ) : [ Post ]`), they use Lambda functions to process inbound requests. Under *Option 2: AWS AppSync with Lambda resolver*, they use the AWS AppSync service to maintain their schema and link a Lambda data source to one of the operations. When the operation is called, Lambda interfaces with the Amazon RDS proxy to perform the business logic on the database.

### Amazon RDS
<a name="data-source-type-RDS"></a>

Amazon RDS lets you quickly build and configure relational databases. In Amazon RDS, you'll create a generic **database instance** that will serve as the isolated database environment in the cloud. In this instance, you'll use a **DB engine**, which is the actual RDBMS software (PostgreSQL, MySQL, etc.). The service offloads much of the backend work by providing scalability using AWS' infrastructure, security services such as patching and encryption, and lowered administrative costs for deployments.

Take the same [use case](https://aws.amazon.com/blogs/mobile/building-a-graphql-api-with-java-and-aws-lambda/) from the Lambda section. Under *Option 3: AWS AppSync with Amazon RDS resolver*, another option presented is linking the GraphQL API in AWS AppSync to Amazon RDS directly. Using a [data API](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html), they associate the database with the GraphQL API. A resolver is attached to a field (usually a query, mutation, or subscription) and implements the SQL statements needed to access the database. When a request calling the field is made by the client, the resolver executes the statements and returns the response.

### Amazon EventBridge
<a name="data-source-type-eventbridge"></a>

In EventBridge, you'll create **event buses**, which are pipelines that receive events from services or applications you attach (the **event source**) and process them based on a set of rules. An **event** is some state change in an execution environment, while a **rule** is a set of filters for events. A rule follows an **event pattern**, or metadata of an event's state change (id, Region, account number, ARN(s), etc.). When an event matches the event pattern, EventBridge will send the event across the pipeline to the destination service (**target**) and trigger the action specified in the rule.
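The rule-matching idea can be sketched as follows. Real EventBridge patterns are nested JSON documents with richer matching operators, so this flattened, top-level-only version is purely illustrative, and the rule contents are invented:

```python
def matches(pattern, event):
    """A rule matches when every patterned key holds one of the allowed values."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

# A rule that only forwards shipped-order events from one source.
order_rule = {"source": ["orders.service"], "detail-type": ["OrderShipped"]}
```

Events that match the pattern are sent on to the rule's target; everything else on the bus is ignored by that rule.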

EventBridge is good for routing state-changing operations to some other service. Take this [use case](https://aws.amazon.com/blogs/mobile/appsync-eventbridge/) from the *Front-end web and mobile blog*. The example depicts an e-commerce solution that has several teams maintaining different services. One of these services provides order updates to the customer at each step of the delivery (order placed, in progress, shipped, delivered, etc.) on the front-end. However, the front-end team managing this service doesn't have direct access to the ordering system data as that's maintained by a separate backend team. The backend team's ordering system is also described as a black box, so it's hard to glean information about the way they're structuring their data. However, the backend team did set up a system that published order data through an event bus managed by EventBridge. To access the data coming from the event bus and route it to the front-end, the front-end team created a new target pointing to their GraphQL API sitting in AWS AppSync. They also created a rule to only send data relevant to the order update. When an update is made, the data from the event bus is sent to the GraphQL API. The schema in the API processes the data, then passes it to the front-end.

### None data sources
<a name="data-source-type-none"></a>

If you aren't planning on using a data source, you can set it to `none`. A `none` data source, while still explicitly categorized as a data source, isn't a storage medium. Typically, a resolver will invoke one or more data sources at some point to process the request. However, there are situations where you may not need to manipulate a data source. Setting the data source to `none` will run the request, skip the data invocation step, then run the response.

Take the same [use case](https://aws.amazon.com/blogs/mobile/appsync-eventbridge/) from the EventBridge section. In the schema, the mutation processes the status update, then sends it out to subscribers. Recalling how resolvers work, there's usually at least one data source invocation. However, the data in this scenario was already sent automatically by the event bus. This means there's no need for the mutation to perform a data source invocation; the order status can simply be handled locally. The mutation is set to `none`, which acts as a pass-through value with no data source invocation. The schema is then populated with the data, which is sent out to subscribers.
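The pass-through behavior of a `none` data source can be sketched like this. The names are illustrative; an actual AppSync resolver expresses the same idea in its request and response handlers:

```python
def none_resolver(payload):
    """Request: wrap the payload; no data source call happens in between.
    Response: return the payload untouched."""
    request = {"payload": payload}   # request handler output
    return request["payload"]        # response handler output; pass-through
```

Because nothing is invoked between the request and response steps, whatever the mutation receives is exactly what gets sent out to subscribers.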

### OpenSearch
<a name="data-source-type-opensearch"></a>

Amazon OpenSearch Service is a suite of tools to implement full-text searching, data visualization, and logging. You can use this service to query the structured data you've uploaded.

In this service, you'll create instances of OpenSearch called **nodes**. In a node, you'll add at least one **index**. Conceptually, indices are a bit like tables in relational databases (however, OpenSearch isn't ACID compliant, so it shouldn't be used that way). You'll populate your index with data that you upload to the OpenSearch Service. When your data is uploaded, it will be indexed in one or more shards that exist in the index. A **shard** is like a partition of your index that contains some of your data and can be queried separately from other shards. Once uploaded, your data will be structured as JSON objects called **documents**. You can then query the node for data in those documents.
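Here's a small Python sketch of the index/document vocabulary: a linear scan over JSON-like documents. Real OpenSearch queries use inverted indexes and a query DSL, and the documents below are invented for illustration:

```python
# An index holds JSON-like documents.
index = [
    {"id": "1", "title": "GraphQL pagination tips"},
    {"id": "2", "title": "Designing schema roots"},
]

def search(index, field, term):
    """Return every document whose field contains the term (case-insensitive)."""
    return [doc for doc in index if term.lower() in doc.get(field, "").lower()]
```

Querying the node for matching documents is, conceptually, this kind of full-text lookup over the indexed fields.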

### HTTP endpoints
<a name="data-source-type-http"></a>

You can use HTTP endpoints as data sources. AWS AppSync can send requests to the endpoints with the relevant information like params and payload. The HTTP response will be exposed to the resolver, which will return the final response after it finishes its operation(s).

## Adding a data source
<a name="adding-a-data-source"></a>

Once you've created a data source, you can link it to the AWS AppSync service and, more specifically, to your API.

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. Choose your API in the **Dashboard**.

   1. In the **Sidebar**, choose **Data Sources**.

1. Choose **Create data source**.

   1. Give your data source a name. You can also give it a description, but that's optional.

   1. Choose your **Data source type**.

   1. For DynamoDB, you'll have to choose your Region, then the table in the Region. You can dictate interaction rules with your table by creating a new generic table role or importing an existing role for the table. You can enable [versioning](https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html), which can automatically create versions of data for each request when multiple clients are trying to update data at the same time. Versioning is used to keep and maintain multiple variants of data for conflict detection and resolution purposes. You can also enable automatic schema generation, which takes your data source and generates some of the CRUD, `List`, and `Query` operations needed to access it in your schema. 

      For OpenSearch, you'll have to choose your Region, then the domain (cluster) in the Region. You can dictate interaction rules with your domain by creating a new generic role or importing an existing role for the domain. 

      For Lambda, you'll have to choose your Region, then the ARN of the Lambda function in the Region. You can dictate interaction rules with your function by creating a new generic role or importing an existing role for the function. 

      For HTTP, you'll have to enter your HTTP endpoint.

      For EventBridge, you'll have to choose your Region, then the event bus in the Region. You can dictate interaction rules with your event bus by creating a new generic role or importing an existing role for the event bus. 

      For RDS, you'll have to choose your Region, then the secret store (username and password), database name, and schema.

      For **None**, you'll create a data source entry with no underlying storage. This is for handling resolvers locally rather than through an actual data source.
**Note**  
If you're importing existing roles, they need a trust policy. For more information, see the [IAM trust policy](#iam-trust-policy.title).

1. Choose **Create**.
**Note**  
Alternatively, if you're creating a DynamoDB data source, you can go to the **Schema** page in the console, choose **Create Resources** at the top of the page, then fill out a predefined model to convert into a table. In this option, you will fill out or import the base type, configure the basic table data including the partition key, and review the schema changes.

------
#### [ CLI ]
+ Create your data source by running the [create-data-source](https://docs.aws.amazon.com/cli/latest/reference/appsync/create-data-source.html) command.

  You'll need to enter a few parameters for this particular command:

  1. The `api-id` of your API.

  1. The `name` of your data source.

  1. The `type` of data source. Depending on the data source type you choose, you may also need to enter a `service-role-arn` and a `*-config` parameter (for example, `--dynamodb-config`).

  An example command may look like this:

  ```
   aws appsync create-data-source --api-id abcdefghijklmnopqrstuvwxyz --name data_source_name --type data_source_type --service-role-arn arn:aws:iam::107289374856:role/role_name --[data_source_type]-config {params}
  ```

------
#### [ CDK ]

**Tip**  
Before you use the CDK, we recommend reviewing the CDK's [official documentation](https://docs.aws.amazon.com/cdk/v2/guide/home.html) along with AWS AppSync's [CDK reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync-readme.html).  
The steps listed below will only show a general example of the snippet used to add a particular resource. This is **not** meant to be a working solution in your production code. We also assume you already have a working app.

To add your particular data source, you'll need to add the construct to your stack file. A list of data source types can be found here:
+  [ DynamoDbDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.DynamoDbDataSource.html) 
+  [ EventBridgeDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.EventBridgeDataSource.html) 
+  [ HttpDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.HttpDataSource.html) 
+  [ LambdaDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.LambdaDataSource.html) 
+  [ NoneDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.NoneDataSource.html) 
+  [ OpenSearchDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.OpenSearchDataSource.html) 
+  [ RdsDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.RdsDataSource.html) 

1. In general, you may have to add the import directive to the service you're using. For example, it may follow the forms:

   ```
   import * as x from 'x'; // import wildcard as the 'x' keyword from 'x-service'
   import {a, b, ...} from 'c'; // import {specific constructs} from 'c-service'
   ```

   For example, here's how you could import the AWS AppSync and DynamoDB services:

   ```
   import * as appsync from 'aws-cdk-lib/aws-appsync';
   import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
   ```

1. Some services like RDS require some additional setup in the stack file before creating the data source (e.g., VPC creation, roles, and access credentials). Consult the examples in the relevant CDK pages for more information.

1. For most data sources, especially AWS services, you'll be creating a new instance of the data source in your stack file. Typically, this will look like the following:

   ```
   const add_data_source_func = new service_scope.resource_name(scope: Construct, id: string, props: data_source_props);
   ```

   For example, here's an example Amazon DynamoDB table:

   ```
   const add_ddb_table = new dynamodb.Table(this, 'Table_ID', {
     partitionKey: {
       name: 'id',
       type: dynamodb.AttributeType.STRING,
     },
     sortKey: {
       name: 'date',
       type: dynamodb.AttributeType.STRING,
     },
     tableClass: dynamodb.TableClass.STANDARD,
   });
   ```
**Note**  
Most data sources will have at least one required prop (will be denoted **without** a `?` symbol). Consult the CDK documentation to see which props are needed.

1. Next, you need to link the data source to the GraphQL API. The recommended method is to add it when you make a function for your pipeline resolver. For instance, the snippet below is a function that scans all elements in a DynamoDB table:

   ```
   const add_func = new appsync.AppsyncFunction(this, 'func_ID', {
     name: 'func_name_in_console',
     api: add_api,
     dataSource: add_api.addDynamoDbDataSource('data_source_name_in_console', add_ddb_table),
     code: appsync.Code.fromInline(`
         export function request(ctx) {
           return { operation: 'Scan' };
         }
   
         export function response(ctx) {
           return ctx.result.items;
         }
     `),
     runtime: appsync.FunctionRuntime.JS_1_0_0,
   });
   ```

   In the `dataSource` props, you call the GraphQL API (`add_api`) and use one of its built-in methods (`addDynamoDbDataSource`) to associate the table with the GraphQL API. The arguments are the name of this link as it will appear in the AWS AppSync console (`data_source_name_in_console` in this example) and the table construct (`add_ddb_table`). We'll cover this topic further in the next section when you start making resolvers.

   There are alternative methods for linking a data source. You could technically add `api` to the props list in the table function. For example, here's the snippet from step 3 but with an `api` props containing a GraphQL API:

   ```
   const add_api = new appsync.GraphqlApi(this, 'API_ID', {
     ...
   });
   
   const add_ddb_table = new dynamodb.Table(this, 'Table_ID', {
     ...
     api: add_api
   });
   ```

   Alternatively, you can call the `GraphqlApi` construct separately:

   ```
   const add_api = new appsync.GraphqlApi(this, 'API_ID', {
     ...
   });
   
   const add_ddb_table = new dynamodb.Table(this, 'Table_ID', {
     ...
   });
   
   const link_data_source = add_api.addDynamoDbDataSource('data_source_name_in_console', add_ddb_table);
   ```

   We recommend creating the association only in the function's props. Otherwise, you'll either have to link your resolver function to the data source manually in the AWS AppSync console (if you want to keep using the console value `data_source_name_in_console`) or create a separate association in the function under another name like `data_source_name_in_console_2`, because each association registers its own data source name with the API.
**Note**  
You'll have to redeploy the app to see your changes.

------

### IAM trust policy
<a name="iam-trust-policy"></a>

If you’re using an existing IAM role for your data source, you need to grant that role the appropriate permissions to perform operations on your AWS resource, such as `PutItem` on an Amazon DynamoDB table. You also need to modify the trust policy on that role to allow AWS AppSync to use it for resource access as shown in the following example policy:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "appsync.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

------

You can also add conditions to your trust policy to limit access to the data source as desired. Currently, `SourceArn` and `SourceAccount` keys can be used in these conditions. For example, the following policy limits access to your data source to the account `123456789012`:

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "appsync.amazonaws.com"
      },
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        }
      }
    }
  ]
}
```

------

Alternatively, you can limit access to a data source to a specific API, such as `abcdefghijklmnopq`, using the following policy:

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "appsync.amazonaws.com"
      },
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:appsync:us-west-2:123456789012:apis/abcdefghijklmnopq"
        }
      }
    }
  ]
}
```

------

You can limit access to all AWS AppSync APIs from a specific region, such as `us-east-1`, using the following policy:

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "appsync.amazonaws.com"
      },
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:appsync:us-east-1:123456789012:apis/*"
        }
      }
    }
  ]
}
```

------
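You can also combine condition keys in a single statement. As a sketch (this combined example is an assumption based on standard IAM condition semantics, not taken from the policies above), the following limits the role to one API in one account:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "appsync.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        },
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:appsync:us-west-2:123456789012:apis/abcdefghijklmnopq"
        }
      }
    }
  ]
}
```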

In the next section ([Configuring Resolvers](https://docs.aws.amazon.com//appsync/latest/devguide/resolver-config-overview.html)), we'll add our resolver business logic and attach it to the fields in our schema to process the data in our data source.

For more information regarding role policy configuration, see [Modifying a role](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_manage_modify.html) in the *IAM User Guide*.

For more information regarding cross-account access of AWS Lambda resolvers for AWS AppSync, see [Building cross-account AWS Lambda resolvers for AWS AppSync](https://aws.amazon.com/blogs/mobile/appsync-lambda-cross-account/).

# Configuring resolvers in AWS AppSync
<a name="resolver-config-overview"></a>

In the previous sections, you learned how to create your GraphQL schema and data source, then linked them together in the AWS AppSync service. In your schema, you may have established one or more fields (operations) in your query and mutation. While the schema described the kinds of data the operations would request from the data source, it never implemented how those operations would behave around the data. 

An operation's behavior is always implemented in the resolver, which will be linked to the field performing the operation. For more information about how resolvers work in general, see the [Resolvers](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-components.html) page.

In AWS AppSync, your resolver is tied to a runtime, which is the environment in which your resolver executes. The runtime dictates the language your resolver is written in. There are currently two supported runtimes: APPSYNC_JS (JavaScript) and Apache Velocity Template Language (VTL). 

When implementing resolvers, there is a general structure they follow:
+ **Before step**: When a request is made by the client, the resolvers for the schema fields being used (typically your queries, mutations, subscriptions) are passed the request data. The resolver will begin processing the request data with a before step handler, which allows some preprocessing operations to be performed before the data moves through the resolver.
+ **Function(s)**: After the before step runs, the request is passed to the functions list. The first function in the list will execute against the data source. A function is a subset of your resolver's code containing its own request and response handler. A request handler will take the request data and perform operations against the data source. The response handler will process the data source's response before passing it back to the list. If there is more than one function, the request data will be sent to the next function in the list to be executed. Functions in the list will be executed serially in the order defined by the developer. Once all functions have been executed, the final result is passed to the after step.
+ **After step**: The after step is a handler function that allows you to perform some final operations on the final function's response before passing it to the GraphQL response.
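To make the flow concrete, here's a plain JavaScript sketch of the before step, function list, and after step (an illustration only; this is not AppSync's actual engine, and the handler and data source names are invented):

```javascript
// Plain-JS sketch of the pipeline flow described above (not the AppSync engine).
// Each function has a request handler, a data source invocation, and a response
// handler; results are threaded through a context (ctx) object.
function runPipeline(before, functions, after, args) {
  const ctx = { args, stash: {}, prev: { result: undefined }, result: undefined };
  before(ctx); // before step: preprocessing
  for (const fn of functions) {
    const request = fn.request(ctx);      // shape the data source operation
    ctx.result = fn.dataSource(request);  // invoke the (simulated) data source
    ctx.prev.result = fn.response(ctx);   // post-process the response
  }
  return after(ctx); // after step: final shaping of the result
}

// Example: one function that "scans" an in-memory table.
const table = [{ id: '1', title: 'Hello' }];
const posts = runPipeline(
  (ctx) => {},                                  // before step: no-op
  [{
    request: (ctx) => ({ operation: 'Scan' }),  // like a DynamoDB Scan request
    dataSource: (req) => ({ items: table }),    // simulated table response
    response: (ctx) => ctx.result.items,        // unwrap the items list
  }],
  (ctx) => ctx.prev.result,                     // after step: return last result
  {}
);
```

Each function's response becomes `ctx.prev.result` for the next step, which is why the after step can simply return `ctx.prev.result`.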

This flow is an example of a pipeline resolver. Pipeline resolvers are supported in both runtimes. However, this is a simplified explanation of what pipeline resolvers can do. Also, we're describing only one possible resolver configuration. For more information about supported resolver configurations, see the [JavaScript resolvers overview](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-overview-js.html) for APPSYNC_JS or the [Resolver mapping template overview](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-overview.html) for VTL.

As you can see, resolvers are modular. In order for the components of the resolver to work properly, they must be able to peer into the state of the execution from other components. From the [Resolvers](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-components.html) section, you know that each component in the resolver can be passed vital information about the state of the execution as a set of arguments (`args`, `context`, etc.). In AWS AppSync, this is handled strictly by the `context`. It's a container for the information about the field being resolved. This can include everything from arguments being passed, results, authorization data, header data, etc. For more information about the context, see the [Resolver context object reference](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference-js.html) for APPSYNC_JS or the [Resolver mapping template context reference](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference.html) for VTL.
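For illustration, here's a simplified, hypothetical sketch of the kind of information a context object carries during a request (field names follow the context reference; the values are invented):

```javascript
// Hypothetical, simplified context object (values are made up for illustration).
const ctx = {
  args: { input: { title: 'First post', date: '2024-01-01T00:00:00Z' } }, // field arguments
  identity: { username: 'example-user' },               // caller's authorization data
  request: { headers: { 'x-api-key': 'da2-example' } }, // request metadata such as headers
  stash: {},                                            // scratch space shared across the pipeline
  prev: { result: null },                               // result of the previous pipeline step
  result: null,                                         // result of the current data source call
};

// A request handler can read the field arguments from the context:
const { title, date } = ctx.args.input;
```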

The context isn't the only tool you can use to implement your resolver. AWS AppSync supports a wide range of utilities for value generation, error handling, parsing, conversion, etc. You can see a list of utilities [here](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference-js.html) for APPSYNC_JS or [here](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference.html) for VTL.
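As one example, `util.dynamodb.toMapValues` converts a plain object into DynamoDB's attribute-value format. The sketch below is a rough, illustration-only approximation of its output shape (handling only strings and numbers, unlike the real utility):

```javascript
// Approximate, illustration-only reimplementation of util.dynamodb.toMapValues:
// converts a plain object into DynamoDB attribute-value format.
// (The real utility handles many more types; this sketch covers strings and numbers.)
function toMapValuesSketch(obj) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = typeof value === 'number'
      ? { N: String(value) }  // numbers are stored as numeric strings
      : { S: String(value) }; // everything else is treated as a string here
  }
  return out;
}

const mapped = toMapValuesSketch({ id: '123', title: 'Hello' });
// mapped: { id: { S: '123' }, title: { S: 'Hello' } }
```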

In the following sections, you will learn how to configure resolvers in your GraphQL API.

**Topics**
+ [Creating basic queries (JavaScript)](configuring-resolvers-js.md)
+ [Creating basic queries (VTL)](configuring-resolvers.md)

# Creating basic queries (JavaScript)
<a name="configuring-resolvers-js"></a>

GraphQL resolvers connect the fields in a type’s schema to a data source. Resolvers are the mechanism by which requests are fulfilled.

Resolvers in AWS AppSync use JavaScript to convert a GraphQL expression into a format the data source can use. Alternatively, mapping templates can be written in [Apache Velocity Template Language (VTL)](https://velocity.apache.org/engine/2.0/vtl-reference.html) to convert a GraphQL expression into a format the data source can use.

This section describes how to configure resolvers using JavaScript. The [Resolver tutorials (JavaScript)](https://docs.aws.amazon.com/appsync/latest/devguide/tutorials-js.html) section provides in-depth tutorials on how to implement resolvers using JavaScript. The [Resolver reference (JavaScript)](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-js-version.html) section provides an explanation of utility operations that can be used with JavaScript resolvers.

We recommend following this guide before attempting to use any of the aforementioned tutorials.

In this section, we will walk through how to create and configure resolvers for queries and mutations.

**Note**  
This guide assumes you have created your schema and have at least one query or mutation. If you're looking for subscriptions (real-time data), then see [this](https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-data.html) guide.

In this section, we'll provide some general steps for configuring resolvers along with an example that uses the schema below:

```
// schema.graphql file

input CreatePostInput {
  title: String
  date: AWSDateTime
}

type Post {
  id: ID!
  title: String
  date: AWSDateTime
}

type Mutation {
  createPost(input: CreatePostInput!): Post
}

type Query {
  getPost: [Post]
}
```
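Once resolvers are attached, a client can run operations against the schema above. As an illustration (plain strings, not tied to any particular client library; the field values are invented), the operation documents might look like:

```javascript
// Example GraphQL operation documents for the schema above,
// shown as plain strings for illustration.
const createPostMutation = `
  mutation CreatePost {
    createPost(input: { title: "First post", date: "2024-01-01T00:00:00Z" }) {
      id
      title
      date
    }
  }
`;

const getPostQuery = `
  query GetPosts {
    getPost {
      id
      title
    }
  }
`;
```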

## Creating basic query resolvers
<a name="create-basic-query-resolver-js"></a>

This section will show you how to make a basic query resolver.

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Schema**.

1. Enter the details of your schema and data source. See the [Designing your schema](https://docs.aws.amazon.com/appsync/latest/devguide/designing-your-schema.html) and [Attaching a data source](https://docs.aws.amazon.com/appsync/latest/devguide/attaching-a-data-source.html) sections for more information.

1. Next to the **Schema** editor, there's a window called **Resolvers**. This window lists the types and fields defined in your **Schema** window. You can attach resolvers to fields; most often, you'll attach them to your field operations. In this section, we'll look at simple query configurations. Under the **Query** type, choose **Attach** next to your query's field.

1. On the **Attach resolver** page, under **Resolver type**, you can choose between pipeline or unit resolvers. For more information about these types, see [Resolvers](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-components.html). This guide will make use of pipeline resolvers.
**Tip**  
When creating pipeline resolvers, your data source(s) will be attached to the pipeline function(s). Functions are created after you create the pipeline resolver itself, which is why there's no option to set one on this page. If you're using a unit resolver, the data source is tied directly to the resolver, so you would set it on this page.

   For **Resolver runtime**, choose `APPSYNC_JS` to enable the JavaScript runtime.

1. You can enable [caching](https://docs.aws.amazon.com/appsync/latest/devguide/enabling-caching.html) for this API. We recommend turning this feature off for now. Choose **Create**.

1. On the **Edit resolver** page, there's a code editor called **Resolver code** that allows you to implement the logic for the resolver handler and response (before and after steps). For more information, see the [JavaScript resolvers overview](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-overview-js.html). 
**Note**  
In our example, we're just going to leave the request blank and the response set to return the last data source result from the [context](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference-js.html):  

   ```
   import {util} from '@aws-appsync/utils';
   
   export function request(ctx) {
       return {};
   }
   
   export function response(ctx) {
       return ctx.prev.result;
   }
   ```

   Below this section, there's a table called **Functions**. Functions allow you to implement code that can be reused across multiple resolvers. Instead of constantly rewriting or copying code, you can store the source code as a function to be added to a resolver whenever you need it. 

   Functions make up the bulk of a pipeline's operation list. When using multiple functions in a resolver, you set the order of the functions, and they will be run in that order sequentially. They are executed after the request function runs and before the response function begins.

   To add a new function, under **Functions**, choose **Add function**, then **Create new function**. Alternatively, you may see a **Create function** button to choose instead.

   1. Choose a data source. This will be the data source on which the function acts.
**Note**  
In our example, we're attaching a resolver for `getPost`, which retrieves a list of `Post` objects. Let's assume we already set up a DynamoDB table for this schema. Its partition key is set to `id`, and the table is currently empty.

   1. Enter a `Function name`.

   1. Under **Function code**, you'll need to implement the function's behavior. Each function has its own local request and response handler. The request handler runs, the data source is invoked with the request it returns, and the data source's response is then processed by the response handler. The result is stored in the [context](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference-js.html) object. Afterward, the next function in the list runs, or, if this is the last function, the result is passed to the after step response handler. 
**Note**  
In our example, we're attaching a resolver to `getPost`, which gets a list of `Post` objects from the data source. Our request handler will request the data from our table, the table's response will be stored in the context (`ctx`), and our response handler will return the result from the context. AWS AppSync's strength lies in its interconnectedness with other AWS services. Because we're using DynamoDB, we have a [suite of operations](https://docs.aws.amazon.com/appsync/latest/devguide/js-resolver-reference-dynamodb.html) to simplify things like this. We have some boilerplate examples for other data source types as well.  
Our code will look like this:  

      ```
      import { util } from '@aws-appsync/utils';
      
      /**
       * Performs a scan on the dynamodb data source
       */
      export function request(ctx) {
        return { operation: 'Scan' };
      }
      
      /**
       * return a list of scanned post items
       */
      export function response(ctx) {
        return ctx.result.items;
      }
      ```
In this step, we added two functions:  
`request`: The request handler performs the retrieval operation against the data source. The argument contains the context object (`ctx`), or some data that is available to all resolvers performing a particular operation. For example, it might contain authorization data, the field names being resolved, etc. The return statement performs a [Scan](https://docs.aws.amazon.com//appsync/latest/devguide/js-resolver-reference-dynamodb.html#js-aws-appsync-resolver-reference-dynamodb-scan) operation (see [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html) for examples). Because we're working with DynamoDB, we're allowed to use some of the operations from that service. The scan performs a basic fetch of all items in our table. The result of this operation is stored in the context object as a `result` container before being passed to the response handler. The `request` is run before the response in the pipeline.
`response`: The response handler returns the output of the `request`. The argument is the updated context object, and the return value is `ctx.prev.result`. At this point in the guide, you may not be familiar with this value. `ctx` refers to the context object. `prev` refers to the previous operation in the pipeline, which was our `request`. The `result` contains the result(s) of the resolver as it moves through the pipeline. Putting it all together, `ctx.prev.result` returns the result of the last operation performed, which was the request handler.
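   As an aside, the `Scan` request supports optional fields such as `limit` and `nextToken` for paging. The sketch below is illustrative only (the `limit` value is arbitrary, and the `export` keyword is omitted so the snippet is self-contained) and shows what a paginated request handler might return:

   ```javascript
   // Hypothetical APPSYNC_JS-style request handler returning a paginated Scan
   // request; 'limit' and 'nextToken' are optional Scan fields.
   function request(ctx) {
     return {
       operation: 'Scan',
       limit: 20,                      // page size (arbitrary for this sketch)
       nextToken: ctx.args.nextToken,  // pagination token from the client, if any
     };
   }

   const req = request({ args: {} });
   // req describes a Scan of the first 20 items
   ```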

   1. Choose **Create** after you're done.

1. Back on the resolver screen, under **Functions**, choose the **Add function** drop-down and add your function to your functions list.

1. Choose **Save** to update the resolver.

------
#### [ CLI ]

**To add your function**
+ Create a function for your pipeline resolver using the `[create-function](https://docs.aws.amazon.com/cli/latest/reference/appsync/create-function.html)` command.

  You'll need to enter a few parameters for this particular command:

  1. The `api-id` of your API.

  1. The `name` of the function in the AWS AppSync console.

  1. The `data-source-name`, or the name of the data source the function will use. It must already be created and linked to your GraphQL API in the AWS AppSync service.

  1. The `runtime`, or environment and language of the function. For JavaScript, the name must be `APPSYNC_JS` and the `runtimeVersion` must be `1.0.0`.

  1. The `code`, or request and response handlers of your function. While you can type it in manually, it's far easier to add it to a .txt file (or a similar format) and then pass it in as the argument. 
**Note**  
Our query code will be in a file passed in as the argument:  

     ```
     import { util } from '@aws-appsync/utils';
     
     /**
      * Performs a scan on the dynamodb data source
      */
     export function request(ctx) {
       return { operation: 'Scan' };
     }
     
     /**
      * return a list of scanned post items
      */
     export function response(ctx) {
       return ctx.result.items;
     }
     ```

  An example command may look like this:

  ```
  aws appsync create-function \
  --api-id abcdefghijklmnopqrstuvwxyz \
  --name get_posts_func_1 \
  --data-source-name table-for-posts \
  --runtime name=APPSYNC_JS,runtimeVersion=1.0.0 \
  --code file://~/path/to/file/{fileName}.{fileType}
  ```

  An output will be returned in the CLI. Here's an example:

  ```
  {
      "functionConfiguration": {
          "functionId": "ejglgvmcabdn7lx75ref4qeig4",
          "functionArn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/functions/ejglgvmcabdn7lx75ref4qeig4",
          "name": "get_posts_func_1",
          "dataSourceName": "table-for-posts",
          "maxBatchSize": 0,
          "runtime": {
              "name": "APPSYNC_JS",
              "runtimeVersion": "1.0.0"
          },
          "code": "Code output goes here"
      }
  }
  ```
**Note**  
Make sure you record the `functionId` somewhere as this will be used to attach the function to the resolver.

**To create your resolver**
+ Create a pipeline resolver for `Query` by running the `[create-resolver](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-resolver.html)` command.

  You'll need to enter a few parameters for this particular command:

  1. The `api-id` of your API.

  1. The `type-name`, or the special object type in your schema (Query, Mutation, Subscription).

  1. The `field-name`, or the field operation inside the special object type you want to attach the resolver to.

  1. The `kind`, which specifies a unit or pipeline resolver. Set this to `PIPELINE` to enable pipeline functions.

  1. The `pipeline-config`, or the function(s) to attach to the resolver. Make sure you know the `functionId` values of your functions. Order of listing matters.

  1. The `runtime`, which is `APPSYNC_JS` (JavaScript). The `runtimeVersion` is currently `1.0.0`.

  1. The `code`, which contains the before and after step handlers.
**Note**  
Our resolver code (the before and after step handlers) will be in a file passed in as the argument:  

     ```
     import { util } from '@aws-appsync/utils';
     
     /**
      * Runs before the pipeline functions; nothing to prepare here
      */
     export function request(ctx) {
       return {};
     }
     
     /**
      * Returns the result of the last function in the pipeline
      */
     export function response(ctx) {
       return ctx.prev.result;
     }
     ```

  An example command may look like this:

  ```
  aws appsync create-resolver \
  --api-id abcdefghijklmnopqrstuvwxyz \
  --type-name Query \
  --field-name getPost \
  --kind PIPELINE \
  --pipeline-config functions=ejglgvmcabdn7lx75ref4qeig4 \
  --runtime name=APPSYNC_JS,runtimeVersion=1.0.0 \
  --code file:///path/to/file/{fileName}.{fileType}
  ```

  An output will be returned in the CLI. Here's an example:

  ```
  {
      "resolver": {
          "typeName": "Query",
          "fieldName": "getPost",
          "resolverArn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/types/Query/resolvers/getPost",
          "kind": "PIPELINE",
          "pipelineConfig": {
              "functions": [
                  "ejglgvmcabdn7lx75ref4qeig4"
              ]
          },
          "maxBatchSize": 0,
          "runtime": {
              "name": "APPSYNC_JS",
              "runtimeVersion": "1.0.0"
          },
          "code": "Code output goes here"
      }
  }
  ```

------
#### [ CDK ]

**Tip**  
Before you use the CDK, we recommend reviewing the CDK's [official documentation](https://docs.aws.amazon.com/cdk/v2/guide/home.html) along with AWS AppSync's [CDK reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync-readme.html).  
The steps listed below will only show a general example of the snippet used to add a particular resource. This is **not** meant to be a working solution in your production code. We also assume you already have a working app.

A basic app will need the following things:

1. Service import directives

1. Schema code

1. Data source generator

1. Function code

1. Resolver code

From the [Designing your schema](https://docs.aws.amazon.com/appsync/latest/devguide/designing-your-schema.html) and [Attaching a data source](https://docs.aws.amazon.com/appsync/latest/devguide/attaching-a-data-source.html) sections, we know that the stack file will include the import directives of the form:

```
import * as x from 'x'; // import wildcard as the 'x' keyword from 'x-service'
import {a, b, ...} from 'c'; // import {specific constructs} from 'c-service'
```

**Note**  
In previous sections, we only stated how to import AWS AppSync constructs. In real code, you'll have to import more services just to run the app. In our example, if we were to create a very simple CDK app, we would at least import the AWS AppSync service along with our data source, which was a DynamoDB table. We would also need to import some additional constructs to deploy the app:  

```
import * as cdk from 'aws-cdk-lib';
import * as appsync from 'aws-cdk-lib/aws-appsync';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import { Construct } from 'constructs';
import * as path from 'path';
```
To summarize each of these:  
`import * as cdk from 'aws-cdk-lib';`: This allows you to define your CDK app and constructs such as the stack. It also contains some useful utility functions for our application like manipulating metadata. If you're familiar with this import directive, but are wondering why the cdk core library is not being used here, see the [Migration](https://docs.aws.amazon.com/cdk/v2/guide/migrating-v2.html) page.
`import * as appsync from 'aws-cdk-lib/aws-appsync';`: This imports the [AWS AppSync service](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync-readme.html).
`import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';`: This imports the [DynamoDB service](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_dynamodb-readme.html).
`import { Construct } from 'constructs';`: We need this to define the root [construct](https://docs.aws.amazon.com/cdk/v2/guide/constructs.html).
`import * as path from 'path';`: This Node.js built-in module builds the file path to the schema file.

The type of import depends on the services you're calling. We recommend looking at the CDK documentation for examples. The schema at the top of the page will be a separate file in your CDK app as a `.graphql` file. In the stack file, we can associate it with a new GraphQL API using the form:

```
const add_api = new appsync.GraphqlApi(this, 'graphQL-example', {
  name: 'my-first-api',
  schema: appsync.SchemaFile.fromAsset(path.join(__dirname, 'schema.graphql')),
});
```

**Note**  
In the scope `add_api`, we're adding a new GraphQL API using the `new` keyword followed by `appsync.GraphqlApi(scope: Construct, id: string , props: GraphqlApiProps)`. Our scope is `this`, the CFN id is `graphQL-example`, and our props are `my-first-api` (name of the API in the console) and `schema.graphql` (the absolute path to the schema file).

To add a data source, you'll first have to add your data source to the stack. Then, you need to associate it with the GraphQL API using the source-specific method. The association will happen when you make your resolver function. In the meantime, let's use an example by creating the DynamoDB table using `dynamodb.Table`:

```
const add_ddb_table = new dynamodb.Table(this, 'posts-table', {
  partitionKey: {
    name: 'id',
    type: dynamodb.AttributeType.STRING,
  },
});
```

**Note**  
If we were to use this in our example, we'd be adding a new DynamoDB table with the CFN id of `posts-table` and a partition key of `id (S)`.

Next, we need to implement our resolver in the stack file. Here's an example of a simple query that scans for all items in a DynamoDB table:

```
const add_func = new appsync.AppsyncFunction(this, 'func-get-posts', {
  name: 'get_posts_func_1',
  api: add_api,
  dataSource: add_api.addDynamoDbDataSource('table-for-posts', add_ddb_table),
  code: appsync.Code.fromInline(`
      export function request(ctx) {
        return { operation: 'Scan' };
      }

      export function response(ctx) {
        return ctx.result.items;
      }
  `),
  runtime: appsync.FunctionRuntime.JS_1_0_0,
});

new appsync.Resolver(this, 'pipeline-resolver-get-posts', {
  api: add_api,
  typeName: 'Query',
  fieldName: 'getPost',
  code: appsync.Code.fromInline(`
      export function request(ctx) {
        return {};
      }

      export function response(ctx) {
        return ctx.prev.result;
      }
 `),
  runtime: appsync.FunctionRuntime.JS_1_0_0,
  pipelineConfig: [add_func],
});
```

**Note**  
First, we created a function called `add_func`. This order of creation may seem a bit counterintuitive, but you have to create the functions in your pipeline resolver before you make the resolver itself. A function follows the form:  

```
AppsyncFunction(scope: Construct, id: string, props: AppsyncFunctionProps)
```
Our scope was `this`, our CFN id was `func-get-posts`, and our props contained the actual function details. Inside props, we included:  
The `name` of the function that will be present in the AWS AppSync console (`get_posts_func_1`).
The GraphQL API we created earlier (`add_api`).
The data source; this is the point where we link the data source to the GraphQL API value, then attach it to the function. We take the table we created (`add_ddb_table`) and attach it to the GraphQL API (`add_api`) using one of the `GraphqlApi` methods ([addDynamoDbDataSource](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.GraphqlApi.html#addwbrdynamowbrdbwbrdatawbrsourceid-table-options)). The id value (`table-for-posts`) is the name of the data source in the AWS AppSync console. For a list of source-specific methods, see the following pages:  
[ DynamoDbDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.DynamoDbDataSource.html) 
 [ EventBridgeDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.EventBridgeDataSource.html) 
 [ HttpDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.HttpDataSource.html) 
 [ LambdaDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.LambdaDataSource.html) 
 [ NoneDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.NoneDataSource.html) 
 [ OpenSearchDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.OpenSearchDataSource.html) 
 [ RdsDataSource ](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.RdsDataSource.html) 
The code contains our function's request and response handlers, which is a simple scan and return.
The runtime specifies that we want to use the APPSYNC\$1JS runtime version 1.0.0. Note that this is currently the only version available for APPSYNC\$1JS.
Next, we need to attach the function to the pipeline resolver. We created our resolver using the form:  

```
Resolver(scope: Construct, id: string, props: ResolverProps)
```
Our scope was `this`, our CFN id was `pipeline-resolver-get-posts`, and our props contained the actual resolver details. Inside the props, we included:  
+ The GraphQL API we created earlier (`add_api`).
+ The special object type name; this is a query operation, so we simply used the value `Query`.
+ The field name (`getPost`), which is the name of the field in the schema under the `Query` type.
+ The `code`, which contains your before and after handlers. Our example just returns whatever results were in the context after the function performed its operation.
+ The `runtime`, which specifies the `APPSYNC_JS` runtime, version 1.0.0. Note that this is currently the only version available for `APPSYNC_JS`.
+ The pipeline config, which contains the reference to the function we created (`add_func`).

------

To summarize what happened in this example, you saw an AWS AppSync function that implemented a request and response handler. The function was responsible for interacting with your data source. The request handler returned a `Scan` request, instructing AWS AppSync on which operation to perform against your DynamoDB data source. The response handler returned the list of items (`ctx.result.items`). The list of items was then automatically mapped to the `Post` GraphQL type. 
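The handler flow described above can be dry-run outside AppSync. Below is a plain-JavaScript sketch (the `export` keywords and the `@aws-appsync/utils` import are omitted so it runs anywhere; in AppSync, these handlers execute on the `APPSYNC_JS` runtime):

```javascript
// Sketch of the pipeline function's handlers: the request asks for a Scan,
// the response returns the scanned items (AppSync maps them to [Post]).
function request(ctx) {
  return { operation: 'Scan' };
}

function response(ctx) {
  return ctx.result.items;
}

// Simulate one evaluation: AppSync runs request(), executes the Scan against
// DynamoDB, stores the result in ctx.result, then runs response().
const req = request({});
const mockCtx = { result: { items: [{ id: '1', title: 'hello', date: '2024-01-01' }] } };
const posts = response(mockCtx);
console.log(req.operation); // Scan
console.log(posts.length);  // 1
```

This is the same idea the console's test-context feature automates for you, as described later in this chapter.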

## Creating basic mutation resolvers
<a name="creating-basic-mutation-resolvers-js"></a>

This section will show you how to make a basic mutation resolver.

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Schema**.

1. Under the **Resolvers** section and the **Mutation** type, choose **Attach** next to your field.
**Note**  
In our example, we're attaching a resolver for `createPost`, which adds a `Post` object to our table. Let's assume we're using the same DynamoDB table from the last section. Its partition key is set to `id`, and the table is currently empty.

1. On the **Attach resolver** page, under **Resolver type**, choose `pipeline resolvers`. As a reminder, you can find more information about resolvers [here](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-components.html). For **Resolver runtime**, choose `APPSYNC_JS` to enable the JavaScript runtime.

1. You can enable [caching](https://docs.aws.amazon.com/appsync/latest/devguide/enabling-caching.html) for this API. We recommend turning this feature off for now. Choose **Create**.

1. Choose **Add function**, then choose **Create new function**. Alternatively, you may see a **Create function** button to choose instead.

   1. Choose your data source. This should be the source whose data you will manipulate with the mutation.

   1. Enter a `Function name`.

   1. Under **Function code**, you'll need to implement the function's behavior. This is a mutation, so the request will ideally perform some state-changing operation on the invoked data source. The result will be processed by the response function.
**Note**  
`createPost` is adding, or "putting", a new `Post` in the table with our parameters as the data. We could add something like this:   

      ```
      import { util } from '@aws-appsync/utils';
      
      /**
       * Sends a request to `put` an item in the DynamoDB data source
       */
      export function request(ctx) {
        return {
          operation: 'PutItem',
          key: util.dynamodb.toMapValues({id: util.autoId()}),
          attributeValues: util.dynamodb.toMapValues(ctx.args.input),
        };
      }
      
      /**
       * returns the result of the `put` operation
       */
      export function response(ctx) {
        return ctx.result;
      }
      ```
In this step, we also added `request` and `response` functions:  
+ `request`: The request handler accepts the context as its argument. Its return statement performs a [`PutItem`](https://docs.aws.amazon.com//appsync/latest/devguide/js-resolver-reference-dynamodb.html#js-aws-appsync-resolver-reference-dynamodb-putitem) command, which is a built-in DynamoDB operation (see [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-2.html) or [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html#WorkingWithItems.WritingData) for examples). The `PutItem` command adds a `Post` object to our DynamoDB table using the partition `key` value (automatically generated by `util.autoId()`) and the `attributes` taken from the context argument input (the values we pass in our request). The `key` is the `id`, and the `attributes` are the `date` and `title` field arguments. Both are preformatted through the [`util.dynamodb.toMapValues`](https://docs.aws.amazon.com//appsync/latest/devguide/dynamodb-helpers-in-util-dynamodb-js.html#utility-helpers-in-toMap-js) helper to work with the DynamoDB table.
+ `response`: The response handler accepts the updated context and returns the result of the `PutItem` operation (`ctx.result`).

   1. Choose **Create** after you're done.

1. Back on the resolver screen, under **Functions**, choose the **Add function** drop-down and add your function to your functions list.

1. Choose **Save** to update the resolver.

------
#### [ CLI ]

**To add your function**
+ Create a function for your pipeline resolver using the `[create-function](https://docs.aws.amazon.com/cli/latest/reference/appsync/create-function.html)` command.

  You'll need to enter a few parameters for this particular command:

  1. The `api-id` of your API.

  1. The `name` of the function in the AWS AppSync console.

  1. The `data-source-name`, or the name of the data source the function will use. It must already be created and linked to your GraphQL API in the AWS AppSync service.

  1. The `runtime`, or environment and language of the function. For JavaScript, the name must be `APPSYNC_JS` and the runtime version `1.0.0`.

  1. The `code`, or request and response handlers of your function. While you can type it in manually, it's far easier to add it to a `.txt` file (or a similar format) and then pass it in as the argument. 
**Note**  
Our function code will be in a file passed in as the argument:  

     ```
     import { util } from '@aws-appsync/utils';
     
     /**
      * Sends a request to `put` an item in the DynamoDB data source
      */
     export function request(ctx) {
       return {
         operation: 'PutItem',
         key: util.dynamodb.toMapValues({id: util.autoId()}),
         attributeValues: util.dynamodb.toMapValues(ctx.args.input),
       };
     }
     
     /**
      * returns the result of the `put` operation
      */
     export function response(ctx) {
       return ctx.result;
     }
     ```

  An example command may look like this:

  ```
  aws appsync create-function \
  --api-id abcdefghijklmnopqrstuvwxyz \
  --name add_posts_func_1 \
  --data-source-name table-for-posts \
  --runtime name=APPSYNC_JS,runtimeVersion=1.0.0 \
  --code file:///path/to/file/{fileName}.{fileType}
  ```

  An output will be returned in the CLI. Here's an example:

  ```
  {
      "functionConfiguration": {
          "functionId": "vulcmbfcxffiram63psb4dduoa",
          "functionArn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/functions/vulcmbfcxffiram63psb4dduoa",
          "name": "add_posts_func_1",
          "dataSourceName": "table-for-posts",
          "maxBatchSize": 0,
          "runtime": {
              "name": "APPSYNC_JS",
              "runtimeVersion": "1.0.0"
          },
        "code": "Code output goes here"
      }
  }
  ```
**Note**  
Make sure you record the `functionId` somewhere as this will be used to attach the function to the resolver.

**To create your resolver**
+ Create a pipeline function for `Mutation` by running the `[create-resolver](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-resolver.html)` command.

  You'll need to enter a few parameters for this particular command:

  1. The `api-id` of your API.

  1. The `type-name`, or the special object type in your schema (Query, Mutation, Subscription).

  1. The `field-name`, or the field operation inside the special object type you want to attach the resolver to.

  1. The `kind`, which specifies a unit or pipeline resolver. Set this to `PIPELINE` to enable pipeline functions.

  1. The `pipeline-config`, or the function(s) to attach to the resolver. Make sure you know the `functionId` values of your functions. Order of listing matters.

  1. The `runtime`, which is `APPSYNC_JS` (JavaScript). The `runtimeVersion` is currently `1.0.0`.

  1. The `code`, which contains the before and after step.
**Note**  
Our resolver code (the before and after handlers) will be in a file passed in as the argument. Because the `PutItem` work happens in the function we created above, the resolver handlers can simply pass the result through:  

     ```
     export function request(ctx) {
       return {};
     }
     
     export function response(ctx) {
       return ctx.prev.result;
     }
     ```

  An example command may look like this:

  ```
  aws appsync create-resolver \
  --api-id abcdefghijklmnopqrstuvwxyz \
  --type-name Mutation \
  --field-name createPost \
  --kind PIPELINE \
  --pipeline-config functions=vulcmbfcxffiram63psb4dduoa \
  --runtime name=APPSYNC_JS,runtimeVersion=1.0.0 \
  --code file:///path/to/file/{fileName}.{fileType}
  ```

  An output will be returned in the CLI. Here's an example:

  ```
  {
      "resolver": {
          "typeName": "Mutation",
          "fieldName": "createPost",
          "resolverArn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/types/Mutation/resolvers/createPost",
          "kind": "PIPELINE",
          "pipelineConfig": {
              "functions": [
                  "vulcmbfcxffiram63psb4dduoa"
              ]
          },
          "maxBatchSize": 0,
          "runtime": {
              "name": "APPSYNC_JS",
              "runtimeVersion": "1.0.0"
          },
          "code": "Code output goes here"
      }
  }
  ```

------
#### [ CDK ]

**Tip**  
Before you use the CDK, we recommend reviewing the CDK's [official documentation](https://docs.aws.amazon.com/cdk/v2/guide/home.html) along with AWS AppSync's [CDK reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync-readme.html).  
The steps listed below will only show a general example of the snippet used to add a particular resource. This is **not** meant to be a working solution in your production code. We also assume you already have a working app.
+ To make a mutation, assuming you're in the same project, you can add it to the stack file like the query. Here's a modified function and resolver for a mutation that adds a new `Post` to the table:

  ```
  const add_func_2 = new appsync.AppsyncFunction(this, 'func-add-post', {
    name: 'add_posts_func_1',
    api: add_api,
    dataSource: add_api.addDynamoDbDataSource('table-for-posts-2', add_ddb_table),
    code: appsync.Code.fromInline(`
        import { util } from '@aws-appsync/utils';

        export function request(ctx) {
          return {
            operation: 'PutItem',
            key: util.dynamodb.toMapValues({id: util.autoId()}),
            attributeValues: util.dynamodb.toMapValues(ctx.args.input),
          };
        }

        export function response(ctx) {
          return ctx.result;
        }
    `),
    runtime: appsync.FunctionRuntime.JS_1_0_0,
  });

  new appsync.Resolver(this, 'pipeline-resolver-create-posts', {
    api: add_api,
    typeName: 'Mutation',
    fieldName: 'createPost',
    code: appsync.Code.fromInline(`
        export function request(ctx) {
          return {};
        }

        export function response(ctx) {
          return ctx.prev.result;
        }
    `),
    runtime: appsync.FunctionRuntime.JS_1_0_0,
    pipelineConfig: [add_func_2],
  });
  ```
**Note**  
Since this mutation and the query are similarly structured, we'll just explain the changes we made to make the mutation.   
In the function, we changed the CFN id to `func-add-post` and name to `add_posts_func_1` to reflect the fact that we're adding `Posts` to the table. In the data source, we made a new association to our table (`add_ddb_table`) in the AWS AppSync console as `table-for-posts-2` because the `addDynamoDbDataSource` method requires it. Keep in mind, this new association is still using the same table we created earlier, but we now have two connections to it in the AWS AppSync console: one for the query as `table-for-posts` and one for the mutation as `table-for-posts-2`. The code was changed to add a `Post` by generating its `id` value automatically and accepting a client's input for the rest of the fields.  
In the resolver, we changed the CFN id to `pipeline-resolver-create-posts` to reflect the fact that we're adding `Posts` to the table. To reflect the mutation in the schema, the type name was changed to `Mutation` and the field name to `createPost`. The pipeline config was set to our new mutation function, `add_func_2`.

------

To summarize what's happening in this example, AWS AppSync automatically converts arguments defined in the `createPost` field from your GraphQL schema into DynamoDB operations. The example stores records in DynamoDB using a key of `id`, which is automatically created using our `util.autoId()` helper. All of the other fields you pass to the context arguments (`ctx.args.input`) from requests made in the AWS AppSync console or otherwise will be stored as the table's attributes. Both the key and the attributes are automatically mapped to a compatible DynamoDB format using the `util.dynamodb.toMapValues(values)` helper.
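With the resolver attached, you can exercise it from the **Queries** page of the console or any GraphQL client. Assuming your schema defines `createPost` with an `input` that has `title` and `date` fields, as in our running example (adjust the fields to match your own schema), a request might look like this:

```
mutation {
  createPost(input: { title: "First post", date: "2024-01-01" }) {
    id
    title
  }
}
```

If your `createPost` field returns the created `Post`, the response will include the `id` that `util.autoId()` generated for the new item.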

AWS AppSync also supports test and debug workflows for editing resolvers. You can use a mock `context` object to see the transformed value of the template before invoking it. Optionally, you can view the full request to a data source interactively when you run a query. For more information, see [Test and debug resolvers (JavaScript)](https://docs.aws.amazon.com/appsync/latest/devguide/test-debug-resolvers-js.html) and [Monitoring and logging](https://docs.aws.amazon.com/appsync/latest/devguide/monitoring.html#aws-appsync-monitoring).

## Advanced resolvers
<a name="advanced-resolvers-js"></a>

If you are following the optional pagination section in [Designing your schema](designing-your-schema.md#aws-appsync-designing-your-schema), you still need to add your resolver to your request to make use of pagination. Our example used a paginated query called `getPosts` that returns only a portion of the requested items at a time. Our resolver's code on that field may look like this:

```
/**
 * Performs a scan on the dynamodb data source
 */
export function request(ctx) {
  const { limit = 20, nextToken } = ctx.args;
  return { operation: 'Scan', limit, nextToken };
}

/**
 * @returns the paginated results of the scan operation
 */
export function response(ctx) {
  const { items: posts = [], nextToken } = ctx.result;
  return { posts, nextToken };
}
```

In the request handler, we extract `limit` and `nextToken` from the query arguments. The `limit` defaults to *20*, meaning the query returns at most 20 `Posts` at a time. The `nextToken` is the pagination cursor; it's absent on the first request, so the scan starts at the beginning of the data source. The request handler then performs a `Scan` up to the limit. The data source stores the result in the context, which is passed to the response handler. The response returns the `Posts` it retrieved, along with a new `nextToken` that points to the entry right after the last one returned. The next request passes that token back in and continues where the previous query left off. Keep in mind that these requests are performed sequentially, not in parallel.
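From the client's side, the exchange looks like the following (assuming `getPosts` returns a type with `posts` and `nextToken` fields, matching the response handler above). The first page is requested without a token; each subsequent request passes back the `nextToken` from the previous response:

```
query {
  getPosts(limit: 20) {
    posts {
      id
      title
    }
    nextToken
  }
}
```

When the response's `nextToken` comes back `null`, there are no more pages to fetch.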

# Testing and debugging resolvers in AWS AppSync (JavaScript)
<a name="test-debug-resolvers-js"></a>

AWS AppSync executes resolvers on a GraphQL field against a data source. When working with pipeline resolvers, functions interact with your data sources. As described in the [JavaScript resolvers overview](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-overview-js.html), functions communicate with data sources by using request and response handlers written in JavaScript and running on the `APPSYNC_JS` runtime. This enables you to provide custom logic and conditions before and after communicating with the data source.

To help developers write, test, and debug these resolvers, the AWS AppSync console also provides tools to create a GraphQL request and response with mock data down to the individual field resolver. Additionally, you can perform queries, mutations, and subscriptions in the AWS AppSync console and see a detailed log stream of the entire request from Amazon CloudWatch. This includes results from the data source.

## Testing with mock data
<a name="testing-with-mock-data-js"></a>

When a GraphQL resolver is invoked, it contains a `context` object that has relevant information about the request. This includes arguments from a client, identity information, and data from the parent GraphQL field. It also stores the results from the data source, which can be used in the response handler. For more information about this structure and the available helper utilities to use when programming, see the [Resolver context object reference](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference-js.html).

When writing or editing a resolver function, you can pass a *mock* or *test context* object into the console editor. This enables you to see how both the request and the response handlers evaluate without actually running against a data source. For example, you can pass a test `firstname: Shaggy` argument and see how it evaluates when using `ctx.args.firstname` in your template code. You could also test the evaluation of any utility helpers such as `util.autoId()` or `util.time.nowISO8601()`.

### Testing resolvers
<a name="test-a-resolver-js"></a>

This example will use the AWS AppSync console to test resolvers.

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Functions**.

1. Choose an existing function.

1. At the top of the **Update function** page, choose **Select test context**, then choose **Create new context**.

1. Select a sample context object or populate the JSON manually in the **Configure test context** window below.

1. Enter a **Test context name**.

1. Choose the **Save** button.

1. To evaluate your resolver using this mocked context object, choose **Run Test**.

For a more practical example, suppose you have an app storing a GraphQL type of `Dog` that uses automatic ID generation for objects and stores them in Amazon DynamoDB. You also want to write some values from the arguments of a GraphQL mutation and allow only specific users to see a response. The following snippet shows what the schema might look like:

```
type Dog {
  breed: String
  color: String
}

type Mutation {
  addDog(firstname: String, age: Int): Dog
}
```

You can write an AWS AppSync function and add it to your `addDog` resolver to handle the mutation. To test your AWS AppSync function, you can populate a context object like the following example. The following has arguments from the client of `firstname` and `age`, and a `username` populated in the `identity` object:

```
{
    "arguments" : {
        "firstname": "Shaggy",
        "age": 4
    },
    "source" : {},
    "result" : {
        "breed" : "Miniature Schnauzer",
        "color" : "black_grey"
    },
    "identity": {
        "sub" : "uuid",
        "issuer" : " https://cognito-idp.{region}.amazonaws.com/{userPoolId}",
        "username" : "Nadia",
        "claims" : { },
        "sourceIp" :[  "x.x.x.x" ],
        "defaultAuthStrategy" : "ALLOW"
    }
}
```

You can test your AWS AppSync function using the following code:

```
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  return {
    operation: 'PutItem',
    key: util.dynamodb.toMapValues({ id: util.autoId() }),
    attributeValues: util.dynamodb.toMapValues(ctx.args),
  };
}

export function response(ctx) {
  if (ctx.identity.username === 'Nadia') {
    console.log("This request is allowed")
    return ctx.result;
  }
  util.unauthorized();
}
```

The evaluated request and response handler has the data from your test context object and the generated value from `util.autoId()`. Additionally, if you were to change the `username` to a value other than `Nadia`, the results won’t be returned because the authorization check would fail. For more information about fine-grained access control, see [Authorization use cases](security-authorization-use-cases.md#aws-appsync-security-authorization-use-cases).
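You can also dry-run the authorization branch locally before pasting the handler into the console. In this plain-Node sketch, `util.unauthorized()` is stood in for by a thrown error (an assumption for illustration; the real helper raises an AppSync authorization error):

```javascript
// Local stand-in for the response handler's authorization check.
function response(ctx) {
  if (ctx.identity.username === 'Nadia') {
    return ctx.result;
  }
  // util.unauthorized() simulated with a plain Error for this sketch
  throw new Error('Unauthorized');
}

const baseCtx = {
  result: { breed: 'Miniature Schnauzer', color: 'black_grey' },
  identity: { username: 'Nadia' },
};

console.log(response(baseCtx).breed); // Miniature Schnauzer

try {
  response({ ...baseCtx, identity: { username: 'Velma' } });
} catch (e) {
  console.log(e.message); // Unauthorized
}
```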

### Testing request and response handlers with AWS AppSync's APIs
<a name="testing-with-appsync-api-js"></a>

You can use the `EvaluateCode` API command to remotely test your code with mocked data. To get started with the command, make sure you have added the `appsync:evaluateCode` permission to your policy. For example:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "appsync:evaluateCode",
            "Resource": "arn:aws:appsync:us-east-1:111122223333:*"
        }
    ]
}
```

------

You can leverage the command by using the [AWS CLI](https://aws.amazon.com/cli/) or [AWS SDKs](https://aws.amazon.com/tools/). For example, take the `Dog` schema and its AWS AppSync function request and response handlers from the previous section. Using the CLI on your local station, save the code to a file named `code.js`, then save the `context` object to a file named `context.json`. From your shell, run the following command:

```
$ aws appsync evaluate-code \
  --code file://code.js \
  --function response \
  --context file://context.json \
  --runtime name=APPSYNC_JS,runtimeVersion=1.0.0
```

The response contains an `evaluationResult` holding the payload returned by your handler. It also contains a `logs` list with the log entries your handler generated during the evaluation, which makes it easier to debug your code execution and troubleshoot issues. For example:

```
{
    "evaluationResult": "{\"breed\":\"Miniature Schnauzer\",\"color\":\"black_grey\"}",
    "logs": [
        "INFO - code.js:13:5: \"This request is allowed\""
    ]
}
```

The `evaluationResult` can be parsed as JSON, which gives: 

```
{
  "breed": "Miniature Schnauzer",
  "color": "black_grey"
}
```

Using the SDK, you can easily incorporate tests from your favorite test suite to validate your handlers' behavior. We recommend creating tests using the [Jest Testing Framework](https://jestjs.io/), but any testing suite works. The following snippet shows a hypothetical validation run. Note that we expect the evaluation response to be valid JSON, so we use `JSON.parse` to retrieve JSON from the string response:

```
const AWS = require('aws-sdk')
const fs = require('fs')
const client = new AWS.AppSync({ region: 'us-east-2' })
const runtime = { name: 'APPSYNC_JS', runtimeVersion: '1.0.0' }

test('request correctly calls DynamoDB', async () => {
  const code = fs.readFileSync('./code.js', 'utf8')
  const context = fs.readFileSync('./context.json', 'utf8')
  const contextJSON = JSON.parse(context)
  
  const response = await client.evaluateCode({ code, context, runtime, function: 'request' }).promise()
  const result = JSON.parse(response.evaluationResult)
  
  expect(result.key.id.S).toBeDefined()
  expect(result.attributeValues.firstname.S).toEqual(contextJSON.arguments.firstname)
})
```

 This yields the following result:

```
> jest

PASS ./index.test.js
✓ request correctly calls DynamoDB (543 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        1.511 s, estimated 2 s
Ran all test suites.
```

## Debugging a live query
<a name="debugging-a-live-query-js"></a>

There’s no substitute for an end-to-end test and logging to debug a production application. AWS AppSync lets you log errors and full request details using Amazon CloudWatch. Additionally, you can use the AWS AppSync console to test GraphQL queries, mutations, and subscriptions and live stream log data for each request back into the query editor to debug in real time. For subscriptions, the logs display connection-time information.

To do this, you need to have Amazon CloudWatch logs enabled in advance, as described in [Monitoring and logging](monitoring.md#aws-appsync-monitoring). Next, in the AWS AppSync console, choose the **Queries** tab and enter a valid GraphQL query. In the lower-right section, click and drag the **Logs** window to open the logs view. At the top of the page, choose the play arrow icon to run your GraphQL query. In a few moments, the full request and response logs for the operation are streamed to this section, and you can view them in the console.

# Configuring and using pipeline resolvers in AWS AppSync (JavaScript)
<a name="pipeline-resolvers-js"></a>

AWS AppSync executes resolvers on a GraphQL field. In some cases, applications require executing multiple operations to resolve a single GraphQL field. With pipeline resolvers, developers can now compose operations called Functions and execute them in sequence. Pipeline resolvers are useful for applications that, for instance, require performing an authorization check before fetching data for a field.

For more information about the architecture of a JavaScript pipeline resolver, see the [JavaScript resolvers overview](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-overview-js.html#anatomy-of-a-pipeline-resolver-js).

## Step 1: Creating a pipeline resolver
<a name="create-a-pipeline-resolver-js"></a>

In the AWS AppSync console, go to the **Schema** page.

Save the following schema:

```
schema {
    query: Query
    mutation: Mutation
}

type Mutation {
    signUp(input: Signup): User
}

type Query {
    getUser(id: ID!): User
}

input Signup {
    username: String!
    email: String!
}

type User {
    id: ID!
    username: String
    email: AWSEmail
}
```

We are going to wire a pipeline resolver to the **signUp** field on the **Mutation** type. In the **Mutation** type on the right side, choose **Attach** next to the `signUp` mutation field. Set the resolver to `pipeline resolver` and the `APPSYNC_JS` runtime, then create the resolver.

Our pipeline resolver signs up a user by first validating the email address input and then saving the user in the system. We are going to encapsulate the email validation inside a **validateEmail** function and the saving of the user inside a **saveUser** function. The **validateEmail** function executes first, and if the email is valid, then the **saveUser** function executes.

The execution flow will be as follows:

1. Mutation.signUp resolver request handler

1. validateEmail function

1. saveUser function

1. Mutation.signUp resolver response handler

Because we will probably reuse the **validateEmail** function in other resolvers on our API, we want to avoid accessing `ctx.args` because these will change from one GraphQL field to another. Instead, we can use the `ctx.stash` to store the email attribute from the `signUp(input: Signup)` input field argument.

Update your resolver code by replacing your request and response functions:

```
export function request(ctx) {
    ctx.stash.email = ctx.args.input.email
    return {};
}

export function response(ctx) {
    return ctx.prev.result;
}
```

Choose **Create** or **Save** to update the resolver.
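Conceptually, AppSync evaluates the resolver's request handler, then each function in order, then the resolver's response handler, threading `ctx.stash` and `ctx.prev.result` through each step. The following plain-JavaScript simulation is purely illustrative of that flow (AppSync performs it for you on the `APPSYNC_JS` runtime; the function bodies here are stubs):

```javascript
// Illustrative pipeline: before handler -> functions in order -> after handler.
function runPipeline(args, before, fns, after) {
  const ctx = { args, stash: {}, prev: { result: undefined } };
  before(ctx);
  for (const fn of fns) {
    ctx.prev.result = fn(ctx);
  }
  return after(ctx);
}

const result = runPipeline(
  { input: { email: 'nadia@myvaliddomain.com', username: 'nadia' } },
  (ctx) => { ctx.stash.email = ctx.args.input.email; },    // resolver request handler
  [
    (ctx) => ({ ...ctx.args.input }),                      // validateEmail (check omitted here)
    (ctx) => ({ ...ctx.prev.result, id: 'generated-id' }), // saveUser (util.autoId() stubbed)
  ],
  (ctx) => ctx.prev.result                                 // resolver response handler
);
console.log(result.username); // nadia
```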

## Step 2: Creating a function
<a name="create-a-function-js"></a>

From the pipeline resolver page, in the **Functions** section, choose **Add function**, then **Create new function**. It is also possible to create functions without going through the resolver page; to do this, in the AWS AppSync console, go to the **Functions** page and choose the **Create function** button. Let’s create a function that checks if an email is valid and comes from a specific domain. If the email is not valid, the function raises an error. Otherwise, it forwards whatever input it was given.

Make sure you have created a data source of the **NONE** type. Choose this data source in the **Data source name** list. For the **Function name**, enter `validateEmail`. In the **Function code** area, overwrite everything with this snippet:

```
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const { email } = ctx.stash;
  const valid = util.matches(
    '^[a-zA-Z0-9_.+-]+@(?:(?:[a-zA-Z0-9-]+\\.)?[a-zA-Z]+\\.)?(myvaliddomain)\\.com',
    email
  );
  if (!valid) {
    util.error(`"${email}" is not a valid email.`);
  }

  return { payload: { email } };
}

export function response(ctx) {
  return ctx.result;
}
```
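To see which addresses this pattern accepts, you can try it in plain Node before deploying. Here a regex literal stands in for `util.matches`, which applies the same pattern server-side:

```javascript
// The same email pattern as above, written as a regex literal.
const pattern = /^[a-zA-Z0-9_.+-]+@(?:(?:[a-zA-Z0-9-]+\.)?[a-zA-Z]+\.)?(myvaliddomain)\.com/;

console.log(pattern.test('nadia@myvaliddomain.com'));      // true
console.log(pattern.test('nadia@mail.myvaliddomain.com')); // true
console.log(pattern.test('nadia@anotherdomain.com'));      // false
```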

Review your inputs, then choose **Create**. We just created our **validateEmail** function. Repeat these steps to create the **saveUser** function with the following code. (For simplicity, we use a **NONE** data source and pretend the user has been saved in the system after the function executes.)

```
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  return ctx.prev.result;
}

export function response(ctx) {
  ctx.result.id = util.autoId();
  return ctx.result;
}
```

We just created our **saveUser** function.

## Step 3: Adding a function to a pipeline resolver
<a name="adding-a-function-to-a-pipeline-resolver-js"></a>

Our functions should have been added automatically to the pipeline resolver we just created. If this wasn't the case, or if you created the functions through the **Functions** page, you can choose **Add function** back on the `signUp` resolver page to attach them. Add both the **validateEmail** and **saveUser** functions to the resolver, with **validateEmail** placed before **saveUser**. As you add more functions, you can use the **move up** and **move down** options to reorganize their order of execution. Review your changes, then choose **Save**.

## Step 4: Running a query
<a name="running-a-query-js"></a>

In the AWS AppSync console, go to the **Queries** page. In the explorer, ensure that you're using your mutation. If you aren't, choose `Mutation` in the drop-down list, then choose `+`. Enter the following query:

```
mutation {
  signUp(input: {email: "nadia@myvaliddomain.com", username: "nadia"}) {
    id
    username
  }
}
```

This should return something like:

```
{
  "data": {
    "signUp": {
      "id": "256b6cc2-4694-46f4-a55e-8cb14cc5d7fc",
      "username": "nadia"
    }
  }
}
```

We have successfully signed up our user and validated the input email using a pipeline resolver.
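Outside the console, the same mutation travels as a plain HTTPS POST with a JSON body. The sketch below assembles (but does not send) that request; the endpoint URL and API key are placeholders, and API-key authorization is assumed:

```javascript
// Assemble (but do not send) the HTTPS request carrying the signUp mutation.
// GRAPHQL_ENDPOINT and API_KEY are placeholders for your API's values.
const GRAPHQL_ENDPOINT = 'https://example1234567890.appsync-api.us-east-1.amazonaws.com/graphql';
const API_KEY = 'da2-xxxxxxxxxxxxxxxxxxxxxxxxxx';

const query = `
  mutation {
    signUp(input: {email: "nadia@myvaliddomain.com", username: "nadia"}) {
      id
      username
    }
  }
`;

const request = {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': API_KEY, // assumes the API uses API-key authorization
  },
  body: JSON.stringify({ query }),
};

console.log(JSON.parse(request.body).query.includes('signUp')); // true
```

Passing this object to `fetch(GRAPHQL_ENDPOINT, request)` would issue the same call the console's query editor makes on your behalf.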

# Creating basic queries (VTL)
<a name="configuring-resolvers"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers-js.html).

GraphQL resolvers connect the fields in a type’s schema to a data source. Resolvers are the mechanism by which requests are fulfilled. AWS AppSync can automatically create and connect resolvers from a schema or create a schema and connect resolvers from an existing table without you needing to write any code.

Resolvers in AWS AppSync use JavaScript to convert a GraphQL expression into a format the data source can use. Alternatively, mapping templates can be written in [Apache Velocity Template Language (VTL)](https://velocity.apache.org/engine/2.0/vtl-reference.html) to convert a GraphQL expression into a format the data source can use.

This section will show you how to configure resolvers using VTL. An introductory tutorial-style programming guide for writing resolvers can be found in [Resolver mapping template programming guide](resolver-mapping-template-reference-programming-guide.md#aws-appsync-resolver-mapping-template-reference-programming-guide), and helper utilities available to use when programming can be found in [Resolver mapping template context reference](resolver-context-reference.md#aws-appsync-resolver-mapping-template-context-reference). AWS AppSync also has built-in test and debug flows that you can use when you’re editing or authoring from scratch. For more information, see [Test and debug resolvers](test-debug-resolvers.md#aws-appsync-test-debug-resolvers).

We recommend following this guide before attempting to use any of the aforementioned tutorials.

In this section, we will walk through how to create a resolver, add a resolver for mutations, and use advanced configurations.

## Create your first resolver
<a name="create-your-first-resolver"></a>

Following the examples from the previous sections, the first step is to create a resolver for your `Query` type.

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Schema**.

1. On the right-hand side of the page, there's a window called **Resolvers**. This box contains a list of the types and fields as defined in your **Schema** window on the left-hand side of the page. You're able to attach resolvers to fields. For example, under the **Query** type, choose **Attach** next to the `getTodos` field.

1. On the **Create Resolver** page, choose the data source you created in the [Attaching a data source](https://docs.aws.amazon.com/appsync/latest/devguide/attaching-a-data-source.html) guide. In the **Configure mapping templates** window, you can choose both the generic request and response mapping templates using the drop-down list to the right or write your own.
**Note**  
The pairing of a request mapping template to a response mapping template is called a unit resolver. Unit resolvers are typically meant to perform rote operations; we recommend using them only for singular operations with a small number of data sources. For more complex operations, we recommend using pipeline resolvers, which can execute multiple operations with multiple data sources sequentially.  
For more information about the difference between request and response mapping templates, see [Unit resolvers](https://docs.aws.amazon.com//appsync/latest/devguide/resolver-mapping-template-reference-overview.html#unit-resolvers).  
For more information about using pipeline resolvers, see [Pipeline resolvers](pipeline-resolvers.md#aws-appsync-pipeline-resolvers).

1. For common use cases, the AWS AppSync console has built-in templates that you can use for getting items from data sources (e.g., all item queries, individual lookups, etc.). For example, on the simple version of the schema from [Designing your schema](designing-your-schema.md#aws-appsync-designing-your-schema) where `getTodos` didn’t have pagination, the request mapping template for listing items is as follows:

   ```
   {
       "version" : "2017-02-28",
       "operation" : "Scan"
   }
   ```

1. You always need a response mapping template to accompany the request. The console provides a default with the following passthrough value for lists:

   ```
   $util.toJson($ctx.result.items)
   ```

   In this example, the `context` object (aliased as `$ctx`) for lists of items has the form `$context.result.items`. If your GraphQL operation returns a single item, it would be `$context.result`. AWS AppSync provides helper functions for common operations, such as the `$util.toJson` function listed previously, to format responses properly. For a full list of functions, see [Resolver mapping template utility reference](resolver-util-reference.md#aws-appsync-resolver-mapping-template-util-reference).

1. Choose **Save Resolver**.

------
#### [ API ]

1. Create a resolver object by calling the [https://docs.aws.amazon.com/appsync/latest/APIReference/API_CreateResolver.html](https://docs.aws.amazon.com/appsync/latest/APIReference/API_CreateResolver.html) API.

1. You can modify your resolver's fields by calling the [https://docs.aws.amazon.com/appsync/latest/APIReference/API_UpdateResolver.html](https://docs.aws.amazon.com/appsync/latest/APIReference/API_UpdateResolver.html) API.

------
#### [ CLI ]

1. Create a resolver by running the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-resolver.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-resolver.html) command.

   You'll need to type in 6 parameters for this particular command:

   1. The `api-id` of your API.

   1. The `type-name` of the type that you want to modify in your schema. In the console example, this was `Query`.

   1. The `field-name` of the field that you want to modify in your type. In the console example, this was `getTodos`.

   1. The `data-source-name` of the data source you created in the [Attaching a data source](https://docs.aws.amazon.com/appsync/latest/devguide/attaching-a-data-source.html) guide.

   1. The `request-mapping-template`, which is the body of the request. In the console example, this was:

      ```
      {
          "version" : "2017-02-28",
          "operation" : "Scan"
      }
      ```

   1. The `response-mapping-template`, which is the body of the response. In the console example, this was:

      ```
      $util.toJson($ctx.result.items)
      ```

   An example command may look like this:

   ```
   aws appsync create-resolver --api-id abcdefghijklmnopqrstuvwxyz --type-name Query --field-name getTodos --data-source-name TodoTable --request-mapping-template '{ "version" : "2017-02-28", "operation" : "Scan" }' --response-mapping-template '$util.toJson($ctx.result.items)'
   ```

   An output will be returned in the CLI. Here's an example:

   ```
   {
       "resolver": {
           "kind": "UNIT",
           "dataSourceName": "TodoTable",
           "requestMappingTemplate": "{ \"version\" : \"2017-02-28\", \"operation\" : \"Scan\" }",
           "resolverArn": "arn:aws:appsync:us-west-2:107289374856:apis/abcdefghijklmnopqrstuvwxyz/types/Query/resolvers/getTodos",
           "typeName": "Query",
           "fieldName": "getTodos",
           "responseMappingTemplate": "$util.toJson($ctx.result.items)"
       }
   }
   ```

1. To modify a resolver's fields and/or mapping templates, run the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/update-resolver.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/update-resolver.html) command.

   With the exception of the `api-id` parameter, the parameters used in the `create-resolver` command will be overwritten by the new values from the `update-resolver` command.

------

## Adding a resolver for mutations
<a name="adding-a-resolver-for-mutations"></a>

The next step is to create a resolver for your `Mutation` type.

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Schema**.

1. Under the **Mutation** type, choose **Attach** next to the `addTodo` field.

1. On the **Create Resolver** page, choose the data source you created in the [Attaching a data source](https://docs.aws.amazon.com/appsync/latest/devguide/attaching-a-data-source.html) guide.

1. In the **Configure mapping templates** window, you'll need to modify the request template because this is a mutation where you’re adding a new item to DynamoDB. Use the following request mapping template:

   ```
   {
       "version" : "2017-02-28",
       "operation" : "PutItem",
       "key" : {
           "id" : $util.dynamodb.toDynamoDBJson($ctx.args.id)
       },
       "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args)
   }
   ```

1. AWS AppSync automatically converts arguments defined in the `addTodo` field from your GraphQL schema into DynamoDB operations. The previous example stores records in DynamoDB using a key of `id`, which is passed through from the mutation argument as `$ctx.args.id`. All of the other fields you pass through are automatically mapped to DynamoDB attributes with `$util.dynamodb.toMapValuesJson($ctx.args)`.

   For this resolver, use the following response mapping template:

   ```
   $util.toJson($ctx.result)
   ```

   AWS AppSync also supports test and debug workflows for editing resolvers. You can use a mock `context` object to see the transformed value of the template before invoking. Optionally, you can view the full request execution to a data source interactively when you run a query. For more information, see [Test and debug resolvers](test-debug-resolvers.md#aws-appsync-test-debug-resolvers) and [Monitoring and logging](monitoring.md#aws-appsync-monitoring).

1. Choose **Save Resolver**.
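As a rough illustration of what `$util.dynamodb.toMapValuesJson($ctx.args)` produces, here is a simplified marshaller covering only strings and numbers (the real helper also handles lists, maps, booleans, null, and binary types; the field names below are hypothetical mutation arguments):

```javascript
// Simplified sketch of DynamoDB attribute-value marshalling. Covers only
// strings and numbers; the real $util.dynamodb helpers cover many more types.
function toMapValues(args) {
  const out = {};
  for (const [key, value] of Object.entries(args)) {
    // DynamoDB represents numbers as strings under the "N" type descriptor
    out[key] = typeof value === 'number' ? { N: String(value) } : { S: String(value) };
  }
  return out;
}

// Mutation arguments become typed DynamoDB attribute values
const attrs = toMapValues({ id: 'todo-1', name: 'Walk the dog', priority: 2 });
console.log(attrs.priority); // { N: '2' }
```

This is why every argument you pass to the mutation lands in the table as a correctly typed attribute without any per-field mapping on your part.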

------
#### [ API ]

You can also do this with APIs by utilizing the commands in the [Create your first resolver](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers.html#create-your-first-resolver) section and the parameter details from this section.

------
#### [ CLI ]

You can also do this in the CLI by utilizing the commands in the [Create your first resolver](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers.html#create-your-first-resolver) section and the parameter details from this section.

------

At this point, if you’re not using the advanced resolvers, you can begin using your GraphQL API as outlined in [Using your API](using-your-api.md#aws-appsync-using-your-api).

## Advanced resolvers
<a name="advanced-resolvers"></a>

If you are following the Advanced section and you’re building a sample schema in [Designing your schema](designing-your-schema.md#aws-appsync-designing-your-schema) to do a paginated scan, use the following request template for the `getTodos` field instead:

```
{
    "version" : "2017-02-28",
    "operation" : "Scan",
    "limit": $util.defaultIfNull(${ctx.args.limit}, 20),
    "nextToken": $util.toJson($util.defaultIfNullOrBlank($ctx.args.nextToken, null))
}
```

For this pagination use case, the response mapping is more than just a passthrough because it must contain both the *cursor* (so that the client knows what page to start at next) and the result set. The mapping template is as follows:

```
{
    "todos": $util.toJson($context.result.items),
    "nextToken": $util.toJson($context.result.nextToken)
}
```

The fields in the preceding response mapping template should match the fields defined in your `TodoConnection` type.
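On the client side, the `nextToken` cursor returned by this template drives a fetch-until-exhausted loop. A sketch with a stubbed query executor standing in for the real GraphQL call:

```javascript
// Client-side pagination sketch: request pages until nextToken comes back
// null. executeGetTodos is a stub standing in for the real GraphQL call.
const cannedPages = [
  { todos: [{ id: '1' }, { id: '2' }], nextToken: 'cursor-to-page-2' },
  { todos: [{ id: '3' }], nextToken: null },
];
let call = 0;
function executeGetTodos(limit, nextToken) {
  return cannedPages[call++]; // stub: returns canned TodoConnection pages
}

function fetchAllTodos(limit = 20) {
  const todos = [];
  let nextToken = null;
  do {
    const page = executeGetTodos(limit, nextToken);
    todos.push(...page.todos);
    nextToken = page.nextToken; // cursor for the next request, or null when done
  } while (nextToken);
  return todos;
}

const allTodos = fetchAllTodos();
console.log(allTodos.length); // 3
```

Each iteration passes the previous page's `nextToken` back as an argument, exactly as the request template above expects via `$ctx.args.nextToken`.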

For the case of relations where you have a `Comments` table and you’re resolving the comments field on the `Todo` type (which returns a type of `[Comment]`), you can use a mapping template that runs a query against the second table. To do this, you must have already created a data source for the `Comments` table as outlined in [Attaching a data source](attaching-a-data-source.md#aws-appsync-getting-started-build-a-schema-from-scratch).

**Note**  
We’re using a query operation against a second table for illustrative purposes only. You could use another operation against DynamoDB instead. In addition, you could pull the data from another data source, such as AWS Lambda or Amazon OpenSearch Service, because the relation is controlled by your GraphQL schema.
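The relation boils down to taking the parent `Todo`'s id from `context.source` and turning it into a DynamoDB Query request. In plain JavaScript, the shape the mapping template evaluates to looks like this (a sketch only; the index and key names follow the quickstart's `todoid-index`):

```javascript
// Sketch of the request object the comments-field template evaluates to,
// built from a mock parent object (what context.source holds at runtime).
function buildCommentsQuery(source) {
  return {
    version: '2017-02-28',
    operation: 'Query',
    index: 'todoid-index',
    query: {
      expression: 'todoid = :todoid',
      expressionValues: {
        ':todoid': { S: source.id }, // the parent Todo's id keys the lookup
      },
    },
  };
}

// The Todo being resolved supplies context.source
const request = buildCommentsQuery({ id: 'todo-123', name: 'Walk the dog' });
console.log(request.query.expressionValues[':todoid'].S); // todo-123
```

The console steps below wire this same shape up as the resolver's request mapping template.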

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Schema**.

1. Under the **Todo** type, choose **Attach** next to the `comments` field.

1. On the **Create Resolver** page, choose your **Comments** table data source. The default name for the **Comments** table from the quickstart guides is `AppSyncCommentTable`, but it may vary depending on what name you gave it.

1. Add the following snippet to your request mapping template:

   ```
   {
       "version": "2017-02-28",
       "operation": "Query",
       "index": "todoid-index",
       "query": {
           "expression": "todoid = :todoid",
           "expressionValues": {
               ":todoid": {
                   "S": $util.toJson($context.source.id)
               }
           }
       }
   }
   ```

1. The `context.source` references the parent object of the current field that’s being resolved. In this example, `source.id` refers to the individual `Todo` object, which is then used for the query expression.

   You can use the passthrough response mapping template as follows:

   ```
   $util.toJson($ctx.result.items)
   ```

1. Choose **Save Resolver**.

1. Finally, back on the **Schema** page in the console, attach a resolver to the `addComment` field, and specify the data source for the `Comments` table. The request mapping template in this case is a simple `PutItem` with the specific `todoid` that is commented on as an argument, but you use the `$util.autoId()` utility to create a unique sort key for the comment as follows:

   ```
   {
       "version": "2017-02-28",
       "operation": "PutItem",
       "key": {
           "todoid": { "S": $util.toJson($context.arguments.todoid) },
           "commentid": { "S": "$util.autoId()" }
       },
       "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args)
   }
   ```

   Use a passthrough response template as follows:

   ```
   $util.toJson($ctx.result)
   ```

------
#### [ API ]

You can also do this with APIs by utilizing the commands in the [Create your first resolver](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers.html#create-your-first-resolver) section and the parameter details from this section.

------
#### [ CLI ]

You can also do this in the CLI by utilizing the commands in the [Create your first resolver](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers.html#create-your-first-resolver) section and the parameter details from this section.

------

# Disabling VTL mapping templates with direct Lambda resolvers (VTL)
<a name="direct-lambda-reference"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers-js.html).

With direct Lambda resolvers, you can circumvent the use of VTL mapping templates when using AWS Lambda data sources. AWS AppSync can provide a default payload to your Lambda function as well as a default translation from a Lambda function's response to a GraphQL type. You can choose to provide a request template, a response template, or neither, and AWS AppSync will handle it accordingly.

To learn more about the default request payload and response translation that AWS AppSync provides, see the [Direct Lambda resolver reference](resolver-mapping-template-reference-lambda.md#direct-lambda-resolvers). For more information on setting up an AWS Lambda data source and setting up an IAM Trust Policy, see [Attaching a data source](attaching-a-data-source.md). 

## Configure direct Lambda resolvers
<a name="direct-lambda-reference-resolvers"></a>

The following sections will show you how to attach Lambda data sources and add Lambda resolvers to your fields.

### Add a Lambda data source
<a name="direct-lambda-datasource"></a>

Before you can activate direct Lambda resolvers, you must add a Lambda data source.

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Data sources**.

1. Choose **Create data source**.

   1. For **Data source name**, enter a name for your data source, such as **myFunction**. 

   1. For **Data source type**, choose **AWS Lambda function**.

   1. For **Region**, choose the appropriate region.

   1. For **Function ARN**, choose the Lambda function from the dropdown list. You can search for the function name or manually enter the ARN of the function you want to use. 

   1. Create a new IAM role (recommended) or choose an existing role that has the `lambda:invokeFunction` IAM permission. Existing roles need a trust policy, as explained in the [Attaching a data source](attaching-a-data-source.md) section. 

      The following is an example IAM policy that has the required permissions to perform operations on the resource:

------
#### [ JSON ]

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [ "lambda:invokeFunction" ],
                  "Resource": [
                      "arn:aws:lambda:us-west-2:123456789012:function:myFunction",
                      "arn:aws:lambda:us-west-2:123456789012:function:myFunction:*"
                  ]
              }
          ]
      }
      ```

------

1. Choose the **Create** button.

------
#### [ CLI ]

1. Create a data source object by running the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-data-source.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-data-source.html) command.

   You'll need to type in 4 parameters for this particular command:

   1. The `api-id` of your API.

   1. The `name` of your data source. In the console example, this is the **Data source name**.

   1. The `type` of data source. In the console example, this is **AWS Lambda function**.

   1. The `lambda-config`, which is the **Function ARN** in the console example.
**Note**  
There are other parameters such as `Region` that must be configured but will usually default to your CLI configuration values.

   An example command may look like this:

   ```
   aws appsync create-data-source --api-id abcdefghijklmnopqrstuvwxyz --name myFunction --type AWS_LAMBDA --lambda-config lambdaFunctionArn=arn:aws:lambda:us-west-2:102847592837:function:appsync-lambda-example
   ```

   An output will be returned in the CLI. Here's an example:

   ```
   {
       "dataSource": {
           "dataSourceArn": "arn:aws:appsync:us-west-2:102847592837:apis/abcdefghijklmnopqrstuvwxyz/datasources/myFunction",
           "type": "AWS_LAMBDA",
           "name": "myFunction",
           "lambdaConfig": {
               "lambdaFunctionArn": "arn:aws:lambda:us-west-2:102847592837:function:appsync-lambda-example"
           }
       }
   }
   ```

1. To modify a data source's attributes, run the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/update-data-source.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/update-data-source.html) command.

   With the exception of the `api-id` parameter, the parameters used in the `create-data-source` command will be overwritten by the new values from the `update-data-source` command.

------

### Activate direct Lambda resolvers
<a name="direct-lambda-enable-templates"></a>

After creating a Lambda data source and setting up the appropriate IAM role to allow AWS AppSync to invoke the function, you can link it to a resolver or pipeline function. 

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Schema**.

1. In the **Resolvers** window, choose a field or operation and then select the **Attach** button.

1. In the **Create new resolver** page, choose the Lambda function from the dropdown list.

1. In order to leverage direct Lambda resolvers, confirm that request and response mapping templates are disabled in the **Configure mapping templates** section.

1. Choose the **Save Resolver** button.

------
#### [ CLI ]
+ Create a resolver by running the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-resolver.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/appsync/create-resolver.html) command.

  You'll need to type in 6 parameters for this particular command:

  1. The `api-id` of your API.

  1. The `type-name` of the type in your schema.

  1. The `field-name` of the field in your schema.

  1. The `data-source-name`, or your Lambda function's name.

  1. The `request-mapping-template`, which is the body of the request. In the console example, this was disabled:

     ```
     " "
     ```

  1. The `response-mapping-template`, which is the body of the response. In the console example, this was also disabled:

     ```
     " "
     ```

  An example command may look like this:

  ```
  aws appsync create-resolver --api-id abcdefghijklmnopqrstuvwxyz --type-name Subscription --field-name onCreateTodo --data-source-name LambdaTest --request-mapping-template " " --response-mapping-template " "
  ```

  An output will be returned in the CLI. Here's an example:

  ```
  {
      "resolver": {
          "resolverArn": "arn:aws:appsync:us-west-2:102847592837:apis/abcdefghijklmnopqrstuvwxyz/types/Subscription/resolvers/onCreateTodo",
          "typeName": "Subscription",
          "kind": "UNIT",
          "fieldName": "onCreateTodo",
          "dataSourceName": "LambdaTest"
      }
  }
  ```

------

When you disable your mapping templates, there are several additional behaviors that will occur in AWS AppSync:
+ By disabling a mapping template, you are signaling to AWS AppSync that you accept the default data translations specified in the [Direct Lambda resolver reference](resolver-mapping-template-reference-lambda.md#direct-lambda-resolvers).
+ By disabling the request mapping template, your Lambda data source receives a payload consisting of the entire [Context](resolver-context-reference.md) object.
+ By disabling the response mapping template, the result of your Lambda invocation is translated according to the version of the request mapping template, or according to the default behavior if the request mapping template is also disabled.

# Testing and debugging resolvers in AWS AppSync (VTL)
<a name="test-debug-resolvers"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Please consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers-js.html).

AWS AppSync executes resolvers on a GraphQL field against a data source. As described in [Resolver mapping template overview](resolver-mapping-template-reference-overview.md#aws-appsync-resolver-mapping-template-reference-overview), resolvers communicate with data sources by using a templating language. This enables you to customize the behavior and apply logic and conditions before and after communicating with the data source. For an introductory tutorial-style programming guide for writing resolvers, see the [Resolver mapping template programming guide](resolver-mapping-template-reference-programming-guide.md#aws-appsync-resolver-mapping-template-reference-programming-guide).

To help developers write, test, and debug these resolvers, the AWS AppSync console also provides tools to create a GraphQL request and response with mock data down to the individual field resolver. Additionally, you can perform queries, mutations, and subscriptions in the AWS AppSync console and see a detailed log stream from Amazon CloudWatch of the entire request. This includes results from a data source.

## Testing with mock data
<a name="testing-with-mock-data"></a>

When a GraphQL resolver is invoked, it contains a `context` object that contains information about the request. This includes arguments from a client, identity information, and data from the parent GraphQL field. It also contains the results from the data source, which can be used in the response template. For more information about this structure and the available helper utilities to use when programming, see the [Resolver Mapping Template Context Reference](resolver-context-reference.md#aws-appsync-resolver-mapping-template-context-reference).

When writing or editing a resolver, you can pass a *mock* or *test context* object into the console editor. This enables you to see how both the request and the response templates evaluate without actually running against a data source. For example, you can pass a test `firstname: Shaggy` argument and see how it evaluates when using `$ctx.args.firstname` in your template code. You could also test the evaluation of any utility helpers such as `$util.autoId()` or `$util.time.nowISO8601()`.

### Testing resolvers
<a name="test-a-resolver"></a>

This example will use the AWS AppSync console to test resolvers.

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **APIs dashboard**, choose your GraphQL API.

   1. In the **Sidebar**, choose **Schema**.

1. If you haven't done so already, under the type and next to the field, choose **Attach** to add your resolver.

   For more information on how to build a complete resolver, see [Configuring resolvers](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers.html).

   Otherwise, select the resolver that's already in the field.

1. At the top of the **Edit resolver** page, choose **Select test context**, then choose **Create new context**.

1. Select a sample context object or populate the JSON manually in the **Execution context** window below.

1. Enter a **Test context name**.

1. Choose the **Save** button.

1. At the top of the **Edit Resolver** page, choose **Run test**.

For a more practical example, suppose you have an app storing a GraphQL type of `Dog` that uses automatic ID generation for objects and stores them in Amazon DynamoDB. You also want to write some values from the arguments of a GraphQL mutation, and allow only specific users to see a response. The following shows what the schema might look like:

```
type Dog {
  breed: String
  color: String
}

type Mutation {
  addDog(firstname: String, age: Int): Dog
}
```

When you add a resolver for the `addDog` mutation, you can populate a context object like the following example. The following has arguments from the client of `firstname` and `age`, and a `username` populated in the `identity` object:

```
{
    "arguments" : {
        "firstname": "Shaggy",
        "age": 4
    },
    "source" : {},
    "result" : {
        "breed" : "Miniature Schnauzer",
        "color" : "black_grey"
    },
    "identity": {
        "sub" : "uuid",
        "issuer" : "https://cognito-idp.{region}.amazonaws.com/{userPoolId}",
        "username" : "Nadia",
        "claims" : { },
        "sourceIp" :[  "x.x.x.x" ],
        "defaultAuthStrategy" : "ALLOW"
    }
}
```

You can test this using the following request and response mapping templates:

 **Request Template** 

```
{
    "version" : "2017-02-28",
    "operation" : "PutItem",
    "key" : {
        "id" : { "S" : "$util.autoId()" }
    },
    "attributeValues" : $util.dynamodb.toMapValuesJson($ctx.args)
}
```

 **Response Template** 

```
#if ($context.identity.username == "Nadia")
  $util.toJson($ctx.result)
#else
  $util.unauthorized()
#end
```

The evaluated template has the data from your test context object and the generated value from `$util.autoId()`. Additionally, if you were to change the `username` to a value other than `Nadia`, the results won’t be returned because the authorization check would fail. For more information about fine-grained access control, see [Authorization use cases](security-authorization-use-cases.md#aws-appsync-security-authorization-use-cases).
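The `#if` branch in the response template expresses a simple predicate. Restated in plain JavaScript (a sketch of the logic only, not how AppSync evaluates VTL):

```javascript
// Plain-JavaScript restatement of the response template's check:
// only the user "Nadia" may see the result. Illustration of the logic only.
function evaluateResponse(ctx) {
  if (ctx.identity.username === 'Nadia') {
    return ctx.result;
  }
  // $util.unauthorized() similarly halts evaluation with an error
  throw new Error('Unauthorized');
}

const ctx = {
  identity: { username: 'Nadia' },
  result: { breed: 'Miniature Schnauzer', color: 'black_grey' },
};
console.log(evaluateResponse(ctx).breed); // Miniature Schnauzer
```

Swapping the mock `username` for any other value makes the check throw instead of returning the result, which is exactly what you observe when editing the test context in the console.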

### Testing mapping templates with AWS AppSync's APIs
<a name="testing-with-appsync-api"></a>

You can use the `EvaluateMappingTemplate` API command to remotely test your mapping templates with mocked data. To get started with the command, make sure you have added the `appsync:evaluateMappingTemplate` permission to your policy. For example:

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "appsync:evaluateMappingTemplate",
            "Resource": "arn:aws:appsync:us-east-1:111122223333:*"
        }
    ]
}
```

------

You can leverage the command by using the [AWS CLI](https://aws.amazon.com/cli/) or [AWS SDKs](https://aws.amazon.com/tools/). For example, take the `Dog` schema and its request/response mapping templates from the previous section. Using the CLI on your local workstation, save the request template to a file named `request.vtl`, then save the `context` object to a file named `context.json`. From your shell, run the following command:

```
aws appsync evaluate-mapping-template --template file://request.vtl --context file://context.json
```

The command returns the following response:

```
{
  "evaluationResult": "{\n    \"version\" : \"2017-02-28\",\n    \"operation\" : \"PutItem\",\n    \"key\" : {\n        \"id\" : { \"S\" : \"afcb4c85-49f8-40de-8f2b-248949176456\" }\n    },\n    \"attributeValues\" : {\"firstname\":{\"S\":\"Shaggy\"},\"age\":{\"N\":4}}\n}\n"
}
```

The `evaluationResult` contains the results of testing your provided template with the provided `context`. You can also test your templates using the AWS SDKs. Here's an example using the AWS SDK for JavaScript V2: 

```
const AWS = require('aws-sdk')
const fs = require('fs')
const client = new AWS.AppSync({ region: 'us-east-2' })

const template = fs.readFileSync('./request.vtl', 'utf8')
const context = fs.readFileSync('./context.json', 'utf8')

client
  .evaluateMappingTemplate({ template, context })
  .promise()
  .then((data) => console.log(data))
```

Using the SDK, you can easily incorporate tests from your favorite test suite to validate your template's behavior. We recommend creating tests using the [Jest Testing Framework](https://jestjs.io/), but any testing suite works. The following snippet shows a hypothetical validation run. Note that we expect the evaluation response to be valid JSON, so we use `JSON.parse` to retrieve JSON from the string response:

```
const AWS = require('aws-sdk')
const fs = require('fs')
const client = new AWS.AppSync({ region: 'us-east-2' })

test('request correctly calls DynamoDB', async () => {
  const template = fs.readFileSync('./request.vtl', 'utf8')
  const context = fs.readFileSync('./context.json', 'utf8')
  const contextJSON = JSON.parse(context)
  
  const response = await client.evaluateMappingTemplate({ template, context }).promise()
  const result = JSON.parse(response.evaluationResult)
  
  expect(result.key.id.S).toBeDefined()
  expect(result.attributeValues.firstname.S).toEqual(contextJSON.arguments.firstname)
})
```

This yields the following result:

```
> jest

PASS ./index.test.js
✓ request correctly calls DynamoDB (543 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        1.511 s, estimated 2 s
Ran all test suites.
```

## Debugging a live query
<a name="debugging-a-live-query"></a>

There's no substitute for end-to-end testing and logging when debugging a production application. AWS AppSync lets you log errors and full request details using Amazon CloudWatch. Additionally, you can use the AWS AppSync console to test GraphQL queries, mutations, and subscriptions and to live stream log data for each request back into the query editor to debug in real time. For subscriptions, the logs display connection-time information.

To do this, you need to have Amazon CloudWatch logs enabled in advance, as described in [Monitoring and logging](monitoring.md#aws-appsync-monitoring). Next, in the AWS AppSync console, choose the **Queries** tab and then enter a valid GraphQL query. In the lower-right section, click and drag the **Logs** window to open the logs view. At the top of the page, choose the play arrow icon to run your GraphQL query. In a few moments, the full request and response logs for the operation are streamed to this section, and you can view them in the console.

# Configuring and using pipeline resolvers in AWS AppSync (VTL)
<a name="pipeline-resolvers"></a>

**Note**  
We now primarily support the APPSYNC_JS runtime and its documentation. Consider using the APPSYNC_JS runtime and its guides [here](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers-js.html).

AWS AppSync executes resolvers on a GraphQL field. In some cases, applications require executing multiple operations to resolve a single GraphQL field. With pipeline resolvers, developers can now compose operations called Functions and execute them in sequence. Pipeline resolvers are useful for applications that, for instance, require performing an authorization check before fetching data for a field.

A pipeline resolver is composed of a **Before** mapping template, an **After** mapping template, and a list of Functions. Each Function has a **request** and **response** mapping template that it executes against a data source. Because a pipeline resolver delegates execution to a list of Functions, it is not itself linked to any data source. Unit resolvers and Functions are primitives that execute operations against data sources. For more information, see the [Resolver mapping template overview](resolver-mapping-template-reference-overview.md#aws-appsync-resolver-mapping-template-reference-overview).

## Step 1: Creating a pipeline resolver
<a name="create-a-pipeline-resolver"></a>

In the AWS AppSync console, go to the **Schema** page.

Save the following schema:

```
schema {
    query: Query
    mutation: Mutation
}

type Mutation {
    signUp(input: Signup): User
}

type Query {
    getUser(id: ID!): User
}

input Signup {
    username: String!
    email: String!
}

type User {
    id: ID!
    username: String
    email: AWSEmail
}
```

We are going to wire a pipeline resolver to the **signUp** field on the **Mutation** type. In the **Mutation** type on the right side, choose **Attach** next to the `signUp` mutation field. On the create resolver page, click on **Actions**, then **Update runtime**. Choose `Pipeline Resolver`, then choose `VTL`, then choose **Update**. The page should now show three sections: a **Before mapping template** text area, a **Functions** section, and an **After mapping template** text area.

Our pipeline resolver signs up a user by first validating the email address input and then saving the user in the system. We are going to encapsulate the email validation inside a **validateEmail** function, and the saving of the user inside a **saveUser** function. The **validateEmail** function executes first, and if the email is valid, then the **saveUser** function executes.

The execution flow is as follows:

1. Mutation.signUp resolver request mapping template

1. validateEmail function

1. saveUser function

1. Mutation.signUp resolver response mapping template

Because we will probably reuse the **validateEmail** function in other resolvers on our API, we want to avoid accessing `$ctx.args` because these will change from one GraphQL field to another. Instead, we can use the `$ctx.stash` to store the email attribute from the `signUp(input: Signup)` input field argument.
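The execution flow above can be sketched as a plain JavaScript model. This is hypothetical code that only illustrates how the stash and the previous result travel through the pipeline; it is not how AppSync actually evaluates VTL (the domain check stands in for the regular expression used later, and the fixed `id` stands in for `$util.autoId()`):

```
// Hypothetical model of the signUp pipeline's data flow (not real AppSync code)
function signUpPipeline(args) {
  const ctx = { args, stash: {}, prev: { result: null } };

  // BEFORE template: copy the email input into the stash
  ctx.stash.email = ctx.args.input.email;

  // validateEmail function: simplified domain check in place of $util.matches
  if (!ctx.stash.email.endsWith('@myvaliddomain.com')) {
    throw new Error(`${ctx.stash.email} is not a valid email.`);
  }
  ctx.prev.result = { email: ctx.stash.email };

  // saveUser function: pretend to save the user and add a generated id
  ctx.prev.result = { ...ctx.prev.result, id: 'generated-id' };

  // AFTER template: return the last function's result
  return ctx.prev.result;
}

console.log(signUpPipeline({ input: { email: 'nadia@myvaliddomain.com', username: 'nadia' } }));
```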

**BEFORE** mapping template:

```
## store email input field into a generic email key
$util.qr($ctx.stash.put("email", $ctx.args.input.email))
{}
```

The console provides a default passthrough **AFTER** mapping template that we will use:

```
$util.toJson($ctx.result)
```

Choose **Create** or **Save** to update the resolver.

## Step 2: Creating a function
<a name="create-a-function"></a>

From the pipeline resolver page, in the **Functions** section, click on **Add function**, then **Create new function**. It is also possible to create functions without going through the resolver page; to do this, in the AWS AppSync console, go to the **Functions** page. Choose the **Create function** button. Let’s create a function that checks if an email is valid and comes from a specific domain. If the email is not valid, the function raises an error. Otherwise, it forwards whatever input it was given.

On the new function page, choose **Actions**, then **Update runtime**. Choose `VTL`, then **Update**. Make sure you have created a data source of the **NONE** type. Choose this data source in the **Data source name** list. For **function name**, enter in `validateEmail`. In the **function code** area, overwrite everything with this snippet:

```
#set($valid = $util.matches("^[a-zA-Z0-9_.+-]+@(?:(?:[a-zA-Z0-9-]+\.)?[a-zA-Z]+\.)?(myvaliddomain)\.com", $ctx.stash.email))
#if (!$valid)
    $util.error("$ctx.stash.email is not a valid email.")
#end
{
    "payload": { "email": $util.toJson(${ctx.stash.email}) }
}
```
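Assuming `$util.matches` follows Java's `Pattern.matches` semantics, where the whole string must match, an anchored JavaScript sketch of the same check looks like this (a local sanity check only, not AppSync code):

```
// The same pattern used in the validateEmail request template,
// anchored with $ to approximate whole-string matching
const pattern = /^[a-zA-Z0-9_.+-]+@(?:(?:[a-zA-Z0-9-]+\.)?[a-zA-Z]+\.)?(myvaliddomain)\.com$/;

console.log(pattern.test('nadia@myvaliddomain.com'));      // true: valid domain
console.log(pattern.test('nadia@mail.myvaliddomain.com')); // true: subdomain also matches
console.log(pattern.test('nadia@anotherdomain.com'));      // false: rejected
```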

Paste this into the response mapping template:

```
$util.toJson($ctx.result)
```

Review your changes, then choose **Create**. We just created our **validateEmail** function. Repeat these steps to create the **saveUser** function with the following request and response mapping templates. For simplicity, we use a **NONE** data source and pretend the user has been saved in the system after the function executes.

Request mapping template:

```
## $ctx.prev.result contains the signup input values. We could have also
## used $ctx.args.input.
{
    "payload": $util.toJson($ctx.prev.result)
}
```

Response mapping template:

```
## an id is required so let's add a unique random identifier to the output
$util.qr($ctx.result.put("id", $util.autoId()))
$util.toJson($ctx.result)
```

We just created our **saveUser** function.

## Step 3: Adding a function to a pipeline resolver
<a name="adding-a-function-to-a-pipeline-resolver"></a>

Our functions should have been added automatically to the pipeline resolver we just created. If this wasn't the case, or you created the functions through the **Functions** page, you can click on **Add function** on the resolver page to attach them. Add both the **validateEmail** and **saveUser** functions to the resolver. The **validateEmail** function should be placed before the **saveUser** function. As you add more functions, you can use the **move up** and **move down** options to reorganize the order of execution of your functions. Review your changes, then choose **Save**.

## Step 4: Executing a query
<a name="executing-a-query"></a>

In the AWS AppSync console, go to the **Queries** page. In the explorer, ensure that you're using your mutation. If you aren't, choose `Mutation` in the drop-down list, then choose `+`. Enter the following query:

```
mutation {
  signUp(input: {
    email: "nadia@myvaliddomain.com"
    username: "nadia"
  }) {
    id
    email
  }
}
```

This should return something like:

```
{
  "data": {
    "signUp": {
      "id": "256b6cc2-4694-46f4-a55e-8cb14cc5d7fc",
      "email": "nadia@myvaliddomain.com"
    }
  }
}
```
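If you instead pass an email outside the allowed domain, the **validateEmail** function raises the error from its request mapping template, and no user is returned. The response looks roughly like the following (the exact error shape may vary):

```
{
  "data": {
    "signUp": null
  },
  "errors": [
    {
      "message": "nadia@anotherdomain.com is not a valid email."
    }
  ]
}
```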

We have successfully signed up our user and validated the input email using a pipeline resolver. For a more complete tutorial focusing on pipeline resolvers, see [Tutorial: Pipeline Resolvers](tutorial-pipeline-resolvers.md#aws-appsync-tutorial-pipeline-resolvers).

# Using an AWS AppSync API with the AWS CDK
<a name="using-your-api"></a>

**Tip**  
Before you use the CDK, we recommend reviewing the CDK's [official documentation](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html) along with AWS AppSync's [CDK reference](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync-readme.html).  
We also recommend ensuring that your [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [NPM](https://docs.npmjs.com/) installations are working on your system.

In this section, we're going to create a simple CDK application that can add and fetch items from a DynamoDB table. This is meant to be a quickstart example using some of the code from the [Designing your schema](https://docs.aws.amazon.com/appsync/latest/devguide/designing-your-schema.html), [Attaching a data source](https://docs.aws.amazon.com/appsync/latest/devguide/attaching-a-data-source.html), and [Configuring resolvers (JavaScript)](https://docs.aws.amazon.com/appsync/latest/devguide/configuring-resolvers-js.html) sections.

## Setting up a CDK project
<a name="Setting-up-a-cdk-project"></a>

**Warning**  
These steps may not be completely accurate depending on your environment. We're assuming your system has the necessary utilities installed, a way to interface with AWS services, and proper configurations in place.

The first step is installing the AWS CDK. In your CLI, you can enter the following command:

```
npm install -g aws-cdk
```

Next, you need to create a project directory, then navigate to it. An example set of commands to create and navigate to a directory is:

```
mkdir example-cdk-app
cd example-cdk-app
```

Next, you need to create an app. Our service primarily uses TypeScript. In your project directory, enter the following command:

```
cdk init app --language typescript
```

When you do this, a CDK app along with its initialization files will be installed:

![\[Terminal output showing Git repository initialization and npm install completion.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-init-app-example.png)


Your project structure may look like this:

![\[Project directory structure showing folders and files for an example CDK app.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-init-directories.png)


You'll notice we have several important directories:
+ `bin`: The initial bin file will create the app. We won't touch this in this guide.
+ `lib`: The lib directory contains your stack files. You can think of stack files as individual units of execution. Constructs will be inside our stack files. Basically, these are resources for a service that will be spun up in CloudFormation when the app is deployed. This is where most of our coding will happen.
+ `node_modules`: This directory is created by NPM and contains all package dependencies you installed using the `npm` command.

Our initial stack file may contain something like this:

```
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
// import * as sqs from 'aws-cdk-lib/aws-sqs';

export class ExampleCdkAppStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The code that defines your stack goes here

    // example resource
    // const queue = new sqs.Queue(this, 'ExampleCdkAppQueue', {
    //   visibilityTimeout: cdk.Duration.seconds(300)
    // });
  }
}
```

This is the boilerplate code to create a stack in our app. Most of our code in this example will go inside the scope of this class.

To verify that your stack file is in the app, in your app's directory, run the following command in the terminal:

```
cdk ls
```

A list of your stacks should appear. If it doesn't, then you may need to run through the steps again or check the official documentation for help.

If you want to build your code changes before deploying, you can always run the following command in the terminal:

```
npm run build
```

And, to see the changes before deploying:

```
cdk diff
```

Before we add our code to the stack file, we're going to bootstrap the environment. Bootstrapping provisions resources for the CDK before the app deploys. More information about this process can be found [here](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html). To bootstrap, run the following command:

```
cdk bootstrap aws://ACCOUNT-NUMBER/REGION
```

**Tip**  
This step requires several IAM permissions in your account. Your bootstrap will be denied if you don't have them. If this happens, you may have to delete incomplete resources caused by the bootstrap such as the S3 bucket it generates.

Bootstrap will spin up several resources. The final message will look like this:

![\[Terminal output showing successful bootstrapping of an AWS environment.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-init-bootstrap-final.png)


This is done once per account per Region, so you won't have to do this often. The main resources of the bootstrap are the CloudFormation stack and the Amazon S3 bucket.

The Amazon S3 bucket stores file assets, and IAM roles grant the permissions needed to perform deployments. The required resources are defined in a CloudFormation stack, called the bootstrap stack, which is usually named `CDKToolkit`. Like any CloudFormation stack, it appears in the CloudFormation console once it has been deployed:

![\[CDKToolkit stack with CREATE_COMPLETE status in CloudFormation console.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-init-bootstrap-cfn-console.png)


The same can be said for the bucket:

![\[S3 bucket details showing name, region, access settings, and creation date.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-init-bootstrap-bucket-console.png)


To import the services we need in our stack file, we can use the following command:

```
npm install aws-cdk-lib # V2 command
```

**Tip**  
If you're having trouble with V2, you could install the individual libraries using V1 commands:  

```
npm install @aws-cdk/aws-appsync @aws-cdk/aws-dynamodb
```
We don't recommend this because V1 has been deprecated.

## Implementing a CDK project - Schema
<a name="implementing-a-cdk-project-schema"></a>

We can now start implementing our code. First, we must create our schema. You can simply create a `.graphql` file in your app:

```
mkdir schema
touch schema/schema.graphql
```

In our example, we included a top-level directory called `schema` containing our `schema.graphql`:

![\[File structure showing a schema folder containing schema.graphql file.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-schema-directory.png)


Inside our schema, let's include a simple example:

```
input CreatePostInput {
    title: String
    content: String
}

type Post {
    id: ID!
    title: String
    content: String
}

type Mutation {
    createPost(input: CreatePostInput!): Post
}

type Query {
    getPost: [Post]
}
```

Back in our stack file, we need to make sure the following import directives are defined:

```
import * as cdk from 'aws-cdk-lib';
import * as appsync from 'aws-cdk-lib/aws-appsync';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import { Construct } from 'constructs';
```

Inside the class, we'll add code to make our GraphQL API and connect it to our `schema.graphql` file:

```
export class ExampleCdkAppStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    
    // makes a GraphQL API
    const api = new appsync.GraphqlApi(this, 'post-apis', {
      name: 'api-to-process-posts',
      schema: appsync.SchemaFile.fromAsset('schema/schema.graphql'),
    });
  }
}
```

We'll also add some code to print out the GraphQL URL, API key, and Region:

```
export class ExampleCdkAppStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    
    // Makes a GraphQL API construct
    const api = new appsync.GraphqlApi(this, 'post-apis', {
      name: 'api-to-process-posts',
      schema: appsync.SchemaFile.fromAsset('schema/schema.graphql'),
    });

    // Prints out URL
    new cdk.CfnOutput(this, "GraphQLAPIURL", {
      value: api.graphqlUrl
    });

    // Prints out the AppSync GraphQL API key to the terminal
    new cdk.CfnOutput(this, "GraphQLAPIKey", {
      value: api.apiKey || ''
    });

    // Prints out the stack region to the terminal
    new cdk.CfnOutput(this, "Stack Region", {
      value: this.region
    });
  }
}
```

At this point, we'll deploy our app:

```
cdk deploy
```

This is the result:

![\[Deployment output showing ExampleCdkAppStack details, including GraphQL API URL and stack region.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-schema.png)


It appears our example was successful, but let's check the AWS AppSync console just to confirm:

![\[GraphQL interface showing successful API request with response data displayed.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-schema-result-1.png)


It appears our API was created. Now, we'll check the schema attached to the API:

![\[GraphQL schema defining CreatePostInput, Post type, Mutation, and Query operations.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-schema-result-2.png)


This appears to match up with our schema code, so it was successful. Another way to confirm this from a metadata viewpoint is to look at the CloudFormation stack:

![\[CloudFormation stack showing ExampleCdkAppStack update complete and CDKToolkit creation complete.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-schema-result-3.png)


When we deploy our CDK app, it goes through CloudFormation to spin up resources, just as the bootstrap did. Each stack within our app maps 1:1 with a CloudFormation stack. If you go back to the stack code, you'll see that the stack name was taken from the class name `ExampleCdkAppStack`. You can see the resources it created, which also match the naming conventions in our GraphQL API construct:

![\[Expanded view of post-apis resource showing Schema, DefaultApiKey, and CDKMetadata.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-schema-result-4.png)


## Implementing a CDK project - Data source
<a name="implementing-a-cdk-project-data-source"></a>

Next, we need to add our data source. Our example will use a DynamoDB table. Inside the stack class, we'll add some code to create a new table:

```
export class ExampleCdkAppStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Makes a GraphQL API construct
    const api = new appsync.GraphqlApi(this, 'post-apis', {
      name: 'api-to-process-posts',
      schema: appsync.SchemaFile.fromAsset('schema/schema.graphql'),
    });

    //creates a DDB table
    const add_ddb_table = new dynamodb.Table(this, 'posts-table', {
      partitionKey: {
        name: 'id',
        type: dynamodb.AttributeType.STRING,
      },
    });

    // Prints out URL
    new cdk.CfnOutput(this, "GraphQLAPIURL", {
      value: api.graphqlUrl
    });

    // Prints out the AppSync GraphQL API key to the terminal
    new cdk.CfnOutput(this, "GraphQLAPIKey", {
      value: api.apiKey || ''
    });

    // Prints out the stack region to the terminal
    new cdk.CfnOutput(this, "Stack Region", {
      value: this.region
    });
  }
}
```

At this point, let's deploy again:

```
cdk deploy
```

We should check the DynamoDB console for our new table:

![\[DynamoDB console showing ExampleCdkAppStack-poststable as Active with Provisioned capacity.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-ddb-result-1.png)


Our stack name is correct, and the table name matches our code. If we check our CloudFormation stack again, we'll now see the new table:

![\[Expanded view of a logical ID in CloudFormation showing post-apis, posts-table, and CDKMetadata.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-ddb-result-2.png)


## Implementing a CDK project - Resolver
<a name="implementing-a-cdk-project-resolver"></a>

This example will use two resolvers: one to query the table and one to add to it. Since we're using pipeline resolvers, we'll need to declare two pipeline resolvers with one function in each. In the query, we'll add the following code:

```
export class ExampleCdkAppStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Makes a GraphQL API construct
    const api = new appsync.GraphqlApi(this, 'post-apis', {
      name: 'api-to-process-posts',
      schema: appsync.SchemaFile.fromAsset('schema/schema.graphql'),
    });

    //creates a DDB table
    const add_ddb_table = new dynamodb.Table(this, 'posts-table', {
      partitionKey: {
        name: 'id',
        type: dynamodb.AttributeType.STRING,
      },
    });

    // Creates a function for query
    const add_func = new appsync.AppsyncFunction(this, 'func-get-post', {
      name: 'get_posts_func_1',
      api,
      dataSource: api.addDynamoDbDataSource('table-for-posts', add_ddb_table),
      code: appsync.Code.fromInline(`
          export function request(ctx) {
          return { operation: 'Scan' };
          }

          export function response(ctx) {
          return ctx.result.items;
          }
  `),
      runtime: appsync.FunctionRuntime.JS_1_0_0,
    });

    // Creates a function for mutation
    const add_func_2 = new appsync.AppsyncFunction(this, 'func-add-post', {
      name: 'add_posts_func_1',
      api,
      dataSource: api.addDynamoDbDataSource('table-for-posts-2', add_ddb_table),
      code: appsync.Code.fromInline(`
          export function request(ctx) {
            return {
            operation: 'PutItem',
            key: util.dynamodb.toMapValues({id: util.autoId()}),
            attributeValues: util.dynamodb.toMapValues(ctx.args.input),
            };
          }

          export function response(ctx) {
            return ctx.result;
          }
      `),
      runtime: appsync.FunctionRuntime.JS_1_0_0,
    });

    // Adds a pipeline resolver with the get function
    new appsync.Resolver(this, 'pipeline-resolver-get-posts', {
      api,
      typeName: 'Query',
      fieldName: 'getPost',
      code: appsync.Code.fromInline(`
          export function request(ctx) {
          return {};
          }

          export function response(ctx) {
          return ctx.prev.result;
          }
  `),
      runtime: appsync.FunctionRuntime.JS_1_0_0,
      pipelineConfig: [add_func],
    });

    // Adds a pipeline resolver with the create function
    new appsync.Resolver(this, 'pipeline-resolver-create-posts', {
      api,
      typeName: 'Mutation',
      fieldName: 'createPost',
      code: appsync.Code.fromInline(`
          export function request(ctx) {
          return {};
          }

          export function response(ctx) {
          return ctx.prev.result;
          }
  `),
      runtime: appsync.FunctionRuntime.JS_1_0_0,
      pipelineConfig: [add_func_2],
    });

    // Prints out URL
    new cdk.CfnOutput(this, "GraphQLAPIURL", {
      value: api.graphqlUrl
    });

    // Prints out the AppSync GraphQL API key to the terminal
    new cdk.CfnOutput(this, "GraphQLAPIKey", {
      value: api.apiKey || ''
    });

    // Prints out the stack region to the terminal
    new cdk.CfnOutput(this, "Stack Region", {
      value: this.region
    });
  }
}
```

In this snippet, we added a pipeline resolver called `pipeline-resolver-create-posts` with a function called `func-add-post` attached to it. This is the code that will add `Posts` to the table. The other pipeline resolver was called `pipeline-resolver-get-posts` with a function called `func-get-post` that retrieves `Posts` added to the table.
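Because the resolver functions are plain JavaScript handlers, you can sanity-check their logic locally before deploying. The following is a sketch with a mocked `util` object; the real `util` is injected by the APPSYNC_JS runtime, and the real `toMapValues` wraps values in DynamoDB attribute types rather than returning the object unchanged:

```
// Mocked stand-ins for the AppSync runtime utilities (hypothetical)
const util = {
  autoId: () => 'test-id-123',
  dynamodb: { toMapValues: (obj) => obj }, // identity mock; real version builds attribute maps
};

// Mirror of the mutation function's request handler from the CDK code above
function request(ctx) {
  return {
    operation: 'PutItem',
    key: util.dynamodb.toMapValues({ id: util.autoId() }),
    attributeValues: util.dynamodb.toMapValues(ctx.args.input),
  };
}

const result = request({ args: { input: { title: 'first post', content: 'hello' } } });
console.log(result);
```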

We'll deploy this to add it to the AWS AppSync service:

```
cdk deploy
```

Let's check the AWS AppSync console to see if they were attached to our GraphQL API:

![\[GraphQL API schema showing mutation and query fields with Pipeline resolvers.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-resolver-result-1.png)


It appears to be correct. In the code, both of these resolvers were attached to the GraphQL API we made (denoted by the `api` props value present in both the resolvers and functions). The fields the resolvers are attached to were also specified in the props (defined by the `typeName` and `fieldName` props in each resolver).

Let's see if the content of the resolvers is correct, starting with `pipeline-resolver-create-posts`:

![\[Code snippet showing request and response functions in a resolver, with an arrow pointing to them.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-resolver-result-2.png)


The before and after handlers match our `code` props value. We can also see a function called `add_posts_func_1`, which matches the name of the function we attached to the resolver.

Let's look at the code content of that function:

![\[Function code showing request and response methods for a PutItem operation.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-resolver-result-3.png)


This matches up with the `code` props of the `add_posts_func_1` function. Our mutation was successfully uploaded, so let's check the query:

![\[Resolver code with request and response functions, and a get_posts_func_1 function listed below.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-resolver-result-4.png)


These also match the code. If we look at `get_posts_func_1`:

![\[Code snippet showing two exported functions: request returning 'Scan' operation and response returning items.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-resolver-result-5.png)


Everything appears to be in place. To confirm this from a metadata perspective, we can check our stack in CloudFormation again:

![\[List of logical IDs for AWS resources including API, table, functions, and pipelines.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-deploy-resolver-result-6.png)


Now, we need to test this code by performing some requests.

## Implementing a CDK project - Requests
<a name="implementing-a-cdk-project-requests"></a>

To test our app in the AWS AppSync console, we made one query and one mutation:

![\[GraphQL code snippet showing a query to get post details and a mutation to create a post.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-request-1.png)


`MyMutation` contains a `createPost` operation with the arguments `1970-01-01T12:30:00.000Z` and `first post`. It returns the `date` and `title` that we passed in as well as the automatically generated `id` value. Running the mutation yields the result:

```
{
  "data": {
    "createPost": {
      "date": "1970-01-01T12:30:00.000Z",
      "id": "4dc1c2dd-0aa3-4055-9eca-7c140062ada2",
      "title": "first post"
    }
  }
}
```

If we check the DynamoDB table quickly, we can see our entry in the table when we scan it:

![\[DynamoDB table entry showing id, date, and title fields for a single item.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/cdk-code-request-2.png)


Back in the AWS AppSync console, if we run the query to retrieve this `Post`, we get the following result:

```
{
  "data": {
    "getPost": [
      {
        "id": "9f62c4dd-49d5-48d5-b835-143284c72fe0",
        "date": "1970-01-01T12:30:00.000Z",
        "title": "first post"
      }
    ]
  }
}
```

# Using subscriptions for real-time data applications in AWS AppSync
<a name="aws-appsync-real-time-data"></a>

**Important**  
As of Mar 13, 2025, you can build a real-time PubSub API powered by WebSockets using AWS AppSync Events. For more information, see [Publish events via WebSocket](https://docs.aws.amazon.com/appsync/latest/eventapi/publish-websocket.html) in the *AWS AppSync Events Developer Guide*.

AWS AppSync allows you to use subscriptions to implement live application updates, push notifications, and more. When clients invoke GraphQL subscription operations, a secure WebSocket connection is automatically established and maintained by AWS AppSync. Applications can then distribute data in real time from a data source to subscribers while AWS AppSync continually manages the application's connection and scaling requirements. The following sections show you how subscriptions in AWS AppSync work.

## GraphQL schema subscription directives
<a name="graphql-schema-subscription-directives"></a>

Subscriptions in AWS AppSync are invoked as a response to a mutation. This means that you can make any data source in AWS AppSync real time by specifying a GraphQL schema directive on a mutation.

The AWS Amplify client libraries automatically handle subscription connection management. The libraries use pure WebSockets as the network protocol between the client and service.

**Note**  
To control authorization at connection time to a subscription, you can use AWS Identity and Access Management (IAM), AWS Lambda, Amazon Cognito identity pools, or Amazon Cognito user pools for field-level authorization. For fine-grained access controls on subscriptions, you can attach resolvers to your subscription fields and perform logic using the identity of the caller and AWS AppSync data sources. For more information, see [Configuring authorization and authentication to secure your GraphQL APIs](security-authz.md).

Subscriptions are triggered from mutations and the mutation selection set is sent to subscribers.

The following example shows how to work with GraphQL subscriptions. It doesn't specify a data source because the data source could be Lambda, Amazon DynamoDB, or Amazon OpenSearch Service.

To get started with subscriptions, you must add a subscription entry point to your schema as follows:

```
schema {
    query: Query
    mutation: Mutation
    subscription: Subscription
}
```

Suppose you have a blog post site, and you want to subscribe to new blogs and changes to existing blogs. To do this, add the following `Subscription` definition to your schema:

```
type Subscription {
    addedPost: Post
    updatedPost: Post
    deletedPost: Post
}
```

Suppose further that you have the following mutations:

```
type Mutation {
    addPost(id: ID! author: String! title: String content: String url: String): Post!
    updatePost(id: ID! author: String! title: String content: String url: String ups: Int! downs: Int! expectedVersion: Int!): Post!
    deletePost(id: ID!): Post!
}
```

You can make these fields real time by adding an `@aws_subscribe(mutations: ["mutation_field_1", "mutation_field_2"])` directive for each of the subscriptions you want to receive notifications for, as follows:

```
type Subscription {
    addedPost: Post
    @aws_subscribe(mutations: ["addPost"])
    updatedPost: Post
    @aws_subscribe(mutations: ["updatePost"])
    deletedPost: Post
    @aws_subscribe(mutations: ["deletePost"])
}
```

Because the `@aws_subscribe(mutations: ["",..,""])` directive takes an array of mutation inputs, you can specify multiple mutations that initiate a subscription. If you're subscribing from a client, your GraphQL query might look like the following:

```
subscription NewPostSub {
    addedPost {
        __typename
        version
        title
        content
        author
        url
    }
}
```

Clients and tooling require this subscription query to establish connections.

With the pure WebSockets client, selection set filtering is done per client, as each client can define its own selection set. In this case, the subscription selection set must be a subset of the mutation selection set. For example, a subscription `addedPost{author title}` linked to the mutation `addPost(...){id author title url version}` receives only the author and title of the post. It does not receive the other fields. However, if the mutation lacked the author in its selection set, the subscriber would get a `null` value for the author field (or an error in case the author field is defined as required/not-null in the schema).

The subscription selection set is essential when using pure WebSockets. If a field is not explicitly defined in the subscription, then AWS AppSync doesn't return the field.
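This projection behavior can be sketched in plain JavaScript (illustrative only, not AppSync code; the function and data names are hypothetical): the service keeps only the fields in the subscriber's selection set, and selected fields the mutation didn't publish come back as `null`:

```javascript
// Illustrative sketch of per-client selection set filtering with pure
// WebSockets: fields outside the subscriber's selection set are dropped,
// and selected fields the mutation did not publish arrive as null.
function projectSelectionSet(mutationResult, subscriptionFields) {
  const projected = {};
  for (const field of subscriptionFields) {
    projected[field] = field in mutationResult ? mutationResult[field] : null;
  }
  return projected;
}

// Mutation selection set: { id author title url version }
const published = { id: '1', author: 'Nadia', title: 'GraphQL', url: 'https://example.com', version: 1 };

// Subscriber asked for addedPost { author title }: only those two fields arrive
projectSelectionSet(published, ['author', 'title']);
// Subscriber asked for { author title content }, but the mutation didn't
// publish content, so the subscriber sees content: null
projectSelectionSet(published, ['author', 'title', 'content']);
```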

In the previous example, the subscriptions didn't have arguments. Suppose that your schema looks like the following:

```
type Subscription {
    updatedPost(id:ID! author:String): Post
    @aws_subscribe(mutations: ["updatePost"])
}
```

In this case, your client defines a subscription as follows:

```
subscription UpdatedPostSub {
    updatedPost(id:"XYZ", author:"ABC") {
        title
        content
    }
}
```

The return type of a `subscription` field in your schema must match the return type of the corresponding mutation field. In the previous example, both `addPost` and `addedPost` return the `Post` type.

To set up subscriptions on the client, see [Building a client application using Amplify client](building-a-client-app.md).

## Using subscription arguments
<a name="using-subscription-arguments"></a>

An important part of using GraphQL subscriptions is understanding when and how to use arguments. You can make subtle changes to modify how and when to notify clients about mutations that have occurred. To do this, see the sample schema from the quickstart chapter, which creates "Todos". For this sample schema, the following mutations are defined:

```
type Mutation {
    createTodo(input: CreateTodoInput!): Todo
    updateTodo(input: UpdateTodoInput!): Todo
    deleteTodo(input: DeleteTodoInput!): Todo
}
```

In the default sample, clients can subscribe to updates to any `Todo` by using the `onUpdateTodo` `subscription` with no arguments:

```
subscription OnUpdateTodo {
  onUpdateTodo {
    description
    id
    name
    when
  }
}
```

You can filter your `subscription` by using its arguments. For example, to only trigger a `subscription` when a `todo` with a specific `ID` is updated, specify the `ID` value:

```
subscription OnUpdateTodo {
  onUpdateTodo(id: "a-todo-id") {
    description
    id
    name
    when
  }
}
```

You can also pass multiple arguments. For example, the following `subscription` demonstrates how to get notified of any `Todo` updates at a specific place and time:

```
subscription todosAtHome {
  onUpdateTodo(when: "tomorrow", where: "at home") {
    description
    id
    name
    when
    where
  }
}
```

Note that all of the arguments are optional. If you don't specify any arguments in your `subscription`, you will be subscribed to all `Todo` updates that occur in your application. However, you could update your `subscription`'s field definition to require the `id` argument. This would restrict notifications to a specific `todo` instead of all `todo`s:

```
onUpdateTodo(
  id: ID!,
  name: String,
  when: String,
  where: String,
  description: String
): Todo
```

### Argument null value has meaning
<a name="argument-null-value-has-meaning"></a>

When making a subscription query in AWS AppSync, a `null` argument value will filter the results differently than omitting the argument entirely.

Let's go back to the todos API sample where we could create todos. See the sample schema from the quickstart chapter.

Let's modify our schema to include a new `owner` field, on the `Todo` type, that describes who the owner is. The `owner` field is not required and can only be set on `UpdateTodoInput`. See the following simplified version of the schema:

```
type Todo {
  id: ID!
  name: String!
  when: String!
  where: String!
  description: String!
  owner: String
}

input CreateTodoInput {
  name: String!
  when: String!
  where: String!
  description: String!
}

input UpdateTodoInput {
  id: ID!
  name: String
  when: String
  where: String
  description: String
  owner: String
}

type Subscription {
    onUpdateTodo(
        id: ID,
        name: String,
        when: String,
        where: String,
        description: String
    ): Todo @aws_subscribe(mutations: ["updateTodo"])
}
```

The following subscription returns all `Todo` updates:

```
subscription MySubscription {
  onUpdateTodo {
    description
    id
    name
    when
    where
  }
}
```

If you modify the preceding subscription to add the field argument `owner: null`, you are now asking a different question. This subscription now registers the client to get notified of all the `Todo` updates that have not provided an owner.

```
subscription MySubscription {
  onUpdateTodo(owner: null) {
    description
    id
    name
    when
    where
  }
}
```
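The distinction can be sketched in plain JavaScript (an illustration of the matching semantics, not AppSync code; the names are hypothetical): omitting the argument matches every update, while passing `null` matches only updates whose field is itself `null`:

```javascript
// Illustrative sketch: how a single subscription argument filters delivery.
function matchesOwnerArg(update, args) {
  if (!('owner' in args)) return true; // argument omitted: match every update
  return update.owner === args.owner;  // owner: null matches only null owners
}

const ownedTodo = { id: '1', owner: 'nikki' };
const unownedTodo = { id: '2', owner: null };

matchesOwnerArg(ownedTodo, {});                // true: no argument supplied
matchesOwnerArg(ownedTodo, { owner: null });   // false: this todo has an owner
matchesOwnerArg(unownedTodo, { owner: null }); // true: owner is null
```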

**Note**  
**As of January 1, 2022, MQTT over WebSockets is no longer available as a protocol for GraphQL subscriptions in AWS AppSync APIs. Pure WebSockets is the only protocol supported in AWS AppSync.**  
Clients based on the AWS AppSync SDK or the Amplify libraries, released after November 2019, automatically use pure WebSockets by default. Upgrading the clients to the latest version allows them to use AWS AppSync's pure WebSockets engine.  
Pure WebSockets come with a larger payload size (240 KB), a wider variety of client options, and improved CloudWatch metrics. For more information on using pure WebSocket clients, see [Building a real-time WebSocket client in AWS AppSync](real-time-websocket-client.md).

# Creating generic pub/sub APIs powered by serverless WebSockets in AWS AppSync
<a name="aws-appsync-real-time-create-generic-api-serverless-websocket"></a>

**Important**  
As of Mar 13, 2025, you can build a real-time PubSub API powered by WebSockets using AWS AppSync Events. For more information, see [Publish events via WebSocket](https://docs.aws.amazon.com/appsync/latest/eventapi/publish-websocket.html) in the *AWS AppSync Events Developer Guide*.

Some applications only require simple WebSocket APIs where clients listen to a specific channel or topic. Generic JSON data with no specific shape or strongly typed requirements can be pushed to clients listening to one of these channels in a pure and simple publish-subscribe (pub/sub) pattern.

Use AWS AppSync to implement simple pub/sub WebSocket APIs with little to no GraphQL knowledge in minutes by automatically generating GraphQL code on both the API backend and the client sides.

## Create and configure pub-sub APIs
<a name="aws-appsync-real-time-enhanced-filtering-using-pub-sub-apis"></a>

To get started, do the following: 

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **Dashboard**, choose **Create API**.

1. On the next screen, choose **Create a real-time API**, then choose **Next**.

1. Enter a friendly name for your pub/sub API.

1. You can enable [private API](https://docs.aws.amazon.com/appsync/latest/devguide/using-private-apis.html) features, but we recommend keeping this off for now. Choose **Next**.

1. You can choose to automatically generate a working pub/sub API using WebSockets. We recommend keeping this feature off for now as well. Choose **Next**.

1. Choose **Create API** and then wait for a couple of minutes. A new pre-configured AWS AppSync pub/sub API will be created in your AWS account.

The API uses AWS AppSync's built-in local resolvers (for more information about using local resolvers, see [Tutorial: Local Resolvers](https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-local-resolvers-js.html) in the *AWS AppSync Developer Guide*) to manage multiple temporary pub/sub channels and WebSocket connections, which automatically delivers and filters data to subscribed clients based only on the channel name. API calls are authorized with an API key.

After the API is deployed, you are presented with a couple of extra steps to generate client code and integrate it with your client application. For an example of how to quickly integrate a client, this guide uses a simple React web application.

1. Start by creating a boilerplate React app using [NPM](https://www.npmjs.com/get-npm) on your local machine:

   ```
   $ npx create-react-app mypubsub-app 
   $ cd mypubsub-app
   ```
**Note**  
This example uses the [Amplify libraries](https://docs.amplify.aws/lib/) to connect clients to the backend API. However, there's no need to create an Amplify CLI project locally. While React is the client of choice in this example, Amplify libraries also support iOS, Android, and Flutter clients, providing the same capabilities in these different runtimes. The supported Amplify clients provide simple abstractions to interact with AWS AppSync GraphQL API backends with a few lines of code, including built-in WebSocket capabilities fully compatible with the [AWS AppSync real-time WebSocket protocol](https://docs.aws.amazon.com/appsync/latest/devguide/real-time-websocket-client.html):  

   ```
   $ npm install @aws-amplify/api
   ```

1. In the AWS AppSync console, select **JavaScript**, then **Download** to download a single file with the API configuration details and generated GraphQL operations code.

1. Copy the downloaded file to the `/src` folder in your React project.

1. Next, replace the content of the existing boilerplate `src/App.js` file with the sample client code available in the console.

1. Use the following command to start the application locally:

   ```
   $ npm start
   ```

1. To test sending and receiving real-time data, open two browser windows and access *localhost:3000*. The sample application is configured to send generic JSON data to a hard-coded channel named *robots*.

1. In one of the browser windows, enter the following JSON blob in the text box, then choose **Submit**: 

   ```
   {
     "robot":"r2d2",
     "planet": "tatooine"
   }
   ```

Both browser instances are subscribed to the *robots* channel and receive the published data in real time, displayed at the bottom of the web application:

![\[Example React app for pub/sub API\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/pub-sub-react.png)


All necessary GraphQL API code, including the schema, resolvers, and operations, is automatically generated to enable a generic pub/sub use case. On the backend, data is published to AWS AppSync's real-time endpoint with a GraphQL mutation such as the following:

```
mutation PublishData {
    publish(data: "{\"msg\": \"hello world!\"}", name: "channel") {
        data
        name
    }
}
```

Subscribers access the published data sent to the specific temporary channel with a related GraphQL subscription:

```
subscription SubscribeToData {
    subscribe(name:"channel") {
        name
        data
    }
}
```

## Implementing pub-sub APIs into existing applications
<a name="aws-appsync-real-time-enhanced-filtering-existing-apps"></a>

If you only need to implement a real-time feature in an existing application, this generic pub/sub API configuration can be easily integrated into any application or API technology. While there are advantages to using a single GraphQL API endpoint to securely access, manipulate, and combine data from one or more data sources in a single network call, there's no need to convert or rebuild an existing REST-based application from scratch to take advantage of AWS AppSync's real-time capabilities. For instance, you could keep an existing CRUD workload in a separate API endpoint, with clients sending and receiving messages or events from the existing application to the generic pub/sub API for real-time and pub/sub purposes only. 

# Defining enhanced subscriptions filters in AWS AppSync
<a name="aws-appsync-real-time-enhanced-filtering"></a>

**Important**  
As of Mar 13, 2025, you can build a real-time PubSub API powered by WebSockets using AWS AppSync Events. For more information, see [Publish events via WebSocket](https://docs.aws.amazon.com/appsync/latest/eventapi/publish-websocket.html) in the *AWS AppSync Events Developer Guide*.

In AWS AppSync, you can define and enable business logic for data filtering on the backend directly in the GraphQL API subscription resolvers by using filters that support additional logical operators. You configure these filters in the backend, unlike subscription arguments, which are defined by the client in the subscription query. For more information about using subscription arguments, see [Using subscription arguments](aws-appsync-real-time-data.md#using-subscription-arguments). For a list of operators, see [AWS AppSync resolver mapping template utility reference](resolver-util-reference.md).

For the purpose of this document, we divide real-time data filtering into the following categories:
+ **Basic filtering** - Filtering based on client-defined arguments in the subscription query.
+ **Enhanced filtering** - Filtering based on logic defined centrally in the AWS AppSync service backend.

The following sections explain how to configure enhanced subscription filters and show their practical use.

## Defining subscriptions in your GraphQL schema
<a name="aws-appsync-real-time-enhanced-filtering-using-subscription-filters"></a>

To use enhanced subscription filters, you define the subscription in the GraphQL schema then define the enhanced filter using a filtering extension. To illustrate how enhanced subscription filtering works in AWS AppSync, use the following GraphQL schema, which defines a ticket management system API, as an example:

```
type Ticket {
	id: ID
	createdAt: AWSDateTime
	content: String
	severity: Int
	priority: Priority
	category: String
	group: String
	status: String
}

type Mutation {
	createTicket(input: TicketInput): Ticket
}

type Query {
	getTicket(id: ID!): Ticket
}

type Subscription {
	onSpecialTicketCreated: Ticket @aws_subscribe(mutations: ["createTicket"])
	onGroupTicketCreated(group: String!): Ticket @aws_subscribe(mutations: ["createTicket"])
}



enum Priority {
	none
	lowest
	low
	medium
	high
	highest
}

input TicketInput {
	content: String
	severity: Int
	priority: Priority
	category: String
	group: String
}
```

Suppose you create a `NONE` data source for your API, then attach a resolver to the `createTicket` mutation using this data source. Your handlers may look like this:

```
import { util } from '@aws-appsync/utils';

export function request(ctx) {
	return {
		payload: {
			id: util.autoId(),
			createdAt: util.time.nowISO8601(),
			status: 'pending',
			...ctx.args.input,
		},
	};
}

export function response(ctx) {
	return ctx.result;
}
```

**Note**  
Enhanced filters are enabled in the GraphQL resolver's handler in a given subscription. For more information, see [Resolver reference](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-js-version.html).

To implement the behavior of the enhanced filter, you must use the `extensions.setSubscriptionFilter()` function to define a filter expression evaluated against published data from a GraphQL mutation that the subscribed clients might be interested in. For more information about the filtering extensions, see [Extensions](https://docs.aws.amazon.com//appsync/latest/devguide/extensions-js.html).

The following section explains how to use filtering extensions to implement enhanced filters.

## Creating enhanced subscription filters using filtering extensions
<a name="aws-appsync-real-time-enhanced-filtering-defining-filters"></a>

Enhanced filters are written in JSON in the response handler of the subscription's resolvers. Filters can be grouped together in a list called a `filterGroup`. Filters are defined using at least one rule, each with fields, operators, and values. Let’s define a new resolver for `onSpecialTicketCreated` that sets up an enhanced filter. You can configure multiple rules in a filter that are evaluated using AND logic, while multiple filters in a filter group are evaluated using OR logic:

```
import { util, extensions } from '@aws-appsync/utils';

export function request(ctx) {
	// simply return null for the payload
	return { payload: null };
}

export function response(ctx) {
	const filter = {
		or: [
			{ severity: { ge: 7 }, priority: { in: ['high', 'medium'] } },
			{ category: { eq: 'security' }, group: { in: ['admin', 'operators'] } },
		],
	};
	extensions.setSubscriptionFilter(util.transform.toSubscriptionFilter(filter));

  // important: return null in the response
	return null;
}
```

Based on the filters defined in the preceding example, important tickets are automatically pushed to subscribed API clients if a ticket is created with:
+ `priority` level `high` or `medium`

  AND 
+ `severity` level greater than or equal to `7` (`ge`)

OR 
+ `category` set to `security` 

  AND 
+ `group` assignment set to `admin` or `operators`

![\[Example showing a ticket filtering query\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/aws-priority-example.png)
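The AND/OR evaluation described above can be sketched in plain JavaScript (illustrative only, not the AppSync implementation; only the `eq`, `ge`, and `in` operators from the example are modeled):

```javascript
// Illustrative sketch of enhanced filter evaluation semantics:
// field conditions inside one rule object are ANDed,
// and the entries of an `or` list are ORed.
const ops = {
  eq: (v, x) => v === x,
  ge: (v, x) => v >= x,
  in: (v, x) => x.includes(v),
};

function matchesRule(payload, rule) {
  // every field condition in a rule must hold (AND)
  return Object.entries(rule).every(([field, cond]) =>
    Object.entries(cond).every(([op, x]) => ops[op](payload[field], x))
  );
}

function matchesFilter(payload, filter) {
  // any rule in an `or` group may hold (OR)
  if (filter.or) return filter.or.some((rule) => matchesRule(payload, rule));
  return matchesRule(payload, filter);
}

const filter = {
  or: [
    { severity: { ge: 7 }, priority: { in: ['high', 'medium'] } },
    { category: { eq: 'security' }, group: { in: ['admin', 'operators'] } },
  ],
};

matchesFilter({ severity: 8, priority: 'high' }, filter);            // true
matchesFilter({ severity: 3, priority: 'high' }, filter);            // false
matchesFilter({ category: 'security', group: 'admin' }, filter);     // true
```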


Filters defined in the subscription resolver (enhanced filtering) take precedence over filtering based only on subscription arguments (basic filtering). For more information about using subscription arguments, see [Using subscription arguments](https://docs.aws.amazon.com//appsync/latest/devguide/aws-appsync-real-time-data.html#using-subscription-arguments).

If an argument is defined and required in the GraphQL schema of the subscription, filtering based on the given argument takes place only if the argument is defined as a rule in the resolver's `extensions.setSubscriptionFilter()` method. However, if there are no `extensions` filtering methods in the subscription resolver, arguments defined in the client are used only for basic filtering. You can't use basic filtering and enhanced filtering concurrently.

You can use the [`context` variable](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference-js.html) in the subscription's filter extension logic to access contextual information about the request. For example, when using Amazon Cognito User Pools, OIDC, or Lambda custom authorizers for authorization, you can retrieve information about your users in `context.identity` when the subscription is established. You can use that information to establish filters based on your users’ identity.

Now assume that you want to implement the enhanced filter behavior for `onGroupTicketCreated`. The `onGroupTicketCreated` subscription requires a mandatory `group` name as an argument. When created, tickets are automatically assigned a `pending` status. You can set up a subscription filter to only receive newly created tickets that belong to the provided group:

```
import { util, extensions } from '@aws-appsync/utils';

export function request(ctx) {
	// simply return null for the payload
	return { payload: null };
}

export function response(ctx) {
	const filter = { group: { eq: ctx.args.group }, status: { eq: 'pending' } };
	extensions.setSubscriptionFilter(util.transform.toSubscriptionFilter(filter));

	return null;
}
```

When data is published using a mutation like in the following example:

```
mutation CreateTicket {
  createTicket(input: {priority: medium, severity: 2, group: "aws"}) {
    id
    priority
    severity
    status
    group
    createdAt
  }
}
```

Subscribed clients listen for the data to be automatically pushed via WebSockets as soon as a ticket is created with the `createTicket` mutation:

```
subscription OnGroup {
  onGroupTicketCreated(group: "aws") {
    category
    status
    severity
    priority
    id
    group
    createdAt
    content
  }
}
```

Clients can subscribe without arguments because the filtering logic is implemented in the AWS AppSync service with enhanced filtering, which simplifies the client code. Clients receive data only if the defined filter criteria are met.

## Defining enhanced filters for nested schema fields
<a name="aws-appsync-real-time-enhanced-filters-nested-schema-fields.title"></a>

You can use enhanced subscription filtering to filter nested schema fields. Suppose we modified the schema from the previous section to include location and address types:

```
type Ticket {
	id: ID
	createdAt: AWSDateTime
	content: String
	severity: Int
	priority: Priority
	category: String
	group: String
	status: String
	location: ProblemLocation
}

type Mutation {
	createTicket(input: TicketInput): Ticket
}

type Query {
	getTicket(id: ID!): Ticket
}

type Subscription {
	onSpecialTicketCreated: Ticket @aws_subscribe(mutations: ["createTicket"])
	onGroupTicketCreated(group: String!): Ticket @aws_subscribe(mutations: ["createTicket"])
}

type ProblemLocation {
	address: Address
}

type Address {
	country: String
}

enum Priority {
	none
	lowest
	low
	medium
	high
	highest
}

input TicketInput {
	content: String
	severity: Int
	priority: Priority
	category: String
	group: String
	location: AWSJSON
}
```

With this schema, you can use a `.` separator to represent nesting. The following example adds a filter rule for a nested schema field under `location.address.country`. The subscription will be triggered if the ticket's address is set to `USA`:

```
import { util, extensions } from '@aws-appsync/utils';

export const request = (ctx) => ({ payload: null });

export function response(ctx) {
	const filter = {
		or: [
			{ severity: { ge: 7 }, priority: { in: ['high', 'medium'] } },
			{ category: { eq: 'security' }, group: { in: ['admin', 'operators'] } },
			{ 'location.address.country': { eq: 'USA' } },
		],
	};
	extensions.setSubscriptionFilter(util.transform.toSubscriptionFilter(filter));
	return null;
}
```

In the example above, `location` represents nesting level one, `address` represents nesting level two, and `country` represents nesting level three, all of which are separated by the `.` separator.
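The `.` separator can be modeled with a simple path lookup (a plain JavaScript illustration, not AppSync code; `getAtPath` is a hypothetical helper):

```javascript
// Illustrative sketch: resolving a dotted filter field such as
// 'location.address.country' against a published payload.
function getAtPath(payload, dottedPath) {
  return dottedPath
    .split('.')
    .reduce((value, key) => (value == null ? undefined : value[key]), payload);
}

const ticket = { location: { address: { country: 'USA' } }, severity: 2 };
getAtPath(ticket, 'location.address.country'); // 'USA'
getAtPath(ticket, 'location.address.zip');     // undefined
```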

You can test this subscription by using the `createTicket` mutation:

```
mutation CreateTicketInUSA {
  createTicket(input: {location: "{\"address\":{\"country\":\"USA\"}}"}) {
    category
    content
    createdAt
    group
    id
    location {
      address {
        country
      }
    }
    priority
    severity
    status
  }
}
```

## Defining enhanced filters from the client
<a name="aws-appsync-real-time-enhanced-filtering-defining-from-client"></a>

You can use basic filtering in GraphQL with [subscriptions arguments](https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-data.html#using-subscription-arguments). The client that makes the call in the subscription query defines the arguments' values. When enhanced filters are enabled in an AWS AppSync subscription resolver with the `extensions` filtering, backend filters defined in the resolver take precedence and priority.

Configure dynamic, client-defined enhanced filters using a `filter` argument in the subscription. When you configure these filters, you must update the GraphQL schema to reflect the new argument:

```
...
type Subscription {
    onSpecialTicketCreated(filter: String): Ticket
        @aws_subscribe(mutations: ["createTicket"])
}
...
```

The client can then send a subscription query like in the following example:

```
subscription onSpecialTicketCreated($filter: String) {
     onSpecialTicketCreated(filter: $filter) {
        id
        group
        description
        priority
        severity
     }
 }
```

You can configure the query variable like the following example:

```
{"filter" : "{\"severity\":{\"le\":2}}"}
```
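Rather than hand-escaping the JSON, a client can build the stringified `filter` variable with `JSON.stringify` (a hypothetical client-side sketch):

```javascript
// Build the filter as an object, then stringify it for the String argument.
const filterObject = { severity: { le: 2 } };
const variables = { filter: JSON.stringify(filterObject) };
// variables.filter is the string {"severity":{"le":2}}
```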

The `util.transform.toSubscriptionFilter()` resolver utility can be used in the subscription's response handler to apply the filter defined in the subscription argument for each client:

```
import { util, extensions } from '@aws-appsync/utils';

export function request(ctx) {
	// simply return null for the payload
	return { payload: null };
}

export function response(ctx) {
	// the filter argument arrives as a JSON string; parse it before transforming
	const filter = JSON.parse(ctx.args.filter);
	extensions.setSubscriptionFilter(util.transform.toSubscriptionFilter(filter));
	return null;
}
```

With this strategy, clients can define their own filters that use enhanced filtering logic and additional operators. Filters are assigned when a given client invokes the subscription query in a secure WebSocket connection. For more information about the transform utility for enhanced filtering, including the format of the `filter` query variable payload, see [JavaScript resolvers overview](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-overview-js.html).

## Additional enhanced filtering restrictions
<a name="aws-appsync-real-time-enhanced-filtering-additional-restrictions"></a>

Below are several use cases where additional restrictions are placed on enhanced filters:
+ Enhanced filters don't support filtering for top-level object lists. In this use case, published data from the mutation will be ignored for enhanced subscriptions.
+ AWS AppSync supports up to five levels of nesting. Filters on schema fields past nesting level five will be ignored. Take the GraphQL response below. The `continent` field in `venue.address.country.metadata.continent` is allowed because it's a level five nest. However, `financial` in `venue.address.country.metadata.capital.financial` is a level six nest, so the filter won't work:

  ```
  {
      "data": {
          "onCreateFilterEvent": {
              "venue": {
                  "address": {
                      "country": {
                          "metadata": {
                              "capital": {
                                  "financial": "New York"
                              },
                              "continent" : "North America"
                          }
                      },
                      "state": "WA"
                  },
                  "builtYear": 2023
              },
              "private": false,
          }
      }
  }
  ```

# Unsubscribing WebSocket connections using filters in AWS AppSync
<a name="aws-appsync-real-time-invalidation"></a>

**Important**  
As of Mar 13, 2025, you can build a real-time PubSub API powered by WebSockets using AWS AppSync Events. For more information, see [Publish events via WebSocket](https://docs.aws.amazon.com/appsync/latest/eventapi/publish-websocket.html) in the *AWS AppSync Events Developer Guide*.

In AWS AppSync, you can forcibly unsubscribe and close (invalidate) a WebSocket connection from a connected client based on specific filtering logic. This is useful in authorization-related scenarios such as when you remove a user from a group.

Subscription invalidation occurs in response to a payload defined in a mutation. We recommend that you treat mutations used to invalidate subscription connections as administrative operations in your API and scope permissions accordingly by limiting their use to an admin user, group, or backend service, for example by using schema authorization directives such as `@aws_auth(cognito_groups: ["Administrators"])` or `@aws_iam`. For more information, see [Using additional authorization modes](https://docs.aws.amazon.com/appsync/latest/devguide/security-authz.html#using-additional-authorization-modes).

Invalidation filters use the same syntax and logic as [enhanced subscription filters](https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-enhanced-filtering.html). Define these filters using the following utilities:
+ `extensions.invalidateSubscriptions()` – Defined in the GraphQL resolver's response handler for a mutation.
+ `extensions.setSubscriptionInvalidationFilter()` – Defined in the GraphQL resolver's response handler of the subscriptions linked to the mutation.

For more information about invalidation filtering extensions, see [JavaScript resolvers overview](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-reference-overview-js.html).

## Using subscription invalidation
<a name="aws-appsync-real-time-invalidation-using-invalidations"></a>

To see how subscription invalidation works in AWS AppSync, use the following GraphQL schema:

```
type User {
  userId: ID!
  groupId: ID!
}
    
type Group {
  groupId: ID!
  name: String!
  members: [ID!]!
}

type GroupMessage {
  userId: ID!
  groupId: ID!
  message: String!
}

type Mutation {
    createGroupMessage(userId: ID!, groupId : ID!, message: String!): GroupMessage
    removeUserFromGroup(userId: ID!, groupId : ID!) : User @aws_iam
}

type Subscription {
    onGroupMessageCreated(userId: ID!, groupId : ID!): GroupMessage
        @aws_subscribe(mutations: ["createGroupMessage"])
}

type Query {
	none: String
}
```

Define an invalidation filter in the `removeUserFromGroup` mutation resolver code:

```
import { extensions } from '@aws-appsync/utils';

export function request(ctx) {
	return { payload: null };
}

export function response(ctx) {
	const { userId, groupId } = ctx.args;
	extensions.invalidateSubscriptions({
		subscriptionField: 'onGroupMessageCreated',
		payload: { userId, groupId },
	});
	return { userId, groupId };
}
```

When the mutation is invoked, the data defined in the `payload` object is used to unsubscribe the subscription defined in `subscriptionField`. An invalidation filter is also defined in the `onGroupMessageCreated` subscription's response mapping template. 

If the `extensions.invalidateSubscriptions()` payload contains an ID that matches the IDs from the subscribed client as defined in the filter, the corresponding subscription is unsubscribed. In addition, the WebSocket connection is closed. Define the subscription resolver code for the `onGroupMessageCreated` subscription:

```
import { util, extensions } from '@aws-appsync/utils';

export function request(ctx) {
	// simply return null for the payload
	return { payload: null };
}

export function response(ctx) {
	const filter = { groupId: { eq: ctx.args.groupId } };
	extensions.setSubscriptionFilter(util.transform.toSubscriptionFilter(filter));

	const invalidation = { groupId: { eq: ctx.args.groupId }, userId: { eq: ctx.args.userId } };
	extensions.setSubscriptionInvalidationFilter(util.transform.toSubscriptionFilter(invalidation));

	return null;
}
```

Note that the subscription response handler can have both subscription filters and invalidation filters defined at the same time.

For example, assume that client A subscribes a new user with the ID `user-1` to the group with the ID `group-1` using the following subscription request:

```
onGroupMessageCreated(userId: "user-1", groupId: "group-1"){...}
```

AWS AppSync runs the subscription resolver, which generates subscription and invalidation filters as defined in the preceding `onGroupMessageCreated` response mapping template. For client A, the subscription filters allow data to be sent only to `group-1`, and the invalidation filters are defined for both `user-1` and `group-1`.

Now assume that client B subscribes a user with the ID `user-2` to a group with the ID `group-2` using the following subscription request:

```
onGroupMessageCreated(userId: "user-2", groupId: "group-2"){...}
```

AWS AppSync runs the subscription resolver, which generates subscription and invalidation filters. For client B, the subscription filters allow data to be sent only to `group-2`, and the invalidation filters are defined for both `user-2` and `group-2`.

Next, assume that a new group message is created in `group-1` using a mutation request like the following:

```
createGroupMessage(userId: "user-1", groupId: "group-1", message: "test message"){...}
```

Subscribed clients matching the defined filters automatically receive the following data payload via WebSockets:

```
{
  "data": {
    "onGroupMessageCreated": {
      "userId": "user-1",
      "groupId": "group-1",
      "message": "test message"
    }
  }
}
```

Client A receives the message because the filtering criteria match the defined subscription filter. However, client B doesn't receive the message, as the user is not part of `group-1`. Also, the request doesn't match the subscription filter defined in the subscription resolver.

Finally, assume that `user-1` is removed from `group-1` using the following mutation request:

```
removeUserFromGroup(userId: "user-1", groupId : "group-1"){...}
```

The mutation initiates a subscription invalidation as defined in its `extensions.invalidateSubscriptions()` resolver response handler code. AWS AppSync then unsubscribes client A and closes its WebSocket connection. Client B is unaffected, as the invalidation payload defined in the mutation doesn't match its user or group.

When AWS AppSync invalidates a connection, the client receives a message confirming that they are unsubscribed:

```
{
  "message": "Subscription complete."
}
```

## Using context variables in subscription invalidation filters
<a name="aws-appsync-real-time-invalidation-context"></a>

As with enhanced subscription filters, you can use the [`context` variable](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference-js.html) in the subscription invalidation filter extension to access certain data.

For example, you can configure an email address as the invalidation payload in the mutation, then match it against the email attribute or claim of a subscribed user authorized with Amazon Cognito user pools or OpenID Connect. The invalidation filter defined in `extensions.setSubscriptionInvalidationFilter()` checks whether the email address set in the mutation's `extensions.invalidateSubscriptions()` payload matches the email address retrieved from the user's JWT token in `context.identity.claims.email`, and initiates the invalidation if it does.
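
As an illustrative sketch of that pattern (the `email` filter field and claim name are assumptions based on the description above, not a prescribed schema), such a subscription's response handler might look like the following:

```
import { util, extensions } from '@aws-appsync/utils';

export function request(ctx) {
	return { payload: null };
}

export function response(ctx) {
	// Invalidate this subscription when a mutation's invalidation payload
	// carries the same email address as the subscriber's JWT email claim.
	const invalidation = { email: { eq: ctx.identity.claims.email } };
	extensions.setSubscriptionInvalidationFilter(util.transform.toSubscriptionFilter(invalidation));
	return null;
}
```

The matching mutation would then pass `{ email: ... }` as the `payload` of `extensions.invalidateSubscriptions()`.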

# Building a real-time WebSocket client in AWS AppSync
<a name="real-time-websocket-client"></a>

**Important**  
As of Mar 13, 2025, you can build a real-time PubSub API powered by WebSockets using AWS AppSync Events. For more information, see [Publish events via WebSocket](https://docs.aws.amazon.com/appsync/latest/eventapi/publish-websocket.html) in the *AWS AppSync Events Developer Guide*.

AWS AppSync's real-time WebSocket client enables GraphQL subscriptions through a multi-step process. The client first establishes a WebSocket connection with the AWS AppSync real-time endpoint, sends a connection initialization message, and waits for acknowledgment. After successful connection, the client registers subscriptions by sending start messages with unique IDs and GraphQL queries. AWS AppSync confirms successful subscriptions with acknowledgment messages. The client then listens for subscription events, which are triggered by corresponding mutations. To maintain the connection, AWS AppSync sends periodic keep-alive messages. When finished, the client unregisters subscriptions by sending stop messages. This system supports multiple subscriptions on a single WebSocket connection and accommodates various authorization modes, including API keys, Amazon Cognito user pools, IAM, and Lambda.
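
For the keep-alive handling mentioned above, a client needs a watchdog that closes the connection when no `"ka"` message arrives within `connectionTimeoutMs`. The following is a minimal sketch, not a complete client; `socket` stands in for any WebSocket object with a `close()` method:

```
// Minimal keep-alive watchdog sketch. `connectionTimeoutMs` is the value
// received in the connection_ack message.
function createKeepAliveWatchdog(socket, connectionTimeoutMs) {
  let timer = null;
  const arm = () => {
    clearTimeout(timer);
    // Close the stale connection if no "ka" message arrives in time.
    timer = setTimeout(() => socket.close(), connectionTimeoutMs);
  };
  arm(); // arm once after connection_ack is received
  return {
    onMessage(msg) {
      if (msg.type === 'ka') arm(); // every keep-alive re-arms the timer
    },
    stop() {
      clearTimeout(timer);
    },
  };
}
```

Call `onMessage` for every parsed WebSocket message and `stop` when the client disconnects deliberately.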

## Real-time WebSocket client implementation for GraphQL subscriptions
<a name="appsynclong-real-time-websocket-client-implementation-guide-for-graphql-subscriptions"></a>

The following sequence diagram and steps show the real-time subscriptions workflow between the WebSocket client, HTTP client, and AWS AppSync.

![\[Sequence diagram showing WebSocket client, AppSync endpoints, and HTTP client interactions for real-time subscriptions.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/realtime-client-flow.png)


1. The client establishes a WebSocket connection with the AWS AppSync real-time endpoint. If there is a network error, the client should do a jittered exponential backoff. For more information, see [Exponential backoff and jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/) on the AWS Architecture Blog.

1. (Optional) After successfully establishing the WebSocket connection, the client sends a `connection_init` message.

1. If `connection_init` is sent, the client waits for a `connection_ack` message from AWS AppSync. This message includes a `connectionTimeoutMs` parameter, which is the maximum wait time in milliseconds for a `"ka"` (keep-alive) message.

1. AWS AppSync sends `"ka"` messages periodically. The client keeps track of the time that it received each `"ka"` message. If the client doesn't receive a `"ka"` message within `connectionTimeoutMs` milliseconds, the client should close the connection.

1. The client registers the subscription by sending a `start` subscription message. A single WebSocket connection supports multiple subscriptions, even if they are in different authorization modes.

1. The client waits for AWS AppSync to send `start_ack` messages to confirm successful subscriptions. If there is an error, AWS AppSync returns a `"type": "error"` message.

1. The client listens for subscription events, which are sent after a corresponding mutation is called. Queries and mutations are usually sent through `https://` to the AWS AppSync GraphQL endpoint. Subscriptions flow through the AWS AppSync real-time endpoint using the secure WebSocket (`wss://`).

1. The client unregisters the subscription by sending a `stop` subscription message.

1. After unregistering all subscriptions and checking that there are no messages transferring through the WebSocket, the client can disconnect from the WebSocket connection.
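
As a sketch of the registration messages in steps 5 and 8, a client might construct `start` and `stop` messages as follows. The subscription ID, query, and API key values are illustrative placeholders; the `authorization` object takes the same shape as the connection header for whichever authorization mode the API uses:

```
// Sketch: registering and unregistering a subscription over the WebSocket.
const subscriptionId = 'subscription-1'; // must be unique per subscription

const startMessage = {
  id: subscriptionId,
  type: 'start',
  payload: {
    // The GraphQL document and variables are sent as a stringified JSON object.
    data: JSON.stringify({
      query: 'subscription { onGroupMessageCreated(userId: "user-1", groupId: "group-1") { message } }',
      variables: {},
    }),
    extensions: {
      authorization: {
        host: 'example1234567890000.appsync-api.us-east-1.amazonaws.com',
        'x-api-key': 'da2-12345678901234567890123456',
      },
    },
  },
};

// Sent when the client is done with the subscription (step 8).
const stopMessage = { id: subscriptionId, type: 'stop' };

// socket.send(JSON.stringify(startMessage));
```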

## Handshake details to establish the WebSocket connection
<a name="handshake-details-to-establish-the-websocket-connection"></a>

To connect and initiate a successful handshake with AWS AppSync, a WebSocket client needs the following:
+ The AWS AppSync real-time endpoint
+ Headers – Contain information relevant to the AWS AppSync endpoint and authorization. AWS AppSync supports the following three methods for providing headers: 
  + Headers via query string
    + The header information is encoded as a base64 string, derived from a stringified JSON object. This JSON object contains details relevant to the AWS AppSync endpoint and authorization. The content of the JSON object varies depending on the authorization mode.
  + Headers via `Sec-WebSocket-Protocol`
    + A base64Url-encoded string from the stringified JSON object that contains information relevant to the AWS AppSync endpoint and authorization is passed as the protocol in the `Sec-WebSocket-Protocol` header. The content of the JSON object varies depending on the authorization mode.
  + Headers via standard HTTP headers
    + Headers can be passed as standard HTTP headers in the connection request, similar to how headers are passed for GraphQL queries and mutations to AWS AppSync. However, passing headers via standard HTTP headers is not supported for private API connection requests.
+  `payload` – A base64-encoded string of the payload. The payload is needed only if headers are provided via the query string.

With these requirements, a WebSocket client can connect to the URL, which contains the real-time endpoint with the query string, using `graphql-ws` as the WebSocket protocol.

### Discovering the real-time endpoint from the GraphQL endpoint
<a name="discovering-the-appsync-real-time-endpoint-from-the-appsync-graphql-endpoint"></a>

The AWS AppSync GraphQL endpoint and the AWS AppSync real-time endpoint are slightly different in protocol and domain. You can retrieve the GraphQL endpoint using the AWS Command Line Interface (AWS CLI) command `aws appsync get-graphql-api`.

****AWS AppSync GraphQL endpoint:****  
 `https://example1234567890000.appsync-api.us-east-1.amazonaws.com/graphql`

****AWS AppSync real-time endpoint:****  
 `wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql`

Applications can connect to the AWS AppSync GraphQL endpoint (`https://`) using any HTTP client for queries and mutations. Applications can connect to the AWS AppSync real-time endpoint (`wss://`) using any WebSocket client for subscriptions.

With custom domain names, you can interact with both endpoints using a single domain. For example, if you configure `api.example.com` as your custom domain, you can interact with your GraphQL and real-time endpoints using these URLs:

**AWS AppSync custom domain GraphQL endpoint:**  
`https://api.example.com/graphql`

**AWS AppSync custom domain real-time endpoint:**  
`wss://api.example.com/graphql/realtime`

## Header parameter format based on AWS AppSync API authorization mode
<a name="header-parameter-format-based-on-appsync-api-authorization-mode"></a>

The format of the `header` object used in the connection query string varies depending on the AWS AppSync API authorization mode. The `host` field in the object refers to the AWS AppSync GraphQL endpoint, which is used to validate the connection even if the `wss://` call is made against the real-time endpoint. To initiate the handshake and establish the authorized connection, the `payload` should be an empty JSON object. Payload is needed only if headers are passed via query string.

The following sections demonstrate the header formats for each authorization mode.

### API key
<a name="api-key"></a>

#### API key header
<a name="api-key-list"></a>

**Header contents**
+  `"host": <string>`: The host for the AWS AppSync GraphQL endpoint or your custom domain name.
+  `"x-api-key": <string>`: The API key configured for the AWS AppSync API.

**Example**

```
{
    "host":"example1234567890000.appsync-api.us-east-1.amazonaws.com",
    "x-api-key":"da2-12345678901234567890123456"
}
```

**Headers via query string**

First, a JSON object containing the `host` and the `x-api-key` is converted into a string. Next, this string is encoded using base64 encoding. The resulting base64-encoded string is added as a query parameter named `header` to the WebSocket URL for establishing the connection with the AWS AppSync real-time endpoint. The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql?header=eyJob3N0IjoiZXhhbXBsZTEyMzQ1Njc4OTAwMDAuYXBwc3luYy1hcGkudXMtZWFzdC0xLmFtYXpvbmF3cy5jb20iLCJ4LWFtei1kYXRlIjoiMjAyMDA0MDFUMDAxMDEwWiIsIngtYXBpLWtleSI6ImRhMi16NHc0NHZoczV6Z2MzZHRqNXNranJsbGxqaSJ9&payload=e30=
```

It's important to note that in addition to the base64-encoded header object, an empty JSON object (`{}`) is also base64-encoded and included as a separate query parameter named `payload` in the WebSocket URL.
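
The encoding steps above can be sketched in Node.js as follows. The endpoint and API key are placeholders:

```
// Sketch: building the query-string connection URL for API key authorization.
const realtimeEndpoint = 'wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql';

const headerObj = {
  host: 'example1234567890000.appsync-api.us-east-1.amazonaws.com',
  'x-api-key': 'da2-12345678901234567890123456',
};

// Stringify, then base64-encode, the header object and the empty payload.
const header = Buffer.from(JSON.stringify(headerObj)).toString('base64');
const payload = Buffer.from(JSON.stringify({})).toString('base64'); // "e30="

const connectionUrl = `${realtimeEndpoint}?header=${header}&payload=${payload}`;
```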

**Headers via `Sec-WebSocket-Protocol`**

A JSON object containing the `host` and the `x-api-key` is converted to a string and then encoded using base64Url encoding. The resulting base64Url-encoded string is prefixed with `header-`. This prefixed string is then used as a new sub-protocol in addition to `graphql-ws` in the `Sec-WebSocket-Protocol` header when establishing the WebSocket connection with the AWS AppSync real-time endpoint. 

The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql
```

The `Sec-WebSocket-Protocol` header contains the following value:

```
"sec-websocket-protocol" : ["graphql-ws", "header-ewogICAgImhvc3QiOiJleGFtcGxlMTIzNDU2Nzg5MDAwMC5hcHBzeW5jLWFwaS51cy1lYXN0LTEuYW1hem9uYXdzLmNvbSIsCiAgICAieC1hcGkta2V5IjoiZGEyLTEyMzQ1Njc4OTAxMjM0NTY3ODkwMTIzNDU2Igp9"]
```

**Headers via standard HTTP headers**

In this method, the host and API key information is transmitted using standard HTTP headers when establishing the WebSocket connection with the AWS AppSync real-time endpoint. The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql
```

The request headers would include the following:

```
"sec-websocket-protocol" : ["graphql-ws"]
"host":"example1234567890000.appsync-api.us-east-1.amazonaws.com",
"x-api-key":"da2-12345678901234567890123456"
```

### Amazon Cognito user pools and OpenID Connect (OIDC)
<a name="amazon-cognito-user-pools-and-openid-connect-oidc"></a>

#### Amazon Cognito and OIDC header
<a name="amazon-cognito-user-pools-and-openid-connect-oidc-list"></a>

Header contents:
+  `"Authorization": <string>`: A JWT ID token. The header can use a [Bearer scheme](https://datatracker.ietf.org/doc/html/rfc6750#section-2.1).
+  `"host": <string>`: The host for the AWS AppSync GraphQL endpoint or your custom domain name.

Example:

```
{
    "Authorization":"eyEXAMPLEiJjbG5xb3A5eW5MK09QYXIrMTJHWEFLSXBieU5WNHhsQjEXAMPLEnM2WldvPSIsImFsZyI6IlEXAMPLEn0.eyEXAMPLEiJhNmNmMjcwNy0xNjgxLTQ1NDItOWYxOC1lNjY0MTg2NjlkMzYiLCJldmVudF9pZCI6ImVkMzM5MmNkLWNjYTMtNGM2OC1hNDYyLTJlZGI3ZTNmY2FjZiIsInRva2VuX3VzZSI6ImFjY2VzcyIsInNjb3BlIjoiYXdzLmNvZ25pdG8uc2lnbmluLnVzZXIuYWRtaW4iLCJhdXRoX3RpbWUiOjE1Njk0NTc3MTgsImlzcyI6Imh0dHBzOlwvXC9jb2duaXRvLWlkcC5hcC1zb3V0aGVhc3QtMi5hbWF6b25hd3MuY29tXC9hcC1zb3V0aGVhc3QtMl83OHY0SVZibVAiLCJleHAiOjE1Njk0NjEzMjAsImlhdCI6MTU2OTQ1NzcyMCwianRpIjoiNTgzZjhmYmMtMzk2MS00YzA4LWJhZTAtYzQyY2IxMTM5NDY5IiwiY2xpZW50X2lkIjoiM3FlajVlMXZmMzd1N3RoZWw0dG91dDJkMWwiLCJ1c2VybmFtZSI6ImVsb3EXAMPLEn0.B4EXAMPLEFNpJ6ikVp7e6DRee95V6Qi-zEE2DJH7sHOl2zxYi7f-SmEGoh2AD8emxQRYajByz-rE4Jh0QOymN2Ys-ZIkMpVBTPgu-TMWDyOHhDUmUj2OP82yeZ3wlZAtr_gM4LzjXUXmI_K2yGjuXfXTaa1mvQEBG0mQfVd7SfwXB-jcv4RYVi6j25qgow9Ew52ufurPqaK-3WAKG32KpV8J4-Wejq8t0c-yA7sb8EnB551b7TU93uKRiVVK3E55Nk5ADPoam_WYE45i3s5qVAP_-InW75NUoOCGTsS8YWMfb6ecHYJ-1j-bzA27zaT9VjctXn9byNFZmEXAMPLExw",
    "host":"example1234567890000.appsync-api.us-east-1.amazonaws.com"
}
```

**Headers via query string**

First, a JSON object containing the `host` and the `Authorization` is converted into a string. Next, this string is encoded using base64 encoding. The resulting base64-encoded string is added as a query parameter named `header` to the WebSocket URL for establishing the connection with the AWS AppSync real-time endpoint. The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql?header=eyJBdXRob3JpemF0aW9uIjoiZXlKcmFXUWlPaUpqYkc1eGIzQTVlVzVNSzA5UVlYSXJNVEpIV0VGTFNYQmllVTVXTkhoc1FqaFBWVzlZTW5NMldsZHZQU0lzSW1Gc1p5STZJbEpUTWpVMkluMC5leUp6ZFdJaU9pSmhObU5tTWpjd055MHhOamd4TFRRMU5ESXRPV1l4T0MxbE5qWTBNVGcyTmpsa016WWlMQ0psZG1WdWRGOXBaQ0k2SW1Wa016TTVNbU5rTFdOallUTXROR00yT0MxaE5EWXlMVEpsWkdJM1pUTm1ZMkZqWmlJc0luUnZhMlZ1WDNWelpTSTZJbUZqWTJWemN5SXNJbk5qYjNCbElqb2lZWGR6TG1OdloyNXBkRzh1YzJsbmJtbHVMblZ6WlhJdVlXUnRhVzRpTENKaGRYUm9YM1JwYldVaU9qRTFOamswTlRjM01UZ3NJbWx6Y3lJNkltaDBkSEJ6T2x3dlhDOWpiMmR1YVhSdkxXbGtjQzVoY0MxemIzVjBhR1ZoYzNRdE1pNWhiV0Y2YjI1aGQzTXVZMjl0WEM5aGNDMXpiM1YwYUdWaGMzUXRNbDgzT0hZMFNWWmliVkFpTENKbGVIQWlPakUxTmprME5qRXpNakFzSW1saGRDSTZNVFUyT1RRMU56Y3lNQ3dpYW5ScElqb2lOVGd6WmpobVltTXRNemsyTVMwMFl6QTRMV0poWlRBdFl6UXlZMkl4TVRNNU5EWTVJaXdpWTJ4cFpXNTBYMmxrSWpvaU0zRmxhalZsTVhabU16ZDFOM1JvWld3MGRHOTFkREprTVd3aUxDSjFjMlZ5Ym1GdFpTSTZJbVZzYjNKNllXWmxJbjAuQjRjZEp0aDNLRk5wSjZpa1ZwN2U2RFJlZTk1VjZRaS16RUUyREpIN3NIT2wyenhZaTdmLVNtRUdvaDJBRDhlbXhRUllhakJ5ei1yRTRKaDBRT3ltTjJZcy1aSWtNcFZCVFBndS1UTVdEeU9IaERVbVVqMk9QODJ5ZVozd2xaQXRyX2dNNEx6alhVWG1JX0syeUdqdVhmWFRhYTFtdlFFQkcwbVFmVmQ3U2Z3WEItamN2NFJZVmk2ajI1cWdvdzlFdzUydWZ1clBxYUstM1dBS0czMktwVjhKNC1XZWpxOHQwYy15QTdzYjhFbkI1NTFiN1RVOTN1S1JpVlZLM0U1NU5rNUFEUG9hbV9XWUU0NWkzczVxVkFQXy1Jblc3NU5Vb09DR1RzUzhZV01mYjZlY0hZSi0xai1iekEyN3phVDlWamN0WG45YnlORlptS0xwQTJMY3h3IiwiaG9zdCI6ImV4YW1wbGUxMjM0NTY3ODkwMDAwLmFwcHN5bmMtYXBpLnVzLWVhc3QtMS5hbWF6b25hd3MuY29tIn0=&payload=e30=
```

It's important to note that in addition to the base64-encoded header object, an empty JSON object (`{}`) is also base64-encoded and included as a separate query parameter named `payload` in the WebSocket URL.

**Headers via `Sec-WebSocket-Protocol`**

A JSON object containing the `host` and the `Authorization` is converted to a string and then encoded using base64Url encoding. The resulting base64Url-encoded string is prefixed with `header-`. This prefixed string is then used as a new sub-protocol in addition to `graphql-ws` in the `Sec-WebSocket-Protocol` header when establishing the WebSocket connection with the AWS AppSync real-time endpoint. 

The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql
```

The `Sec-WebSocket-Protocol` header contains the following value:

```
"sec-websocket-protocol" : ["graphql-ws", "header-ewogICAgImhvc3QiOiJleGFtcGxlMTIzNDU2Nzg5MDAwMC5hcHBzeW5jLWFwaS51cy1lYXN0LTEuYW1hem9uYXdzLmNvbSIsCiAgICAieC1hcGkta2V5IjoiZGEyLTEyMzQ1Njc4OTAxMjM0NTY3ODkwMTIzNDU2Igp9"]
```

**Headers via standard HTTP headers**

In this method, the host and Authorization information is transmitted using standard HTTP headers when establishing the WebSocket connection with the AWS AppSync real-time endpoint. The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql
```

The request headers would include the following:

```
"sec-websocket-protocol" : ["graphql-ws"]
"Authorization":"eyEXAMPLEiJjbG5xb3A5eW5MK09QYXIrMTJHWEFLSXBieU5WNHhsQjEXAMPLEnM2WldvPSIsImFsZyI6IlEXAMPLEn0.eyEXAMPLEiJhNmNmMjcwNy0xNjgxLTQ1NDItOWYxOC1lNjY0MTg2NjlkMzYiLCJldmVudF9pZCI6ImVkMzM5MmNkLWNjYTMtNGM2OC1hNDYyLTJlZGI3ZTNmY2FjZiIsInRva2VuX3VzZSI6ImFjY2VzcyIsInNjb3BlIjoiYXdzLmNvZ25pdG8uc2lnbmluLnVzZXIuYWRtaW4iLCJhdXRoX3RpbWUiOjE1Njk0NTc3MTgsImlzcyI6Imh0dHBzOlwvXC9jb2duaXRvLWlkcC5hcC1zb3V0aGVhc3QtMi5hbWF6b25hd3MuY29tXC9hcC1zb3V0aGVhc3QtMl83OHY0SVZibVAiLCJleHAiOjE1Njk0NjEzMjAsImlhdCI6MTU2OTQ1NzcyMCwianRpIjoiNTgzZjhmYmMtMzk2MS00YzA4LWJhZTAtYzQyY2IxMTM5NDY5IiwiY2xpZW50X2lkIjoiM3FlajVlMXZmMzd1N3RoZWw0dG91dDJkMWwiLCJ1c2VybmFtZSI6ImVsb3EXAMPLEn0.B4EXAMPLEFNpJ6ikVp7e6DRee95V6Qi-zEE2DJH7sHOl2zxYi7f-SmEGoh2AD8emxQRYajByz-rE4Jh0QOymN2Ys-ZIkMpVBTPgu-TMWDyOHhDUmUj2OP82yeZ3wlZAtr_gM4LzjXUXmI_K2yGjuXfXTaa1mvQEBG0mQfVd7SfwXB-jcv4RYVi6j25qgow9Ew52ufurPqaK-3WAKG32KpV8J4-Wejq8t0c-yA7sb8EnB551b7TU93uKRiVVK3E55Nk5ADPoam_WYE45i3s5qVAP_-InW75NUoOCGTsS8YWMfb6ecHYJ-1j-bzA27zaT9VjctXn9byNFZmEXAMPLExw",
"host":"example1234567890000.appsync-api.us-east-1.amazonaws.com"
```

### IAM
<a name="iam"></a>

#### IAM header
<a name="iam-list"></a>

**Header content**
+  `"accept": "application/json, text/javascript"`: A constant `<string>` parameter.
+  `"content-encoding": "amz-1.0"`: A constant `<string>` parameter.
+  `"content-type": "application/json; charset=UTF-8"`: A constant `<string>` parameter.
+  `"host": <string>`: This is the host for the AWS AppSync GraphQL endpoint.
  + `"x-amz-date": <string>`: The timestamp must be in UTC and in the following ISO 8601 format: YYYYMMDD'T'HHMMSS'Z'. For example, 20150830T123600Z is a valid timestamp. Do not include milliseconds in the timestamp. For more information, see [Handling dates in Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/sigv4-date-handling.html) in the *AWS General Reference*.
  +  `"X-Amz-Security-Token": <string>`: The AWS session token, which is required when using temporary security credentials. For more information, see [Using temporary credentials with AWS resources](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_credentials_temp_use-resources.html) in the *IAM User Guide*.
  +  `"Authorization": <string>`: Signature Version 4 (SigV4) signing information for the AWS AppSync endpoint. For more information on the signing process, see [Task 4: Add the signature to the HTTP request](https://docs.aws.amazon.com/general/latest/gr/sigv4-add-signature-to-request.html) in the *AWS General Reference*.

The SigV4 signing HTTP request includes a canonical URL, which is the AWS AppSync GraphQL endpoint with `/connect` appended. The service endpoint AWS Region is the same Region where you're using the AWS AppSync API, and the service name is `appsync`. The HTTP request to sign is the following:

```
{
  url: "https://example1234567890000.appsync-api.us-east-1.amazonaws.com/graphql/connect",
  data: "{}",
  method: "POST",
  headers: {
    "accept": "application/json, text/javascript",
    "content-encoding": "amz-1.0",
    "content-type": "application/json; charset=UTF-8",
  }
}
```

**Example**

```
{
  "accept": "application/json, text/javascript",
  "content-encoding": "amz-1.0",
  "content-type": "application/json; charset=UTF-8",
  "host": "example1234567890000.appsync-api.us-east-1.amazonaws.com",
  "x-amz-date": "20200401T001010Z",
  "X-Amz-Security-Token": "AgEXAMPLEZ2luX2VjEAoaDmFwLXNvdXRoZWFEXAMPLEcwRQIgAh97Cljq7wOPL8KsxP3YtDuyc/9hAj8PhJ7Fvf38SgoCIQDhJEXAMPLEPspioOztj++pEagWCveZUjKEn0zyUhBEXAMPLEjj//////////8BEXAMPLExODk2NDgyNzg1NSIMo1mWnpESWUoYw4BkKqEFSrm3DXuL8w+ZbVc4JKjDP4vUCKNR6Le9C9pZp9PsW0NoFy3vLBUdAXEXAMPLEOVG8feXfiEEA+1khgFK/wEtwR+9zF7NaMMMse07wN2gG2tH0eKMEXAMPLEQX+sMbytQo8iepP9PZOzlZsSFb/dP5Q8hk6YEXAMPLEYcKZsTkDAq2uKFQ8mYUVA9EtQnNRiFLEY83aKvG/tqLWNnGlSNVx7SMcfovkFDqQamm+88y1OwwAEYK7qcoceX6Z7GGcaYuIfGpaX2MCCELeQvZ+8WxEgOnIfz7GYvsYNjLZSaRnV4G+ILY1F0QNW64S9Nvj+BwDg3ht2CrNvpwjVYlj9U3nmxE0UG5ne83LL5hhqMpm25kmL7enVgw2kQzmU2id4IKu0C/WaoDRuO2F5zE63vJbxN8AYs7338+4B4HBb6BZ6OUgg96Q15RA41/gIqxaVPxyTpDfTU5GfSLxocdYeniqqpFMtZG2n9d0u7GsQNcFkNcG3qDZm4tDo8tZbuym0a2VcF2E5hFEgXBa+XLJCfXi/77OqAEjP0x7Qdk3B43p8KG/BaioP5RsV8zBGvH1zAgyPha2rN70/tT13yrmPd5QYEfwzexjKrV4mWIuRg8NTHYSZJUaeyCwTom80VFUJXG+GYTUyv5W22aBcnoRGiCiKEYTLOkgXecdKFTHmcIAejQ9Welr0a196Kq87w5KNMCkcCGFnwBNFLmfnbpNqT6rUBxxs3X5ntX9d8HVtSYINTsGXXMZCJ7fnbWajhg/aox0FtHX21eF6qIGT8j1z+l2opU+ggwUgkhUUgCH2TfqBj+MLMVVvpgqJsPKt582caFKArIFIvO+9QupxLnEH2hz04TMTfnU6bQC6z1buVe7h+tOLnh1YPFsLQ88anib/7TTC8k9DsBTq0ASe8R2GbSEsmO9qbbMwgEaYUhOKtGeyQsSJdhSk6XxXThrWL9EnwBCXDkICMqdntAxyyM9nWsZ4bL9JHqExgWUmfWChzPFAqn3F4y896UqHTZxlq3WGypn5HHcem2Hqf3IVxKH1inhqdVtkryEiTWrI7ZdjbqnqRbl+WgtPtKOOweDlCaRs3R2qXcbNgVhleMk4IWnF8D1695AenU1LwHjOJLkCjxgNFiWAFEPH9aEXAMPLExA==",
  "Authorization": "AWS4-HMAC-SHA256 Credential=XXXXXXXXXXXXXXXXXXX/20200401/us-east-1/appsync/aws4_request, SignedHeaders=accept;content-encoding;content-type;host;x-amz-date;x-amz-security-token, Signature=83EXAMPLEbcc1fe3ee69f75cd5ebbf4cb4f150e4f99cec869f149c5EXAMPLEdc"
}
```

**Headers via query string**

First, a JSON object containing the `host` (AWS AppSync GraphQL endpoint) and the other authorization headers is converted to a string. Next, this string is encoded using base64 encoding. The resulting base64-encoded string is added to the WebSocket URL as a query parameter named `header`. The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql?header=eyJBdXRob3JpemF0aW9uIjoiZXlKcmFXUWlPaUpqYkc1eGIzQTVlVzVNSzA5UVlYSXJNVEpIV0VGTFNYQmllVTVXTkhoc1FqaFBWVzlZTW5NMldsZHZQU0lzSW1Gc1p5STZJbEpUTWpVMkluMC5leUp6ZFdJaU9pSmhObU5tTWpjd055MHhOamd4TFRRMU5ESXRPV1l4T0MxbE5qWTBNVGcyTmpsa016WWlMQ0psZG1WdWRGOXBaQ0k2SW1Wa016TTVNbU5rTFdOallUTXROR00yT0MxaE5EWXlMVEpsWkdJM1pUTm1ZMkZqWmlJc0luUnZhMlZ1WDNWelpTSTZJbUZqWTJWemN5SXNJbk5qYjNCbElqb2lZWGR6TG1OdloyNXBkRzh1YzJsbmJtbHVMblZ6WlhJdVlXUnRhVzRpTENKaGRYUm9YM1JwYldVaU9qRTFOamswTlRjM01UZ3NJbWx6Y3lJNkltaDBkSEJ6T2x3dlhDOWpiMmR1YVhSdkxXbGtjQzVoY0MxemIzVjBhR1ZoYzNRdE1pNWhiV0Y2YjI1aGQzTXVZMjl0WEM5aGNDMXpiM1YwYUdWaGMzUXRNbDgzT0hZMFNWWmliVkFpTENKbGVIQWlPakUxTmprME5qRXpNakFzSW1saGRDSTZNVFUyT1RRMU56Y3lNQ3dpYW5ScElqb2lOVGd6WmpobVltTXRNemsyTVMwMFl6QTRMV0poWlRBdFl6UXlZMkl4TVRNNU5EWTVJaXdpWTJ4cFpXNTBYMmxrSWpvaU0zRmxhalZsTVhabU16ZDFOM1JvWld3MGRHOTFkREprTVd3aUxDSjFjMlZ5Ym1GdFpTSTZJbVZzYjNKNllXWmxJbjAuQjRjZEp0aDNLRk5wSjZpa1ZwN2U2RFJlZTk1VjZRaS16RUUyREpIN3NIT2wyenhZaTdmLVNtRUdvaDJBRDhlbXhRUllhakJ5ei1yRTRKaDBRT3ltTjJZcy1aSWtNcFZCVFBndS1UTVdEeU9IaERVbVVqMk9QODJ5ZVozd2xaQXRyX2dNNEx6alhVWG1JX0syeUdqdVhmWFRhYTFtdlFFQkcwbVFmVmQ3U2Z3WEItamN2NFJZVmk2ajI1cWdvdzlFdzUydWZ1clBxYUstM1dBS0czMktwVjhKNC1XZWpxOHQwYy15QTdzYjhFbkI1NTFiN1RVOTN1S1JpVlZLM0U1NU5rNUFEUG9hbV9XWUU0NWkzczVxVkFQXy1Jblc3NU5Vb09DR1RzUzhZV01mYjZlY0hZSi0xai1iekEyN3phVDlWamN0WG45YnlORlptS0xwQTJMY3h3IiwiaG9zdCI6ImV4YW1wbGUxMjM0NTY3ODkwMDAwLmFwcHN5bmMtYXBpLnVzLWVhc3QtMS5hbWF6b25hd3MuY29tIn0=&payload=e30=
```

It's important to note that in addition to the base64-encoded header object, an empty JSON object (`{}`) is also base64-encoded and included as a separate query parameter named `payload` in the WebSocket URL.

**Headers via `Sec-WebSocket-Protocol`**

A JSON object containing the `host` and the other authorization headers is converted to a string and then encoded using base64Url encoding. The resulting base64Url-encoded string is prefixed with `header-`. This prefixed string is then used as a new sub-protocol in addition to `graphql-ws` in the `Sec-WebSocket-Protocol` header when establishing the WebSocket connection with the AWS AppSync real-time endpoint. 

The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql
```

The `Sec-WebSocket-Protocol` header contains the following value:

```
"sec-websocket-protocol" : ["graphql-ws", "header-ew0KICAiYWNjZXB0IjogImFwcGxpY2F0aW9uL2pzb24sIHRleHQvamF2YXNjcmlwdCIsDQogICJjb250ZW50LWVuY29kaW5nIjogImFtei0xLjAiLA0KICAiY29udGVudC10eXBlIjogImFwcGxpY2F0aW9uL2pzb247IGNoYXJzZXQ9VVRGLTgiLA0KICAiaG9zdCI6ICJleGFtcGxlMTIzNDU2Nzg5MDAwMC5hcHBzeW5jLWFwaS51cy1lYXN0LTEuYW1hem9uYXdzLmNvbSIsDQogICJ4LWFtei1kYXRlIjogIjIwMjAwNDAxVDAwMTAxMFoiLA0KICAiWC1BbXotU2VjdXJpdHktVG9rZW4iOiAiQWdFWEFNUExFWjJsdVgyVmpFQW9hRG1Gd0xYTnZkWFJvWldGRVhBTVBMRWN3UlFJZ0FoOTdDbGpxN3dPUEw4S3N4UDNZdER1eWMvOWhBajhQaEo3RnZmMzhTZ29DSVFEaEpFWEFNUExFUHNwaW9PenRqKytwRWFnV0N2ZVpVaktFbjB6eVVoQkVYQU1QTEVqai8vLy8vLy8vLy84QkVYQU1QTEV4T0RrMk5EZ3lOemcxTlNJTW8xbVducEVTV1VvWXc0QmtLcUVGU3JtM0RYdUw4dytaYlZjNEpLakRQNHZVQ0tOUjZMZTlDOXBacDlQc1cwTm9GeTN2TEJVZEFYRVhBTVBMRU9WRzhmZVhmaUVFQSsxa2hnRksvd0V0d1IrOXpGN05hTU1Nc2UwN3dOMmdHMnRIMGVLTUVYQU1QTEVRWCtzTWJ5dFFvOGllcFA5UFpPemxac1NGYi9kUDVROGhrNllFWEFNUExFWWNLWnNUa0RBcTJ1S0ZROG1ZVVZBOUV0UW5OUmlGTEVZODNhS3ZHL3RxTFdObkdsU05WeDdTTWNmb3ZrRkRxUWFtbSs4OHkxT3d3QUVZSzdxY29jZVg2WjdHR2NhWXVJZkdwYVgyTUNDRUxlUXZaKzhXeEVnT25JZno3R1l2c1lOakxaU2FSblY0RytJTFkxRjBRTlc2NFM5TnZqK0J3RGczaHQyQ3JOdnB3alZZbGo5VTNubXhFMFVHNW5lODNMTDVoaHFNcG0yNWttTDdlblZndzJrUXptVTJpZDRJS3UwQy9XYW9EUnVPMkY1ekU2M3ZKYnhOOEFZczczMzgrNEI0SEJiNkJaNk9VZ2c5NlExNVJBNDEvZ0lxeGFWUHh5VHBEZlRVNUdmU0x4b2NkWWVuaXFxcEZNdFpHMm45ZDB1N0dzUU5jRmtOY0czcURabTR0RG84dFpidXltMGEyVmNGMkU1aEZFZ1hCYStYTEpDZlhpLzc3T3FBRWpQMHg3UWRrM0I0M3A4S0cvQmFpb1A1UnNWOHpCR3ZIMXpBZ3lQaGEyck43MC90VDEzeXJtUGQ1UVlFZnd6ZXhqS3JWNG1XSXVSZzhOVEhZU1pKVWFleUN3VG9tODBWRlVKWEcrR1lUVXl2NVcyMmFCY25vUkdpQ2lLRVlUTE9rZ1hlY2RLRlRIbWNJQWVqUTlXZWxyMGExOTZLcTg3dzVLTk1Da2NDR0Zud0JORkxtZm5icE5xVDZyVUJ4eHMzWDVudFg5ZDhIVnRTWUlOVHNHWFhNWkNKN2ZuYldhamhnL2FveDBGdEhYMjFlRjZxSUdUOGoxeitsMm9wVStnZ3dVZ2toVVVnQ0gyVGZxQmorTUxNVlZ2cGdxSnNQS3Q1ODJjYUZLQXJJRkl2Tys5UXVweExuRUgyaHowNFRNVGZuVTZiUUM2ejFidVZlN2grdE9MbmgxWVBGc0xRODhhbmliLzdUVEM4azlEc0JUcTBBU2U4UjJHYlNFc21POXFiYk13Z0VhWVVoT0t0R2V5UXNTSmRoU2s2WHhYVGhyV0w5RW53QkNYRGtJQ01xZG50QXh5eU05bldzWjRiTD
lKSHFFeGdXVW1mV0NoelBGQXFuM0Y0eTg5NlVxSFRaeGxxM1dHeXBuNUhIY2VtMkhxZjNJVnhLSDFpbmhxZFZ0a3J5RWlUV3JJN1pkamJxbnFSYmwrV2d0UHRLT093ZURsQ2FSczNSMnFYY2JOZ1ZobGVNazRJV25GOEQxNjk1QWVuVTFMd0hqT0pMa0NqeGdORmlXQUZFUEg5YUVYQU1QTEV4QT09IiwNCiAgIkF1dGhvcml6YXRpb24iOiAiQVdTNC1ITUFDLVNIQTI1NiBDcmVkZW50aWFsPVhYWFhYWFhYWFhYWFhYWFhYWFgvMjAyMDA0MDEvdXMtZWFzdC0xL2FwcHN5bmMvYXdzNF9yZXF1ZXN0LCBTaWduZWRIZWFkZXJzPWFjY2VwdDtjb250ZW50LWVuY29kaW5nO2NvbnRlbnQtdHlwZTtob3N0O3gtYW16LWRhdGU7eC1hbXotc2VjdXJpdHktdG9rZW4sIFNpZ25hdHVyZT04M0VYQU1QTEViY2MxZmUzZWU2OWY3NWNkNWViYmY0Y2I0ZjE1MGU0Zjk5Y2VjODY5ZjE0OWM1RVhBTVBMRWRjIg0KfQ"]
```

**Headers via standard HTTP headers**

In this method, the host and the other authorization information is transmitted using standard HTTP headers when establishing the WebSocket connection with the AWS AppSync real-time endpoint. The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql
```

The request headers would include the following:

```
"sec-websocket-protocol" : ["graphql-ws"]
"accept": "application/json, text/javascript",
"content-encoding": "amz-1.0",
"content-type": "application/json; charset=UTF-8",
"host": "example1234567890000.appsync-api.us-east-1.amazonaws.com",
"x-amz-date": "20200401T001010Z",
"X-Amz-Security-Token": "AgEXAMPLEZ2luX2VjEAoaDmFwLXNvdXRoZWFEXAMPLEcwRQIgAh97Cljq7wOPL8KsxP3YtDuyc/9hAj8PhJ7Fvf38SgoCIQDhJEXAMPLEPspioOztj++pEagWCveZUjKEn0zyUhBEXAMPLEjj//////////8BEXAMPLExODk2NDgyNzg1NSIMo1mWnpESWUoYw4BkKqEFSrm3DXuL8w+ZbVc4JKjDP4vUCKNR6Le9C9pZp9PsW0NoFy3vLBUdAXEXAMPLEOVG8feXfiEEA+1khgFK/wEtwR+9zF7NaMMMse07wN2gG2tH0eKMEXAMPLEQX+sMbytQo8iepP9PZOzlZsSFb/dP5Q8hk6YEXAMPLEYcKZsTkDAq2uKFQ8mYUVA9EtQnNRiFLEY83aKvG/tqLWNnGlSNVx7SMcfovkFDqQamm+88y1OwwAEYK7qcoceX6Z7GGcaYuIfGpaX2MCCELeQvZ+8WxEgOnIfz7GYvsYNjLZSaRnV4G+ILY1F0QNW64S9Nvj+BwDg3ht2CrNvpwjVYlj9U3nmxE0UG5ne83LL5hhqMpm25kmL7enVgw2kQzmU2id4IKu0C/WaoDRuO2F5zE63vJbxN8AYs7338+4B4HBb6BZ6OUgg96Q15RA41/gIqxaVPxyTpDfTU5GfSLxocdYeniqqpFMtZG2n9d0u7GsQNcFkNcG3qDZm4tDo8tZbuym0a2VcF2E5hFEgXBa+XLJCfXi/77OqAEjP0x7Qdk3B43p8KG/BaioP5RsV8zBGvH1zAgyPha2rN70/tT13yrmPd5QYEfwzexjKrV4mWIuRg8NTHYSZJUaeyCwTom80VFUJXG+GYTUyv5W22aBcnoRGiCiKEYTLOkgXecdKFTHmcIAejQ9Welr0a196Kq87w5KNMCkcCGFnwBNFLmfnbpNqT6rUBxxs3X5ntX9d8HVtSYINTsGXXMZCJ7fnbWajhg/aox0FtHX21eF6qIGT8j1z+l2opU+ggwUgkhUUgCH2TfqBj+MLMVVvpgqJsPKt582caFKArIFIvO+9QupxLnEH2hz04TMTfnU6bQC6z1buVe7h+tOLnh1YPFsLQ88anib/7TTC8k9DsBTq0ASe8R2GbSEsmO9qbbMwgEaYUhOKtGeyQsSJdhSk6XxXThrWL9EnwBCXDkICMqdntAxyyM9nWsZ4bL9JHqExgWUmfWChzPFAqn3F4y896UqHTZxlq3WGypn5HHcem2Hqf3IVxKH1inhqdVtkryEiTWrI7ZdjbqnqRbl+WgtPtKOOweDlCaRs3R2qXcbNgVhleMk4IWnF8D1695AenU1LwHjOJLkCjxgNFiWAFEPH9aEXAMPLExA==",
"Authorization": "AWS4-HMAC-SHA256 Credential=XXXXXXXXXXXXXXXXXXX/20200401/us-east-1/appsync/aws4_request, SignedHeaders=accept;content-encoding;content-type;host;x-amz-date;x-amz-security-token, Signature=83EXAMPLEbcc1fe3ee69f75cd5ebbf4cb4f150e4f99cec869f149c5EXAMPLEdc"
```

To sign the request using a custom domain:

```
{
  url: "https://api.example.com/graphql/connect",
  data: "{}",
  method: "POST",
  headers: {
    "accept": "application/json, text/javascript",
    "content-encoding": "amz-1.0",
    "content-type": "application/json; charset=UTF-8",
  }
}
```

**Example**

```
{
  "accept": "application/json, text/javascript",
  "content-encoding": "amz-1.0",
  "content-type": "application/json; charset=UTF-8",
  "host": "api.example.com",
  "x-amz-date": "20200401T001010Z",
  "X-Amz-Security-Token": "AgEXAMPLEZ2luX2VjEAoaDmFwLXNvdXRoZWFEXAMPLEcwRQIgAh97Cljq7wOPL8KsxP3YtDuyc/9hAj8PhJ7Fvf38SgoCIQDhJEXAMPLEPspioOztj++pEagWCveZUjKEn0zyUhBEXAMPLEjj//////////8BEXAMPLExODk2NDgyNzg1NSIMo1mWnpESWUoYw4BkKqEFSrm3DXuL8w+ZbVc4JKjDP4vUCKNR6Le9C9pZp9PsW0NoFy3vLBUdAXEXAMPLEOVG8feXfiEEA+1khgFK/wEtwR+9zF7NaMMMse07wN2gG2tH0eKMEXAMPLEQX+sMbytQo8iepP9PZOzlZsSFb/dP5Q8hk6YEXAMPLEYcKZsTkDAq2uKFQ8mYUVA9EtQnNRiFLEY83aKvG/tqLWNnGlSNVx7SMcfovkFDqQamm+88y1OwwAEYK7qcoceX6Z7GGcaYuIfGpaX2MCCELeQvZ+8WxEgOnIfz7GYvsYNjLZSaRnV4G+ILY1F0QNW64S9Nvj+BwDg3ht2CrNvpwjVYlj9U3nmxE0UG5ne83LL5hhqMpm25kmL7enVgw2kQzmU2id4IKu0C/WaoDRuO2F5zE63vJbxN8AYs7338+4B4HBb6BZ6OUgg96Q15RA41/gIqxaVPxyTpDfTU5GfSLxocdYeniqqpFMtZG2n9d0u7GsQNcFkNcG3qDZm4tDo8tZbuym0a2VcF2E5hFEgXBa+XLJCfXi/77OqAEjP0x7Qdk3B43p8KG/BaioP5RsV8zBGvH1zAgyPha2rN70/tT13yrmPd5QYEfwzexjKrV4mWIuRg8NTHYSZJUaeyCwTom80VFUJXG+GYTUyv5W22aBcnoRGiCiKEYTLOkgXecdKFTHmcIAejQ9Welr0a196Kq87w5KNMCkcCGFnwBNFLmfnbpNqT6rUBxxs3X5ntX9d8HVtSYINTsGXXMZCJ7fnbWajhg/aox0FtHX21eF6qIGT8j1z+l2opU+ggwUgkhUUgCH2TfqBj+MLMVVvpgqJsPKt582caFKArIFIvO+9QupxLnEH2hz04TMTfnU6bQC6z1buVe7h+tOLnh1YPFsLQ88anib/7TTC8k9DsBTq0ASe8R2GbSEsmO9qbbMwgEaYUhOKtGeyQsSJdhSk6XxXThrWL9EnwBCXDkICMqdntAxyyM9nWsZ4bL9JHqExgWUmfWChzPFAqn3F4y896UqHTZxlq3WGypn5HHcem2Hqf3IVxKH1inhqdVtkryEiTWrI7ZdjbqnqRbl+WgtPtKOOweDlCaRs3R2qXcbNgVhleMk4IWnF8D1695AenU1LwHjOJLkCjxgNFiWAFEPH9aEXAMPLExA==",
  "Authorization": "AWS4-HMAC-SHA256 Credential=XXXXXXXXXXXXXXXXXXX/20200401/us-east-1/appsync/aws4_request, SignedHeaders=accept;content-encoding;content-type;host;x-amz-date;x-amz-security-token, Signature=83EXAMPLEbcc1fe3ee69f75cd5ebbf4cb4f150e4f99cec869f149c5EXAMPLEdc"
}
```

**Request URL with query string**

```
wss://api.example.com/graphql?header=eyEXAMPLEHQiOiJhcHBsaWNhdGlvbi9qc29uLCB0ZXh0L2phdmFEXAMPLEQiLCJjb250ZW50LWVuY29kaW5nIjoEXAMPLEEuMCIsImNvbnRlbnQtdHlwZSI6ImFwcGxpY2F0aW9EXAMPLE47IGNoYXJzZXQ9VVRGLTgiLCJob3N0IjoiZXhhbXBsZEXAMPLENjc4OTAwMDAuYXBwc3luYy1hcGkudXMtZWFzdC0xLmFtYEXAMPLEcy5jb20iLCJ4LWFtei1kYXRlIjoiMjAyMDA0MDFUMDAxMDEwWiIsIlgtEXAMPLElY3VyaXR5LVRva2VuIjoiQWdvSmIzSnBaMmx1WDJWakVBb2FEbUZ3TFhOdmRYUm9aV0Z6ZEMweUlrY3dSUUlnQWg5N0NsanE3d09QTDhLc3hQM1l0RHV5Yy85aEFqOFBoSjdGdmYzOFNnb0NJUURoSllKYkpsbmpQc3Bpb096dGorK3BFYWdXQ3ZlWlVqS0VuMHp5VWhCbXhpck5CUWpqLy8vLy8vLy8vLzhCRUFBYUREY3hPRGsyTkRneU56ZzFOU0lNbzFtV25wRVNXVW9ZdzRCa0txRUZTcm0zRFh1TDh3K1piVmM0SktqRFA0dlVDS05SNkxlOUM5cFpwOVBzVzBOb0Z5M3ZMQlVkQVh3dDZQSld1T1ZHOGZlWGZpRUVBKzFraGdGSy93RXR3Uis5ekY3TmFNTU1zZTA3d04yZ0cydEgwZUtNVFhuOEF3QVFYK3NNYnl0UW84aWVwUDlQWk96bFpzU0ZiL2RQNVE4aGs2WWpHVGFMMWVZY0tac1RrREFxMnVLRlE4bVlVVkE5RXRRbk5SaUZMRVk4M2FLdkcvdHFMV05uR2xTTlZ4N1NNY2ZvdmtGRHFRYW1tKzg4eTFPd3dBRVlLN3Fjb2NlWDZaN0dHY2FZdUlmR3BhWDJNQ0NFTGVRdlorOFd4RWdPbklmejdHWXZzWU5qTFpTYVJuVjRHK0lMWTFGMFFOVzY0UzlOdmorQndEZzNodDJDck52cHdqVllsajlVM25teEUwVUc1bmU4M0xMNWhocU1wbTI1a21MN2VuVmd3MmtRem1VMmlkNElLdTBDL1dhb0RSdU8yRjV6RTYzdkpieE44QVlzNzMzOCs0QjRIQmI2Qlo2T1VnZzk2UTE1UkE0MS9nSXF4YVZQeHlUcERmVFU1R2ZTTHhvY2RZZW5pcXFwRk10WkcybjlkMHU3R3NRTmNGa05jRzNxRFptNHREbzh0WmJ1eW0wYTJWY0YyRTVoRkVnWEJhK1hMSkNmWGkvNzdPcUFFalAweDdRZGszQjQzcDhLRy9CYWlvUDVSc1Y4ekJHdkgxekFneVBoYTJyTjcwL3RUMTN5cm1QZDVRWUVmd3pleGpLclY0bVdJdVJnOE5USFlTWkpVYWV5Q3dUb204MFZGVUpYRytHWVRVeXY1VzIyYUJjbm9SR2lDaUtFWVRMT2tnWGVjZEtGVEhtY0lBZWpROVdlbHIwYTE5NktxODd3NUtOTUNrY0NHRm53Qk5GTG1mbmJwTnFUNnJVQnh4czNYNW50WDlkOEhWdFNZSU5Uc0dYWE1aQ0o3Zm5iV2FqaGcvYW94MEZ0SFgyMWVGNnFJR1Q4ajF6K2wyb3BVK2dnd1Vna2hVVWdDSDJUZnFCaitNTE1WVnZwZ3FKc1BLdDU4MmNhRktBcklGSXZPKzlRdXB4TG5FSDJoejA0VE1UZm5VNmJRQzZ6MWJ1VmU3aCt0T0xuaDFZUEZzTFE4OGFuaWIvN1RUQzhrOURzQlRxMEFTZThSMkdiU0VzbU85cWJiTXdnRWFZVWhPS3RHZXlRc1NKZGhTazZYeFhUaHJXTDlFbndCQ1hEa0lDTXFkbnRBeHl5TTluV3NaNGJMOUpIcUV4Z1dVbWZXQ2h6UEZBcW4zRjR5ODk2VXFIVFp4bHEzV0d5cG4
1SEhjZW0ySHFmM0lWeEtIMWluaHFkVnRrcnlFaVRXckk3WmRqYnFucVJibCtXZ3RQdEtPT3dlRGxDYVJzM1IycVhjYk5nVmhsZU1rNElXbkY4RDE2OTVBZW5VMUx3SGpPSkxrQ2p4Z05GaVdBRkVQSDlhTklhcXMvWnhBPT0iLCJBdXRob3JpemF0aW9uIjoiQVdTNC1ITUFDLVNIQTI1NiBDcmVkZW50aWFsPVhYWFhYWFhYWFhYWFhYWFhYWFgvMjAxOTEwMDIvdXMtZWFzdC0xEXAMPLE5bmMvYXdzNF9yZXF1ZXN0LCBTaWduZWRIZWFkZXJzPWFjY2VwdDtjb250ZWEXAMPLE29kaW5nO2NvbnRlbnQtdHlwZTtob3EXAMPLEW16LWRhdGU7eC1hbXotc2VjdXJpdHktdG9rZW4sIFNpZ25hdHVyZT04MzE4EXAMPLEiY2MxZmUzZWU2OWY3NWNkEXAMPLE0Y2I0ZjE1MGU0Zjk5Y2VjODY5ZjE0OWM1ZDAzNDEXAMPLEn0=&payload=e30=
```

**Note**  
One WebSocket connection can have multiple subscriptions (even with different authentication modes). One way to implement this is to create a WebSocket connection for the first subscription and then close it when the last subscription is unregistered. You can optimize this by waiting a few seconds before closing the WebSocket connection, in case the app is subscribed immediately after the last subscription is unregistered. For a mobile app example, when changing from one screen to another, on *unmounting* event it stops a subscription, and on *mounting* event it starts a different subscription.

### Lambda authorization
<a name="lambda-auth"></a>

#### Lambda authorization header
<a name="lambda-auth-list"></a>

**Header content**
+  `"Authorization": <string>`: The value that is passed as `authorizationToken`.
+  `"host": <string>`: The host for the AWS AppSync GraphQL endpoint or your custom domain name.

**Example**

```
{
    "Authorization":"M0UzQzM1MkQtMkI0Ni00OTZCLUI1NkQtMUM0MTQ0QjVBRTczCkI1REEzRTIxLTk5NzItNDJENi1BQjMwLTFCNjRFNzQ2NzlCNQo=",
    "host":"example1234567890000.appsync-api.us-east-1.amazonaws.com"
}
```

**Headers via query string**

First, a JSON object containing the `host` and the `Authorization` is converted into a string. Next, this string is encoded using base64 encoding. The resulting base64-encoded string is added as a query parameter named `header` to the WebSocket URL for establishing the connection with the AWS AppSync real-time endpoint. The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql?header=eyJBdXRob3JpemF0aW9uIjoiZXlKcmFXUWlPaUpqYkc1eGIzQTVlVzVNSzA5UVlYSXJNVEpIV0VGTFNYQmllVTVXTkhoc1FqaFBWVzlZTW5NMldsZHZQU0lzSW1Gc1p5STZJbEpUTWpVMkluMC5leUp6ZFdJaU9pSmhObU5tTWpjd055MHhOamd4TFRRMU5ESXRPV1l4T0MxbE5qWTBNVGcyTmpsa016WWlMQ0psZG1WdWRGOXBaQ0k2SW1Wa016TTVNbU5rTFdOallUTXROR00yT0MxaE5EWXlMVEpsWkdJM1pUTm1ZMkZqWmlJc0luUnZhMlZ1WDNWelpTSTZJbUZqWTJWemN5SXNJbk5qYjNCbElqb2lZWGR6TG1OdloyNXBkRzh1YzJsbmJtbHVMblZ6WlhJdVlXUnRhVzRpTENKaGRYUm9YM1JwYldVaU9qRTFOamswTlRjM01UZ3NJbWx6Y3lJNkltaDBkSEJ6T2x3dlhDOWpiMmR1YVhSdkxXbGtjQzVoY0MxemIzVjBhR1ZoYzNRdE1pNWhiV0Y2YjI1aGQzTXVZMjl0WEM5aGNDMXpiM1YwYUdWaGMzUXRNbDgzT0hZMFNWWmliVkFpTENKbGVIQWlPakUxTmprME5qRXpNakFzSW1saGRDSTZNVFUyT1RRMU56Y3lNQ3dpYW5ScElqb2lOVGd6WmpobVltTXRNemsyTVMwMFl6QTRMV0poWlRBdFl6UXlZMkl4TVRNNU5EWTVJaXdpWTJ4cFpXNTBYMmxrSWpvaU0zRmxhalZsTVhabU16ZDFOM1JvWld3MGRHOTFkREprTVd3aUxDSjFjMlZ5Ym1GdFpTSTZJbVZzYjNKNllXWmxJbjAuQjRjZEp0aDNLRk5wSjZpa1ZwN2U2RFJlZTk1VjZRaS16RUUyREpIN3NIT2wyenhZaTdmLVNtRUdvaDJBRDhlbXhRUllhakJ5ei1yRTRKaDBRT3ltTjJZcy1aSWtNcFZCVFBndS1UTVdEeU9IaERVbVVqMk9QODJ5ZVozd2xaQXRyX2dNNEx6alhVWG1JX0syeUdqdVhmWFRhYTFtdlFFQkcwbVFmVmQ3U2Z3WEItamN2NFJZVmk2ajI1cWdvdzlFdzUydWZ1clBxYUstM1dBS0czMktwVjhKNC1XZWpxOHQwYy15QTdzYjhFbkI1NTFiN1RVOTN1S1JpVlZLM0U1NU5rNUFEUG9hbV9XWUU0NWkzczVxVkFQXy1Jblc3NU5Vb09DR1RzUzhZV01mYjZlY0hZSi0xai1iekEyN3phVDlWamN0WG45YnlORlptS0xwQTJMY3h3IiwiaG9zdCI6ImV4YW1wbGUxMjM0NTY3ODkwMDAwLmFwcHN5bmMtYXBpLnVzLWVhc3QtMS5hbWF6b25hd3MuY29tIn0=&payload=e30=
```

It's important to note that in addition to the base64-encoded header object, an empty JSON object (`{}`) is also base64-encoded and included as a separate query parameter named `payload` in the WebSocket URL.
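The URL construction described above can be sketched as follows. This is a minimal illustration, not SDK code; the `realtime_url` helper name is an assumption, and real authorization headers depend on your API's authorization mode:

```python
import base64
import json

def realtime_url(host: str, auth_headers: dict) -> str:
    """Build the real-time WebSocket URL with base64-encoded query parameters."""
    # Stringify the authorization header object, then base64-encode it.
    header = base64.b64encode(json.dumps(auth_headers).encode("utf-8")).decode("utf-8")
    # The payload is always an empty JSON object, which base64-encodes to "e30=".
    payload = base64.b64encode(b"{}").decode("utf-8")
    # The real-time endpoint uses the appsync-realtime-api subdomain.
    realtime_host = host.replace("appsync-api", "appsync-realtime-api")
    return f"wss://{realtime_host}/graphql?header={header}&payload={payload}"
```

The `e30=` suffix seen in the example URLs is simply the base64 encoding of `{}`.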

**Headers via `Sec-WebSocket-Protocol`**

A JSON object containing the `host` and the `Authorization` is converted to a string and then encoded using base64Url encoding. The resulting base64Url-encoded string is prefixed with `header-`. This prefixed string is then used as a new sub-protocol in addition to `graphql-ws` in the `Sec-WebSocket-Protocol` header when establishing the WebSocket connection with the AWS AppSync real-time endpoint. 

The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql
```

The `Sec-WebSocket-Protocol` header contains the following value:

```
"sec-websocket-protocol" : ["graphql-ws", "header-ewogICAgImhvc3QiOiJleGFtcGxlMTIzNDU2Nzg5MDAwMC5hcHBzeW5jLWFwaS51cy1lYXN0LTEuYW1hem9uYXdzLmNvbSIsCiAgICAieC1hcGkta2V5IjoiZGEyLTEyMzQ1Njc4OTAxMjM0NTY3ODkwMTIzNDU2Igp9"]
```

**Headers via standard HTTP headers**

In this method, the host and Authorization information are transmitted using standard HTTP headers when establishing the WebSocket connection with the AWS AppSync real-time endpoint. The resulting request URL takes the following form:

```
wss://example1234567890000.appsync-realtime-api.us-east-1.amazonaws.com/graphql
```

The request headers would include the following:

```
"sec-websocket-protocol" : ["graphql-ws"]
"Authorization":"eyEXAMPLEiJjbG5xb3A5eW5MK09QYXIrMTJHWEFLSXBieU5WNHhsQjEXAMPLEnM2WldvPSIsImFsZyI6IlEXAMPLEn0.eyEXAMPLEiJhNmNmMjcwNy0xNjgxLTQ1NDItOWYxOC1lNjY0MTg2NjlkMzYiLCJldmVudF9pZCI6ImVkMzM5MmNkLWNjYTMtNGM2OC1hNDYyLTJlZGI3ZTNmY2FjZiIsInRva2VuX3VzZSI6ImFjY2VzcyIsInNjb3BlIjoiYXdzLmNvZ25pdG8uc2lnbmluLnVzZXIuYWRtaW4iLCJhdXRoX3RpbWUiOjE1Njk0NTc3MTgsImlzcyI6Imh0dHBzOlwvXC9jb2duaXRvLWlkcC5hcC1zb3V0aGVhc3QtMi5hbWF6b25hd3MuY29tXC9hcC1zb3V0aGVhc3QtMl83OHY0SVZibVAiLCJleHAiOjE1Njk0NjEzMjAsImlhdCI6MTU2OTQ1NzcyMCwianRpIjoiNTgzZjhmYmMtMzk2MS00YzA4LWJhZTAtYzQyY2IxMTM5NDY5IiwiY2xpZW50X2lkIjoiM3FlajVlMXZmMzd1N3RoZWw0dG91dDJkMWwiLCJ1c2VybmFtZSI6ImVsb3EXAMPLEn0.B4EXAMPLEFNpJ6ikVp7e6DRee95V6Qi-zEE2DJH7sHOl2zxYi7f-SmEGoh2AD8emxQRYajByz-rE4Jh0QOymN2Ys-ZIkMpVBTPgu-TMWDyOHhDUmUj2OP82yeZ3wlZAtr_gM4LzjXUXmI_K2yGjuXfXTaa1mvQEBG0mQfVd7SfwXB-jcv4RYVi6j25qgow9Ew52ufurPqaK-3WAKG32KpV8J4-Wejq8t0c-yA7sb8EnB551b7TU93uKRiVVK3E55Nk5ADPoam_WYE45i3s5qVAP_-InW75NUoOCGTsS8YWMfb6ecHYJ-1j-bzA27zaT9VjctXn9byNFZmEXAMPLExw",
"host":"example1234567890000.appsync-api.us-east-1.amazonaws.com"
```

## Real-time WebSocket operation
<a name="real-time-websocket-operation"></a>

After initiating a successful WebSocket handshake with AWS AppSync, the client must send subsequent messages to AWS AppSync to perform different operations. These messages require the following data:
+  `type`: The type of the operation.
+  `id`: A unique identifier for the subscription. We recommend using a UUID for this purpose.
+  `payload`: The associated payload, depending on the operation type.

The `type` field is the only required field; the `id` and `payload` fields are optional.

### Sequence of events
<a name="sequence-of-events"></a>

To successfully initiate, establish, register, and process the subscription request, the client must step through the following sequence:

1. Initialize connection (`connection_init`)

1. Connection acknowledgment (`connection_ack`)

1. Subscription registration (`start`)

1. Subscription acknowledgment (`start_ack`)

1. Processing subscription (`data`)

1. Subscription unregistration (`stop`)
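The sequence above can be sketched as a simple client-side dispatcher. This is illustrative only; the `next_client_action` function and its return values are assumptions, not part of any AWS SDK:

```python
import json

def next_client_action(message: str, pending: dict):
    """Map an incoming server message to the client's next step in the sequence."""
    msg = json.loads(message)
    t = msg["type"]
    if t == "connection_ack":
        return "send_start"      # connection established; register subscriptions
    if t == "start_ack":
        pending[msg["id"]] = "active"  # subscription registered
        return None
    if t == "ka":
        return "reset_timer"     # keep-alive heartbeat; reset the timeout clock
    if t == "data":
        return "dispatch"        # route payload to the matching subscription id
    if t == "complete":
        pending.pop(msg["id"], None)   # subscription unregistered
        return None
    if t == "error":
        return "handle_error"
    return None
```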

## Connection init message
<a name="connection-init-message"></a>

(Optional) After a successful handshake, the client can send the `connection_init` message to start communicating with the AWS AppSync real-time endpoint. The message is a string obtained by stringifying the JSON object as follows:

```
{ "type": "connection_init" }
```

## Connection acknowledge message
<a name="connection-acknowledge-message"></a>

After sending the `connection_init` message, the client must wait for the `connection_ack` message. All messages sent before receiving `connection_ack` are ignored. The message should read as follows:

```
{
  "type": "connection_ack",
  "payload": {
    // Time in milliseconds waiting for ka message before the client should terminate the WebSocket connection
    "connectionTimeoutMs": 300000
  }
}
```

## Keep-alive message
<a name="keep-alive-message"></a>

In addition to the connection acknowledgment message, the client periodically receives keep-alive messages. If the client doesn't receive a keep-alive message within the connection timeout period, the client should close the connection. AWS AppSync keeps sending these messages and servicing the registered subscriptions until it shuts down the connection automatically (after 24 hours). Keep-alive messages are heartbeats and do not need the client to acknowledge them.

```
{ "type": "ka" }
```
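A client-side watchdog for the keep-alive timeout might look like the following sketch (the `KeepAliveWatchdog` class name is illustrative, not an SDK type):

```python
import time

class KeepAliveWatchdog:
    """Track the keep-alive deadline; the client should close the connection when it lapses."""
    def __init__(self, timeout_ms: int):
        # connectionTimeoutMs comes from the connection_ack payload.
        self.timeout = timeout_ms / 1000.0
        self.deadline = time.monotonic() + self.timeout

    def on_keep_alive(self):
        # Each "ka" message pushes the deadline forward by the full timeout.
        self.deadline = time.monotonic() + self.timeout

    def expired(self) -> bool:
        return time.monotonic() > self.deadline
```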

## Subscription registration message
<a name="subscription-registration-message"></a>

After the client receives a `connection_ack` message, the client can send subscription registration messages to AWS AppSync. This type of message is a stringified JSON object that contains the following fields:
+  `"id": <string>`: The ID of the subscription. This ID must be unique for each subscription, otherwise the server returns an error indicating that the subscription ID is duplicated.
+  `"type": "start"`: A constant `<string>` parameter.
+  `"payload": <Object>`: An object that contains the information relevant to the subscription.
  +  `"data": <string>`: A stringified JSON object that contains a GraphQL query and variables.
    +  `"query": <string>`: A GraphQL operation.
    +  `"variables": <Object>`: An object that contains the variables for the query.
  +  `"extensions": <Object>`: An object that contains an authorization object.
    +  `"authorization": <Object>`: An object that contains the fields required for authorization.
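Putting these fields together, a registration message could be assembled as follows. This is a sketch; the `start_message` helper is hypothetical, and the authorization object's contents depend on your API's authorization mode:

```python
import json
import uuid

def start_message(query: str, variables: dict, authorization: dict) -> str:
    """Build the stringified "start" message that registers a subscription."""
    return json.dumps({
        "id": str(uuid.uuid4()),  # must be unique per subscription
        "type": "start",
        "payload": {
            # "data" is itself a stringified JSON object, not a nested object.
            "data": json.dumps({"query": query, "variables": variables}),
            "extensions": {"authorization": authorization},
        },
    })
```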

### Authorization object for subscription registration
<a name="authorization-object-for-subscription-registration"></a>

The same rules from the [Header parameter format based on AWS AppSync API authorization mode](#header-parameter-format-based-on-appsync-api-authorization-mode) section apply to the authorization object. The only exception is for IAM, where the SigV4 signature information is slightly different. For more details, see the IAM example.

Example using Amazon Cognito user pools:

```
{
  "id": "ee849ef0-cf23-4cb8-9fcb-152ae4fd1e69",
  "payload": {
    "data": "{\"query\":\"subscription onCreateMessage {\\n onCreateMessage {\\n __typename\\n message\\n }\\n }\",\"variables\":{}}",
      "extensions": {
        "authorization": {
          "Authorization": "eyEXAMPLEiJjbG5xb3A5eW5MK09QYXIrMTJEXAMPLEBieU5WNHhsQjhPVW9YMnM2WldvPSIsImFsZyI6IlEXAMPLEn0.eyJzdWIiOiJhNmNmMjcwNy0xNjgxLTQ1NDItEXAMPLENjY0MTg2NjlkMzYiLCJldmVudF9pZCI6ImU3YWVmMzEyLWUEXAMPLEY0Zi04YjlhLTRjMWY5M2Q5ZTQ2OCIsInRva2VuX3VzZSI6ImFjY2VzcyIsIEXAMPLEIjoiYXdzLmNvZ25pdG8uc2lnbmluLnVzZXIuYWRtaW4iLCJhdXRoX3RpbWUiOjE1Njk2MTgzMzgsImlzcyI6Imh0dEXAMPLEXC9jb2duaXRvLWlkcC5hcC1zb3V0aGVhc3QtMi5hbWF6b25hd3MuY29tXC9hcC1zbEXAMPLEc3QtMl83OHY0SVZibVAiLCJleHAiOjE1NzAyNTQ3NTUsImlhdCI6MTU3MDI1MTE1NSwianRpIjoiMmIEXAMPLEktZTVkMi00ZDhkLWJiYjItNjA0YWI4MDEwOTg3IiwiY2xpZW50X2lkIjoiM3FlajVlMXZmMzd1EXAMPLE0dG91dDJkMWwiLCJ1c2VybmFtZSI6ImVsb3J6YWZlIn0.CT-qTCtrYeboUJ4luRSTPXaNewNeEXAMPLE14C6sfg05tO0fOMpiUwj9k19gtNCCMqoSsjtQoUweFnH4JYa5EXAMPLEVxOyQEQ4G7jQrt5Ks6STn53vuseR3zRW9snWgwz7t3ZmQU-RWvW7yQU3sNQRLEXAMPLEcd0yufBiCYs3dfQxTTdvR1B6Wz6CD78lfNeKqfzzUn2beMoup2h6EXAMPLE4ow8cUPUPvG0DzRtHNMbWskjPanu7OuoZ8iFO_Eot9kTtAlVKYoNbWkZhkD8dxutyoU4RSH5JoLAnrGF5c8iKgv0B2dfEXAMPLEIihxaZVJ9w9w48S4EXAMPLEcA",
          "host": "example1234567890000.appsync-api.us-east-1.amazonaws.com"
         }
      }
  },
  "type": "start"
}
```

Example using IAM:

```
{
  "id": "eEXAMPLE-cf23-1234-5678-152EXAMPLE69",
  "payload": {
    "data": "{\"query\":\"subscription onCreateMessage {\\n onCreateMessage {\\n __typename\\n message\\n }\\n }\",\"variables\":{}}",
    "extensions": {
      "authorization": {
        "accept": "application/json, text/javascript",
        "content-type": "application/json; charset=UTF-8",
        "X-Amz-Security-Token": "AgEXAMPLEZ2luX2VjEAoaDmFwLXNvdXRoZWFEXAMPLEcwRQIgAh97Cljq7wOPL8KsxP3YtDuyc/9hAj8PhJ7Fvf38SgoCIQDhJEXAMPLEPspioOztj++pEagWCveZUjKEn0zyUhBEXAMPLEjj//////////8BEXAMPLExODk2NDgyNzg1NSIMo1mWnpESWUoYw4BkKqEFSrm3DXuL8w+ZbVc4JKjDP4vUCKNR6Le9C9pZp9PsW0NoFy3vLBUdAXEXAMPLEOVG8feXfiEEA+1khgFK/wEtwR+9zF7NaMMMse07wN2gG2tH0eKMEXAMPLEQX+sMbytQo8iepP9PZOzlZsSFb/dP5Q8hk6YEXAMPLEYcKZsTkDAq2uKFQ8mYUVA9EtQnNRiFLEY83aKvG/tqLWNnGlSNVx7SMcfovkFDqQamm+88y1OwwAEYK7qcoceX6Z7GGcaYuIfGpaX2MCCELeQvZ+8WxEgOnIfz7GYvsYNjLZSaRnV4G+ILY1F0QNW64S9Nvj+BwDg3ht2CrNvpwjVYlj9U3nmxE0UG5ne83LL5hhqMpm25kmL7enVgw2kQzmU2id4IKu0C/WaoDRuO2F5zE63vJbxN8AYs7338+4B4HBb6BZ6OUgg96Q15RA41/gIqxaVPxyTpDfTU5GfSLxocdYeniqqpFMtZG2n9d0u7GsQNcFkNcG3qDZm4tDo8tZbuym0a2VcF2E5hFEgXBa+XLJCfXi/77OqAEjP0x7Qdk3B43p8KG/BaioP5RsV8zBGvH1zAgyPha2rN70/tT13yrmPd5QYEfwzexjKrV4mWIuRg8NTHYSZJUaeyCwTom80VFUJXG+GYTUyv5W22aBcnoRGiCiKEYTLOkgXecdKFTHmcIAejQ9Welr0a196Kq87w5KNMCkcCGFnwBNFLmfnbpNqT6rUBxxs3X5ntX9d8HVtSYINTsGXXMZCJ7fnbWajhg/aox0FtHX21eF6qIGT8j1z+l2opU+ggwUgkhUUgCH2TfqBj+MLMVVvpgqJsPKt582caFKArIFIvO+9QupxLnEH2hz04TMTfnU6bQC6z1buVe7h+tOLnh1YPFsLQ88anib/7TTC8k9DsBTq0ASe8R2GbSEsmO9qbbMwgEaYUhOKtGeyQsSJdhSk6XxXThrWL9EnwBCXDkICMqdntAxyyM9nWsZ4bL9JHqExgWUmfWChzPFAqn3F4y896UqHTZxlq3WGypn5HHcem2Hqf3IVxKH1inhqdVtkryEiTWrI7ZdjbqnqRbl+WgtPtKOOweDlCaRs3R2qXcbNgVhleMk4IWnF8D1695AenU1LwHjOJLkCjxgNFiWAFEPH9aEXAMPLExA==",
        "Authorization": "AWS4-HMAC-SHA256 Credential=XXXXXXXXXXXXXXXXXXXX/20200401/us-east-1/appsync/aws4_request, SignedHeaders=accept;content-encoding;content-type;host;x-amz-date;x-amz-security-token, Signature=b90131a61a7c4318e1c35ead5dbfdeb46339a7585bbdbeceeaff51f4022eb1fd",
        "content-encoding": "amz-1.0",
        "host": "example1234567890000.appsync-api.us-east-1.amazonaws.com",
        "x-amz-date": "20200401T001010Z"
      }
    }
  },
  "type": "start"
}
```

Example using a custom domain name:

```
{
  "id": "key-cf23-4cb8-9fcb-152ae4fd1e69",
  "payload": {
    "data": "{\"query\":\"subscription onCreateMessage {\\n onCreateMessage {\\n __typename\\n message\\n }\\n }\",\"variables\":{}}",
      "extensions": {
        "authorization": {
          "x-api-key": "da2-12345678901234567890123456",
          "host": "api.example.com"
         }
      }
  },
  "type": "start"
}
```

The SigV4 signature doesn't need `/connect` appended to the URL, and the stringified JSON of the GraphQL operation replaces `data`. The following is an example of a SigV4 signature request:

```
{
  url: "https://example1234567890000.appsync-api.us-east-1.amazonaws.com/graphql",
  data: "{\"query\":\"subscription onCreateMessage {\\n onCreateMessage {\\n __typename\\n message\\n }\\n }\",\"variables\":{}}",
  method: "POST",
  headers: {
    "accept": "application/json, text/javascript",
    "content-encoding": "amz-1.0",
    "content-type": "application/json; charset=UTF-8",
  }
}
```

## Subscription acknowledgment message
<a name="subscription-acknowledge-message"></a>

After sending the subscription start message, the client should wait for AWS AppSync to send the `start_ack` message. The `start_ack` message indicates that the subscription is successful.

Subscription acknowledgment example:

```
{
  "type": "start_ack",
  "id": "eEXAMPLE-cf23-1234-5678-152EXAMPLE69"
}
```

## Error message
<a name="error-message"></a>

If connection init or subscription registration fails, or if a subscription is ended from the server, the server sends an error message to the client. If the error happens during connection init, the server closes the connection. The error message is a stringified JSON object that contains the following fields:
+  `"type": "error"`: A constant `<string>` parameter.
+  `"id": <string>`: The ID of the corresponding registered subscription, if relevant.
+  `"payload" <Object>`: An object that contains the corresponding error information.

Example:

```
{
  "type": "error",
  "payload": {
    "errors": [
      {
        "errorType": "LimitExceededError",
        "message": "Rate limit exceeded"
      }
    ]
  }
}
```

## Processing data messages
<a name="processing-data-messages"></a>

When a client submits a mutation, AWS AppSync identifies all of the subscribers interested in it and sends a `"type":"data"` message to each using the corresponding subscription `id` from the `"start"` subscription operation. The client is expected to keep track of the subscription `id` that it sends so that when it receives a data message, the client can match it with the corresponding subscription.
+  `"type": "data"`: A constant `<string>` parameter.
+  `"id": <string>`: The ID of the corresponding registered subscription.
+  `"payload" <Object>`: An object that contains the subscription information.

Example:

```
{
  "type": "data",
  "id": "ee849ef0-cf23-4cb8-9fcb-152ae4fd1e69",
  "payload": {
    "data": {
      "onCreateMessage": {
        "__typename": "Message",
        "message": "test"
      }
    }
  }
}
```
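Matching incoming data messages to their subscriptions can be sketched like this (the `dispatch_data` function and handler registry are illustrative assumptions):

```python
import json

def dispatch_data(message: str, handlers: dict) -> bool:
    """Route a "data" message to the handler registered under its subscription id."""
    msg = json.loads(message)
    if msg.get("type") != "data":
        return False  # not a data message; handled elsewhere
    handler = handlers.get(msg["id"])
    if handler is None:
        return False  # no subscription registered under this id
    handler(msg["payload"])
    return True
```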

## Subscription unregistration message
<a name="subscription-unregistration-message"></a>

When the app wants to stop listening to the subscription events, the client should send a message with the following stringified JSON object:
+  `"type": "stop"`: A constant `<string>` parameter.
+  `"id": <string>`: The ID of the subscription to unregister.

Example:

```
{
  "type":"stop",
  "id":"ee849ef0-cf23-4cb8-9fcb-152ae4fd1e69"
}
```

AWS AppSync sends back a confirmation message with the following stringified JSON object:
+  `"type": "complete"`: A constant `<string>` parameter.
+  `"id": <string>`: The ID of the unregistered subscription.

After the client receives the confirmation message, it receives no more messages for this particular subscription.

Example:

```
{
  "type":"complete",
  "id":"eEXAMPLE-cf23-1234-5678-152EXAMPLE69"
}
```

## Disconnecting the WebSocket
<a name="disconnecting-the-websocket"></a>

To avoid data loss, before disconnecting the client should verify that no operations are currently in flight over the WebSocket connection. All subscriptions should be unregistered before disconnecting from the WebSocket.
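One way to track this (a sketch; `DisconnectTracker` is not an SDK class) is to hold the set of outstanding subscriptions and close only after every `complete` message arrives:

```python
import json

class DisconnectTracker:
    """Track outstanding "stop" requests; the socket may close once all are confirmed."""
    def __init__(self, active_ids):
        self.pending = set(active_ids)

    def stop_messages(self):
        # One "stop" message per active subscription.
        return [json.dumps({"type": "stop", "id": i}) for i in sorted(self.pending)]

    def on_complete(self, message: str):
        # A "complete" message confirms that a subscription is unregistered.
        msg = json.loads(message)
        if msg.get("type") == "complete":
            self.pending.discard(msg.get("id"))

    def safe_to_close(self) -> bool:
        return not self.pending
```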

# Merging APIs in AWS AppSync
<a name="merged-api"></a>

As the use of GraphQL expands within an organization, trade-offs between API ease-of-use and API development velocity can arise. On the one hand, organizations adopt AWS AppSync and GraphQL to simplify application development. This gives developers a flexible API they can use to securely access, manipulate, and combine data from one or more data domains with a single network call. On the other hand, teams within an organization that are responsible for the different data domains combined into a single GraphQL API endpoint may want the ability to create, manage, and deploy API updates independently of each other. This increases their development velocity.

To resolve this tension, the AWS AppSync Merged APIs feature allows teams from different data domains to independently create and deploy AWS AppSync APIs (e.g., GraphQL schemas, resolvers, data sources, and functions) that can then be combined into a single, merged API. This gives organizations the ability to maintain a simple-to-use, cross-domain API while giving the different teams that contribute to it the ability to make API updates quickly and independently.

The following diagram shows the merged API workflow:

![\[Diagram showing the merged API workflow with multiple source APIs being combined into a single merged API endpoint\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/merged-api-workflow.png)


Using Merged APIs, organizations can import the resources of multiple, independent source AWS AppSync APIs into a single AWS AppSync Merged API endpoint. To do this, AWS AppSync allows you to create a list of source AWS AppSync APIs and then merge all of the metadata associated with those source APIs, including schemas, types, data sources, resolvers, and functions, into a new AWS AppSync Merged API.

During merges, a merge conflict can occur due to inconsistencies in the source API data content, such as type naming conflicts when combining multiple schemas. For simple use cases where no definitions in the source APIs conflict, there's no need to modify the source API schemas: the resulting Merged API simply imports all types, resolvers, data sources, and functions from the original source AWS AppSync APIs. For complex use cases where conflicts arise, the users/teams must resolve the conflicts through various means. AWS AppSync provides users with several tools and examples that can reduce merge conflicts.

Subsequent merges that are configured in AWS AppSync will propagate changes made in the source APIs to the associated Merged API.

## Merged APIs and Federation
<a name="merged-api-federation"></a>

There are many solutions and patterns in the GraphQL community for combining GraphQL schemas and enabling team collaboration through a shared graph. AWS AppSync Merged APIs adopt a *build time* approach to schema composition, where source APIs are combined into a separate, Merged API. An alternative approach is to layer a *run time* router across multiple source APIs or sub-graphs. In this approach, the router receives a request, references a combined schema that it maintains as metadata, constructs a request plan, and then distributes request elements across its underlying sub-graphs/servers. The following table compares the AWS AppSync Merged API build-time approach with router-based, run-time approaches to GraphQL schema composition:


| Feature | AppSync Merged API | Router-based solutions | 
| --- |--- |--- |
| Sub-graphs managed independently | Yes | Yes | 
| Sub-graphs addressable independently | Yes | Yes | 
| Automated schema composition | Yes | Yes | 
| Automated conflict detection | Yes | Yes | 
| Conflict resolution via schema directives | Yes | Yes | 
| Supported sub-graph servers | AWS AppSync* | Varies | 
| Network complexity | Single, merged API means no extra network hops. | Multi-layer architecture requires query planning and delegation, sub-query parsing and serialization/deserialization, and reference resolvers in sub-graphs to perform joins. | 
| Observability support | Built-in monitoring, logging, and tracing. A single, Merged API server means simplified debugging. | Build-your-own observability across router and all associated sub-graph servers. Complex debugging across distributed system. | 
| Authorization support | Built in support for multiple authorization modes. | Build-your-own authorization rules. | 
| Cross account security | Built-in support for cross-AWS cloud account associations. | Build-your-own security model. | 
| Subscriptions support | Yes | No | 

* AWS AppSync Merged APIs can only be associated with AWS AppSync source APIs. If you need support for schema composition across AWS AppSync and non-AWS AppSync sub-graphs, you can connect one or more AWS AppSync GraphQL and/or Merged APIs into a router-based solution. For example, see the reference blog for adding AWS AppSync APIs as a sub-graph using a router-based architecture with Apollo Federation v2: [Apollo GraphQL Federation with AWS AppSync](https://aws.amazon.com/blogs/mobile/federation-appsync-subgraph/). 

**Topics**
+ [Merged APIs and Federation](#merged-api-federation)
+ [Merged API conflict resolution](#merged-api-conflict-resolution)
+ [Configuring schemas](#configuring-schemas-merged-api)
+ [Configuring authorization modes](#configuring-authorization-merged-api)
+ [Configuring execution roles](#execution-roles-merged-api)
+ [Configuring cross-account Merged APIs using AWS RAM](#cross-account-merged-api)
+ [Merging](#merges)
+ [Additional support for Merged APIs](#merge-api-additional-support)
+ [Merged API limitations](#merged-api-limits)
+ [Merged API considerations](#merged-api-considerations)
+ [Creating Merged APIs](#creating-merged-api)

## Merged API conflict resolution
<a name="merged-api-conflict-resolution"></a>

In the event of a merge conflict, AWS AppSync provides users with several tools and examples to help troubleshoot the issue(s).

### Merged API schema directives
<a name="merged-api-schema-directive"></a>

AWS AppSync has introduced several GraphQL directives that can be used to reduce or resolve conflicts across source APIs:
+ *@canonical*: This directive sets the precedence of types/fields with similar names and data. If two or more source APIs have the same GraphQL type or field, one of the APIs can annotate their type or field as *canonical*, which will be prioritized during the merge. Conflicting types/fields that aren't annotated with this directive in other source APIs are ignored when merged. 
+ *@hidden*: This directive hides certain types/fields, removing them from the merging process. Teams may want to remove or hide specific types or operations in the source API so only internal clients can access specific typed data. With this directive attached, types or fields are not merged into the Merged API. 
+ *@renamed*: This directive changes the names of types/fields to reduce naming conflicts. There are situations where different APIs have the same type or field name. However, they all need to be available in the merged schema. A simple way to include them all in the Merged API is to rename the field to something similar but different. 

To show the utility schema directives provide, consider the following example:

In this example, let's assume that we want to merge two source APIs. We're given two schemas that create and retrieve posts (e.g., comment section or social media posts). Assuming that the types and fields are very similar, there's a high chance for conflict during a merge operation. The snippets below show the types and fields of each schema.

The first file, called *Source1.graphql*, is a GraphQL schema that allows a user to create `Posts` using the `putPost` mutation. Each `Post` contains a title and an ID. The ID is used to reference the `User`, or poster's information (email and address), and the `Message`, or the payload (content). The `User` type is annotated with the *@canonical* tag.

```
# This snippet represents a file called Source1.graphql

type Mutation {
    putPost(id: ID!, title: String!): Post
}

type Post {
    id: ID!
    title: String!
}

type Message {
   id: ID!
   content: String
}

type User @canonical {
   id: ID!
   email: String!
   address: String!
}

type Query {
    singlePost(id: ID!): Post
    getMessage(id: ID!): Message
}
```

The second file, called *Source2.graphql*, is a GraphQL schema that does much the same thing as *Source1.graphql*. However, notice that the fields of each type are different. When merging these two schemas, these differences will cause merge conflicts. 

Note that *Source2.graphql* also contains several directives to reduce these conflicts. The `Post` type is annotated with the *@hidden* directive to hide itself during the merge operation. The `Message` type is annotated with the *@renamed* directive to change the type name to `ChatMessage` in the event of a naming conflict with another `Message` type.

```
# This snippet represents a file called Source2.graphql

type Post @hidden  {
    id: ID!
    title: String!
    internalSecret: String!
}

type Message @renamed(to: "ChatMessage") {
   id: ID!
   chatId: ID!
   from: User!
   to: User!
}

# Stub user so that we can link the canonical definition from Source1
type User {
   id: ID!
}

type Query {
    getPost(id: ID!): Post
    getMessage(id: ID!): Message @renamed(to: "getChatMessage")
}
```

When the merge occurs, the result will produce the `MergedSchema.graphql` file:

```
# This snippet represents a file called MergedSchema.graphql

type Mutation {
    putPost(id: ID!, title: String!): Post
}

# Post from Source2 was hidden so only uses the Source1 definition. 
type Post {
    id: ID!
    title: String!
}

# Renamed from Message to resolve the conflict
type ChatMessage {
   id: ID!
   chatId: ID!
   from: User!
   to: User!
}

type Message {
   id: ID!
   content: String
}

# Canonical definition from Source1
type User {
   id: ID!
   email: String!
   address: String!
}

type Query {
    singlePost(id: ID!): Post
    getMessage(id: ID!): Message
    
    # Renamed from getMessage
    getChatMessage(id: ID!): ChatMessage
}
```

Several things occurred in the merge:
+ The `User` type from *Source1.graphql* was prioritized over the `User` from *Source2.graphql* due to the *@canonical* annotation.
+ The `Message` type from *Source1.graphql* was included in the merge. However, the `Message` from *Source2.graphql* had a naming conflict. Due to its *@renamed* annotation, it was also included in the merge but with the alternative name `ChatMessage`.
+ The `Post` type from *Source1.graphql* was included, but the `Post` type from *Source2.graphql* wasn't. Normally, there would be a conflict on this type, but because the `Post` type from *Source2.graphql* had a *@hidden* annotation, its data was obfuscated and not included in the merge. This resulted in no conflicts.
+ The `Query` type was updated to include the contents of both files. However, one `getMessage` query was renamed to `getChatMessage` due to the directive. This resolved the naming conflict between the two queries with the same name.
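
After the merge, a client calling the Merged API endpoint can use the original and renamed queries side by side. For example, the following operation (the ID value is illustrative) selects from both message types:

```
query GetBothMessageTypes {
    getMessage(id: "1") {
        content
    }
    getChatMessage(id: "1") {
        chatId
    }
}
```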

If no directives are added to a conflicting type, the merged type will include the union of all fields from all source definitions of that type. For instance, consider the following example:

This schema, called *Source1.graphql*, allows for creating and retrieving `Posts`. The configuration is similar to the previous example, but with fewer fields.

```
# This snippet represents a file called Source1.graphql

type Mutation {
    putPost(id: ID!, title: String!): Post
}

type Post  {
    id: ID!
    title: String!
}

type Query {
    getPost(id: ID!): Post
}
```

This schema, called *Source2.graphql*, allows for creating and retrieving `Reviews` (e.g., movie rating or restaurant reviews). `Reviews` are associated with the `Post` of the same ID value. Together, they contain the title, post ID, and payload message of the full review post.

When merging, there will be a conflict between the two `Post` types. Because there are no annotations to resolve this issue, the default behavior is to perform a union operation on the conflicting types.

```
# This snippet represents a file called Source2.graphql

type Mutation {
    putReview(id: ID!, postId: ID!, comment: String!): Review
}

type Post  {
    id: ID!
    reviews: [Review]
}

type Review {
   id: ID!
   postId: ID!
   comment: String!
}

type Query {
    getReview(id: ID!): Review
}
```

When the merge occurs, the result will produce the `MergedSchema.graphql` file:

```
# This snippet represents a file called MergedSchema.graphql

type Mutation {
    putReview(id: ID!, postId: ID!, comment: String!): Review
    putPost(id: ID!, title: String!): Post
}

type Post  {
    id: ID!
    title: String!
    reviews: [Review]
}

type Review {
   id: ID!
   postId: ID!
   comment: String!
}

type Query {
    getPost(id: ID!): Post
    getReview(id: ID!): Review
}
```

Several things occurred in the merge:
+ The `Mutation` type faced no conflicts and was merged.
+ The `Post` type fields were combined via a union operation. Notice how the union of the two definitions produced a single `id` field along with `title` and `reviews`.
+ The `Review` type faced no conflicts and was merged.
+ The `Query` type faced no conflicts and was merged.

### Managing resolvers on shared types
<a name="resolvers-shared-types-merged-api"></a>

In the above example, consider the case where *Source1.graphql* has configured a unit resolver on `Query.getPost` that uses a DynamoDB data source named `PostDatasource`. This resolver returns the `id` and `title` of a `Post` type. Now, consider that *Source2.graphql* has configured a pipeline resolver on `Post.reviews` that runs two functions. `Function1` has a `None` data source attached to perform custom authorization checks. `Function2` has a DynamoDB data source attached to query the `reviews` table.

```
query GetPostQuery {
    getPost(id: "1") {
        id
        title
        reviews {
            comment
        }
    }
}
```

When the query above is run by a client to the Merged API endpoint, the AWS AppSync service first runs the unit resolver for `Query.getPost` from `Source1`, which calls the `PostDatasource` and returns the data from DynamoDB. Then, it runs the `Post.reviews` pipeline resolver in which `Function1` performs custom authorization logic and `Function2` returns the reviews given the `id` found in `$context.source`. The service processes the request as a single GraphQL run, and this simple request will only require a single request token.
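
To illustrate, `Function2` might be written as an AWS AppSync JavaScript function similar to the sketch below. The index and key names are hypothetical, but reading the parent's `id` from `ctx.source` is how the runtime exposes `$context.source`:

```
import { util } from '@aws-appsync/utils';

export function request(ctx) {
    // Query the reviews table using the id of the parent Post
    // returned by the Query.getPost resolver.
    return {
        operation: 'Query',
        query: {
            expression: 'postId = :postId',
            expressionValues: util.dynamodb.toMapValues({ ':postId': ctx.source.id }),
        },
        index: 'postId-index', // hypothetical GSI on the reviews table
    };
}

export function response(ctx) {
    return ctx.result.items;
}
```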

### Managing resolver conflicts on shared types
<a name="resolver-conflict-shared-type-merged-api"></a>

Consider the case where `Source2` also implements a resolver on `Query.getPost`, returning multiple fields at a time rather than resolving them through a single field resolver. *Source1.graphql* may look like this:

```
# This snippet represents a file called Source1.graphql

type Post  {
    id: ID!
    title: String!
    date: AWSDateTime!
}

type Query {
    getPost(id: ID!): Post
}
```

*Source2.graphql* may look like this:

```
# This snippet represents a file called Source2.graphql

type Post  {
  id: ID!
  content: String!
  contentHash: String! 
  author: String! 
}

type Query {
    getPost(id: ID!): Post
}
```

Attempting to merge these two schemas will generate a merge error because AWS AppSync Merged APIs don't allow multiple source resolvers to be attached to the same field. To resolve this conflict, you can implement a field resolver pattern that requires *Source2.graphql* to add a separate type defining the fields that it owns on the `Post` type. In the following example, we add a type called `PostInfo`, which contains the `content`, `contentHash`, and `author` fields that will be resolved by *Source2.graphql*. *Source1.graphql* will implement the resolver attached to `Query.getPost`, while *Source2.graphql* will now attach a resolver to `Post.postInfo` to ensure that all data can be successfully retrieved:

```
type Post  {
  id: ID!
  postInfo: PostInfo
}

type PostInfo {
   content: String!
   contentHash: String!
   author: String!
}

type Query {
    getPost(id: ID!): Post
}
```

While resolving such a conflict requires source API schemas to be rewritten and, potentially, clients to change their queries, the advantage of this approach is that ownership of merged resolvers remains clear across source teams.
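
With this pattern in place, a client query against the Merged API selects the `Source2`-owned fields through the new nested type. For example:

```
query GetPostWithInfo {
    getPost(id: "1") {
        id
        postInfo {
            content
            author
        }
    }
}
```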

## Configuring schemas
<a name="configuring-schemas-merged-api"></a>

Two parties are responsible for configuring the schemas to create a Merged API:
+ **Merged API owners** - Merged API owners must configure the Merged API's authorization logic and advanced settings like logging, tracing, caching, and WAF support.
+ **Associated source API owners** - Associated source API owners must configure the schemas, resolvers, and data sources that make up the Merged API.

Because your Merged API’s schema is created from the schemas of your associated source APIs, it's **read only**. This means changes to the schema must be initiated in your source APIs. In the AWS AppSync console, you can toggle between your Merged schema and the individual schemas of the source APIs included in your Merged API using the drop-down list above the **Schema** window.

## Configuring authorization modes
<a name="configuring-authorization-merged-api"></a>

Multiple authorization modes are available to protect your Merged API. To learn more about authorization modes in AWS AppSync, see [Authorization and authentication](https://docs.aws.amazon.com/appsync/latest/devguide/security-authz.html).

The following authorization modes are available to use with Merged APIs:
+  **API key**: The simplest authorization strategy. All requests must include an API key under the `x-api-key` request header. Expired API keys are kept for 60 days after the expiration date. 
+  **AWS Identity and Access Management (IAM)**: The AWS IAM authorization strategy authorizes all requests that are signed with **AWS Signature Version 4 (SigV4)**. 
+  **Amazon Cognito User Pools**: Authorize your users via Amazon Cognito User Pools to achieve more fine-grained control. 
+  **AWS Lambda Authorizers**: A serverless function that allows you to authenticate and authorize access to your AWS AppSync API using custom logic.
+ **OpenID Connect**: This authorization type enforces OpenID Connect (OIDC) tokens provided by an OIDC-compliant service. Your application can leverage the users and privileges defined by your OIDC provider to control access.

The authorization modes of a Merged API are configured by the Merged API owner. At the time of a merge operation, the Merged API must include the primary authorization mode of each source API, either as its own primary authorization mode or as a secondary authorization mode. Otherwise, the APIs are incompatible, and the merge operation will fail with a conflict. When multi-auth directives are used in the source APIs, the merging process automatically merges these directives into the unified endpoint. Where the primary authorization mode of a source API doesn't match the primary authorization mode of the Merged API, AWS AppSync automatically adds auth directives to ensure that the authorization mode for the types in that source API is consistent.
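
As a sketch of this behavior, suppose a source API whose primary mode is API key is merged into a Merged API whose primary mode is IAM, with API key as a secondary mode. The merged schema would then carry explicit auth directives on that source API's types, similar to the following (the type and fields are illustrative):

```
# Added automatically during the merge so that API key callers
# retain access to types from the API key-based source API.
type Post @aws_api_key {
    id: ID!
    title: String!
}
```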

## Configuring execution roles
<a name="execution-roles-merged-api"></a>

When you create a Merged API, you need to define a service role. An AWS service role is an AWS Identity and Access Management (IAM) role that is used by AWS services to perform tasks on your behalf.

In this context, your Merged API needs to run resolvers that access data from the data sources configured in your source APIs. The required service role for this is the `mergedApiExecutionRole`, and it must have explicit access to run requests on the source APIs included in your Merged API via the `appsync:SourceGraphQL` IAM permission. During the run of a GraphQL request, the AWS AppSync service assumes this service role and authorizes the role to perform the `appsync:SourceGraphQL` action.

AWS AppSync supports allowing or denying this permission on specific top-level fields within the request, much as the IAM authorization mode works for AWS AppSync APIs. For non-top-level fields, AWS AppSync requires you to define the permission on the source API ARN itself. To restrict access to specific non-top-level fields in the Merged API, we recommend implementing custom logic within your Lambda function or hiding the source API fields from the Merged API using the *@hidden* directive. If you want to allow the role to perform all data operations within a source API, you can add the policy below. Note that the first resource entry allows access to all top-level fields, and the second entry covers child resolvers, which authorize against the source API resource itself: 

```
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["appsync:SourceGraphQL"],
        "Resource": [
            "arn:aws:appsync:us-west-2:123456789012:apis/YourSourceGraphQLApiId/*",
            "arn:aws:appsync:us-west-2:123456789012:apis/YourSourceGraphQLApiId"
        ]
    }]
}
```

If you want to limit the access to only a specific top-level field, you can use a policy like this:

```
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["appsync:SourceGraphQL"],
        "Resource": [
            "arn:aws:appsync:us-west-2:123456789012:apis/YourSourceGraphQLApiId/types/Query/fields/<Field-1>",
            "arn:aws:appsync:us-west-2:123456789012:apis/YourSourceGraphQLApiId"
        ]
    }]
}
```

You can also use the AWS AppSync console API creation wizard to generate a service role that allows your Merged API to access resources configured in source APIs in the same account as your Merged API. If your source APIs are not in the same account as your Merged API, you must first share your resources using AWS Resource Access Manager (AWS RAM). 

## Configuring cross-account Merged APIs using AWS RAM
<a name="cross-account-merged-api"></a>

When you create a Merged API, you can optionally associate source APIs from other accounts that have been shared via AWS Resource Access Manager (AWS RAM). AWS RAM helps you share your resources securely across AWS accounts, within your organization or organizational units (OUs), and with IAM roles and users.

AWS AppSync integrates with AWS RAM in order to support configuring and accessing source APIs across multiple accounts from a single Merged API. AWS RAM allows you to create a resource share, or a container of resources and the permission sets that will be shared for each of them. You can add AWS AppSync APIs to a resource share in AWS RAM. Within a resource share, AWS AppSync provides three different permission sets that can be associated with an AWS AppSync API in RAM:

1. `AWSRAMPermissionAppSyncSourceApiOperationAccess`: The default permission set that's added when sharing an AWS AppSync API in AWS RAM if no other permission is specified. This permission set is used for sharing a source AWS AppSync API with a Merged API owner. This permission set includes the permission for `appsync:AssociateMergedGraphqlApi` on the source API as well as the `appsync:SourceGraphQL` permission required to access the source API resources at runtime.

1. `AWSRAMPermissionAppSyncMergedApiOperationAccess`: This permission set should be configured when sharing a Merged API with a source API owner. This permission set will give the source API the ability to configure the Merged API including the ability to associate any source APIs owned by the target principal to the Merged API and to read and update the source API associations of the Merged API.

1. `AWSRAMPermissionAppSyncAllowSourceGraphQLAccess`: This permission set allows the `appsync:SourceGraphQL` permission to be used with an AWS AppSync API. It is intended to be used for sharing a source API with a Merged API owner. In contrast to the default permission set for source API operation access, this permission set only includes the runtime permission `appsync:SourceGraphQL`. If a user opts to share the Merged API operation access to a source API owner, they will also need to share this permission from the source API to the Merged API owner in order to have runtime access through the Merged API endpoint.

AWS AppSync also supports customer-managed permissions. When one of the provided AWS-managed permissions doesn't work, you can create your own customer-managed permission. Customer-managed permissions are managed permissions that you author and maintain by precisely specifying which actions can be performed under which conditions with resources shared using AWS RAM. AWS AppSync allows you to choose from the following actions when creating your own permission:

1. `appsync:AssociateSourceGraphqlApi`

1. `appsync:AssociateMergedGraphqlApi`

1. `appsync:GetSourceApiAssociation`

1. `appsync:UpdateSourceApiAssociation`

1. `appsync:StartSchemaMerge`

1. `appsync:ListTypesByAssociation`

1. `appsync:SourceGraphQL`
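
For example, a customer-managed permission for sharing a source API with a Merged API owner might grant only association and runtime access. The policy template below is a sketch; AWS RAM supplies the principal and resource when the share is created:

```
{
    "Effect": "Allow",
    "Action": [
        "appsync:AssociateMergedGraphqlApi",
        "appsync:SourceGraphQL"
    ]
}
```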

Once you have properly shared a source API or Merged API in AWS RAM and, if necessary, the resource share invitation has been accepted, it will be visible in the AWS AppSync console when you create or update the source API associations on your Merged API. You can also list all AWS AppSync APIs that have been shared using AWS RAM with your account regardless of the permission set by calling the `ListGraphqlApis` operation provided by AWS AppSync and using the `OTHER_ACCOUNTS` owner filter. 

**Note**  
Sharing via AWS RAM requires the caller in AWS RAM to have permission to perform the `appsync:PutResourcePolicy` action on any API that is being shared. 

## Merging
<a name="merges"></a>

### Managing merges
<a name="managing-merges"></a>

Merged APIs are meant to support team collaboration on a unified AWS AppSync endpoint. Teams can independently evolve their own isolated source GraphQL APIs in the backend while the AWS AppSync service manages the integration of the resources into the single Merged API endpoint in order to reduce friction in collaboration and decrease development lead times.

### Auto-merges
<a name="auto-merge"></a>

Source APIs associated with your AWS AppSync Merged API can be configured to automatically merge (auto-merge) into the Merged API after any changes are made to the source API. This ensures that changes from the source API are always propagated to the Merged API endpoint in the background. Any change in the source API schema will be updated in the Merged API as long as it does not introduce a merge conflict with an existing definition in the Merged API. If the update in the source API changes a resolver, data source, or function, the imported resource will also be updated.

When a new conflict is introduced that cannot be automatically resolved (auto-resolved), the Merged API schema update is rejected due to an unsupported conflict during the merge operation. The error message is available in the console for each source API association that has a status of `MERGE_FAILED`. You can also inspect the error message by calling the `GetSourceApiAssociation` operation for a given source API association using the AWS SDK or the AWS CLI like so:

```
aws appsync get-source-api-association --merged-api-identifier <Merged API ARN> --association-id <SourceApiAssociation id>
```

This will produce a result in the following format:

```
{
    "sourceApiAssociation": {
        "associationId": "<association id>",
        "associationArn": "<association arn>",
        "sourceApiId": "<source api id>",
        "sourceApiArn": "<source api arn>",
        "mergedApiArn": "<merged api arn>",
        "mergedApiId": "<merged api id>",
        "sourceApiAssociationConfig": {
            "mergeType": "MANUAL_MERGE"
        },
        "sourceApiAssociationStatus": "MERGE_FAILED",
        "sourceApiAssociationStatusDetail": "Unable to resolve conflict on object with name title: Merging is not supported for fields with different types."
    }
}
```

### Manual merges
<a name="manual-merges"></a>

The default setting for a source API is a manual merge. To merge any changes that have occurred in the source APIs since the Merged API was last updated, the source API owner can invoke a manual merge from the AWS AppSync console or via the `StartSchemaMerge` operation available in the AWS SDK and AWS CLI.
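
For example, using the AWS CLI (substitute your own Merged API ARN and association ID):

```
aws appsync start-schema-merge --merged-api-identifier <Merged API ARN> --association-id <SourceApiAssociation id>
```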

## Additional support for Merged APIs
<a name="merge-api-additional-support"></a>

### Configuring subscriptions
<a name="config-subscription"></a>

Unlike router-based approaches to GraphQL schema composition, AWS AppSync Merged APIs provide built-in support for GraphQL subscriptions. All subscription operations defined in your associated source APIs will automatically merge and function in your Merged API without modification. To learn more about how AWS AppSync supports subscriptions via serverless WebSocket connections, see [Real-time data](https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-data.html).

### Configuring observability
<a name="config-observability"></a>

AWS AppSync Merged APIs provide built-in logging, monitoring, and metrics via [Amazon CloudWatch](https://docs.aws.amazon.com/appsync/latest/devguide/monitoring.html). AWS AppSync also provides built-in support for tracing via [AWS X-Ray](https://docs.aws.amazon.com/appsync/latest/devguide/x-ray-tracing.html). 

### Configuring custom domains
<a name="config-custom-domain"></a>

AWS AppSync Merged APIs provide built-in support for using custom domains with your Merged API's [GraphQL and Real-time endpoints](https://docs.aws.amazon.com/appsync/latest/devguide/custom-domain-name.html). 

### Configuring caching
<a name="config-caching"></a>

AWS AppSync Merged APIs provide built-in support for optionally caching request-level and/or resolver-level responses as well as response compression. To learn more, see [Caching and compression](https://docs.aws.amazon.com/appsync/latest/devguide/enabling-caching.html). 

### Configuring private APIs
<a name="config-private-api"></a>

AWS AppSync Merged APIs provide built-in support for Private APIs that limit access to your Merged API’s GraphQL and Real-time endpoints to traffic originating from [VPC endpoints that you can configure](https://docs.aws.amazon.com/appsync/latest/devguide/using-private-apis.html). 

### Configuring firewall rules
<a name="config-firewall"></a>

AWS AppSync Merged APIs provide built-in support for AWS WAF, which enables you to protect your APIs by defining [web application firewall rules](https://docs.aws.amazon.com/appsync/latest/devguide/WAF-Integration.html). 

### Configuring audit logs
<a name="config-audit"></a>

AWS AppSync Merged APIs provide built-in support for AWS CloudTrail, which enables you to [configure and manage audit logs](https://docs.aws.amazon.com/appsync/latest/devguide/cloudtrail-logging.html). 

## Merged API limitations
<a name="merged-api-limits"></a>

When developing Merged APIs, take note of the following rules:

1. A Merged API cannot be a source API for another Merged API.

1. A source API cannot be associated with more than one Merged API.

1. The default size limit for a Merged API schema document is 10 MB.

1. The default number of source APIs that can be associated with a Merged API is 10. However, you can request a limit increase if you need more than 10 source APIs in your Merged API.

## Merged API considerations
<a name="merged-api-considerations"></a>

When designing and implementing Merged APIs, consider the following:

Merging multiple source APIs into a single endpoint can increase the size and complexity of your GraphQL schema and queries. As your merged schema grows, queries may need to traverse multiple resolvers to fulfill a single request, which can add latency to your overall request time. For example, a query that accesses fields from multiple source APIs may require AWS AppSync to execute resolvers from each source API in sequence, with each resolver adding to the total response time.

We strongly recommend that you test your Merged APIs thoroughly during development and under realistic load conditions to ensure they meet your business requirements. Pay specific attention to:
+ The depth and complexity of your merged schema, particularly queries that access fields across multiple source APIs.
+ The number of resolvers that must execute to fulfill common query patterns.
+ The performance characteristics of your data sources and resolvers under expected load.
+ The impact of network latency when accessing resources across multiple source APIs.

Consider implementing performance optimizations such as caching, batching data source requests, and designing your source API schemas to minimize the number of resolver executions required for common operations.

## Creating Merged APIs
<a name="creating-merged-api"></a>

**To create a Merged API in the console**

1. Sign in to the AWS Management Console and open the [AWS AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **Dashboard**, choose **Create API**.

1. Choose **Merged API**, then choose **Next**.

1. In the **Specify API details** page, enter the following information: 

   1. Under **API Details**, enter the following information:

      1. Specify your merged API’s **API name**. This field is a way to label your GraphQL API to conveniently distinguish it from other GraphQL APIs. 

      1. Specify the **Contact details**. This field is optional and attaches a name or group to the GraphQL API. It’s not linked to or generated by other resources and works much like the API name field. 

   1. Under **Service role**, you must attach an IAM execution role to your merged API so that AWS AppSync can securely import and use your resources at runtime. You can choose to **Create and use a new service role**, which will allow you to specify the policies and resources that AWS AppSync will use. You can also import an existing IAM role by choosing **Use an existing service role**, then selecting the role from the drop-down list. 

   1. Under **Private API configuration**, you can choose to enable private API features. Note that this choice cannot be changed after creating the merged API. For more information about private APIs, see [Using AWS AppSync Private APIs](https://docs.aws.amazon.com/appsync/latest/devguide/using-private-apis.html). 

      Choose **Next** after you're done. 

1. Next, you must add the GraphQL APIs that will be used as the foundation for your Merged API. In the **Select source APIs** page, enter the following information: 

   1. In the **APIs from your AWS account** table, choose **Add Source APIs**. In the list of GraphQL APIs, each entry contains the following data:

      1. **Name**: The GraphQL API’s **API name** field. 

      1. **API ID**: The GraphQL API’s unique ID value.

      1. **Primary auth mode**: The default authorization mode for the GraphQL API. For more information about authorization modes in AWS AppSync, see [Authorization and authentication](https://docs.aws.amazon.com/appsync/latest/devguide/security-authz.html). 

      1. **Additional auth mode**: The secondary authorization modes that were configured in the GraphQL API.

      1. Choose the APIs that you will use in the Merged API by selecting the checkbox next to each API's **Name** field. Afterwards, choose **Add Source APIs**. The selected GraphQL APIs will appear in the **APIs from your AWS account** table.

   1. In the **APIs from other AWS accounts** table, choose **Add Source APIs**. The GraphQL APIs in this list come from other accounts that are sharing their resources with your account through AWS Resource Access Manager (AWS RAM). The process for selecting GraphQL APIs in this table is the same as in the previous section. For more information about sharing resources through AWS RAM, see [What is AWS Resource Access Manager?](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html).

      Choose **Next** after you're done.

   1. Add your primary auth mode. See [Authorization and authentication](https://docs.aws.amazon.com/appsync/latest/devguide/security-authz.html) for more information. Choose **Next**.

   1. Review your inputs, then choose **Create API**.

# Building GraphQL APIs with RDS introspection
<a name="rds-introspection"></a>

AWS AppSync's introspection utility can discover models from database tables and propose GraphQL types. The AWS AppSync console's Create API wizard can instantly generate an API from an Aurora MySQL or PostgreSQL database. It automatically creates types and JavaScript resolvers to read and write data.

AWS AppSync provides direct integration with Amazon Aurora databases through the Amazon RDS Data API. Rather than requiring a persistent database connection, the Amazon RDS Data API offers a secure HTTP endpoint that AWS AppSync connects to for running SQL statements. You can use this to create a relational database API for your MySQL and PostgreSQL workloads on Aurora.

Building an API for your relational database with AWS AppSync has several advantages:
+ Your database is not directly exposed to clients, decoupling the access point from the database itself. 
+ You can build purpose-built APIs tailored to the needs of different applications, removing the need for custom business logic in frontends. This aligns with the Backend-For-Frontend (BFF) pattern. 
+ Authorization and access control can be implemented at the AWS AppSync layer using various authorization modes to control access. No additional compute resources are required to connect to the database, such as hosting a web server or proxying connections. 
+ Real-time capabilities can be added via subscriptions, with data mutations made through AppSync automatically pushed to connected clients. 
+ Clients can connect to the API over HTTPS using common ports like 443.

AWS AppSync provides integrated JavaScript utilities to simplify writing SQL statements in resolvers. You can use AWS AppSync's `sql` tag templates for static statements with dynamic values, or the `rds` module utilities to build statements programmatically. See the [resolver function reference for RDS](https://docs.aws.amazon.com//appsync/latest/devguide/resolver-reference-rds-js.html) data sources and [built-in modules](https://docs.aws.amazon.com//appsync/latest/devguide/built-in-modules-js.html#built-in-rds-modules) for more. 
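
As a sketch of how these utilities fit together (the `posts` table and its `id` column are hypothetical), a JavaScript resolver using the `sql` tag template might look like this:

```
import { sql, createPgStatement, toJsonObject } from '@aws-appsync/utils/rds';

export function request(ctx) {
    // Build a parameterized SQL statement; dynamic values such as
    // ctx.args.id are sent to the Data API as variables, not inlined.
    return createPgStatement(sql`SELECT * FROM posts WHERE id = ${ctx.args.id}`);
}

export function response(ctx) {
    // Convert the Data API result set into JSON objects and
    // return the first row of the first statement's results.
    return toJsonObject(ctx.result)[0][0];
}
```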

## Using the introspection feature (console)
<a name="using-introspection-console"></a>

For a detailed tutorial and getting started guide, see [Tutorial: Aurora PostgreSQL Serverless with Data API](https://docs.aws.amazon.com//appsync/latest/devguide/aurora-serverless-tutorial-js.html). 

The AWS AppSync console allows you to create an AWS AppSync GraphQL API from your existing Aurora database configured with the Data API in just a few minutes. This quickly generates an operational schema based on your database configuration. You can use the API as-is or build on it to add features. 

1. Sign in to the AWS Management Console and open the [AppSync console](https://console.aws.amazon.com/appsync/).

   1. In the **Dashboard**, choose **Create API**.

1. Under **API options**, choose **GraphQL APIs**, **Start with an Amazon Aurora cluster**, then **Next**.

   1. Enter an **API name**. This will be used as an identifier for the API in the console.

   1. For **contact details**, you can enter a point of contact to identify a manager for the API. This is an optional field.

   1. Under **Private API configuration**, you can enable private API features. A private API can only be accessed from a configured VPC endpoint (VPCE). For more information, see [Private APIs](https://docs.aws.amazon.com//appsync/latest/devguide/using-private-apis.html).

      We don't recommend enabling this feature for this example. Choose **Next** after reviewing your inputs.

1. In the **Database** page, choose **Select database**.

   1. Choose your database from your cluster. First, choose the **Region** in which your cluster exists.

   1. Choose the **Aurora cluster** from the drop-down list. Note that you must have created the cluster and [enabled](https://docs.aws.amazon.com//AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.enabling) its Data API before using the resource.

   1. Next, you must add the credentials for your database to the service. This is primarily done using AWS Secrets Manager. Choose the **Region** in which your secret exists. For more information on how to retrieve secret information, see [Find secrets](https://docs.aws.amazon.com//secretsmanager/latest/userguide/manage_search-secret.html) or [Retrieve secrets](https://docs.aws.amazon.com//secretsmanager/latest/userguide/retrieving-secrets.html).

   1. Add your secret from the drop-down list. Note that the user must have [read permissions](https://docs.aws.amazon.com//AmazonRDS/latest/UserGuide/security_iam_id-based-policy-examples.html#security_iam_id-based-policy-examples-console) for your database.

1. Choose **Import**.

   AWS AppSync will start introspecting your database, discovering tables, columns, primary keys, and indexes. It checks that the discovered tables can be supported in a GraphQL API. Note that to support creating new rows, a table needs a primary key, which can consist of multiple columns. AWS AppSync maps table columns to type fields as follows:
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/rds-introspection.html)

1. Once table discovery is complete, the **Database** section will be populated with your information. In the new **Database tables** section, the data from the table may already be populated and converted to a type for your schema. If any required tables are missing, choose **Add tables**, select the checkboxes for those types in the modal that appears, then choose **Add**.

   To remove a type from the **Database tables** section, click on the checkbox next to the type you want to remove, then choose **Remove**. The removed types will be placed in the **Add tables** modal if you want to add them again later.

   Note that AWS AppSync uses the table names as type names, but you can rename them. For example, you can change a plural table name like *movies* to the type name *Movie*. To rename a type in the **Database tables** section, click on the checkbox of the type you want to rename, then click on the *pencil* icon in the **Type name** column.

   To preview the content of the schema based on your selections, choose **Preview schema**. Note that this schema cannot be empty, so you must have at least one table converted to a type. The schema also cannot exceed 1 MB in size.

   1. Under **Service role**, choose whether to create a new service role specifically for this import or use an existing role.

1. Choose **Next**.

1. Next, choose whether to create a read-only API (queries only) or an API for reading and writing data (with queries and mutations). The latter also supports real-time subscriptions triggered by mutations. 

1. Choose **Next**.

1. Review your choices and then choose **Create API**. AWS AppSync will create the API and attach resolvers to queries and mutations. The generated API is fully operational and can be extended as needed. 
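As a hedged illustration of the mapping performed during table discovery, the sketch below pairs the PostgreSQL column types from this guide's example `todos` table with the GraphQL scalars that appear in the introspection result later in this section (serial to `Int`, text to `String`, timestamp to `AWSDateTime`). The helper function here is hypothetical, and the pairs shown are not the complete official mapping; consult the linked documentation for the authoritative table.

```javascript
// Hypothetical sketch of how introspection maps PostgreSQL column types
// to GraphQL scalars. The pairs below are taken from the example
// introspection result in this guide and are not the full official mapping.
const scalarFor = {
  serial: 'Int',
  text: 'String',
  timestamp: 'AWSDateTime',
};

// A primary-key column becomes non-null (`!`) in the generated type.
function fieldType(column) {
  const scalar = scalarFor[column.pgType] ?? 'String';
  return column.primaryKey ? `${scalar}!` : scalar;
}

console.log(fieldType({ pgType: 'serial', primaryKey: true }));   // Int!
console.log(fieldType({ pgType: 'text', primaryKey: false }));    // String
console.log(fieldType({ pgType: 'timestamp', primaryKey: false })); // AWSDateTime
```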

## Using the introspection feature (API)
<a name="using-introspection-api"></a>

You can use the `StartDataSourceIntrospection` API to discover models in your database programmatically. For more details on the command, see the [StartDataSourceIntrospection](https://docs.aws.amazon.com/appsync/latest/APIReference/API_StartDataSourceIntrospection.html) API reference.

To use `StartDataSourceIntrospection`, provide your Aurora cluster Amazon Resource Name (ARN), database name, and AWS Secrets Manager secret ARN. The command starts the introspection process. You can retrieve the results with the `GetDataSourceIntrospection` command, and you can specify whether that command should return the Schema Definition Language (SDL) string for the discovered models. This is useful for generating a GraphQL schema definition directly from the discovered models.

For example, if you have the following data definition language (DDL) statement for a simple `Todos` table:

```
create table if not exists public.todos  
(  
id serial constraint todos_pk primary key,  
description text,  
due timestamp,  
"createdAt" timestamp default now()  
);
```

Start the introspection with the following command.

```
aws appsync start-data-source-introspection \
  --rds-data-api-config resourceArn=<cluster-arn>,secretArn=<secret-arn>,databaseName=database
```

Next, use the `GetDataSourceIntrospection` command to retrieve the result.

```
aws appsync get-data-source-introspection \
  --introspection-id a1234567-8910-abcd-efgh-identifier \
  --include-models-sdl
```

This returns the following result.

```
{
    "introspectionId": "a1234567-8910-abcd-efgh-identifier",
    "introspectionStatus": "SUCCESS",
    "introspectionStatusDetail": null,
    "introspectionResult": {
        "models": [
            {
                "name": "todos",
                "fields": [
                    {
                        "name": "description",
                        "type": {
                            "kind": "Scalar",
                            "name": "String",
                            "type": null,
                            "values": null
                        },
                        "length": 0
                    },
                    {
                        "name": "due",
                        "type": {
                            "kind": "Scalar",
                            "name": "AWSDateTime",
                            "type": null,
                            "values": null
                        },
                        "length": 0
                    },
                    {
                        "name": "id",
                        "type": {
                            "kind": "NonNull",
                            "name": null,
                            "type": {
                                "kind": "Scalar",
                                "name": "Int",
                                "type": null,
                                "values": null
                            },
                            "values": null
                        },
                        "length": 0
                    },
                    {
                        "name": "createdAt",
                        "type": {
                            "kind": "Scalar",
                            "name": "AWSDateTime",
                            "type": null,
                            "values": null
                        },
                        "length": 0
                    }
                ],
                "primaryKey": {
                    "name": "PRIMARY_KEY",
                    "fields": [
                        "id"
                    ]
                },
                "indexes": [],
                "sdl": "type todos\n{\ndescription: String\n\ndue: AWSDateTime\n\nid: Int!\n\ncreatedAt: AW
SDateTime\n}\n"
            }
        ],
        "nextToken": null
    }
}
```
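The `sdl` field in the response is a JSON-escaped string whose `\n` escapes become real newlines once the JSON is parsed. As an illustration, the sketch below takes the `sdl` value from the response above and extracts its field definitions; the splitting logic is just one way to inspect the string, not part of any AppSync API.

```javascript
// The `sdl` value from the introspection result above, written as a JS
// string literal (the \n escapes in the JSON become real newlines).
const sdl = 'type todos\n{\ndescription: String\n\ndue: AWSDateTime\n\nid: Int!\n\ncreatedAt: AWSDateTime\n}\n';

// Printing the string yields the readable GraphQL type definition.
console.log(sdl);

// Extract just the field definitions: lines containing a colon.
const fields = sdl
  .split('\n')
  .map((line) => line.trim())
  .filter((line) => line.includes(':'));

console.log(fields);
// [ 'description: String', 'due: AWSDateTime', 'id: Int!', 'createdAt: AWSDateTime' ]
```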