

# GraphQL and AWS AppSync architecture
<a name="graphql-overview"></a>

**Note**  
This guide assumes the user has a working knowledge of the REST architectural style. We recommend reviewing this and other front-end topics before working with GraphQL and AWS AppSync.

GraphQL is a query and manipulation language for APIs. GraphQL provides a flexible and intuitive syntax to describe data requirements and interactions. It enables developers to ask for exactly what is needed and get back predictable results. It also makes it possible to access many sources in a single request, reducing the number of network calls and bandwidth requirements, therefore saving battery life and CPU cycles consumed by applications. 

Making updates to data is made simple with mutations, allowing developers to describe how the data should change. GraphQL also facilitates the quick setup of real-time solutions via subscriptions. All of these features combined, coupled with powerful developer tools, make GraphQL essential to managing application data.

GraphQL is an alternative to REST. RESTful architecture is currently one of the more popular solutions for client-server communication. It's centered on the concept of your resources (data) being exposed by a URL. These URLs can be used to access and manipulate the data through CRUD (create, read, update, delete) operations in the form of HTTP methods like `GET`, `POST`, and `DELETE`. REST's advantage is that it's relatively simple to learn and implement. You can quickly set up RESTful APIs to call a wide range of services. 

However, technology is getting more complicated. As applications, tools, and services begin to scale for a worldwide audience, the need for fast, scalable architectures is of paramount importance. REST has many shortcomings when dealing with scalable operations. See this [use case](https://aws.amazon.com/blogs/architecture/what-to-consider-when-modernizing-apis-with-graphql-on-aws/) for an example.

In the following sections, we'll review some of the concepts surrounding RESTful APIs. We'll then introduce GraphQL and how it works.

For more information about GraphQL and the benefits of migrating over to AWS, see the [Decision guide to GraphQL implementations](https://aws.amazon.com/graphql/guide/).

**Topics**
+ [What is an API](what-is-an-api.md)
+ [What is REST](what-is-rest.md)
+ [What is GraphQL](what-is-graphql.md)
+ [Comparing REST and GraphQL](comparing-rest-graphql.md)
+ [Why Use GraphQL over REST](why-use-graphql.md)
+ [Components of a GraphQL API](api-components.md)
+ [Additional properties of GraphQL](graphql-properties.md)

# What is an API?
<a name="what-is-an-api"></a>

An application programming interface (API) defines the rules that you must follow to communicate with other software systems. Developers expose or create APIs so that other applications can communicate with their applications programmatically. For example, a timesheet application exposes an API that asks for an employee's full name and a range of dates. When it receives this information, it internally processes the employee's timesheet and returns the number of hours worked in that date range.

You can think of a web API as a gateway between clients and resources on the web.

## Clients
<a name="what-is-a-client"></a>

Clients are users who want to access information from the web. The client can be a person or a software system that uses the API. For example, developers can write programs that access weather data from a weather system. Or you can access the same data from your browser when you visit the weather website directly.

## Resources
<a name="what-is-a-resource"></a>

Resources are the information that different applications provide to their clients. Resources can be images, videos, text, numbers, or any type of data. The machine that gives the resource to the client is also called the server. Organizations use APIs to share resources and provide web services while maintaining security, control, and authentication. In addition, APIs help them to determine which clients get access to specific internal resources.

# What is REST?
<a name="what-is-rest"></a>

At a high level, Representational State Transfer (REST) is a software architectural style that imposes conditions on how an API should work. REST was initially created as a guideline for managing communication on a complex network like the internet. You can use REST-based architecture to support high-performing and reliable communication at scale. It's easy to implement and modify, bringing visibility and cross-platform portability to any API system.

API developers can design APIs using several different architectures. APIs that follow the REST architectural style are called REST APIs. Web services that implement REST architecture are called RESTful web services. The term RESTful API generally refers to RESTful web APIs. However, you can use the terms REST API and RESTful API interchangeably.

The following are some of the principles of the REST architectural style:

## Uniform interface
<a name="uniform-interface"></a>

The uniform interface is fundamental to the design of any RESTful web service. It indicates that the server transfers information in a standard format. The formatted resource is called a *representation* in REST. This format can differ from the internal representation of the resource on the server application. For example, the server can store data as text but send it in an HTML representation format.

The uniform interface imposes four architectural constraints:

1.  Requests should identify resources. They do so by using a uniform resource identifier. 

1.  Clients have enough information in the resource representation to modify or delete the resource if they want to. The server meets this condition by sending metadata that describes the resource further. 

1.  Clients receive information about how to process the representation further. The server achieves this by sending self-descriptive messages that contain metadata about how the client can best use them. 

1.  Clients receive information about all other related resources they need to complete a task. The server achieves this by sending hyperlinks in the representation so that clients can dynamically discover more resources. 

## Statelessness
<a name="statelessness"></a>

In REST architecture, statelessness refers to a communication method in which the server completes every client request independently of all previous requests. Clients can request resources in any order, and every request is stateless or isolated from other requests. This REST API design constraint implies that the server can completely understand and fulfill the request every time. 

## Layered system
<a name="layered-system"></a>

In a layered system architecture, the client can connect to other authorized intermediaries between the client and server, and it will still receive responses from the server. Servers can also pass on requests to other servers. You can design your RESTful web service to run on several servers with multiple layers such as security, application, and business logic, working together to fulfill client requests. These layers remain invisible to the client.

## Cacheability
<a name="cacheability"></a>

RESTful web services support caching, which is the process of storing some responses on the client or on an intermediary to improve server response time. For example, suppose that you visit a website that has common header and footer images on every page. Every time you visit a new website page, the server must resend the same images. To avoid this, the client caches or stores these images after the first response and then uses the images directly from the cache. RESTful web services control caching by using API responses that define themselves as cacheable or noncacheable.

## What is a RESTful API?
<a name="what-is-a-restful-api"></a>

A RESTful API is an interface that two computer systems use to exchange information securely over the internet. Most business applications have to communicate with other internal and third-party applications to perform various tasks. For example, to generate monthly payslips, your internal accounts system has to share data with your customer's banking system to automate invoicing and communicate with an internal timesheet application. RESTful APIs support this information exchange because they follow secure, reliable, and efficient software communication standards.

## How do RESTful APIs work?
<a name="how-do-restful-apis-work"></a>

The basic function of a RESTful API is the same as browsing the internet. The client contacts the server by using the API when it requires a resource. API developers explain how the client should use the REST API in the server application API documentation. These are the general steps for any REST API call:

1.  The client sends a request to the server. The client follows the API documentation to format the request in a way that the server understands. 

1.  The server authenticates the client and confirms that the client has the right to make that request. 

1.  The server receives the request and processes it internally. 

1.  The server returns a response to the client. The response contains information that tells the client whether the request was successful. The response also includes any information that the client requested. 

The REST API request and response details vary slightly depending on how the API developers design the API.

# What is GraphQL?
<a name="what-is-graphql"></a>

GraphQL is both a query language for APIs and a runtime for executing those queries. GraphQL allows clients to request exactly the data they need, providing a more flexible and efficient alternative to REST in many scenarios. Unlike REST, which relies on predefined endpoints, GraphQL uses a single endpoint where clients can specify their data requirements in the form of queries and mutations. 

See [Components of a GraphQL API](https://docs.aws.amazon.com/appsync/latest/devguide/api-components.html) for more information on how GraphQL APIs are structured.
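To make this concrete, here's a sketch of what a client request might look like against a hypothetical GraphQL endpoint. The `getPost` field and its argument are illustrative only, not part of any real API:

```
# A query sent to the single GraphQL endpoint. The client names
# exactly the fields it wants; nothing more is returned.
query {
  getPost(id: "123") {
    title
    author
  }
}
```

The server responds with a JSON object whose shape mirrors the query: a `getPost` object containing only `title` and `author`.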

# Comparing REST and GraphQL
<a name="comparing-rest-graphql"></a>

APIs (Application Programming Interfaces) play a crucial role in facilitating data exchange between applications. As stated earlier, two prominent approaches for designing APIs have emerged: GraphQL and REST. While both serve the fundamental purpose of enabling client-server communication, they differ significantly in their implementation and use cases.

GraphQL and REST share several key characteristics: 

1. **Client-Server Model**: Both use a client-server architecture for data exchange. 

1. **Statelessness**: Neither maintains client session information between requests. 

1. **HTTP-Based**: Both typically use HTTP as the underlying communication protocol. 

1. **Resource-Oriented Design**: Both design their data interchange around resources, which refer to any data or object that the client can access and manipulate through the API. 

1. **Data Format Flexibility**: JSON is the most commonly used data exchange format in both, though other formats like XML and HTML are also supported. 

1. **Language and Database Agnostic**: Both can work with any programming language or database structure, making them highly interoperable. 

1. **Caching Support**: Both support caching, allowing clients and servers to store frequently accessed data for improved performance. 

While sharing some fundamental principles, GraphQL and REST differ significantly in their approach to API design and data fetching:

1. **Request Structure and Data Fetching**

   REST uses different HTTP methods (GET, POST, PUT, DELETE) to perform operations on resources. This often requires multiple endpoints for different resources, which can lead to inefficiencies in data retrieval. For example, running a GET operation to retrieve a user's data may lead to data over-fetching or under-fetching. To get the correct data, the client may have to discard unneeded fields or make multiple calls. 

   GraphQL uses a single endpoint for all operations. It relies on queries for fetching data and mutations for modifying data. Clients can use queries to fetch exactly the data they need in a single request, which reduces network overhead by minimizing data transfer. 

1. **Server-side Schema**

   REST doesn't require a server-side schema, though one can be optionally defined for efficient API design and documentation.

   GraphQL uses a strongly-typed server-side schema to define data and data services. The schema, written in GraphQL Schema Definition Language (SDL), includes object types and fields for each object and server-side resolver functions that define operations for each field.

1. **Versioning**

   REST often includes versioning in the URL, which can lead to maintaining multiple API versions simultaneously. Versioning is not mandatory but can help prevent breaking changes. 

   GraphQL promotes a continuous evolution of the API without explicit versioning by requiring backward compatibility. Deleted fields return error messages, while deprecation tags phase out old fields and return warning messages. 

1. **Error Handling** 

   REST is weakly typed, requiring error handling to be built into the surrounding code. This may not automatically identify type-related errors (e.g., parsing a number as text). 

   By contrast, GraphQL is strongly typed and requires a comprehensive schema definition. This allows your service to automatically identify many request errors with a high level of detail.

1. **Use Cases**

   REST is better suited for: 
   + Smaller applications with less complex data requirements. 
   + Scenarios where data and operations are used similarly by all clients. 
   + Applications without complex data querying needs. 

   GraphQL is better suited for: 
   + Scenarios with limited bandwidth, where minimizing requests and responses is crucial. 
   + Applications with multiple data sources that need to be combined at a single endpoint. 
   + Cases where client requests vary significantly and expect different response structures.

   Note that it's possible to use both GraphQL and REST APIs within a single application for different areas of functionality. Furthermore, you can upgrade a RESTful API to include GraphQL capabilities without a complete rewrite. See [How to build GraphQL resolvers for AWS data sources](https://aws.amazon.com/graphql/resolvers/) for an example.
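As an illustration of the versioning difference, GraphQL's built-in `@deprecated` directive lets you phase out a field without breaking existing clients. The field names in this sketch are hypothetical:

```
type Person {
  id: ID!
  name: String
  # Clients querying this field keep working, but GraphQL
  # tooling surfaces the deprecation warning and reason.
  fullName: String @deprecated(reason: "Use name instead.")
}
```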

# Why Use GraphQL over REST?
<a name="why-use-graphql"></a>

REST is one of the cornerstone architectural styles of web APIs. However, as the world becomes more interconnected, the need to develop robust and scalable applications becomes more pressing. While REST is often used to build web APIs, several recurring drawbacks of RESTful implementations have been identified:

1. **Data requests**: Using RESTful APIs, you would typically request the data you need through endpoints. The problem arises when you have data that may not be so neatly packaged. The data you need may be sitting behind multiple layers of abstraction, and the only way to fetch the data is by using multiple endpoints, which means making multiple requests to extract all of the data.

1. **Overfetching and underfetching**: To add to the problems of multiple requests, the data from each endpoint is strictly defined, meaning you will return whatever data was defined for that API, even if you didn't technically want it.

   This can result in *over-fetching*, which means our requests return superfluous data. For example, let's say you're requesting company personnel data and want to know the names of the employees in a certain department. The endpoint that returns the data will contain the names, but it might also contain other data like job title or date of birth. Because the API is fixed, you can't just request the names alone; the rest of the data comes with it.

   The opposite situation, in which we don't return enough data, is called *under-fetching*. To get all of the requested data, you may have to make multiple requests to the service. Depending on how the data was structured, you could run into inefficient queries, resulting in issues like the dreaded N+1 problem.

1. **Slow development iterations**: Many developers tailor their RESTful APIs to fit the flow of their applications. However, as their applications grow, both the front- and backends may require extensive changes. As a result, the APIs may no longer fit the shape of the data in a way that's efficient or impactful. This results in slower product iterations due to the need for API modifications.

1. **Performance at scale**: Due to these compounding issues, there are many areas where scalability will be impacted. Performance on the application side may be impacted because your requests will return too much data or too little (resulting in more requests). Both situations cause unnecessary strain on the network resulting in poor performance. On the developer side, the speed of development may be reduced because your APIs are fixed and no longer fit the data they're requesting.

GraphQL's selling point is to overcome the drawbacks of REST. Here are some of the key solutions GraphQL offers to developers:

1. **Single endpoint**: GraphQL uses a single endpoint to query data. There's no need to build multiple APIs to fit the shape of the data. This results in fewer requests going over the network.

1. **Fetching**: GraphQL solves the perennial issues of over- and under-fetching by simply defining the data you need. GraphQL lets you shape the data to fit your needs so you only receive what you asked for.

1. **Abstraction**: GraphQL APIs contain a few components and systems that describe the data using a language-agnostic standard. In other words, the shape and structure of the data are standardized so both the front- and backends know how it will be sent over the network. This allows developers on both ends to work with GraphQL's systems and not around them.

1. **Rapid iterations**: Because of the standardization of data, changes on one end of development may not be required on the other. For example, frontend presentation changes may not result in extensive backend changes because GraphQL allows the data specification to be modified readily. You can simply define or modify the shape of the data to fit the needs of the application as it grows. This results in less potential development work.
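Returning to the earlier personnel example, the fetching fix looks like this in practice. Assuming the schema exposes a `people` query field with a hypothetical `department` argument, a client that only needs names asks for exactly that:

```
# Only the name field is requested, so job titles, dates of
# birth, and other Person fields are never sent over the network.
query {
  people(department: "Engineering") {
    name
  }
}
```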

These are only some of the benefits of GraphQL. In the next few sections, you'll learn how GraphQL is structured and the properties that make it a unique alternative to REST.

# Components of a GraphQL API
<a name="api-components"></a>

A standard GraphQL API is composed of a single schema that defines the shape of the data that can be queried. Your schema is linked to one or more of your data sources, like a database or a Lambda function. Between the two sit one or more resolvers that handle the business logic for your requests. Each component plays an important role in your GraphQL implementation. The following sections introduce these three components and the roles they play in the GraphQL service.

![GraphQL API components: schema, resolvers, and data sources interconnected with AppSync.](http://docs.aws.amazon.com/appsync/latest/devguide/images/appsync-architecture-graphql-api.png)


**Topics**
+ [GraphQL schemas](schema-components.md)
+ [Data sources](data-source-components.md)
+ [Resolvers](resolver-components.md)

# GraphQL schemas
<a name="schema-components"></a>

The GraphQL schema is the foundation of a GraphQL API. It serves as the blueprint that defines the shape of your data. It's also a contract between your client and server that defines how your data will be retrieved and/or modified.

GraphQL schemas are written in the *Schema Definition Language* (SDL). SDL is composed of types and fields with an established structure:
+ **Types**: Types are how GraphQL defines the shape and behavior of the data. GraphQL supports a multitude of types that will be explained later in this section. Each type that's defined in your schema will contain its own scope. Inside the scope will be one or more fields that can contain a value or logic that will be used in your GraphQL service. Types fill many different roles, the most common being objects or scalars (primitive value types).
+ **Fields**: Fields exist within the scope of a type and hold the value that's requested from the GraphQL service. These are very similar to variables in other programming languages. The shape of the data you define in your fields will determine how the data is structured in a request/response operation. This allows developers to predict what will be returned without knowing how the backend of the service is implemented.

To visualize what a schema would look like, let's review the contents of a simple GraphQL schema. In production code, your schema will typically be in a file called `schema.graphql` or `schema.json`. Let's assume that we're peering into a project that implements a GraphQL service. This project is storing company personnel data, and the `schema.graphql` file is being used to retrieve personnel data and add new personnel to a database. The code may look like this:

------
#### [ schema.graphql ]

```
type Person {                                  
   id: ID!
   name: String                                  
   age: Int
}
type Query {                                   
  people: [Person]
}
type Mutation {
  addPerson(id: ID!, name: String, age: Int): Person
}
```

------

We can see that there are three types defined in the schema: `Person`, `Query`, and `Mutation`. Looking at `Person`, we can guess that this is the blueprint for an instance of a company employee, which would make this type an object. Inside its scope, we see `id`, `name`, and `age`. These are the fields that define the properties of a `Person`. This means our data source stores each `Person`'s `name` as a `String` scalar (primitive) type and `age` as an `Int` scalar (primitive) type. The `id` acts as a special, unique identifier for each `Person`. It's also a required value as denoted by the `!` symbol.

The next two object types behave differently. GraphQL reserves a few keywords for special object types that define how the data will be populated in the schema. A `Query` type will retrieve data from the source. In our example, our query might retrieve `Person` objects from a database. This may remind you of `GET` operations in RESTful terminology. A `Mutation` will modify data. In our example, our mutation may add more `Person` objects to the database. This may remind you of state-changing operations like `PUT` or `POST`. The behaviors of all special object types will be explained later in this section.

Let's assume the `Query` in our example will retrieve something from the database. If we look at the fields of `Query`, we see one field called `people`. Its field value is `[Person]`. This means we want to retrieve some instance of `Person` in the database. However, the addition of brackets means that we want to return a list of all `Person` instances and not just a specific one.
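Assuming a resolver is attached to the `people` field, a client could run a query like the following sketch against this schema (the request syntax itself is covered later in this guide):

```
query {
  people {
    id
    name
    age
  }
}
```

The response would be a JSON list of `Person` objects, each containing only the three requested fields.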

The `Mutation` type is responsible for performing state-changing operations on the data source, such as data modification. In our example, the mutation contains an operation called `addPerson` that adds a new `Person` object to the database. The mutation returns a `Person` and expects input values for the `id`, `name`, and `age` fields.
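Once the schema is backed by resolvers, a client invokes this mutation with a request like the following sketch. The argument values are made up, and the selection set after the call chooses which fields of the returned `Person` to send back:

```
mutation {
  addPerson(id: "100", name: "John Smith", age: 34) {
    id
    name
  }
}
```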

At this point, you may be wondering how an operation like `addPerson` works without a code implementation; it looks a lot like a function with a name and parameters, yet it supposedly performs some behavior. Currently, it won't work because a schema serves only as the declaration. To implement the behavior of `addPerson`, we would have to add a resolver to it. A resolver is a unit of code that runs whenever its associated field (in this case, the `addPerson` operation) is called. If you want to use an operation, you'll have to add a resolver implementation at some point. In a way, you can think of the schema operation as the function declaration and the resolver as the definition. Resolvers are explained in a different section.

This example shows only the simplest ways a schema can manipulate data. You can build complex, robust, and scalable applications by leveraging the features of GraphQL and AWS AppSync. In the next section, we'll define all of the different types and field behaviors you can use in your schema.

# GraphQL types
<a name="graphql-types"></a>

GraphQL supports many different types. As you saw in the previous section, types define the shape or behavior of your data. They are the fundamental building blocks of a GraphQL schema. 

Types can be categorized into inputs and outputs. Inputs are types that can be passed in as arguments for the special object types (`Query`, `Mutation`, etc.), whereas output types are strictly used to store and return data. The types and their categorizations are listed below:
+ **Objects**: An object contains fields describing an entity. For instance, an object could be something like a `book` with fields describing its characteristics like `authorName`, `publishingYear`, etc. They are strictly output types.
+ **Scalars**: These are primitive types like int, string, etc. They are typically assigned to fields. Using the `authorName` field as an example, it could be assigned the `String` scalar to store a name like "John Smith". Scalars can be both input and output types.
+ **Inputs**: Inputs allow you to pass a group of fields as an argument. They are structured very similarly to objects, but they can be passed in as arguments to special objects. Inputs can define scalars, enums, and other inputs in their scope. Inputs can only be input types.
+ **Special objects**: Special objects define the operations of the service and do the bulk of the heavy lifting. There are three special object types: query, mutation, and subscription. Queries typically fetch data; mutations manipulate data; subscriptions open and maintain a two-way connection between clients and servers for constant communication. Special objects are neither input nor output types, given their functionality.
+ **Enums**: Enums are predefined lists of legal values. If you call an enum, its values can only be what's defined in its scope. For example, if you had an enum called `trafficLights` depicting a list of traffic signals, it could have values like `redLight` and `greenLight` but not `purpleLight`. A real traffic light will only have so many signals, so you could use the enum to define them and force them to be the only legal values when referencing `trafficLight`. Enums can be both input and output types.
+ **Unions/interfaces**: Unions allow you to return one or more things in a request depending on the data that was requested by the client. For example, if you had a `Book` type with a `title` field and an `Author` type with a `name` field, you could create a union between both types. If your client wanted to query a database for the phrase "Julius Caesar", the union could return *Julius Caesar* (the play by William Shakespeare) from the `Book` `title` and *Julius Caesar* (the author of *Commentarii de Bello Gallico*) from the `Author` `name`. Unions can only be output types.

  Interfaces are sets of fields that objects must implement. This is a bit similar to interfaces in programming languages like Java where you must implement the fields defined in the interface. For example, let's say you made an interface called `Book` that contained a `title` field. Let's say you later created a type called `Novel` that implemented `Book`. Your `Novel` would have to include a `title` field. However, your `Novel` could also include other fields not in the interface like `pageCount` or `ISBN`. Interfaces can only be output types.
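The categories above can be sketched in SDL. The following fragment is illustrative only, reusing the examples mentioned in the list:

```
# An enum restricts a field to a fixed set of legal values.
enum TrafficLight {
  redLight
  greenLight
}

# Any type implementing Book must include a title field.
interface Book {
  title: String
}

type Novel implements Book {
  title: String
  pageCount: Int
}

type Author {
  name: String
}

# A search can return either a Novel or an Author.
union SearchResult = Novel | Author
```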

The following sections will explain how each type works in GraphQL.

## Objects
<a name="object-components"></a>

GraphQL objects are the main type you will see in production code. In GraphQL, you can think of an object as a grouping of different fields (similar to variables in other languages), with each field being defined by a type (typically a scalar or another object) that can hold a value. Objects represent a unit of data that can be retrieved/manipulated from your service implementation.

Object types are declared using the `type` keyword. Let's modify our schema example slightly:

```
type Person {
  id: ID!
  name: String
  age: Int
  occupation: Occupation
}

type Occupation {
  title: String
}
```

The object types here are `Person` and `Occupation`. Each object has its own fields with its own types. One feature of GraphQL is the ability to set fields to other types. You can see the `occupation` field in `Person` contains an `Occupation` object type. We can make this association because GraphQL is only describing the data and not the implementation of the service.
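Because `occupation` is itself an object type, a query can traverse the nesting in a single request. Assuming the schema also exposes a `people` query field, the selection set simply nests:

```
query {
  people {
    name
    occupation {
      title
    }
  }
}
```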

## Scalars
<a name="scalar-components"></a>

Scalars are essentially primitive types that hold values. In AWS AppSync, there are two types of scalars: the default GraphQL scalars and AWS AppSync scalars. Scalars are typically used to store field values within object types. Default GraphQL types include `Int`, `Float`, `String`, `Boolean`, and `ID`. Let's use the previous example again:

```
type Person { 
  id: ID!
  name: String
  age: Int
  occupation: Occupation
}

type Occupation {
  title: String
}
```

Singling out the `name` and `title` fields, both hold a `String` scalar. `name` could return a string value like `"John Smith"`, and `title` could return something like `"firefighter"`. Some GraphQL implementations also support custom scalars by using the `scalar` keyword and implementing the type's behavior. However, AWS AppSync currently **doesn't support** custom scalars. For a list of scalars, see [Scalar types in AWS AppSync](https://docs.aws.amazon.com//appsync/latest/devguide/scalars.html).

## Inputs
<a name="input-components"></a>

Due to the concept of input and output types, there are certain restrictions in place when passing in arguments. Types that commonly need to be passed in, especially objects, are restricted. You can use the input type to bypass this rule. Inputs are types that contain scalars, enums, and other input types.

Inputs are defined using the `input` keyword:

```
type Person { 
  id: ID!
  name: String
  age: Int
  occupation: Occupation
}

type Occupation {
  title: String
}

input personInput { 
  id: ID!
  name: String
  age: Int
  occupation: occupationInput
}

input occupationInput {
  title: String
}
```

As you can see, we can have separate inputs that mimic the original type. These inputs will often be used in your field operations like this:

```
type Person { 
  id: ID!
  name: String
  age: Int
  occupation: Occupation
}

type Occupation {
  title: String
}

input occupationInput {
  title: String
}

type Mutation {
  addPerson(id: ID!, name: String, age: Int, occupation: occupationInput): Person
}
```

Note how we're still passing `occupationInput` in place of `Occupation` to create a `Person`. 
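A call to this mutation passes the input as an object literal. The values in this sketch are made up for illustration:

```
mutation {
  addPerson(
    id: "100"
    name: "John Smith"
    age: 34
    occupation: { title: "firefighter" }
  ) {
    id
    name
  }
}
```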

This is but one scenario for inputs. They don't necessarily need to copy objects 1:1, and in production code, you most likely won't use them like this. It's good practice to take advantage of GraphQL schemas by defining only what you need to pass in as arguments.

Also, the same inputs can be used in multiple operations, but we don't recommend doing this. Each operation should ideally contain its own unique copy of the inputs in case the schema's requirements change.

## Special objects
<a name="special-object-components"></a>

GraphQL reserves a few keywords for special objects that define some of the business logic for how your schema retrieves and manipulates data. A schema can contain at most one of each of these objects. They act as the entry points for all requests that your clients run against your GraphQL service.

Special objects are also defined using the `type` keyword. Though they're used differently from regular object types, their implementation is very similar.

------
#### [ Queries ]

Queries are very similar to `GET` operations in that they perform a read-only fetch to get data from your source. In GraphQL, the `Query` defines all of the entry points for clients making requests against your server. There will always be a `Query` in your GraphQL implementation.

Here are the `Query` and modified object types we used in our previous schema example:

```
type Person { 
  id: ID!
  name: String
  age: Int
  occupation: Occupation
}
type Occupation {
  title: String
}
type Query {                                   
  people: [Person]
}
```

Our `Query` contains a field called `people` that returns a list of `Person` instances from the data source. Let's say we need to change the behavior of our application, and now we need to return a list of only the `Occupation` instances for some separate purpose. We could simply add it to the query:

```
type Query {                                   
  people: [Person]
  occupations: [Occupation]
}
```

In GraphQL, we can treat our query as the single source of requests. As you can see, this is potentially much simpler than RESTful implementations that might use different endpoints to achieve the same thing (`.../api/1/people` and `.../api/1/occupations`).

Assuming we have a resolver implementation for this query, we can now perform an actual query. While the `Query` type exists, we have to explicitly call it for it to run in the application's code. This can be done using the `query` keyword:

```
query getItems {
   people {
      name
   }
   occupations {
      title
   }
}
```

As you can see, this query is called `getItems` and returns `people` (a list of `Person` objects) and `occupations` (a list of `Occupation` objects). In `people`, we're returning only the `name` field of each `Person`, while we're returning the `title` field of each `Occupation`. The response may look like this:

```
{
  "data": {
    "people": [
      {
        "name": "John Smith"
      },
      {
        "name": "Andrew Miller"
      },
      .
      .
      .
    ],
    "occupations": [
      {
        "title": "Firefighter"
      },
      {
        "title": "Bookkeeper"
      },
      .
      .
      .
    ]
  }
}
```

The example response shows how the data follows the shape of the query. Each entry retrieved is listed within the scope of its field, so `people` and `occupations` return their results as separate lists. Though useful, it might be more convenient to modify the query to return a list of people's names and occupations:

```
query getItems {
   people {
      name
      occupation {
        title
      }
   }
}
```

This is a legal modification because our `Person` type contains an `occupation` field of type `Occupation`. When listed within the scope of `people`, we're returning each `Person`'s `name` along with their associated `Occupation` by `title`. The response may look like this:

```
{
  "data": {
    "people": [
      {
        "name": "John Smith",
        "occupation": {
          "title": "Firefighter"
        }
      },
      {
        "name": "Andrew Miller",
        "occupation": {
          "title": "Bookkeeper"
        }
      },
      .
      .
      .
    ]
  }
}
```

------
#### [ Mutations ]

Mutations are similar to state-changing operations like `PUT` or `POST`. They perform a write operation to modify data in the source, then fetch the response. They define your entry points for data modification requests. Unlike queries, a mutation may or may not be included in the schema depending on the project's needs. Here's the mutation from the schema example:

```
type Mutation {
  addPerson(id: ID!, name: String, age: Int): Person
}
```

The `addPerson` field represents one entry point that adds a `Person` to the data source. `addPerson` is the field name; `id`, `name`, and `age` are the parameters; and `Person` is the return type. Looking back at the `Person` type:

```
type Person { 
  id: ID!
  name: String
  age: Int
  occupation: Occupation
}
```

We added the `occupation` field. However, we cannot set this field to `Occupation` directly because objects cannot be passed in as arguments; they are strictly output types. We should instead pass an input with the same fields as an argument:

```
input occupationInput {
  title: String
}
```

We can also update our `addPerson` mutation to include this as a parameter when making new `Person` instances:

```
type Mutation {
  addPerson(id: ID!, name: String, age: Int, occupation: occupationInput): Person
}
```

Here's the updated schema:

```
type Person { 
  id: ID!
  name: String
  age: Int
  occupation: Occupation
}

type Occupation {
  title: String
}

input occupationInput {
  title: String
}

type Mutation {
  addPerson(id: ID!, name: String, age: Int, occupation: occupationInput): Person
}
```

Note that `occupation` will pass in the `title` field from `occupationInput` to complete the creation of the `Person` instead of the original `Occupation` object. Assuming we have a resolver implementation for `addPerson`, we can now perform an actual mutation. While the `Mutation` type exists, we have to explicitly call it for it to run in the application's code. This can be done using the `mutation` keyword:

```
mutation createPerson {
  addPerson(id: ID!, name: String, age: Int, occupation: occupationInput) {
    name
    age
    occupation {
      title
    }
  }
}
```

This mutation is called `createPerson`, and `addPerson` is the operation. To create a new `Person`, we can enter the arguments for `id`, `name`, `age`, and `occupation`. In the scope of `addPerson`, we can also see other fields like `name`, `age`, etc. This is your response; these are the fields that will be returned after the `addPerson` operation is complete. Here's the final part of the example:

```
mutation createPerson {
  addPerson(id: "1", name: "Steve Powers", age: 50, occupation: { title: "Miner" }) {
    id
    name
    age
    occupation {
      title
    }
  }
}
```

Using this mutation, a result might look like this:

```
{
  "data": {
    "addPerson": {
      "id": "1",
      "name": "Steve Powers",
      "age": 50,
      "occupation": {
        "title": "Miner"
      }
    }
  }
}
```

As you can see, the response returned the values we requested in the same format that was defined in our mutation. It's good practice to return all values that were modified to reduce confusion and the need for additional queries later. A mutation can include multiple operations within its scope, and they will be run sequentially in the order listed. For example, if we create another operation called `addOccupation` that adds job titles to the data source, we can call it in the mutation after `addPerson`. `addPerson` will be handled first, followed by `addOccupation`.
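
Assuming such an `addOccupation` field were defined in the `Mutation` type (it isn't part of the schema above), a multi-operation mutation might look like this, with `addPerson` executing before `addOccupation`:

```
mutation createPersonAndOccupation {
  addPerson(id: "2", name: "Jane Doe", age: 40, occupation: { title: "Engineer" }) {
    id
  }
  addOccupation(title: "Engineer") {
    title
  }
}
```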

------
#### [ Subscriptions ]

Subscriptions use [WebSockets](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_client_applications) to open a lasting, two-way connection between the server and its clients. Typically, a client subscribes to, or listens to, the server. Whenever a server-side change is made or an event occurs, the subscribed clients receive the updates. This type of protocol is useful when multiple clients are subscribed and need to be notified about changes happening in the server or in other clients.

For instance, subscriptions can be used to update social media feeds. Two users, User A and User B, are both subscribed to automatic notification updates whenever they receive direct messages. User A on Client A sends a direct message to User B on Client B. Client A sends the direct message, which is processed by the server. The server then delivers the direct message to User B's account while sending an automatic notification to Client B.

Here's an example of a `Subscription` that we could add to the schema example:

```
type Subscription {                                   
  personAdded: Person
}
```

The `personAdded` field will send a message to subscribed clients whenever a new `Person` is added to the data source. Assuming we have a resolver implementation for `personAdded`, we can now use the subscription. While the `Subscription` type exists, we have to explicitly call it for it to run in the application's code. This can be done using the `subscription` keyword:

```
subscription personAddedOperation {
  personAdded {
    id
    name
  }
}
```

The subscription is called `personAddedOperation`, and the operation is `personAdded`. `personAdded` will return the `id` and `name` fields of new `Person` instances. Looking at the mutation example, we added a `Person` using this operation:

```
addPerson(id: "1", name: "Steve Powers", age: 50, occupation: { title: "Miner" })
```

If our clients were subscribed to updates to the newly added `Person`, they might see this after `addPerson` runs:

```
{
  "data": {
    "personAdded": {
      "id": "1",
      "name": "Steve Powers"
    }
  }
}
```

Below is a summary of what subscriptions offer:

Subscriptions are two-way channels that allow the client and server to receive quick, but steady, updates. They typically use the WebSocket protocol, which creates standardized and secure connections.

Subscriptions are nimble in that they reduce connection setup overhead. Once subscribed, a client can just keep running on that subscription for long periods of time. They generally use computing resources efficiently by allowing developers to tailor the lifetime of the subscription and to configure what information will be requested.

In general, subscriptions allow the client to make multiple subscriptions at once. As it pertains to AWS AppSync, subscriptions are only used for receiving real-time updates from the AWS AppSync service. They cannot be used to perform queries or mutations.

The main alternative to subscriptions is polling, which sends queries at set intervals to request data. This process is typically less efficient than subscriptions and puts a lot of strain on both the client and the backend.

------

One thing that wasn't mentioned in our schema example was the fact that your special object types must also be defined in a `schema` root. So when you export a schema in AWS AppSync, it might look like this:

------
#### [ schema.graphql ]

```
schema {
  query: Query
  mutation: Mutation
  subscription: Subscription
}

.
.
.

type Query {                                   
  # code goes here
}
type Mutation {                                   
  # code goes here
}
type Subscription {                                   
  # code goes here
}
```

------

## Enumerations
<a name="enum-components"></a>

Enumerations, or enums, are special scalars that limit the legal arguments a type or field may have. This means that whenever an enum is defined in the schema, its associated type or field will be limited to the values in the enum. Enums are serialized as string scalars. Note that different programming languages may handle GraphQL enums differently. For example, JavaScript has no native enum support, so the enum values may be mapped to int values instead.

Enums are defined using the `enum` keyword. Here's an example:

```
enum trafficSignals {
  solidRed
  solidYellow
  solidGreen
  greenArrowLeft
  ...
}
```

When using the `trafficSignals` enum, the argument(s) can only be `solidRed`, `solidYellow`, `solidGreen`, and so on. It's common to use enums to depict things that have a distinct but limited number of choices.
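
A field can then be typed with the enum, limiting it to those values (the `Intersection` type below is a hypothetical example):

```
type Intersection {
  id: ID!
  signal: trafficSignals
}
```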

## Unions/Interfaces
<a name="union-interface-components"></a>

See [Interfaces and unions](https://docs.aws.amazon.com/appsync/latest/devguide/interfaces-and-unions.html) in GraphQL.

# GraphQL fields
<a name="graphql-fields"></a>

Fields exist within the scope of a type and hold the value that's requested from the GraphQL service. These are very similar to variables in other programming languages. For example, here's a `Person` object type:

```
type Person {                                  
   name: String                                  
   age: Int
}
```

The fields in this case are `name` and `age` and hold a `String` and `Int` value, respectively. Object fields like the ones shown above can be used as the inputs in the fields (operations) of your queries and mutations. For example, see the `Query` below:

```
type Query {                                   
  people: [Person]
}
```

The `people` field is requesting all instances of `Person` from the data source. When you add or retrieve a `Person` in your GraphQL server, you can expect the data to follow the format of your types and fields, that is, the structure of your data in the schema determines how it'll be structured in your response:

```
{
  "data": {
    "people": [
      {
        "name": "John Smith",
        "age": 50
      },
      {
        "name": "Andrew Miller",
        "age": 60
      },
      .
      .
      .
    ]
  }
}
```

Fields play an important role in structuring data. There are a couple of additional properties explained below that can be applied to fields for more customization.

## Lists
<a name="list-components"></a>

Lists return all items of a specified type. A list can be added to a field's type using brackets `[]`: 

```
type Person { 
  name: String
  age: Int
}
type Query {                                   
  people: [Person]
}
```

In `Query`, the brackets surrounding `Person` indicate that you want to return all instances of `Person` from the data source as an array. In the response, the `name` and `age` values of each `Person` will be returned in a single list:

```
{
  "data": {
    "people": [
      {
        "name": "John Smith",         # Data of Person 1
        "age": 50
      },
      {
        "name": "Andrew Miller",      # Data of Person 2
        "age": 60
      },
      .                               # Data of Person N
      .
      .
    ]
  }
}
```

You aren't limited to special object types. You can also use lists in the fields of regular object types.

## Non-nulls
<a name="non-null-components"></a>

Non-nulls indicate a field that cannot be null in the response. You can set a field to non-null by using the `!` symbol:

```
type Person { 
  name: String!
  age: Int
}
type Query {                                   
  people: [Person]
}
```

The `name` field cannot be null. If you were to query the data source and it returned a null value for this field, an error would be thrown.

You can combine lists and non-nulls. Compare these queries:

```
type Query {                                   
  people: [Person!]      # Use case 1
}

.
.
.

type Query {                                   
  people: [Person]!      # Use case 2
}

.
.
.

type Query {                                   
  people: [Person!]!     # Use case 3
}
```

In use case 1, the list cannot contain null items. In use case 2, the list itself cannot be set to null. In use case 3, the list and its items cannot be null. However, in any case, you can still return empty lists.

As you can see, there are many moving components in GraphQL. In this section, we showed the structure of a simple schema and the different types and fields a schema supports. In the following section, you will discover the other components of a GraphQL API and how they work with the schema.

# Data sources
<a name="data-source-components"></a>

In the previous section, we learned that a schema defines the shape of your data. However, we never explained where that data came from. In real projects, your schema is like a gateway that handles all requests made to the server. When a request is made, the schema acts as the single endpoint that interfaces with the client. The schema will access, process, and relay data from the data source back to the client. See the infographic below:

![\[GraphQL schema integrating multiple AWS services for a single endpoint API architecture.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/aws-flow-infographic.png)


AWS AppSync and GraphQL are well suited to implementing Backend For Frontend (BFF) solutions. They work in tandem to reduce complexity at scale by abstracting the backend. If your service uses different data sources and/or microservices, you can abstract away some of the complexity by defining the shape of the data from each source (subgraph) in a single schema (supergraph). This means your GraphQL API is not limited to one data source. You can associate any number of data sources with your GraphQL API and specify in your code how they interact with the service.

As you can see in the infographic, the GraphQL schema contains all of the information clients need to request data. This means everything can be processed in a single request rather than multiple requests as is the case with REST. These requests go through the schema, which is the sole endpoint of the service. When requests are processed, a resolver (explained in the next section) executes its code to process the data from the relevant data source. When the response is returned, the subgraph tied to the data source will be populated with the data in the schema. 

AWS AppSync supports many different data source types. In the table below, we'll describe each type, list some of the benefits of each, and provide useful links for additional context.


| Data source | Description | Benefits | Supplemental information | 
| --- | --- | --- | --- | 
| Amazon DynamoDB | "Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. DynamoDB also offers encryption at rest, which eliminates the operational burden and complexity involved in protecting sensitive data." |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  | 
| AWS Lambda | "AWS Lambda is a compute service that lets you run code without provisioning or managing servers.Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, and logging. With Lambda, all you need to do is supply your code in one of the language runtimes that Lambda supports." |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  | 
| OpenSearch | "Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale OpenSearch clusters in the AWS Cloud. Amazon OpenSearch Service supports OpenSearch and legacy Elasticsearch OSS (up to 7.10, the final open-source version of the software). When you create a cluster, you have the option of which search engine to use.**OpenSearch** is a fully open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. For more information, see the [OpenSearch documentation](https://opensearch.org/docs/).**Amazon OpenSearch Service** provisions all the resources for your OpenSearch cluster and launches it. It also automatically detects and replaces failed OpenSearch Service nodes, reducing the overhead associated with self-managed infrastructures. You can scale your cluster with a single API call or a few clicks in the console." |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  | 
| HTTP endpoints | You can use HTTP endpoints as data sources. AWS AppSync can send requests to the endpoints with the relevant information like params and payload. The HTTP response will be exposed to the resolver, which will return the final response after it finishes its operation(s). |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  | 
| Amazon EventBridge | "EventBridge is a serverless service that uses events to connect application components together, making it easier for you to build scalable event-driven applications. Use it to route events from sources such as home-grown applications, AWS services, and third-party software to consumer applications across your organization. EventBridge provides a simple and consistent way to ingest, filter, transform, and deliver events so you can build new applications quickly." |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  | 
| Relational databases | "Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks." |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  | 
| None data source | If you aren't planning on using a data source service, you can set it to none. A none data source, while still explicitly categorized as a data source, isn't a storage medium. Despite that, it's still useful in certain instances for data manipulation and pass-throughs. |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/appsync/latest/devguide/data-source-components.html)  | 

**Tip**  
For more information about how data sources interact with AWS AppSync, see [Attaching a data source](https://docs.aws.amazon.com//appsync/latest/devguide/attaching-a-data-source.html).

# Resolvers
<a name="resolver-components"></a>

From the previous sections, you learned about the components of the schema and data source. Now, we need to address how the schema and data sources interact. It all begins with the resolver.

A resolver is a unit of code that handles how a field's data is resolved when a request is made to the service. Resolvers are attached to specific fields within the types in your schema. They are most commonly used to implement the operations behind your query, mutation, and subscription fields. A resolver processes a client's request, then returns the result, which can be a group of output types like objects or scalars:

![\[GraphQL schema with resolvers connecting to various AWS data sources for a single endpoint.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/aws-flow-infographic.png)


## Resolver runtime
<a name="resolver-components-runtime"></a>

In AWS AppSync, you must first specify a runtime for your resolver. A resolver runtime indicates the environment in which a resolver executes. It also dictates the language your resolvers will be written in. AWS AppSync currently supports the `APPSYNC_JS` runtime for JavaScript and the Velocity Template Language (VTL). See [JavaScript runtime features for resolvers and functions](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference-js.html) for JavaScript or [Resolver mapping template utility reference](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference.html) for VTL.

## Resolver structure
<a name="resolver-components-structure"></a>

Code-wise, resolvers can be structured in a couple of ways. There are **unit** and **pipeline** resolvers.

### Unit resolvers
<a name="resolver-components-unit"></a>

A unit resolver is composed of code that defines a single request and response handler that are executed against a data source. The request handler takes a context object as an argument and returns the request payload used to call your data source. The response handler receives a payload back from the data source with the result of the executed request. The response handler transforms the payload into a GraphQL response to resolve the GraphQL field.

![\[GraphQL request flow showing request and response handlers interacting with a data source.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/unit-resolver-js.png)


### Pipeline resolvers
<a name="resolver-components-pipeline"></a>

When implementing pipeline resolvers, there is a general structure they follow:
+ **Before step**: When a request is made by the client, the resolvers for the schema fields being used (typically your queries, mutations, subscriptions) are passed the request data. The resolver will begin processing the request data with a before step handler, which allows some preprocessing operations to be performed before the data moves through the resolver.
+ **Function(s)**: After the before step runs, the request is passed to the functions list. The first function in the list executes against the data source. A function is a subset of your resolver's code containing its own request and response handler. A request handler takes the request data and performs operations against the data source. The response handler processes the data source's response before passing it back to the list. If there is more than one function, the request data is sent to the next function in the list to be executed. Functions in the list are executed serially in the order defined by the developer. Once all functions have been executed, the final result is passed to the after step.
+ **After step**: The after step is a handler function that allows you to perform some final operations on the final function's response before passing it to the GraphQL response.

![\[GraphQL request flow diagram showing interactions between request, data sources, and response components.\]](http://docs.aws.amazon.com/appsync/latest/devguide/images/appsync-js-resolver-logic.png)
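
The flow above can be sketched in plain JavaScript. This is an illustrative simulation, not the AppSync runtime; `runPipeline` and `fakeDataSource` are hypothetical names, and the handler shapes only mirror AppSync conventions:

```javascript
// Simulates a pipeline resolver threading one context object through its steps.
function runPipeline(beforeStep, functions, afterStep) {
  const ctx = { stash: {}, prev: {} };               // shared context object
  beforeStep(ctx);                                   // before step: preprocessing
  for (const fn of functions) {                      // functions run serially
    const payload = fn.request(ctx);                 // request handler builds the call
    const result = fakeDataSource(payload);          // stand-in for a real data source
    ctx.prev.result = fn.response({ ...ctx, result }); // response handler shapes it
  }
  return afterStep(ctx);                             // after step: final shaping
}

// Stand-in data source that echoes the operation it was asked to run.
function fakeDataSource(payload) {
  return { items: ["ran:" + payload.operation] };
}

const response = runPipeline(
  (ctx) => { ctx.stash.caller = "demo"; },           // before step
  [{
    request: () => ({ operation: "Scan" }),
    response: (ctx) => ctx.result.items,
  }],
  (ctx) => ({ caller: ctx.stash.caller, items: ctx.prev.result })
);

console.log(JSON.stringify(response));
// → {"caller":"demo","items":["ran:Scan"]}
```

Note how the single `ctx` object is what carries state between the steps, which is exactly the role the context plays in a real resolver.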


## Resolver handler structure
<a name="resolver-components-handlers"></a>

Handlers are typically functions called `Request` and `Response`:

```
export function request(ctx) {
    // Code goes here
}

export function response(ctx) {
    // Code goes here
}
```

In a unit resolver, there will only be one set of these functions. In a pipeline resolver, there will be a set of these for the before and after step and an additional set per function. To visualize how this could look, let's review a simple `Query` type:

```
type Query {
	helloWorld: String!
}
```

This is a simple query with one field called `helloWorld` of type `String`. Let's assume we always want this field to return the string "Hello World". To implement this behavior, we need to add the resolver to this field. In a unit resolver, we could add something like this:

```
export function request(ctx) {
    return {}
}

export function response(ctx) {
    return "Hello World"
}
```

The `request` can just be left blank because we're not requesting or processing data. We can also assume our data source is `None`, indicating this code doesn't need to perform any invocations. The response simply returns "Hello World". To test this resolver, we need to make a request using the query type:

```
query helloWorldTest {
  helloWorld
}
```

This is a query called `helloWorldTest` that returns the `helloWorld` field. When executed, the `helloWorld` field resolver also executes and returns the response:

```
{
  "data": {
    "helloWorld": "Hello World"
  }
}
```

Returning constants like this is the simplest thing you could do. In reality, you'll be returning inputs, lists, and more. Here's a more complicated example:

```
type Book {
  id: ID!
  title: String
}

type Query {
  getBooks: [Book]
}
```

Here we're returning a list of `Book` objects. Let's assume we're using a DynamoDB table to store book data. Our handlers may look like this:

```
/**
 * Performs a scan on the dynamodb data source
 */
export function request(ctx) {
  return { operation: 'Scan' };
}

/**
 * return a list of scanned post items
 */
export function response(ctx) {
  return ctx.result.items;
}
```

The request handler uses the built-in `Scan` operation to fetch all entries in the table. The result is stored in the context and passed to the response handler, which takes the result items and returns them in the response:

```
{
  "data": {
    "getBooks": [
      {
        "id": "abcdefgh-1234-1234-1234-abcdefghijkl",
        "title": "book1"
      },
      {
        "id": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        "title": "book2"
      },

      ...

    ]
  }
}
```

## Resolver context
<a name="resolver-components-context"></a>

In a resolver, each step in the chain of handlers must be aware of the state of the data from the previous steps. The result from one handler can be stored and passed to another as an argument. GraphQL defines four basic resolver arguments:



| Resolver base arguments | Description | 
| --- | --- | 
| obj, root, parent, etc. | The result of the parent. | 
| args | The arguments provided to the field in the GraphQL query. | 
| context | A value which is provided to every resolver and holds important contextual information like the currently logged in user, or access to a database. | 
| info | A value which holds field-specific information relevant to the current query as well as the schema details. | 

In AWS AppSync, the [`context`](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference-js.html) (`ctx`) argument can hold all of the data mentioned above. It's an object that's created per request and contains data like authorization credentials, result data, errors, request metadata, etc. The context is an easy way for programmers to manipulate data coming from other parts of the request. Take this snippet again:

```
/**
 * Performs a scan on the dynamodb data source
 */
export function request(ctx) {
  return { operation: 'Scan' };
}

/**
 * return a list of scanned post items
 */
export function response(ctx) {
  return ctx.result.items;
}
```

The request handler is given the context (`ctx`) as its argument; this is the state of the request. It performs a scan for all items in a table, and the result is stored back in the context under `result`. The context is then passed to the response handler, which accesses `result` and returns its contents.

## Requests and parsing
<a name="resolver-ast"></a>

When you make a query to your GraphQL service, it must run through a parsing and validation process before being executed. Your request will be parsed and translated into an abstract syntax tree. The content of the tree is validated by running through several validation algorithms against your schema. After the validation step, the nodes of the tree are traversed and processed. Resolvers are invoked, the results are stored in the context, and the response is returned. For example, take this query:

```
query {
  Person {  # object type
    name    # scalar
    age     # scalar
  }
}
```

We're returning `Person` with its `name` and `age` fields. When running this query, the tree will look something like this:

![Hierarchical diagram showing query, Person, name, and age nodes connected by arrows.](http://docs.aws.amazon.com/appsync/latest/devguide/images/ast-1.png)


From the tree, we can see that this request starts at the schema's root `Query` type. Inside the query, the `Person` field is resolved. From previous examples, we know that this could be an input from the user, a list of values, and so on. `Person` is most likely tied to an object type holding the fields we need (`name` and `age`). Once these two child fields are found, they are resolved in the order given (`name` followed by `age`). Once the tree is completely resolved, the request is complete and the response is sent back to the client.
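Conceptually, execution is a depth-first walk of that tree: resolve the parent field, then each of its selected children in order. This simplified sketch (not the actual AppSync engine; the selection shape and resolver map are invented) illustrates the idea:

```javascript
// A tiny selection tree for the query above.
const selection = { field: 'Person', children: ['name', 'age'] };

// Field resolvers; `Person` would normally hit a data source.
const resolvers = {
  Person: () => ({ name: 'Ada', age: 36, email: 'not requested' }),
};

// Resolve the parent, then copy only the selected child fields, in order.
function execute(sel) {
  const parent = resolvers[sel.field]();
  const out = {};
  for (const child of sel.children) out[child] = parent[child];
  return { [sel.field]: out };
}

const data = execute(selection);
```

Note that `email` never reaches the response because the query didn't select it, which previews the declarative property discussed next.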

# Additional properties of GraphQL
<a name="graphql-properties"></a>

GraphQL consists of several design principles to maintain simplicity and robustness at scale.

## Declarative
<a name="declarative-property"></a>

GraphQL is declarative, which means the user will describe (shape) the data by only declaring the fields they want to query. The response will only return the data for these properties. For example, here's an operation that retrieves a `Book` object in a DynamoDB table with the ISBN 13 `id` value of *9780199536061*:

```
{
  getBook(id: "9780199536061") {
    name
    year
    author
  }
}
```

The response will return the fields in the payload (`name`, `year`, and `author`) and nothing else:

```
{
  "data": {
    "getBook": {
      "name": "Anna Karenina",
      "year": "1878",
      "author": "Leo Tolstoy"
    }
  }
}
```

Because of this design principle, GraphQL eliminates the perennial issues of over- and under-fetching that REST APIs deal with in complex systems. This results in more efficient data gathering and improved network performance.
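The same principle can be illustrated in plain JavaScript: given a stored record with many attributes, only the requested fields are copied into the response (the record and field list here are illustrative):

```javascript
// Full record as it might exist in the data source.
const book = {
  id: '9780199536061',
  name: 'Anna Karenina',
  year: '1878',
  author: 'Leo Tolstoy',
  publisher: 'stored, but never requested',
};

// Shape the response to exactly the requested fields, nothing more.
const requested = ['name', 'year', 'author'];
const payload = Object.fromEntries(requested.map((f) => [f, book[f]]));
```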

## Hierarchical
<a name="hierarchical-property"></a>

GraphQL is flexible in that the data requested can be shaped by the user to fit the needs of the application. Requested data always follows the types and syntax of the properties defined in your GraphQL API. For instance, the following snippet shows the `getBook` operation with a new field called `quotes` that returns all stored quote strings and pages linked to the `Book` *9780199536061*:

```
{
  getBook(id: "9780199536061") {
    name
    year
    author
    quotes {
      description
      page
    }
  }
}
```

Running this query returns the following result:

```
{
  "data": {
    "getBook": {
      "name": "Anna Karenina",
      "year": "1878",
      "author": "Leo Tolstoy",
      "quotes": [
         {
            "description": "The highest Petersburg society is essentially one: in it everyone knows everyone else, everyone even visits everyone else.",
            "page": 135
         },
         { 
            "description": "Happy families are all alike; every unhappy family is unhappy in its own way.",
            "page": 1
         },
         {        
            "description": "To Konstantin, the peasant was simply the chief partner in their common labor.",
            "page": 251
         }
      ]
    }
  }
}
```

As you can see, the `quotes` field linked to the requested book was returned as an array in the same shape described by our query. Although it wasn't shown here, GraphQL has the added advantage of not being particular about the location of the data it's retrieving. `Books` and `quotes` could be stored separately, but GraphQL will still retrieve the information as long as the association exists. This means your query can retrieve many separate pieces of data in a single request.
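To illustrate the point about separately stored data, this sketch resolves `quotes` from a second store keyed by book id; both stores are hypothetical stand-ins for independent data sources:

```javascript
// Two independent stores: books and quotes live apart.
const books = {
  '9780199536061': { name: 'Anna Karenina', year: '1878' },
};
const quotesByBook = {
  '9780199536061': [
    { description: 'Happy families are all alike...', page: 1 },
  ],
};

// Resolving the nested field joins the stores via the association (the id).
function getBook(id) {
  return { ...books[id], quotes: quotesByBook[id] || [] };
}

const result = getBook('9780199536061');
```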

## Introspective
<a name="introspective-property"></a>

GraphQL is self-documenting, or introspective. It supports several built-in operations that allow users to view the underlying types and fields within the schema. For example, here's a `Foo` type with a `date` and `description` field:

```
type Foo {
	date: String
	description: String
}
```

We could use the `__type` operation to find the typing metadata underneath the schema:

```
{
  __type(name: "Foo") {
    name                   # returns the name of the type
    fields {               # returns all fields in the type
      name                 # returns the name of each field
      type {               # returns all types for each field
        name               # returns the scalar type
      }
    }
  }
}
```

This will return the following response:

```
{
  "__type": {
    "name": "Foo",                     # The type name
    "fields": [
      {
        "name": "date",                # The date field
        "type": { "name": "String" }   # The date's type
      },
      {
        "name": "description",         # The description field
        "type": { "name": "String" }   # The description's type
      }
    ]
  }
}
```

This feature can be used to find out what types and fields a particular GraphQL schema supports. GraphQL supports a wide variety of these introspective operations. For more information, see [Introspection](https://graphql.org/learn/introspection/).
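A rough sketch of what a server does for `__type`: look the type up in its schema registry and report its fields. The in-memory registry shape below is invented for illustration, not how AppSync stores schemas internally:

```javascript
// An invented in-memory schema registry mapping type names
// to their field names and scalar types.
const schema = {
  Foo: { date: 'String', description: 'String' },
};

// Mimic the __type(name:) lookup from the query above.
function introspectType(name) {
  const fields = Object.entries(schema[name]).map(([fieldName, typeName]) => ({
    name: fieldName,
    type: { name: typeName },
  }));
  return { name, fields };
}

const fooType = introspectType('Foo');
```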

## Strong typing
<a name="strong-typing-property"></a>

GraphQL supports strong typing through its type and field system. When you define something in your schema, it must have a type that can be validated before execution. It must also follow GraphQL's syntax specification. This concept is no different from type checking in other strongly typed programming languages. For example, here's the `Foo` type from earlier:

```
type Foo {
	date: String
	description: String
}
```

We can see that `Foo` is an object type. An instance of `Foo` will have a `date` and a `description` field, both of the built-in `String` scalar type. Syntactically, we see that `Foo` was declared and that its fields exist inside its scope. This combination of type checking and logical syntax ensures that your GraphQL API is concise and self-evident. GraphQL's typing and syntax specification can be found [here](https://spec.graphql.org/).
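Validation against the schema can be pictured as checking each field's runtime value against its declared scalar type before the request executes. This simplified sketch (the registry and check functions are invented for illustration) shows the idea:

```javascript
// Declared field types for Foo, and the scalar checks they imply.
const fooFields = { date: 'String', description: 'String' };
const scalarChecks = {
  String: (v) => typeof v === 'string',
  Int: (v) => Number.isInteger(v),
};

// Returns true only if every provided field exists in the type
// and its value matches the declared scalar type.
function validate(input) {
  return Object.entries(input).every(
    ([field, value]) =>
      field in fooFields && scalarChecks[fooFields[field]](value)
  );
}

const ok = validate({ date: '2024-01-01', description: 'hello' });
const bad = validate({ date: 42 }); // wrong scalar type for `date`
```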