

# Capturing graph changes in real time using Neptune streams
<a name="streams"></a>

Neptune Streams logs every change to your graph as it happens, in the order in which it is made, in a fully managed way. After you enable Streams, Neptune takes care of availability, backup, security, and expiry.

The following are some of the many use cases where you might want to capture changes to a graph as they occur:
+ You might want your application to notify people automatically when certain changes are made.
+ You might want to maintain a current version of your graph data in another data store, such as Amazon OpenSearch Service, Amazon ElastiCache, or Amazon Simple Storage Service (Amazon S3).

Neptune uses the same native storage for the change-log stream as for graph data. It writes change log entries synchronously together with the transaction that makes those changes. You retrieve these change records from the log stream using an HTTP REST API. (For information, see [Calling the Streams API](streams-using-api-call.md).)

The following diagram shows how change-log data can be retrieved from Neptune Streams.

![\[Diagram showing how change-log data can be retrieved from both writer instances and read-replicas.\]](http://docs.aws.amazon.com/neptune/latest/userguide/images/neptune-streams.png)


**Neptune streams guarantees**
+ Changes made by a transaction are available for reading from both the writer and the readers as soon as the transaction is complete (aside from any normal replication lag on the readers).
+ Change records appear strictly sequentially, in the order in which the changes occurred (this includes the changes made within a transaction).
+ The change streams contain no duplicates. Each change is logged only once.
+ The change streams are complete. No changes are lost or omitted.
+ The change streams contain all the information needed to determine the complete state of the database at any point in time, provided that the starting state is known.
+ Streams can be turned on or off at any time.

**Neptune streams operational properties**
+ The change-log stream is fully managed.
+ Change-log data is written synchronously as part of the same transaction that makes a change.
+ When Neptune Streams are enabled, you incur I/O and storage charges associated with the change-log data.
+ By default, change records are automatically purged one week after they are created. Starting with [engine release 1.2.0.0](engine-releases-1.2.0.0.md), this retention period can be changed using the [neptune\_streams\_expiry\_days](parameters.md#parameters-db-cluster-parameters-neptune_streams_expiry_days) DB cluster parameter to any number of days between 1 and 90.
+ Read performance on the streams scales with the number of instances in the cluster.
+ You can achieve high availability and read throughput using read replicas. There is no limit on the number of stream readers that you can create and use concurrently.
+ Change-log data is replicated across multiple Availability Zones, making it highly durable.
+ The log data is as secure as your graph data itself. It can be encrypted at rest and in transit. Access can be controlled using IAM, Amazon VPC, and AWS Key Management Service (AWS KMS). Like the graph data, it can be backed up and later restored using point-in-time restores (PITR).
+ The synchronous writing of stream data as part of each transaction causes a slight degradation in overall write performance.
+ Stream data is not sharded, because Neptune is single-sharded by design.
+ The log stream `GetRecords` API uses the same resources as all other Neptune graph operations. This means that clients need to load balance between stream requests and other DB requests.
+ When streams are disabled, all log data becomes inaccessible immediately. This means that you must read all log data of interest to you before you disable logging.
+ There is currently no native integration with AWS Lambda. The log stream does not generate an event that can trigger a Lambda function.

**Topics**
+ [Using Neptune Streams](streams-using.md)
+ [Serialization Formats in Neptune Streams](streams-change-formats.md)
+ [Neptune Streams Examples](streams-examples.md)
+ [Using AWS CloudFormation to Set Up Neptune-to-Neptune Replication with the Streams Consumer Application](streams-consumer-setup.md)
+ [Using Neptune streams cross-region replication for disaster recovery](streams-disaster-recovery.md)

# Using Neptune Streams
<a name="streams-using"></a>

With the Neptune Streams feature, you can generate a complete sequence of change-log entries that record every change made to your graph data as it happens. For an overview of this feature, see [Capturing graph changes in real time using Neptune streams](streams.md).

**Topics**
+ [Enabling Neptune Streams](streams-using-enabling.md)
+ [Disabling Neptune Streams](streams-using-disabling.md)
+ [Calling the Neptune Streams REST API](streams-using-api-call.md)
+ [Neptune Streams API Response Format](streams-using-api-reponse.md)
+ [Neptune Streams API Exceptions](streams-using-api-exceptions.md)

# Enabling Neptune Streams
<a name="streams-using-enabling"></a>

You can enable or disable Neptune Streams at any time by setting the [`neptune_streams` DB cluster parameter](parameters.md#parameters-db-cluster-parameters-neptune_streams). Setting the parameter to `1` enables Streams, and setting it to `0` disables Streams.

**Note**  
After changing the `neptune_streams` DB cluster parameter, you must reboot all DB instances in the cluster for the change to take effect.

You can set the [neptune\_streams\_expiry\_days](parameters.md#parameters-db-cluster-parameters-neptune_streams_expiry_days) DB cluster parameter to control how many days, from 1 to 90, stream records remain on the server before being deleted. The default is 7.

Neptune Streams was initially introduced as an experimental feature that you enabled or disabled in Lab Mode using the DB Cluster `neptune_lab_mode` parameter (see [Neptune Lab Mode](features-lab-mode.md)). Using Lab Mode to enable Streams is now deprecated and will be disabled in the future.

# Disabling Neptune Streams
<a name="streams-using-disabling"></a>

You can turn Neptune Streams off at any time while it is running.

To turn Streams off, update the DB Cluster parameter group so that the value of the `neptune_streams` parameter is set to 0.

**Important**  
As soon as Streams is turned off, you can't access the change-log data any more. Be sure to read what you are interested in *before* turning Streams off.

# Calling the Neptune Streams REST API
<a name="streams-using-api-call"></a>

You access Neptune Streams using a REST API that sends an HTTP GET request to one of the following local endpoints:
+ For a SPARQL graph DB:   `https://Neptune-DNS:8182/sparql/stream`.
+ For a Gremlin or openCypher graph DB:   `https://Neptune-DNS:8182/propertygraph/stream` or `https://Neptune-DNS:8182/pg/stream`.

**Note**  
The Gremlin stream endpoint (`https://Neptune-DNS:8182/gremlin/stream`) is deprecated, along with its associated output format (`GREMLIN_JSON`). It is still supported for backward compatibility but may be removed in future releases.

Only an HTTP `GET` operation is allowed.

Neptune supports `gzip` compression of the response, provided that the HTTP request includes an `Accept-Encoding` header that specifies `gzip` as an accepted compression format (that is, `"Accept-Encoding: gzip"`).
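
The gzip negotiation above can be sketched in Python. The following is a minimal, illustrative helper (the name `decode_stream_response` is hypothetical, not part of any Neptune SDK) that decompresses a response body when the server honored `Accept-Encoding: gzip`, then parses the JSON payload:

```python
import gzip
import json

def decode_stream_response(body, content_encoding):
    """Gunzip the response body if the server compressed it, then parse JSON."""
    if content_encoding == "gzip":
        body = gzip.decompress(body)
    return json.loads(body)

# Round-trip check with a minimal stand-in payload:
payload = json.dumps({"totalRecords": 1}).encode("utf-8")
compressed = gzip.compress(payload)
print(decode_stream_response(compressed, "gzip"))  # {'totalRecords': 1}
```

In a real client, `content_encoding` would come from the `Content-Encoding` header of the HTTP response.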

**Parameters**
+ `limit`   –   long, optional. Range: 1–100,000. Default: 10.

  Specifies the maximum number of records to return. There is also a size limit of 10 MB on the response that can't be modified and that takes precedence over the number of records specified in the `limit` parameter. The response does include a threshold-breaching record if the 10 MB limit was reached.
+ `iteratorType`   –   String, optional.

  This parameter can take one of the following values:
  + `AT_SEQUENCE_NUMBER` (default)   –   Indicates that reading should start from the event sequence number specified jointly by the `commitNum` and `opNum` parameters.
  + `AFTER_SEQUENCE_NUMBER`   –   Indicates that reading should start right after the event sequence number specified jointly by the `commitNum` and `opNum` parameters.
  + `TRIM_HORIZON`   –   Indicates that reading should start at the last untrimmed record in the system, which is the oldest unexpired (not yet deleted) record in the change-log stream. This mode is useful during application startup, when you don't have a specific starting event sequence number.
  + `LATEST`   –   Indicates that reading should start at the most recent record in the system, which is the latest unexpired (not yet deleted) record in the change-log stream. This is useful when you need to read from the current head of the stream without processing older records, such as during disaster recovery or a zero-downtime upgrade. Note that in this mode, at most one record is returned.
+ `commitNum`   –   long, required when iteratorType is `AT_SEQUENCE_NUMBER` or `AFTER_SEQUENCE_NUMBER`.

  The commit number of the starting record to read from the change-log stream.

  This parameter is ignored when `iteratorType` is `TRIM_HORIZON` or `LATEST`.
+ `opNum`   –   long, optional (the default is `1`).

  The operation sequence number within the specified commit to start reading from in the change-log stream data.

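As a sketch of how these parameters combine, the following hypothetical Python helper assembles a stream request URL, enforcing that `commitNum` accompanies the sequence-number iterator types. The helper name and its defaults are illustrative, not part of any Neptune SDK:

```python
from urllib.parse import urlencode

def stream_request_url(endpoint, iterator_type="AT_SEQUENCE_NUMBER",
                       commit_num=None, op_num=None, limit=10):
    """Assemble a GetRecords URL from the parameters described above.
    commitNum is required only for the *_SEQUENCE_NUMBER iterator types."""
    params = {"iteratorType": iterator_type, "limit": limit}
    if iterator_type in ("AT_SEQUENCE_NUMBER", "AFTER_SEQUENCE_NUMBER"):
        if commit_num is None:
            raise ValueError("commitNum is required for this iterator type")
        params["commitNum"] = commit_num
        if op_num is not None:
            params["opNum"] = op_num
    return f"{endpoint}?{urlencode(params)}"

print(stream_request_url("https://Neptune-DNS:8182/propertygraph/stream",
                         commit_num=1, op_num=1, limit=1))
# → https://Neptune-DNS:8182/propertygraph/stream?iteratorType=AT_SEQUENCE_NUMBER&limit=1&commitNum=1&opNum=1
```

With `TRIM_HORIZON` or `LATEST`, the helper simply omits `commitNum` and `opNum`, matching the note above that those parameters are ignored.
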
Operations that change SPARQL graph data generally only generate a single change record per operation. However, operations that change Gremlin graph data can generate multiple change records per operation, as in the following examples:
+ `INSERT`   –   A Gremlin vertex can have multiple labels, and a Gremlin element can have multiple properties. A separate change record is generated for each label and property when an element is inserted.
+ `UPDATE`   –   When a Gremlin element property is changed, two change records are generated: the first for removing the previous value, and the second for inserting the new value.
+ `DELETE`   –   A separate change record is generated for each element property that is deleted. For example, when a Gremlin edge with properties is deleted, one change record is generated for each of the properties, and after that, one is generated for deletion of the edge label.

  When a Gremlin vertex is deleted, all the incoming and outgoing edge properties are deleted first, then the edge labels, then the vertex properties, and finally the vertex labels. Each of these deletions generates a change record.
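
A consumer replaying these records can treat each one as an ADD or REMOVE against per-element state. The following minimal Python sketch (the `apply_change` helper and the sample values are hypothetical) shows how a property `UPDATE` arrives as a REMOVE followed by an ADD:

```python
def apply_change(state, record):
    """Apply one PG_JSON change record to an in-memory map of
    element ID -> {property key: set of values}."""
    data, op = record["data"], record["op"]
    values = state.setdefault(data["id"], {}).setdefault(data["key"], set())
    if op == "ADD":
        values.add(data["value"]["value"])
    elif op == "REMOVE":
        values.discard(data["value"]["value"])

state = {}
# An update of the "age" property arrives as two change records:
apply_change(state, {"op": "REMOVE", "data": {"id": "v1", "type": "vp", "key": "age",
                                              "value": {"value": "29", "dataType": "Int"}}})
apply_change(state, {"op": "ADD", "data": {"id": "v1", "type": "vp", "key": "age",
                                           "value": {"value": "30", "dataType": "Int"}}})
print(state)  # {'v1': {'age': {'30'}}}
```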

# Neptune Streams API Response Format
<a name="streams-using-api-reponse"></a>

A response to a Neptune Streams REST API request has the following fields:
+ `lastEventId`   –   Sequence identifier of the last change in the stream response. An event ID is composed of two fields: A `commitNum` identifies a transaction that changed the graph, and an `opNum` identifies a specific operation within that transaction. This is shown in the following example.

  ```
    "eventId": {
      "commitNum": 12,
      "opNum": 1
    }
  ```
+ `lastTrxTimestamp`   –   The time at which the commit for the transaction was requested, in milliseconds from the Unix epoch.
+ `format`   –   Serialization format for the change records being returned. The possible values are `PG_JSON` for Gremlin or openCypher change records, and `NQUADS` for SPARQL change records.
+ `records`   –   An array of serialized change-log stream records included in the response. Each record in the `records` array contains these fields:
  + `commitTimestamp`   –   The time at which the commit for the transaction was requested, in milliseconds from the Unix epoch.
  + `eventId`   –   The sequence identifier of the stream change record.
  + `data`   –   The serialized Gremlin, SPARQL, or openCypher change record. The serialization format of each record is described in more detail in the next section, [Serialization Formats in Neptune Streams](streams-change-formats.md).
  + `op`   –   The operation that created the change.
  + `isLastOp`   –   Only present if this operation is the last one in its transaction. When present, it is set to `true`. Useful for ensuring that an entire transaction is consumed.
+ `totalRecords`   –   The total number of records in the response.
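
Because the `limit` parameter can split a transaction across responses, a consumer that needs whole transactions can buffer records until it sees `isLastOp`. A minimal Python sketch (the `complete_transactions` helper is hypothetical):

```python
def complete_transactions(records):
    """Group stream records into fully delivered transactions using the
    isLastOp marker. Records of a transaction whose last operation has
    not arrived yet are held back for the next poll."""
    txns, current = [], []
    for rec in records:
        current.append(rec)
        if rec.get("isLastOp"):
            txns.append(current)
            current = []
    return txns  # `current` would be carried over to the next poll

records = [
    {"eventId": {"commitNum": 5, "opNum": 1}, "op": "ADD"},
    {"eventId": {"commitNum": 5, "opNum": 2}, "op": "ADD", "isLastOp": True},
    {"eventId": {"commitNum": 6, "opNum": 1}, "op": "ADD"},  # incomplete transaction
]
print(len(complete_transactions(records)))  # 1
```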

For example, the following response returns Gremlin change data, for a transaction that contains more than one operation:

```
{
  "lastEventId": {
    "commitNum": 12,
    "opNum": 1
  },
  "lastTrxTimestamp": 1560011610678,
  "format": "PG_JSON",
  "records": [
    {
      "commitTimestamp": 1560011610678,
      "eventId": {
        "commitNum": 1,
        "opNum": 1
      },
      "data": {
        "id": "d2b59bf8-0d0f-218b-f68b-2aa7b0b1904a",
        "type": "vl",
        "key": "label",
        "value": {
          "value": "vertex",
          "dataType": "String"
        }
      },
      "op": "ADD"
    }
  ],
  "totalRecords": 1
}
```

The following response returns SPARQL change data for the last operation in a transaction (the operation identified by `EventId(97, 1)` in transaction number 97).

```
{
  "lastEventId": {
    "commitNum": 97,
    "opNum": 1
  },
  "lastTrxTimestamp": 1561489355102,
  "format": "NQUADS",
  "records": [
    {
      "commitTimestamp": 1561489355102,
      "eventId": {
        "commitNum": 97,
        "opNum": 1
      },
      "data": {
        "stmt": "<https://test.com/s> <https://test.com/p> <https://test.com/o> .\n"
      },
      "op": "ADD",
      "isLastOp": true
    }
  ],
  "totalRecords": 1
}
```

# Neptune Streams API Exceptions
<a name="streams-using-api-exceptions"></a>

The following table describes Neptune Streams exceptions.


| Error Code | HTTP Code | OK to Retry? | Message | 
| --- | --- | --- | --- | 
| `InvalidParameterException` | 400 | No | An invalid or out-of-range value was supplied as an input parameter. | 
| `ExpiredStreamException` | 400 | No | All of the requested records exceed the maximum age allowed and have expired. | 
| `ThrottlingException` | 500 | Yes | Rate of requests exceeds the maximum throughput. | 
| `StreamRecordsNotFoundException` | 404 | No | The requested resource could not be found. The stream may not be specified correctly. | 
| `MemoryLimitExceededException` | 500 | Yes | The request processing did not succeed due to lack of memory, but can be retried when the server is less busy. | 
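
A client can use the "OK to Retry?" column above to decide which failures to retry with backoff. The following Python sketch is illustrative only; `StreamsError` and `get_records_with_retry` are hypothetical names, and a real client would map HTTP error responses to these error codes:

```python
import time

# Error codes the table above marks as safe to retry:
RETRYABLE = {"ThrottlingException", "MemoryLimitExceededException"}

class StreamsError(Exception):
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def get_records_with_retry(fetch, max_attempts=5, base_delay=0.1):
    """Call `fetch` (a function issuing the GetRecords request), retrying
    with exponential backoff only on the retryable error codes."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except StreamsError as e:
            if e.code not in RETRYABLE or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

With this policy, a `ThrottlingException` is retried up to `max_attempts` times, while an error such as `InvalidParameterException` surfaces immediately.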

# Serialization Formats in Neptune Streams
<a name="streams-change-formats"></a>

Amazon Neptune uses two different formats for serializing graph-change data to log streams, depending on whether the graph uses the property-graph model (Gremlin and openCypher) or the RDF model (SPARQL).

Both formats share a common record serialization format, as described in [Neptune Streams API Response Format](streams-using-api-reponse.md), that contains the following fields:
+ `commitTimestamp`   –   The time at which the commit for the transaction was requested, in milliseconds from the Unix epoch.
+ `eventId`   –   The sequence identifier of the stream change record.
+ `data`   –   The serialized Gremlin, SPARQL, or openCypher change record. The serialization formats of each record are described in more detail in the next sections.
+ `op`   –   The operation that created the change.

**Topics**
+ [PG\_JSON Change Serialization Format](#streams-change-formats-gremlin)
+ [SPARQL NQUADS Change Serialization Format](#streams-change-formats-sparql)

## PG\_JSON Change Serialization Format
<a name="streams-change-formats-gremlin"></a>

**Note**  
The Gremlin stream output format (`GREMLIN_JSON`) output by the Gremlin stream endpoint (`https://Neptune-DNS:8182/gremlin/stream`) is deprecated. It is replaced by `PG_JSON`, which is currently identical to `GREMLIN_JSON`.

A Gremlin or openCypher change record, contained in the `data` field of a log stream response, has the following fields:
+ `id` – String, required.

  The ID of the Gremlin or openCypher element.
+ `type` – String, required.

  The type of this Gremlin or openCypher element. Must be one of the following:
  + `vl` – Vertex label for Gremlin; node label for openCypher.
  + `vp` – Vertex properties for Gremlin; node properties for openCypher.
  + `e` – Edge and edge label for Gremlin; relationship and relationship type for openCypher.
  + `ep` – Edge properties for Gremlin; relationship properties for openCypher.
+ `key` – String, required.

  The property name. For element labels, this is "label".
+ `value` – `value` object, required.

  This is a JSON object that contains a `value` field for the value itself, and a `dataType` field for the JSON data type of that value.

  ```
    "value": {
      "value": "the new value",
      "dataType": "the JSON datatype of the new value"
    }
  ```
+ `from` – String, optional.

  If this is an edge (type="e"), the ID of the corresponding *from* vertex or source node.
+ `to` – String, optional.

  If this is an edge (type="e"), the ID of the corresponding *to* vertex or target node.

**Gremlin Examples**
+ The following is an example of a Gremlin vertex label.

  ```
  {
    "id": "an ID string",
    "type": "vl",
    "key": "label",
    "value": {
      "value": "the new value of the vertex label",
      "dataType": "String"
    }
  }
  ```
+ The following is an example of a Gremlin vertex property.

  ```
  {
    "id": "an ID string",
    "type": "vp",
    "key": "the property name",
    "value": {
      "value": "the new value of the vertex property",
      "dataType": "the datatype of the vertex property"
    }
  }
  ```
+ The following is an example of a Gremlin edge.

  ```
  {
    "id": "an ID string",
    "type": "e",
    "key": "label",
    "value": {
      "value": "the new value of the edge",
      "dataType": "String"
    },
    "from": "the ID of the corresponding \"from\" vertex",
    "to": "the ID of the corresponding \"to\" vertex"
  }
  ```

**openCypher Examples**
+ The following is an example of an openCypher node label.

  ```
  {
    "id": "an ID string",
    "type": "vl",
    "key": "label",
    "value": {
      "value": "the new value of the node label",
      "dataType": "String"
    }
  }
  ```
+ The following is an example of an openCypher node property.

  ```
  {
    "id": "an ID string",
    "type": "vp",
    "key": "the property name",
    "value": {
      "value": "the new value of the node property",
      "dataType": "the datatype of the node property"
    }
  }
  ```
+ The following is an example of an openCypher relationship.

  ```
  {
    "id": "an ID string",
    "type": "e",
    "key": "label",
    "value": {
      "value": "the new value of the relationship",
      "dataType": "String"
    },
    "from": "the ID of the corresponding source node",
    "to": "the ID of the corresponding target node"
  }
  ```

## SPARQL NQUADS Change Serialization Format
<a name="streams-change-formats-sparql"></a>

Neptune logs changes to SPARQL quads in the graph using the Resource Description Framework (RDF) `N-QUADS` language defined in the [W3C RDF 1.1 N-Quads](https://www.w3.org/TR/n-quads/) specification.

The `data` field in the change record simply contains a `stmt` field that holds an N-QUADS statement expressing the changed quad, as in the following example.

```
  "stmt" : "<https://test.com/s> <https://test.com/p> <https://test.com/o> .\n"
```
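
For simple statements like the one above, the terms can be split on whitespace; anything beyond plain IRIs (literals containing spaces, quotes, or language tags) calls for a real N-Quads parser. A minimal, illustrative Python sketch:

```python
def parse_simple_nquad(stmt):
    """Split an N-QUADS statement of plain IRI terms into its parts.
    Handles only the simple case shown above; literals with embedded
    spaces or quotes need a real N-Quads parser."""
    terms = stmt.strip().rstrip(".").split()
    return [t.strip("<>") for t in terms]

print(parse_simple_nquad("<https://test.com/s> <https://test.com/p> <https://test.com/o> .\n"))
# ['https://test.com/s', 'https://test.com/p', 'https://test.com/o']
```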

# Neptune Streams Examples
<a name="streams-examples"></a>

The following examples show how to access change-log stream data in Amazon Neptune.

**Topics**
+ [`AT_SEQUENCE_NUMBER` Change Log](#streams-examples-at_seq)
+ [`AFTER_SEQUENCE_NUMBER` Change Log](#streams-examples-after_seq)
+ [`TRIM_HORIZON` Change Log](#streams-examples-trim)
+ [`LATEST` Change Log](#streams-examples-latest)
+ [Compression Change Log](#streams-examples-compress)

## `AT_SEQUENCE_NUMBER` Change Log
<a name="streams-examples-at_seq"></a>

The following example shows a Gremlin or openCypher `AT_SEQUENCE_NUMBER` change log.

```
curl -s "https://Neptune-DNS:8182/propertygraph/stream?limit=1&commitNum=1&opNum=1&iteratorType=AT_SEQUENCE_NUMBER" |jq
{
  "lastEventId": {
    "commitNum": 1,
    "opNum": 1
  },
  "lastTrxTimestamp": 1560011610678,
  "format": "PG_JSON",
  "records": [
    {
      "eventId": {
        "commitNum": 1,
        "opNum": 1
      },
      "commitTimestamp": 1560011610678,
      "data": {
        "id": "d2b59bf8-0d0f-218b-f68b-2aa7b0b1904a",
        "type": "vl",
        "key": "label",
        "value": {
          "value": "vertex",
          "dataType": "String"
        }
      },
      "op": "ADD",
      "isLastOp": true
    }
  ],
  "totalRecords": 1
}
```

The following example shows a SPARQL `AT_SEQUENCE_NUMBER` change log.

```
curl -s "https://localhost:8182/sparql/stream?limit=1&commitNum=1&opNum=1&iteratorType=AT_SEQUENCE_NUMBER" |jq
{
  "lastEventId": {
    "commitNum": 1,
    "opNum": 1
  },
  "lastTrxTimestamp": 1571252030566,
  "format": "NQUADS",
  "records": [
    {
      "eventId": {
        "commitNum": 1,
        "opNum": 1
      },
      "commitTimestamp": 1571252030566,
      "data": {
        "stmt": "<https://test.com/s> <https://test.com/p> <https://test.com/o> .\n"
      },
      "op": "ADD",
      "isLastOp": true
    }
  ],
  "totalRecords": 1
}
```

## `AFTER_SEQUENCE_NUMBER` Change Log
<a name="streams-examples-after_seq"></a>

The following example shows a Gremlin or openCypher `AFTER_SEQUENCE_NUMBER` change log.

```
curl -s "https://Neptune-DNS:8182/propertygraph/stream?limit=1&commitNum=1&opNum=1&iteratorType=AFTER_SEQUENCE_NUMBER" |jq
{
  "lastEventId": {
    "commitNum": 2,
    "opNum": 1
  },
  "lastTrxTimestamp": 1560011633768,
  "format": "PG_JSON",
  "records": [
    {
      "commitTimestamp": 1560011633768,
      "eventId": {
        "commitNum": 2,
        "opNum": 1
      },
      "data": {
        "id": "d2b59bf8-0d0f-218b-f68b-2aa7b0b1904a",
        "type": "vl",
        "key": "label",
        "value": {
          "value": "vertex",
          "dataType": "String"
        }
      },
      "op": "REMOVE",
      "isLastOp": true
    }
  ],
  "totalRecords": 1
}
```

## `TRIM_HORIZON` Change Log
<a name="streams-examples-trim"></a>

The following example shows a Gremlin or openCypher `TRIM_HORIZON` change log.

```
curl -s "https://Neptune-DNS:8182/propertygraph/stream?limit=1&iteratorType=TRIM_HORIZON" |jq
{
  "lastEventId": {
    "commitNum": 1,
    "opNum": 1
  },
  "lastTrxTimestamp": 1560011610678,
  "format": "PG_JSON",
  "records": [
    {
      "commitTimestamp": 1560011610678,
      "eventId": {
        "commitNum": 1,
        "opNum": 1
      },
      "data": {
        "id": "d2b59bf8-0d0f-218b-f68b-2aa7b0b1904a",
        "type": "vl",
        "key": "label",
        "value": {
          "value": "vertex",
          "dataType": "String"
        }
      },
      "op": "ADD",
      "isLastOp": true
    }
  ],
  "totalRecords": 1
}
```

## `LATEST` Change Log
<a name="streams-examples-latest"></a>

The following example shows a Gremlin or openCypher `LATEST` change log. The `limit`, `commitNum`, and `opNum` parameters are optional, and `commitNum` and `opNum` are ignored when the iterator type is `LATEST`.

```
curl -s "https://Neptune-DNS:8182/propertygraph/stream?iteratorType=LATEST" | jq
{
  "lastEventId": {
    "commitNum": 21,
    "opNum": 4
  },
  "lastTrxTimestamp": 1634710497743,
  "format": "PG_JSON",
  "records": [
    {
      "commitTimestamp": 1634710497743,
      "eventId": {
        "commitNum": 21,
        "opNum": 4
      },
      "data": {
        "id": "24be4e2b-53b9-b195-56ba-3f48fa2b60ac",
        "type": "e",
        "key": "label",
        "value": {
          "value": "created",
          "dataType": "String"
        },
        "from": "4",
        "to": "5"
      },
      "op": "REMOVE",
      "isLastOp": true
    }
  ],
  "totalRecords": 1
}
```

## Compression Change Log
<a name="streams-examples-compress"></a>

The following example shows a Gremlin or openCypher compression change log.

```
curl -s \
  -H "Accept-Encoding: gzip" \
  "https://Neptune-DNS:8182/propertygraph/stream?limit=1&commitNum=1" \
  -v | gunzip - | jq
> GET /propertygraph/stream?limit=1&commitNum=1 HTTP/1.1
> Host: localhost:8182
> User-Agent: curl/7.64.0
> Accept: */*
> Accept-Encoding: gzip
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=UTF-8
< Connection: keep-alive
< content-encoding: gzip
< content-length: 191
<
{ [191 bytes data]
Connection #0 to host localhost left intact
{
  "lastEventId": {
    "commitNum": 1,
    "opNum": 1
  },
  "lastTrxTimestamp": 1558942160603,
  "format": "PG_JSON",
  "records": [
    {
      "commitTimestamp": 1558942160603,
      "eventId": {
        "commitNum": 1,
        "opNum": 1
      },
      "data": {
        "id": "v1",
        "type": "vl",
        "key": "label",
        "value": {
          "value": "person",
          "dataType": "String"
        }
      },
      "op": "ADD",
      "isLastOp": true
    }
  ],
  "totalRecords": 1
}
```

# Using AWS CloudFormation to Set Up Neptune-to-Neptune Replication with the Streams Consumer Application
<a name="streams-consumer-setup"></a>

You can use a CloudFormation template to set up the Neptune streams consumer application to support Neptune-to-Neptune replication.

**Topics**
+ [Choose a CloudFormation template for your Region](#streams-consumer-cfn-by-region)
+ [Add details about the Neptune streams consumer stack you're creating](#streams-consumer-cfn-stack-details)
+ [Run the CloudFormation Template](#streams-consumer-cfn-complete)
+ [To update the stream poller with the latest Lambda artifacts](#streams-consumer-cfn-update)

## Choose a CloudFormation template for your Region
<a name="streams-consumer-cfn-by-region"></a>

To launch the appropriate CloudFormation stack on the CloudFormation console, choose one of the **Launch Stack** buttons in the following table, depending on the AWS Region that you want to use.


| Region | View | View in Designer | Launch | 
| --- | --- | --- | --- | 
| US East (N. Virginia) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=us-east-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| US East (Ohio) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=us-east-2&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| US West (N. California) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=us-west-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=us-west-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=us-west-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| US West (Oregon) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=us-west-2&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Canada (Central) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=ca-central-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=ca-central-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=ca-central-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| South America (São Paulo) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=sa-east-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=sa-east-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=sa-east-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Europe (Stockholm) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=eu-north-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=eu-north-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=eu-north-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Europe (Ireland) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=eu-west-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Europe (London) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=eu-west-2&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=eu-west-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=eu-west-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Europe (Paris) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=eu-west-3&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=eu-west-3#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=eu-west-3#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Europe (Frankfurt) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=eu-central-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=eu-central-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=eu-central-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Middle East (Bahrain) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=me-south-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=me-south-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=me-south-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Middle East (UAE) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=me-central-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=me-central-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=me-central-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Israel (Tel Aviv) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=il-central-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=il-central-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=il-central-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Africa (Cape Town) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=af-south-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=af-south-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=af-south-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Asia Pacific (Tokyo) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=ap-northeast-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=ap-northeast-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=ap-northeast-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Asia Pacific (Hong Kong) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=ap-east-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=ap-east-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=ap-east-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Asia Pacific (Seoul) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=ap-northeast-2&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=ap-northeast-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=ap-northeast-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Asia Pacific (Singapore) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=ap-southeast-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Asia Pacific (Sydney) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=ap-southeast-2&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=ap-southeast-2#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| Asia Pacific (Mumbai) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.aws.amazon.com/cloudformation/designer/home?region=ap-south-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.aws.amazon.com/cloudformation/home?region=ap-south-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.aws.amazon.com/cloudformation/home?region=ap-south-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| China (Beijing) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.amazonaws.cn/cloudformation/designer/home?region=cn-north-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.amazonaws.cn/cloudformation/home?region=cn-north-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.amazonaws.cn/cloudformation/home?region=cn-north-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| China (Ningxia) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.amazonaws.cn/cloudformation/designer/home?region=cn-northwest-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.amazonaws.cn/cloudformation/home?region=cn-northwest-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.amazonaws.cn/cloudformation/home?region=cn-northwest-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| AWS GovCloud (US-West) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.amazonaws-us-gov.com/cloudformation/designer/home?region=us-gov-west-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.amazonaws-us-gov.com/cloudformation/home?region=us-gov-west-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.amazonaws-us-gov.com/cloudformation/home?region=us-gov-west-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 
| AWS GovCloud (US-East) | [View](https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [View in Designer](https://console.amazonaws-us-gov.com/cloudformation/designer/home?region=us-gov-east-1&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json) | [https://console.amazonaws-us-gov.com/cloudformation/home?region=us-gov-east-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json](https://console.amazonaws-us-gov.com/cloudformation/home?region=us-gov-east-1#/stacks/new?stackName=NeptuneStreamPoller&templateURL=https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json)  | 

On the **Create Stack** page, choose **Next**.

## Add details about the Neptune streams consumer stack you're creating
<a name="streams-consumer-cfn-stack-details"></a>

The **Specify Stack Details** page provides properties and parameters that you can use to control the setup of the application:

**Stack Name**   –   The name of the new CloudFormation stack that you're creating. You can generally use the default value, `NeptuneStreamPoller`.

Under **Parameters**, provide the following:

**Network configuration for the VPC where the streams consumer runs**
+ **`VPC`**   –   Provide the name of the VPC where the polling Lambda function will run.
+ **`SubnetIDs`**   –   The subnets in which the polling function's network interfaces are created. Add the subnets that correspond to your Neptune cluster.
+ **`SecurityGroupIds`**   –   Provide the IDs of security groups that grant inbound access to your source Neptune DB cluster.
+ **`RouteTableIds`**   –   This is needed to create an Amazon DynamoDB endpoint in your Neptune VPC, if you do not already have one. Provide a comma-separated list of the route table IDs associated with the subnets.
+ **`CreateDDBVPCEndPoint`**   –   A Boolean value that defaults to `true`, indicating whether it is necessary to create a DynamoDB VPC endpoint. Change it to `false` only if you have already created a DynamoDB endpoint in your VPC.
+ **`CreateMonitoringEndPoint`**   –   A Boolean value that defaults to `true`, indicating whether it is necessary to create a monitoring VPC endpoint. Change it to `false` only if you have already created a monitoring endpoint in your VPC.

**Stream Poller**
+ **`ApplicationName`**   –   You can generally leave this set to the default (`NeptuneStream`). If you use a different name, it must be unique.
+ **`LambdaMemorySize`**   –   Used to set the memory size available to the Lambda poller function. The default value is 2,048 megabytes.
+ **`LambdaRuntime`**   –   The language used in the Lambda function that retrieves items from the Neptune stream. You can set this either to `python3.9` or to `java8`.
+ **`LambdaS3Bucket`**   –   The Amazon S3 bucket that contains Lambda code artifacts. Leave this blank unless you are using a custom Lambda polling function that loads from a different Amazon S3 bucket.
+ **`LambdaS3Key`**   –   The Amazon S3 key that corresponds to your Lambda code artifacts. Leave this blank unless you are using a custom Lambda polling function.
+ **`LambdaLoggingLevel`**   –   In general, leave this set to the default value, which is `INFO`.
+ **`ManagedPolicies`**   –   Lists the managed policies to use for execution of your Lambda function. In general, leave this blank unless you are using a custom Lambda polling function.
+ **`StreamRecordsHandler`**   –   In general, leave this blank unless you are using a custom handler for the records in Neptune streams.
+ **`StreamRecordsBatchSize`**   –   The maximum number of records to fetch from the stream. You can use this parameter to tune performance. The default (`5000`) is a good place to start; the maximum allowed value is 10,000. The higher the number, the fewer network calls are needed to read records from the stream, but the more memory is required to process them. Lower values of this parameter result in lower throughput.
+ **`MaxPollingWaitTime`**   –   The maximum wait time between two polls (in seconds). Determines how frequently the Lambda poller is invoked to poll the Neptune streams. Set this value to 0 for continuous polling. The maximum value is 3,600 seconds (1 hour). The default value (60 seconds) is a good place to start, depending on how fast your graph data changes.
+ **`MaxPollingInterval`**   –   The maximum continuous polling period (in seconds). Use this to set a timeout for the Lambda polling function. The value should be in the range between 5 seconds and 900 seconds. The default value (600 seconds) is a good place to start.
+ **`StepFunctionFallbackPeriod`**   –   The number of time units (see `StepFunctionFallbackPeriodUnit`) to wait for the poller, after which the step function is invoked through Amazon CloudWatch Events to recover from a failure. The default (5 minutes) is a good place to start.
+ **`StepFunctionFallbackPeriodUnit`**   –   The time units used to measure the preceding `StepFunctionFallbackPeriod` (`minutes`, `hours`, or `days`). The default (`minutes`) is generally sufficient.
+ **`StartingCheckpoint`**   –   The starting checkpoint for the stream poller. The default is `0:0`, which signifies starting from the beginning of the Neptune stream. 
+ **`StreamPollerInitialState`**   –   The initial state of the poller. The default is `ENABLED`, which means that the stream replication will start as soon as the entire stack creation is complete. 

**Neptune stream**
+ **`NeptuneStreamEndpoint`**   –   (*Required*) The endpoint of the Neptune source stream. This takes one of two forms:
  + **`https://your DB cluster:port/propertygraph/stream`** (or its alias, `https://your DB cluster:port/pg/stream`).
  + **`https://your DB cluster:port/sparql/stream`**.
+ **`Neptune Query Engine`**   –   Choose Gremlin, openCypher, or SPARQL.
+ **`IAMAuthEnabledOnSourceStream`**   –   If your Neptune DB cluster is using IAM authentication, set this parameter to `true`.
+ **`StreamDBClusterResourceId`**   –   If your Neptune DB cluster is using IAM authentication, set this parameter to the cluster resource ID. The resource ID is not the same as the cluster ID. Instead, it takes the form `cluster-` followed by 28 alphanumeric characters. It can be found under **Cluster Details** in the Neptune console.
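
To verify the value you plan to use for `NeptuneStreamEndpoint`, you can fetch a few records from it directly. The following is a minimal sketch, assuming a hypothetical cluster endpoint on the default port 8182 (the `iteratorType` and `limit` query parameters are described in [Calling the Streams API](streams-using-api-call.md)):

```shell
# Hypothetical endpoint; substitute your own cluster's writer or reader endpoint.
NEPTUNE="your-cluster.cluster-abcdefghijkl.us-east-1.neptune.amazonaws.com:8182"

# Read up to 10 change records, starting from the oldest available record.
curl -s "https://${NEPTUNE}/propertygraph/stream?iteratorType=TRIM_HORIZON&limit=10"
```

If IAM authentication is enabled on the source cluster, the request must also be signed with Signature Version 4.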

**Target Neptune DB cluster**
+ **`TargetNeptuneClusterEndpoint`**   –   The cluster endpoint (hostname only) of the target backup cluster.

  Note that if you specify `TargetNeptuneClusterEndpoint`, you cannot also specify `TargetSPARQLUpdateEndpoint`.
+ **`TargetNeptuneClusterPort`**   –   The port number for the target cluster.

  Note that if you specify `TargetSPARQLUpdateEndpoint`, the setting for `TargetNeptuneClusterPort` is ignored.
+ **`IAMAuthEnabledOnTargetCluster`**   –   Set this to `true` if IAM authentication is to be enabled on the target cluster.
+ **`TargetAWSRegion`**   –   The target backup cluster's AWS region (such as `us-east-1`). You must provide this parameter when the AWS region of the target backup cluster differs from that of the Neptune source cluster, as in the case of cross-region replication. If the source and target regions are the same, this parameter is optional.

  Note that if the `TargetAWSRegion` value is not a [valid AWS region that Neptune supports](limits.md#limits-regions), the process fails.
+ **`TargetNeptuneDBClusterResourceId`**   –   *Optional*: this is only needed when IAM authentication is enabled on the target DB cluster. Set to the resource ID of the target cluster.
+ **`SPARQLTripleOnlyMode`**   –   Boolean flag that determines whether triple-only mode is enabled. In triple-only mode, there is no named-graph replication. The default value is `false`.
+ **`TargetSPARQLUpdateEndpoint`**   –   URL of the target endpoint for SPARQL update, such as `https://abc.com/xyz`. This endpoint can be any SPARQL store that supports quads or triples.

  Note that if you specify `TargetSPARQLUpdateEndpoint`, you cannot also specify `TargetNeptuneClusterEndpoint`, and the setting of `TargetNeptuneClusterPort` is ignored.
+ **`BlockSparqlReplicationOnBlankNode`**   –   Boolean flag that, if set to `true`, stops replication of blank nodes in SPARQL (RDF) data. The default value is `false`.

**Alarm**
+ **`Required to create CloudWatch Alarm`**   –   Set this to `true` if you want to create a CloudWatch alarm for the new stack.
+ **`SNS Topic ARN for CloudWatch Alarm Notifications`**   –   The SNS topic ARN to which CloudWatch alarm notifications should be sent (needed only if alarms are enabled).
+ **`Email for Alarm Notifications`**   –   The email address to which alarm notifications should be sent (needed only if alarms are enabled).

As the destination for alarm notifications, you can specify SNS only, email only, or both SNS and email.
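
If you prefer the AWS CLI to the console, the same stack can be launched with `aws cloudformation create-stack`; each parameter above becomes a `ParameterKey`/`ParameterValue` pair. The following sketch uses placeholder resource IDs and a placeholder stream endpoint (substitute your own):

```shell
aws cloudformation create-stack \
    --stack-name NeptuneStreamPoller \
    --template-url https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json \
    --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND \
    --parameters \
        ParameterKey=VPC,ParameterValue=vpc-0123456789abcdef0 \
        ParameterKey=SubnetIDs,ParameterValue=subnet-0123456789abcdef0 \
        ParameterKey=SecurityGroupIds,ParameterValue=sg-0123456789abcdef0 \
        ParameterKey=RouteTableIds,ParameterValue=rtb-0123456789abcdef0 \
        ParameterKey=NeptuneStreamEndpoint,ParameterValue="https://your-cluster:8182/propertygraph/stream"
```

Parameters not listed take the template defaults, just as on the **Specify Stack Details** page.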

## Run the CloudFormation Template
<a name="streams-consumer-cfn-complete"></a>

Now you can complete the process of provisioning a Neptune streams consumer application instance as follows:

1. In CloudFormation, on the **Specify Stack Details** page, choose **Next**.

1. On the **Options** page, choose **Next**.

1. On the **Review** page, select the first check box to acknowledge that CloudFormation will create IAM resources. Select the second check box to acknowledge `CAPABILITY_AUTO_EXPAND` for the new stack. 
**Note**  
`CAPABILITY_AUTO_EXPAND` explicitly acknowledges that macros will be expanded when creating the stack, without prior review. Users often create a change set from a processed template so that the changes made by macros can be reviewed before actually creating the stack. For more information, see the CloudFormation [CreateStack](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html) API in the *AWS CloudFormation API Reference*.

   Then choose **Create**.

## To update the stream poller with the latest Lambda artifacts
<a name="streams-consumer-cfn-update"></a>

You can update the stream poller with the latest Lambda code artifacts as follows:

1. In the AWS Management Console, navigate to CloudFormation and select the main parent CloudFormation stack.

1. Select the **Update** option for the stack.

1. Select **Replace current template**.

1. For the template source, choose **Amazon S3 URL** and enter the following S3 URL:

   ```
   https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json
   ```

1. Select **Next** without changing any CloudFormation parameters.

1. Choose **Update Stack**.

The stack will now update the Lambda artifacts with the most recent ones.
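
The same update can be scripted with the AWS CLI. In the following sketch, `UsePreviousValue=true` corresponds to step 5 above (leaving the parameters unchanged) and must be repeated for every parameter the template defines:

```shell
# Repeat ParameterKey=<name>,UsePreviousValue=true for every template parameter.
aws cloudformation update-stack \
    --stack-name NeptuneStreamPoller \
    --template-url https://aws-neptune-customer-samples.s3.amazonaws.com/neptune-stream/neptune_to_neptune.json \
    --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND \
    --parameters \
        ParameterKey=VPC,UsePreviousValue=true \
        ParameterKey=NeptuneStreamEndpoint,UsePreviousValue=true
```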

# Using Neptune streams cross-region replication for disaster recovery
<a name="streams-disaster-recovery"></a>

Neptune provides two ways of implementing cross-region failover capabilities:
+ Cross-region snapshot copy and restore
+ Using Neptune streams to replicate data between two clusters in two different regions.

Cross-region snapshot copy and restore has the lowest operational overhead for recovering a Neptune cluster in a different region. However, copying a snapshot between regions can require significant data-transfer time, because a snapshot is a full backup of the Neptune cluster. As a result, cross-region snapshot copy and restore is suitable only for scenarios that can tolerate a Recovery Point Objective (RPO) of hours and a Recovery Time Objective (RTO) of hours.

A Recovery Point Objective (RPO) is determined by the time between backups. It defines how much data can be lost between the time the last backup was made and the time at which the database is recovered.

A Recovery Time Objective (RTO) is measured by the time it takes to perform a recovery operation. This is the time it takes the DB cluster to fail over to a recovered database after a failure occurs.

Neptune streams provides a way to keep a backup Neptune cluster in sync with the primary production cluster at all times. If a failure occurs, your database then fails over to the backup cluster. This reduces RPO and RTO to minutes, since data is constantly being copied to the backup cluster, which is immediately available as a failover target at any time.

The drawback of using Neptune streams in this way is that both the operational overhead required to maintain the replication components and the cost of keeping a second Neptune DB cluster online at all times can be significant.

# Setting up Neptune-to-Neptune replication
<a name="streams-disaster-recovery-setup"></a>

Your primary production DB cluster resides in a VPC in a given source region. There are three main things that you need to replicate or emulate in a different, recovery region for the purposes of disaster recovery:
+ The data stored in the cluster.
+ The configuration of the primary cluster. This includes whether it uses IAM authentication, whether it is encrypted, its DB cluster parameters, its instance parameters, instance sizes, and so forth.
+ The networking topology it uses, including the target VPC, its security groups, and so forth.

You can use Neptune management APIs such as the following to gather that information:
+ [`DescribeDBClusters`](api-clusters.md#DescribeDBClusters)
+ [`DescribeDBInstances`](api-instances.md#DescribeDBInstances)
+ [`DescribeDBClusterParameters`](api-parameters.md#DescribeDBClusterParameters)
+ [`DescribeDBParameters`](api-parameters.md#DescribeDBParameters)
+ [`DescribeVpcs`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html)

With the information you gather, you can use the following procedure to set up a backup cluster in a different region, to which your production cluster can fail over in the event of a failure.

## Enable Neptune streams
<a name="streams-disaster-recovery-setup-enable-streams"></a>

You can use the [`ModifyDBClusterParameterGroup`](api-parameters.md#ModifyDBClusterParameterGroup) API to set the `neptune_streams` parameter to 1. Then, reboot all the instances in the DB cluster so that the change takes effect.
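
Both steps can be performed with the AWS CLI; the parameter group and instance names below are placeholders:

```shell
# Enable change-log streams on the cluster's parameter group.
aws neptune modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-cluster-params \
    --parameters ParameterName=neptune_streams,ParameterValue=1,ApplyMethod=pending-reboot

# Reboot every instance in the DB cluster so the change takes effect
# (repeat for each instance in the cluster).
aws neptune reboot-db-instance --db-instance-identifier my-writer-instance
```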

It's a good idea to perform at least one add or update operation on the source DB cluster after Neptune streams has been enabled. This populates the change stream with data points that can be referenced later when re-syncing the production cluster with the backup cluster.
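
For a property-graph cluster, a single Gremlin write over the HTTP endpoint is enough to seed the stream. The following is a sketch with a hypothetical endpoint and vertex label:

```shell
# Add one marker vertex; this produces the first entries in the change stream.
curl -s -X POST "https://your-cluster:8182/gremlin" \
    -d '{"gremlin": "g.addV(\"StreamMarker\").property(\"purpose\", \"seed-change-log\")"}'
```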

## Create a new VPC in the region where you want to set up your backup cluster
<a name="streams-disaster-recovery-setup-new-vpc"></a>

Before creating a new Neptune DB cluster in a different region from your primary cluster, you need to establish a new VPC in the target region to host the cluster. Connectivity between the primary and backup clusters is established through VPC peering, which routes traffic across private subnets in different VPCs. However, to establish VPC peering between two VPCs, they must not have overlapping CIDR blocks or IP address spaces. This means that you can't simply use the default VPC in both regions, because the CIDR block for a default VPC is always the same (`172.31.0.0/16`).

You can use an existing VPC in the target region as long as it meets the following conditions:
+ It does not have a CIDR block that overlaps with the CIDR block of the VPC where your primary cluster is located.
+ It is not already peered with another VPC that has the same CIDR block as the VPC where your primary cluster is located.

If there is no suitable VPC available in the target region, create one using the Amazon EC2 [`CreateVpc`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateVpc.html) API.
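
For example, if your primary VPC uses the default `172.31.0.0/16` block, the backup VPC must use a non-overlapping one. The CIDR block and region in this sketch are illustrative:

```shell
# Create a VPC in the recovery region whose CIDR block does not
# overlap the 172.31.0.0/16 block used by the primary VPC.
aws ec2 create-vpc --cidr-block 10.1.0.0/16 --region us-west-2
```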

## Create a snapshot of your primary cluster and restore it to the target backup region
<a name="streams-disaster-recovery-setup-snapshot-restore"></a>

Now you create a new Neptune cluster in an appropriate VPC in the target backup region that is a copy of your production cluster:

**Make a copy of your production cluster in the backup region**

1. In your target backup region, re-create the parameters and parameter groups used by your production DB cluster. You can do this using [`CreateDBClusterParameterGroup`](api-parameters.md#CreateDBClusterParameterGroup), [`CreateDBParameterGroup`](api-parameters.md#CreateDBParameterGroup), [`ModifyDBClusterParameterGroup`](api-parameters.md#ModifyDBClusterParameterGroup) and [`ModifyDBParameterGroup`](api-parameters.md#ModifyDBParameterGroup).

   Note that the [`CopyDBClusterParameterGroup`](api-parameters.md#CopyDBClusterParameterGroup) and [`CopyDBParameterGroup`](api-parameters.md#CopyDBParameterGroup) APIs do not currently support cross-region copying.

1. Use [`CreateDBClusterSnapshot`](api-snapshots.md#CreateDBClusterSnapshot) to create a snapshot of your production cluster in the VPC in your production region.

1. Use [`CopyDBClusterSnapshot`](api-snapshots.md#CopyDBClusterSnapshot) to copy the snapshot to the VPC in your target backup region.

1. Use [`RestoreDBClusterFromSnapshot`](api-snapshots.md#RestoreDBClusterFromSnapshot) to create a new DB cluster in the VPC in your target backup region using the copied snapshot. Use the configuration settings and parameters that you copied from your primary production cluster.

1. The new Neptune cluster now exists but doesn't contain any instances. Use [`CreateDBInstance`](api-instances.md#CreateDBInstance) to create a new primary/writer instance that has the same instance type and size as your production cluster's writer instance. There's no need to create additional read-replicas at this point unless your backup instance will be used to service read I/O in the target region prior to a failover.
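The snapshot, cross-region copy, restore, and instance-creation steps above can be sketched with boto3 as follows. The cluster identifiers, account ID, regions, instance class, and KMS key alias are all placeholders; the AWS calls are left commented out because they require credentials and a live cluster. The ARN helper reflects the fact that Neptune shares the RDS ARN namespace:

```python
def cluster_snapshot_arn(region: str, account_id: str, snapshot_id: str) -> str:
    """Build the source-snapshot ARN that CopyDBClusterSnapshot needs
    when copying across regions (Neptune uses the RDS ARN namespace)."""
    return f"arn:aws:rds:{region}:{account_id}:cluster-snapshot:{snapshot_id}"

# Hedged sketch of the copy/restore sequence (placeholder names):
# import boto3
# src = boto3.client("neptune", region_name="us-east-1")   # production region
# dst = boto3.client("neptune", region_name="us-west-2")   # backup region
# src.create_db_cluster_snapshot(
#     DBClusterIdentifier="prod-cluster",
#     DBClusterSnapshotIdentifier="prod-dr-snapshot")
# dst.copy_db_cluster_snapshot(
#     SourceDBClusterSnapshotIdentifier=cluster_snapshot_arn(
#         "us-east-1", "123456789012", "prod-dr-snapshot"),
#     TargetDBClusterSnapshotIdentifier="prod-dr-snapshot-copy",
#     KmsKeyId="alias/target-region-key")  # needed if encrypted at rest
# dst.restore_db_cluster_from_snapshot(
#     DBClusterIdentifier="dr-cluster",
#     SnapshotIdentifier="prod-dr-snapshot-copy",
#     Engine="neptune")
# # The restored cluster has no instances yet; create a writer that
# # matches the production writer's instance class:
# dst.create_db_instance(
#     DBInstanceIdentifier="dr-writer",
#     DBInstanceClass="db.r5.xlarge",
#     Engine="neptune",
#     DBClusterIdentifier="dr-cluster")

print(cluster_snapshot_arn("us-east-1", "123456789012", "prod-dr-snapshot"))
```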

## Establish VPC peering between your primary cluster's VPC and your new backup cluster's VPC
<a name="streams-disaster-recovery-setup-vpc-peering"></a>

By setting up VPC peering, you enable your primary cluster's VPC to communicate with your backup cluster's VPC as if they were a single private network. To do this, take the following steps:

1. From your production cluster's VPC, call the [`CreateVpcPeeringConnection`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateVpcPeeringConnection.html) API to establish the peering connection.

1. From your target backup cluster's VPC, call the [`AcceptVpcPeeringConnection`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AcceptVpcPeeringConnection.html) API to accept the peering connection.

1. From your production cluster's VPC, use the [`CreateRoute`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateRoute.html) API to add a route to the VPC's route table that sends all traffic destined for the target VPC's CIDR block over the peering connection.

1. Similarly, from your target backup cluster's VPC, use the [`CreateRoute`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateRoute.html) API to add a route to the VPC's route table that sends traffic destined for the primary cluster's VPC over the peering connection.
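The four peering steps above can be sketched with boto3 as shown below. The VPC IDs, route-table IDs, regions, and CIDR blocks are all placeholder values, and the AWS calls are commented out because they require credentials; the small helper builds the route parameters that `create_route` expects:

```python
def peering_route(destination_cidr: str, peering_connection_id: str) -> dict:
    """Parameters for ec2.create_route() that send traffic for the
    peer VPC's CIDR block over the peering connection."""
    return {
        "DestinationCidrBlock": destination_cidr,
        "VpcPeeringConnectionId": peering_connection_id,
    }

# Hedged sketch (placeholder IDs; requires credentials to run):
# import boto3
# src = boto3.client("ec2", region_name="us-east-1")   # production region
# dst = boto3.client("ec2", region_name="us-west-2")   # backup region
# pcx = src.create_vpc_peering_connection(
#     VpcId="vpc-primary0123",
#     PeerVpcId="vpc-backup0456",
#     PeerRegion="us-west-2")["VpcPeeringConnection"]
# pcx_id = pcx["VpcPeeringConnectionId"]
# dst.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)
# # Route each VPC's traffic for the other's CIDR over the connection:
# src.create_route(RouteTableId="rtb-primary0123",
#                  **peering_route("10.1.0.0/16", pcx_id))
# dst.create_route(RouteTableId="rtb-backup0456",
#                  **peering_route("10.0.0.0/16", pcx_id))

print(peering_route("10.1.0.0/16", "pcx-0123456789abcdef0"))
```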

## Set up the Neptune streams replication infrastructure
<a name="streams-disaster-recovery-setup-streams-replication"></a>

Now that both clusters are deployed and network communication between both regions has been established, use the [Neptune-to-Neptune CloudFormation template](streams-consumer-setup.md) to deploy the Neptune streams consumer Lambda function with the additional infrastructure that supports data replication. Do this in your primary production cluster's VPC.

The parameters that you will need to provide for this CloudFormation stack are:
+ **`NeptuneStreamEndpoint`**   –   The stream endpoint for the primary cluster, in URL format. For example: `https://(cluster name):8182/pg/stream`.
+ **`QueryEngine`**   –   This must be either `gremlin`, `sparql`, or `openCypher`.
+ **`RouteTableIds`**   –   Lets you add routes for both a DynamoDB VPC Endpoint and a monitoring VPC Endpoint.

  Two additional parameters, `CreateMonitoringEndpoint` and `CreateDynamoDBEndpoint`, must be set to `true` if the corresponding VPC endpoints do not already exist in the primary cluster's VPC. If those endpoints do already exist, make sure these parameters are set to `false`, or the CloudFormation stack creation will fail.
+ **`SecurityGroupIds`**   –   Specifies the security group used by the Lambda consumer to communicate with the primary cluster's Neptune stream endpoint.

  In the target backup cluster, attach a security group that allows traffic originating from this security group.
+ **`SubnetIds`**   –   A list of subnet IDs in the primary cluster's VPC that can be used by the Lambda consumer to communicate with the primary cluster.
+ **`TargetNeptuneClusterEndpoint`**   –   The cluster endpoint (hostname only) of the target backup cluster.
+ **`TargetAWSRegion`**   –   The target backup cluster's AWS Region, such as `us-east-1`. This parameter is required only when the target backup cluster is in a different region from the Neptune source cluster, as in the case of cross-region replication. If the source and target regions are the same, it is optional.

  Note that if the `TargetAWSRegion` value is not a [valid AWS region that Neptune supports](limits.md#limits-regions), the process fails.
+ **`VPC`**   –   The ID of the primary cluster's VPC.

All other parameters can be left with their default values.
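One way to assemble these stack parameters is with boto3's CloudFormation client. In this hedged sketch the stack name, template URL, endpoints, and resource IDs are all placeholders (the template URL in particular is hypothetical, not the real location of the Neptune-to-Neptune template); the helper simply maps the parameter names listed above into the `Parameters` shape that `create_stack` expects:

```python
def replication_stack_parameters(
    stream_endpoint: str,
    query_engine: str,
    target_cluster_endpoint: str,
    target_region: str,
    subnet_ids: list,
    security_group_ids: list,
    route_table_ids: list,
) -> list:
    """Build the CloudFormation Parameters list for the
    Neptune-to-Neptune replication stack."""
    values = {
        "NeptuneStreamEndpoint": stream_endpoint,
        "QueryEngine": query_engine,
        "TargetNeptuneClusterEndpoint": target_cluster_endpoint,
        "TargetAWSRegion": target_region,
        # List-typed template parameters are passed as comma-separated strings:
        "SubnetIds": ",".join(subnet_ids),
        "SecurityGroupIds": ",".join(security_group_ids),
        "RouteTableIds": ",".join(route_table_ids),
    }
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in values.items()]

# Hedged invocation (placeholder names and template URL):
# import boto3
# cfn = boto3.client("cloudformation", region_name="us-east-1")
# cfn.create_stack(
#     StackName="neptune-dr-replication",
#     TemplateURL="https://s3.amazonaws.com/example-bucket/neptune-to-neptune.yaml",
#     Capabilities=["CAPABILITY_IAM"],
#     Parameters=replication_stack_parameters(
#         "https://prod-cluster.cluster-abc.us-east-1.neptune.amazonaws.com:8182/pg/stream",
#         "gremlin",
#         "dr-cluster.cluster-def.us-west-2.neptune.amazonaws.com",
#         "us-west-2",
#         ["subnet-aaa", "subnet-bbb"],
#         ["sg-ccc"],
#         ["rtb-ddd"]))
```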

Once the CloudFormation stack has been deployed, the Lambda consumer begins replicating changes from the primary cluster to the backup cluster. You can monitor this replication in the CloudWatch logs generated by the Lambda consumer function.

## Other considerations
<a name="streams-disaster-recovery-setup-other"></a>
+ If you need to use IAM authentication between the primary and backup clusters, you can also set it up when you invoke the CloudFormation template.
+ If encryption at rest is enabled on your primary cluster, note that KMS keys are region-specific: when you copy the snapshot to the target region, you must associate a new KMS key located in that region.
+ A best practice is to use DNS CNAMEs in front of the Neptune endpoints used in your applications. Then, if you need to manually fail over to the target backup cluster, you can change these CNAMEs to point to the target cluster and/or instance endpoints.
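If those CNAMEs live in Route 53, the failover flip can be scripted. In this hedged sketch the hosted-zone ID, record name, and cluster endpoints are placeholders; the helper builds the change batch that `change_resource_record_sets` expects, and a short TTL keeps client caches from holding the old endpoint for long:

```python
def cname_upsert(record_name: str, target_endpoint: str, ttl: int = 60) -> dict:
    """Route 53 change batch that repoints an application-facing CNAME
    at a new Neptune endpoint during a manual failover."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "TTL": ttl,  # short TTL so clients pick up the change quickly
                "ResourceRecords": [{"Value": target_endpoint}],
            },
        }]
    }

# Hedged invocation (placeholder hosted-zone ID and endpoints):
# import boto3
# r53 = boto3.client("route53")
# r53.change_resource_record_sets(
#     HostedZoneId="Z0123456789ABCDEF",
#     ChangeBatch=cname_upsert(
#         "neptune.example.com",
#         "dr-cluster.cluster-def.us-west-2.neptune.amazonaws.com"))

print(cname_upsert("neptune.example.com", "dr-cluster.example.com"))
```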