

End of support notice: On October 7th, 2026, AWS will discontinue support for AWS IoT Greengrass Version 1. After October 7th, 2026, you will no longer be able to access the AWS IoT Greengrass V1 resources. For more information, see [Migrate from AWS IoT Greengrass Version 1](https://docs.aws.amazon.com/greengrass/v2/developerguide/migrate-from-v1.html).

# Integrate with services and protocols using Greengrass connectors
<a name="connectors"></a>

This feature is available for AWS IoT Greengrass Core v1.7 and later.

Connectors in AWS IoT Greengrass are prebuilt modules that make it more efficient to interact with local infrastructure, device protocols, AWS, and other cloud services. By using connectors, you can spend less time learning new protocols and APIs and more time focusing on the logic that matters to your business.

The following diagram shows where connectors can fit into the AWS IoT Greengrass landscape.

![\[Connectors connect to devices, services, and local resources.\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/images/connectors/connectors-arch.png)


Many connectors use MQTT messages to communicate with client devices and Greengrass Lambda functions in the group, or with AWS IoT and the local shadow service. In the following example, the Twilio Notifications connector receives MQTT messages from a user-defined Lambda function, uses a locally stored secret from AWS Secrets Manager, and calls the Twilio API.

![\[A connector receiving an MQTT message from a Lambda function and calling a service.\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/images/connectors/twilio-solution.png)


For tutorials that create this solution, see [Getting started with Greengrass connectors (console)](connectors-console.md) and [Getting started with Greengrass connectors (CLI)](connectors-cli.md).

Greengrass connectors can help you extend device capabilities or create single-purpose devices. By using connectors, you can:
+ Implement reusable business logic.
+ Interact with cloud and local services, including AWS and third-party services.
+ Ingest and process device data.
+ Enable device-to-device calls using MQTT topic subscriptions and user-defined Lambda functions.

AWS provides a set of Greengrass connectors that simplify interactions with common services and data sources. These prebuilt modules enable scenarios for logging and diagnostics, replenishment, industrial data processing, and alarm and messaging. For more information, see [AWS-provided Greengrass connectors](connectors-list.md).

## Requirements
<a name="connectors-reqs"></a>

To use connectors, keep these points in mind:
+ Each connector that you use has requirements that you must meet. These requirements might include the minimum AWS IoT Greengrass Core software version, device prerequisites, required permissions, and limits. For more information, see [AWS-provided Greengrass connectors](connectors-list.md).
+ A Greengrass group can contain only one configured instance of a given connector. However, you can use the instance in multiple subscriptions. For more information, see [Configuration parameters](#connectors-parameters).
+ When the [default containerization](lambda-group-config.md#lambda-containerization-groupsettings) for the Greengrass group is set to **No container**, the connectors in the group must run without containerization. To find connectors that support **No container** mode, see [AWS-provided Greengrass connectors](connectors-list.md).

## Using Greengrass connectors
<a name="use-applications"></a>

A connector is a type of group component. Like other group components, such as client devices and user-defined Lambda functions, you add connectors to groups, configure their settings, and deploy them to the AWS IoT Greengrass core. Connectors run in the core environment.

You can deploy some connectors as simple standalone applications. For example, the Device Defender connector reads system metrics from the core device and sends them to AWS IoT Device Defender for analysis.

You can add other connectors as building blocks in larger solutions. The following example solution uses the Modbus-RTU Protocol Adapter connector to process messages from sensors and the Twilio Notifications connector to initiate Twilio messages.

![\[Data flow from Lambda function to Modbus-RTU Protocol Adapter connector to Lambda function to Twilio Notifications connector to Twilio.\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/images/connectors/modbus-twilio-solution.png)


Solutions often include user-defined Lambda functions that sit next to connectors and process the data that the connector sends or receives. In this example, the TempMonitor function receives data from Modbus-RTU Protocol Adapter, runs some business logic, and then sends data to Twilio Notifications.

To create and deploy a solution, you follow this general process:

1. Map out the high-level data flow. Identify the data sources, data channels, services, protocols, and resources that you need to work with. In the example solution, this includes data over the Modbus RTU protocol, the physical Modbus serial port, and Twilio.

1. Identify the connectors to include in the solution, and add them to your group. The example solution uses Modbus-RTU Protocol Adapter and Twilio Notifications. To help you find connectors that apply to your scenario, and to learn about their individual requirements, see [AWS-provided Greengrass connectors](connectors-list.md).

1. Identify whether user-defined Lambda functions, client devices, or resources are needed, and then create and add them to the group. This might include functions that contain business logic or process data into a format required by another entity in the solution. The example solution uses functions to send Modbus RTU requests and initiate Twilio notifications. It also includes a local device resource for the Modbus RTU serial port and a secret resource for the Twilio authentication token.
**Note**  
Secret resources reference passwords, tokens, and other secrets from AWS Secrets Manager. Secrets can be used by connectors and Lambda functions to authenticate with services and applications. By default, AWS IoT Greengrass can access secrets with names that start with "*greengrass-*". For more information, see [Deploy secrets to the AWS IoT Greengrass core](secrets.md).

1. Create subscriptions that allow the entities in the solution to exchange MQTT messages. If a connector is used in a subscription, the connector and the message source or target must use the predefined topic syntax supported by the connector. For more information, see [Inputs and outputs](#connectors-inputs-outputs).

1. Deploy the group to the Greengrass core.
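As an illustration of the subscription step above, a subscription is essentially a source/subject/target triple. The following sketch builds an `InitialVersion` payload in the shape expected by the Greengrass `CreateSubscriptionDefinition` API; the IDs, ARNs, and topic are hypothetical placeholders, so substitute your own values.

```python
import json

# Hypothetical IDs, ARNs, and topic for illustration only.
initial_version = {
    "Subscriptions": [
        {
            "Id": "TempMonitorToTwilio",
            "Source": "arn:aws:lambda:us-west-2:123456789012:function:TempMonitor:1",
            # The subject must use the predefined topic syntax that the
            # target connector subscribes to.
            "Subject": "twilio/txt",
            "Target": "arn:aws:greengrass:us-west-2::/connectors/TwilioNotifications/versions/4",
        }
    ]
}
print(json.dumps(initial_version, indent=4))
```

You could pass a payload like this to `aws greengrass create-subscription-definition --initial-version '...'` or to the equivalent SDK call.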

For information about creating and deploying a connector, see the following tutorials:
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)

## Configuration parameters
<a name="connectors-parameters"></a>

Many connectors provide parameters that let you customize the behavior or output. These parameters are used during initialization, at runtime, or at other times in the connector lifecycle.

Parameter types and usage vary by connector. For example, the SNS connector has a parameter that configures the default SNS topic, and Device Defender has a parameter that configures the data sampling rate.

A group version can contain multiple connectors, but only one instance of a given connector at a time. This means that each connector in the group can have only one active configuration. However, the connector instance can be used in multiple subscriptions in the group. For example, you can create subscriptions that allow many devices to send data to the Kinesis Firehose connector.

### Parameters used to access group resources
<a name="connectors-parameters-resources"></a>

Greengrass connectors use group resources to access the file system, ports, peripherals, and other local resources on the core device. If a connector requires access to a group resource, then it provides related configuration parameters.

Group resources include:
+ [Local resources](access-local-resources.md). Directories, files, ports, pins, and peripherals that are present on the Greengrass core device.
+ [Machine learning resources](ml-inference.md). Machine learning models that are trained in the cloud and deployed to the core for local inference.
+ [Secret resources](secrets.md). Local, encrypted copies of passwords, keys, tokens, or arbitrary text from AWS Secrets Manager. Connectors can securely access these local secrets and use them to authenticate to services or local infrastructure.

For example, parameters for Device Defender enable access to system metrics in the host `/proc` directory, and parameters for Twilio Notifications enable access to a locally stored Twilio authentication token.

### Updating connector parameters
<a name="update-application-parameters-"></a>

Parameters are configured when the connector is added to a Greengrass group. You can change parameter values after the connector is added.
+ In the console: From the group configuration page, open **Connectors**, and from the connector's contextual menu, choose **Edit**.
**Note**  
If the connector uses a secret resource that's later changed to reference a different secret, you must edit the connector's parameters and confirm the change.
+ In the API: Create another version of the connector that defines the new configuration.

  The AWS IoT Greengrass API uses versions to manage groups. Versions are immutable, so to add or change group components—for example, the group's client devices, functions, and resources—you must create versions of new or updated components. Then, you create and deploy a group version that contains the target version of each component.

After you make changes to the connector configuration, you must deploy the group to propagate the changes to the core.

## Inputs and outputs
<a name="connectors-inputs-outputs"></a>

Many Greengrass connectors can communicate with other entities by sending and receiving MQTT messages. MQTT communication is controlled by subscriptions that allow a connector to exchange data with Lambda functions, client devices, and other connectors in the Greengrass group, or with AWS IoT and the local shadow service. To allow this communication, you must create subscriptions in the group that the connector belongs to. For more information, see [Managed subscriptions in the MQTT messaging workflow](gg-sec.md#gg-msg-workflow).

Connectors can be message publishers, message subscribers, or both. Each connector defines the MQTT topics that it publishes or subscribes to. These predefined topics must be used in the subscriptions where the connector is a message source or message target. For tutorials that include steps for configuring subscriptions for a connector, see [Getting started with Greengrass connectors (console)](connectors-console.md) and [Getting started with Greengrass connectors (CLI)](connectors-cli.md).

**Note**  
Many connectors also have built-in modes of communication to interact with cloud or local services. These vary by connector and might require that you configure parameters or add permissions to the [group role](group-role.md). For information about connector requirements, see [AWS-provided Greengrass connectors](connectors-list.md).

### Input topics
<a name="connectors-multiple-topics"></a>

Most connectors receive input data on MQTT topics. Some connectors subscribe to multiple topics for input data. For example, the Serial Stream connector supports two topics:
+ `serial/+/read/#`
+ `serial/+/write/#`

For this connector, read and write requests are sent to the corresponding topic. When you create subscriptions, make sure to use the topic that aligns with your implementation.

The `+` and `#` characters in the previous examples are wildcards. These wildcards allow subscribers to receive messages on multiple topics and publishers to customize the topics that they publish to.
+ The `+` wildcard can appear anywhere in the topic hierarchy. It can be replaced by one hierarchy item.

  As an example, for the topic filter `sensor/+/input`, messages can be published to `sensor/id-123/input`, but not to `sensor/group-a/id-123/input`.
+ The `#` wildcard can appear only at the end of the topic hierarchy. It can be replaced by zero or more hierarchy items.

  As an example, for the topic filter `sensor/#`, messages can be published to `sensor/`, `sensor/id-123`, and `sensor/group-a/id-123`, but not to `sensor`.

Wildcard characters are valid only when subscribing to topics. Messages can't be published to topics that contain wildcards. For more information about each connector's input and output topic requirements, see [AWS-provided Greengrass connectors](connectors-list.md).
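The matching rules above can be expressed as a small function. This is an illustrative sketch of the behavior described in this section, not code from any AWS SDK:

```python
def topic_matches(topic_filter, topic):
    """Return True if `topic` matches `topic_filter` under the rules above.
    Illustrative sketch only."""
    filter_parts = topic_filter.split("/")
    topic_parts = topic.split("/")
    for i, part in enumerate(filter_parts):
        if part == "#":
            # '#' must be the last segment; it stands for zero or more
            # trailing levels, but the topic must descend to this depth.
            return i == len(filter_parts) - 1 and len(topic_parts) >= len(filter_parts)
        if i >= len(topic_parts) or (part != "+" and part != topic_parts[i]):
            return False
    return len(topic_parts) == len(filter_parts)
```

For example, `topic_matches("sensor/+/input", "sensor/id-123/input")` returns `True`, while `topic_matches("sensor/#", "sensor")` returns `False`, matching the examples above.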

## Containerization support
<a name="connector-containerization"></a>

By default, most connectors run on the Greengrass core in an isolated runtime environment that's managed by AWS IoT Greengrass. These runtime environments, called *containers*, provide isolation between connectors and the host system, which offers more security for the host and the connector.

However, this Greengrass containerization isn't supported in some environments, such as when you run AWS IoT Greengrass in a Docker container or on older Linux kernels without cgroups. In these environments, the connectors must run in **No container** mode. To find connectors that support **No container** mode, see [AWS-provided Greengrass connectors](connectors-list.md). Some connectors run in this mode natively, and some connectors allow you to set the isolation mode.

You can also set the isolation mode to **No container** in environments that support Greengrass containerization, but we recommend using **Greengrass container** mode when possible.

**Note**  
The [default containerization](lambda-group-config.md#lambda-containerization-groupsettings) setting for the Greengrass group doesn't apply to connectors.

## Upgrading connector versions
<a name="upgrade-connector-versions"></a>

Connector providers might release new versions of a connector that add features, fix issues, or improve performance. For information about available versions and related changes, see the [documentation for each connector](connectors-list.md).

In the AWS IoT console, you can check for new versions for the connectors in your Greengrass group.

1. <a name="console-gg-groups"></a>In the AWS IoT console navigation pane, under **Manage**, expand **Greengrass devices**, and then choose **Groups (V1)**.

1. Under **Greengrass groups**, choose your group.

1. Choose **Connectors** to display the connectors in the group.

   If the connector has a new version, an **Available** button appears in the **Upgrade** column.

1. To upgrade the connector version:

   1. On the **Connectors** page, in the **Upgrade** column, choose **Available**. The **Upgrade connector** page opens and displays the current parameter settings, where applicable.

      Choose the new connector version, define parameters as needed, and then choose **Upgrade**.

   1. On the **Subscriptions** page, add new subscriptions in the group to replace any that use the connector as a source or target. Then, remove the old subscriptions.

      Subscriptions reference connectors by version, so they become invalid if you change the connector version in the group.

   1. From the **Actions** menu, choose **Deploy** to deploy your changes to the core.

To upgrade a connector from the AWS IoT Greengrass API, create and deploy a group version that includes the updated connector and subscriptions. Use the same process as when you add a connector to a group. For detailed steps that show you how to use the AWS CLI to configure and deploy an example Twilio Notifications connector, see [Getting started with Greengrass connectors (CLI)](connectors-cli.md).

## Logging for connectors
<a name="connectors-logging"></a>

Greengrass connectors contain Lambda functions that write events and errors to Greengrass logs. Depending on your group settings, logs are written to CloudWatch Logs, the local file system, or both. Logs from connectors include the ARN of the corresponding function. The following example ARN is from the Kinesis Firehose connector:

```
arn:aws:lambda:aws-region:account-id:function:KinesisFirehoseClient:1
```

The default logging configuration writes info-level logs to the file system using the following directory structure:

```
greengrass-root/ggc/var/log/user/region/aws/function-name.log
```
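The log-file location for a connector's function can be assembled from the pieces above. A minimal sketch, assuming the default *greengrass-root* of `/greengrass`:

```python
def connector_log_path(greengrass_root, region, function_name):
    # Mirrors the directory structure shown above.
    return f"{greengrass_root}/ggc/var/log/user/{region}/aws/{function_name}.log"

print(connector_log_path("/greengrass", "us-west-2", "KinesisFirehoseClient"))
```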

For more information about Greengrass logging, see [Monitoring with AWS IoT Greengrass logs](greengrass-logs-overview.md).

# AWS-provided Greengrass connectors
<a name="connectors-list"></a>

AWS provides the following connectors that support common AWS IoT Greengrass scenarios. For more information about how connectors work, see the following documentation:
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Get started with connectors (console)](connectors-console.md) or [Get started with connectors (CLI)](connectors-cli.md)


| Connector | Description | Supported Lambda runtimes | Supports **No container** mode | 
| --- | --- | --- | --- | 
| [CloudWatch Metrics](cloudwatch-metrics-connector.md) | Publishes custom metrics to Amazon CloudWatch. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [Device Defender](device-defender-connector.md) | Sends system metrics to AWS IoT Device Defender. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | No | 
| [Docker Application Deployment](docker-app-connector.md) | Runs a Docker Compose file to start a Docker application on the core device. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [IoT Analytics](iot-analytics-connector.md) | Sends data from devices and sensors to AWS IoT Analytics. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [IoT EtherNet/IP Protocol Adapter](ethernet-ip-connector.md) | Collects data from EtherNet/IP devices. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [IoT SiteWise](iot-sitewise-connector.md) | Sends data from devices and sensors to asset properties in AWS IoT SiteWise. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [Kinesis Firehose](kinesis-firehose-connector.md) | Sends data to Amazon Data Firehose delivery streams. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [ML Feedback](ml-feedback-connector.md) | Publishes machine learning model input to the cloud and output to an MQTT topic. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | No | 
| [ML Image Classification](image-classification-connector.md) | Runs a local image classification inference service. This connector provides versions for several platforms. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | No | 
| [ML Object Detection](obj-detection-connector.md) | Runs a local object detection inference service. This connector provides versions for several platforms. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | No | 
| [Modbus-RTU Protocol Adapter](modbus-protocol-adapter-connector.md) | Sends requests to Modbus RTU devices. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | No | 
| [Modbus-TCP Protocol Adapter](modbus-tcp-connector.md) | Collects data from Modbus TCP devices. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [Raspberry Pi GPIO](raspberrypi-gpio-connector.md) | Controls GPIO pins on a Raspberry Pi core device. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | No | 
| [Serial Stream](serial-stream-connector.md) | Reads and writes to a serial port on the core device. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | No | 
| [ServiceNow MetricBase Integration](servicenow-connector.md) | Publishes time series metrics to ServiceNow MetricBase. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [SNS](sns-connector.md) | Sends messages to an Amazon SNS topic. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [Splunk Integration](splunk-connector.md) | Publishes data to Splunk HEC. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 
| [Twilio Notifications](twilio-notifications-connector.md) | Initiates a Twilio text or voice message. | <a name="python-connectors-runtime"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-list.html) | Yes | 

\* To use the Python 3.8 runtime, you must create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries. For more information, see the connector-specific requirements.

**Note**  
We recommend that you [upgrade connector versions](connectors.md#upgrade-connector-versions) from Python 2.7 to Python 3.7. Continued support for Python 2.7 connectors depends on AWS Lambda runtime support. For more information, see [Runtime support policy](https://docs.aws.amazon.com/lambda/latest/dg/runtime-support-policy.html) in the *AWS Lambda Developer Guide*.

# CloudWatch Metrics connector
<a name="cloudwatch-metrics-connector"></a>

The CloudWatch Metrics [connector](connectors.md) publishes custom metrics from Greengrass devices to Amazon CloudWatch. The connector provides a centralized infrastructure for publishing CloudWatch metrics, which you can use to monitor and analyze the Greengrass core environment, and act on local events. For more information, see [Using Amazon CloudWatch metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) in the *Amazon CloudWatch User Guide*.

This connector receives metric data as MQTT messages. The connector batches metrics that are in the same namespace and publishes them to CloudWatch at regular intervals.
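The batching behavior can be sketched as follows. This is illustrative only, not the connector's actual implementation; the real connector also flushes each namespace when the configured `PublishInterval` elapses, which is omitted here.

```python
from collections import defaultdict

class MetricBatcher:
    """Illustrative sketch: group metrics by namespace and flush a
    namespace's batch once it reaches 20 entries."""
    BATCH_SIZE = 20

    def __init__(self, publish):
        self.publish = publish          # callable(namespace, list_of_metrics)
        self.batches = defaultdict(list)

    def add(self, namespace, metric):
        self.batches[namespace].append(metric)
        if len(self.batches[namespace]) >= self.BATCH_SIZE:
            self.flush(namespace)

    def flush(self, namespace):
        batch = self.batches.pop(namespace, [])
        if batch:
            self.publish(namespace, batch)
```

For example, adding 25 metrics in one namespace triggers a single publish of 20 metrics, leaving 5 buffered for the next flush.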

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 5 | `arn:aws:greengrass:region::/connectors/CloudWatchMetrics/versions/5` | 
| 4 | `arn:aws:greengrass:region::/connectors/CloudWatchMetrics/versions/4` | 
| 3 | `arn:aws:greengrass:region::/connectors/CloudWatchMetrics/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/CloudWatchMetrics/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/CloudWatchMetrics/versions/1` | 

For information about version changes, see the [Changelog](#cloudwatch-metrics-connector-changelog).

## Requirements
<a name="cloudwatch-metrics-connector-req"></a>

This connector has the following requirements:

------
#### [ Versions 3 - 5 ]
+ <a name="conn-req-ggc-v1.9.3"></a>AWS IoT Greengrass Core software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="conn-cloudwatch-metrics-req-iam-policy"></a>The [Greengrass group role](group-role.md) configured to allow the `cloudwatch:PutMetricData` action, as shown in the following example AWS Identity and Access Management (IAM) policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "Stmt1528133056761",
              "Action": [
                  "cloudwatch:PutMetricData"
              ],
              "Effect": "Allow",
              "Resource": "*"
          }
      ]
  }
  ```

------

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

  For more information about CloudWatch permissions, see [Amazon CloudWatch permissions reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/permissions-reference-cw.html) in the *IAM User Guide*.

------
#### [ Versions 1 - 2 ]
+ <a name="conn-req-ggc-v1.7.0"></a>AWS IoT Greengrass Core software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-cloudwatch-metrics-req-iam-policy"></a>The [Greengrass group role](group-role.md) configured to allow the `cloudwatch:PutMetricData` action, as shown in the following example AWS Identity and Access Management (IAM) policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "Stmt1528133056761",
              "Action": [
                  "cloudwatch:PutMetricData"
              ],
              "Effect": "Allow",
              "Resource": "*"
          }
      ]
  }
  ```

------

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

  For more information about CloudWatch permissions, see [Amazon CloudWatch permissions reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/permissions-reference-cw.html) in the *IAM User Guide*.

------

## Connector Parameters
<a name="cloudwatch-metrics-connector-param"></a>

This connector provides the following parameters:

------
#### [ Versions 4 - 5 ]

`PublishInterval`  <a name="cw-metrics-PublishInterval"></a>
The maximum number of seconds to wait before publishing batched metrics for a given namespace. The maximum value is 900. To configure the connector to publish metrics as they are received (without batching), specify 0.  
The connector publishes to CloudWatch after it receives 20 metrics in the same namespace or after the specified interval.  
The connector doesn't guarantee the order of publish events.
Display name in the AWS IoT console: **Publish interval**  
Required: `true`  
Type: `string`  
Valid values: `0 - 900`  
Valid pattern: `[0-9]|[1-9]\d|[1-9]\d\d|900`

`PublishRegion`  <a name="cw-metrics-PublishRegion"></a>
The AWS Region to post CloudWatch metrics to. This value overrides the default Greengrass metrics Region. It is required only when posting cross-Region metrics.  
Display name in the AWS IoT console: **Publish region**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|([a-z]{2}-[a-z]+-\d{1})`

`MemorySize`  <a name="cw-metrics-MemorySize"></a>
The memory (in KB) to allocate to the connector.  
Display name in the AWS IoT console: **Memory size**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`MaxMetricsToRetain`  <a name="cw-metrics-MaxMetricsToRetain"></a>
The maximum number of metrics across all namespaces to save in memory before they are replaced with new metrics. The minimum value is 2000.  
This limit applies when there's no connection to the internet and the connector starts to buffer the metrics to publish later. When the buffer is full, the oldest metrics are replaced by new metrics. Metrics in a given namespace are replaced only by metrics in the same namespace.  
Metrics are not saved if the host process for the connector is interrupted. For example, this interruption can happen during group deployment or when the device restarts.
Display name in the AWS IoT console: **Maximum metrics to retain**  
Required: `true`  
Type: `string`  
Valid pattern: `^([2-9]\d{3}|[1-9]\d{4,})$`

`IsolationMode`  <a name="IsolationMode"></a>
The [containerization](connectors.md#connector-containerization) mode for this connector. The default is `GreengrassContainer`, which means that the connector runs in an isolated runtime environment inside the AWS IoT Greengrass container.  
The default containerization setting for the group does not apply to connectors.
Display name in the AWS IoT console: **Container isolation mode**  
Required: `false`  
Type: `string`  
Valid values: `GreengrassContainer` or `NoContainer`  
Valid pattern: `^NoContainer$|^GreengrassContainer$`

------
#### [ Versions 1 - 3 ]

`PublishInterval`  <a name="cw-metrics-PublishInterval"></a>
The maximum number of seconds to wait before publishing batched metrics for a given namespace. The maximum value is 900. To configure the connector to publish metrics as they are received (without batching), specify 0.  
The connector publishes to CloudWatch after it receives 20 metrics in the same namespace or after the specified interval.  
The connector doesn't guarantee the order of publish events.
Display name in the AWS IoT console: **Publish interval**  
Required: `true`  
Type: `string`  
Valid values: `0 - 900`  
Valid pattern: `[0-9]|[1-9]\d|[1-9]\d\d|900`

`PublishRegion`  <a name="cw-metrics-PublishRegion"></a>
The AWS Region to post CloudWatch metrics to. This value overrides the default Greengrass metrics Region. It is required only when posting cross-Region metrics.  
Display name in the AWS IoT console: **Publish region**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|([a-z]{2}-[a-z]+-\d{1})`

`MemorySize`  <a name="cw-metrics-MemorySize"></a>
The memory (in KB) to allocate to the connector.  
Display name in the AWS IoT console: **Memory size**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`MaxMetricsToRetain`  <a name="cw-metrics-MaxMetricsToRetain"></a>
The maximum number of metrics across all namespaces to save in memory before they are replaced with new metrics. The minimum value is 2000.  
This limit applies when there's no connection to the internet and the connector starts to buffer the metrics to publish later. When the buffer is full, the oldest metrics are replaced by new metrics. Metrics in a given namespace are replaced only by metrics in the same namespace.  
Metrics are not saved if the host process for the connector is interrupted. For example, this interruption can happen during group deployment or when the device restarts.
Display name in the AWS IoT console: **Maximum metrics to retain**  
Required: `true`  
Type: `string`  
Valid pattern: `^([2-9]\d{3}|[1-9]\d{4,})$`

------

### Create Connector Example (AWS CLI)
<a name="cloudwatch-metrics-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the CloudWatch Metrics connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyCloudWatchMetricsConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/CloudWatchMetrics/versions/4",
            "Parameters": {
                "PublishInterval" : "600",
                "PublishRegion" : "us-west-2",
                "MemorySize" : "16",
                "MaxMetricsToRetain" : "2500",
                "IsolationMode" : "GreengrassContainer"
            }
        }
    ]
}'
```

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="cloudwatch-metrics-connector-data-input"></a>

This connector accepts metrics on an MQTT topic and publishes the metrics to CloudWatch. Input messages must be in JSON format.

<a name="topic-filter"></a>**Topic filter in subscription**  
`cloudwatch/metric/put`

**Message properties**    
`request`  
Information about the metric in this message.  
The request object contains the metric data to publish to CloudWatch. The metric values must meet the specifications of the [PutMetricData](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html) API. Only the `namespace`, `metricData.metricName`, and `metricData.value` properties are required.  
Required: `true`  
Type: `object` that includes the following properties:    
`namespace`  
The user-defined namespace for the metric data in this request. CloudWatch uses namespaces as containers for metric data points.  
You can't specify a namespace that begins with the reserved string `AWS/`.
Required: `true`  
Type: `string`  
Valid pattern: `[^:].*`  
`metricData`  
The data for the metric.  
Required: `true`  
Type: `object` that includes the following properties:    
`metricName`  
The name of the metric.  
Required: `true`  
Type: `string`  
`dimensions`  
The dimensions that are associated with the metric. Dimensions provide more information about the metric and its data. A metric can define up to 10 dimensions.  
This connector automatically includes a dimension named `coreName`, where the value is the name of the core.  
Required: `false`  
Type: `array` of dimension objects that include the following properties:    
`name`  
The dimension name.  
Required: `false`  
Type: `string`  
`value`  
The dimension value.  
Required: `false`  
Type: `string`  
`timestamp`  
The time that the metric data was received, expressed as the number of seconds since `Jan 1, 1970 00:00:00 UTC`. If this value is omitted, the connector uses the time that it received the message.  
Required: `false`  
Type: `timestamp`  
If you use versions 1 through 4 of this connector, we recommend that you retrieve the timestamp separately for each metric when you send multiple metrics from a single source. Don't use a variable to store the timestamp.  
`value`  
The value for the metric.  
CloudWatch rejects values that are too small or too large. Values must be in the range of `8.515920e-109` to `1.174271e+108` (Base 10) or `2e-360` to `2e360` (Base 2). Special values (for example, `NaN`, `+Infinity`, `-Infinity`) are not supported.
Required: `true`  
Type: `double`  
`unit`  
The unit of the metric.  
Required: `false`  
Type: `string`  
Valid values: `Seconds, Microseconds, Milliseconds, Bytes, Kilobytes, Megabytes, Gigabytes, Terabytes, Bits, Kilobits, Megabits, Gigabits, Terabits, Percent, Count, Bytes/Second, Kilobytes/Second, Megabytes/Second, Gigabytes/Second, Terabytes/Second, Bits/Second, Kilobits/Second, Megabits/Second, Gigabits/Second, Terabits/Second, Count/Second, None`

**Limits**  
All limits that are imposed by the CloudWatch [PutMetricData](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html) API apply to metrics when using this connector. The following limits are especially important:  
+ 40 KB limit on API payload
+ 20 metrics per API request
+ 150 transactions per second (TPS) for the `PutMetricData` API
For more information, see [CloudWatch limits](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html) in the *Amazon CloudWatch User Guide*.

**Example input**  

```
{
   "request": {
       "namespace": "Greengrass",
       "metricData":
           {
               "metricName": "latency",
               "dimensions": [
                   {
                       "name": "hostname",
                       "value": "test_hostname"
                   }
               ],
               "timestamp": 1539027324,
               "value": 123.0,
               "unit": "Seconds"
            }
    }
}
```
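You can build this payload programmatically before publishing it to the input topic. The following Python sketch (the helper name is illustrative, not part of the connector) constructs the `request` object and applies the basic checks described above, such as rejecting reserved namespaces:

```python
import json
import time

def build_metric_request(namespace, metric_name, value, unit=None, dimensions=None):
    """Build the JSON input message the connector expects on cloudwatch/metric/put."""
    if namespace.startswith("AWS/"):
        raise ValueError("Namespaces that begin with 'AWS/' are reserved")
    metric_data = {
        "metricName": metric_name,
        "value": float(value),
        # Use the time the metric was produced; the connector substitutes
        # its own receive time if the timestamp is omitted.
        "timestamp": int(time.time()),
    }
    if unit:
        metric_data["unit"] = unit
    if dimensions:
        metric_data["dimensions"] = dimensions
    return {"request": {"namespace": namespace, "metricData": metric_data}}

message = build_metric_request(
    "Greengrass", "latency", 123.0,
    unit="Seconds", dimensions=[{"name": "hostname", "value": "test_hostname"}],
)
payload = json.dumps(message)  # Publish this payload to cloudwatch/metric/put.
```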

## Output data
<a name="cloudwatch-metrics-connector-data-output"></a>

This connector publishes status information as output data on an MQTT topic.

<a name="topic-filter"></a>**Topic filter in subscription**  
`cloudwatch/metric/put/status`

**Example output: Success**  
The response includes the namespace of the metric data and the `RequestId` field from the CloudWatch response.  

```
{
   "response": {
        "cloudwatch_rid":"70573243-d723-11e8-b095-75ff2EXAMPLE",
        "namespace": "Greengrass",
        "status":"success"
    }
}
```

**Example output: Failure**  

```
{
   "response" : {
        "namespace": "Greengrass",
        "error": "InvalidInputException",
        "error_message":"cw metric is invalid",
        "status":"fail"
   }
}
```
If the connector detects a retryable error (for example, connection errors), it retries the publish in the next batch.
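A subscriber on the status topic can branch on the `status` field to distinguish the success and failure cases. A minimal Python sketch, using only the field names shown in the examples above (the function name is illustrative):

```python
import json

def summarize_status(payload):
    """Return (ok, detail) for a message from cloudwatch/metric/put/status."""
    response = json.loads(payload)["response"]
    if response.get("status") == "success":
        return True, response.get("cloudwatch_rid", "")
    # Failed publishes carry an error code and an error message.
    return False, "{}: {}".format(response.get("error"), response.get("error_message"))

ok, detail = summarize_status(
    '{"response": {"namespace": "Greengrass", "error": "InvalidInputException", '
    '"error_message": "cw metric is invalid", "status": "fail"}}'
)
print(ok, detail)  # False InvalidInputException: cw metric is invalid
```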

## Usage Example
<a name="cloudwatch-metrics-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
If you use another Python runtime, you can create a symbolic link from Python 3.x to Python 3.7.
The [Get started with connectors (console)](connectors-console.md) and [Get started with connectors (CLI)](connectors-cli.md) topics contain detailed steps that show you how to configure and deploy an example Twilio Notifications connector.

1. Make sure you meet the [requirements](#cloudwatch-metrics-connector-req) for the connector.

   <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#cloudwatch-metrics-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. Add the connector and configure its [parameters](#cloudwatch-metrics-connector-param).

   1. Add subscriptions that allow the connector to receive [input data](#cloudwatch-metrics-connector-data-input) and send [output data](#cloudwatch-metrics-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.

### Example
<a name="cloudwatch-metrics-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector.

```
import greengrasssdk
import time
import json

# Create a client for publishing MQTT messages from the Greengrass core.
iot_client = greengrasssdk.client('iot-data')
send_topic = 'cloudwatch/metric/put'

def create_request_with_all_fields():
    return {
        "request": {
            "namespace": "Greengrass_CW_Connector",
            "metricData": {
                "metricName": "Count1",
                "dimensions": [
                    {
                        "name": "test",
                        "value": "test"
                    }
                ],
                "value": 1,
                "unit": "Seconds",
                "timestamp": time.time()
            }
        }
    }

def publish_basic_message():
    messageToPublish = create_request_with_all_fields()
    print("Message To Publish: ", messageToPublish)
    iot_client.publish(topic=send_topic,
        payload=json.dumps(messageToPublish))

# The function is long-lived, so it publishes when the container starts.
publish_basic_message()

# The handler is required but unused; this example doesn't process input events.
def lambda_handler(event, context):
    return
```
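When one function publishes several metrics, the note in the input data section recommends (for versions 1 through 4) retrieving the timestamp separately for each metric rather than reusing a stored value. A sketch of that pattern, with illustrative metric names:

```python
import time

def build_requests(namespace, metrics):
    """Build one input message per (name, value) pair, reading the clock each time."""
    requests = []
    for name, value in metrics:
        requests.append({
            "request": {
                "namespace": namespace,
                "metricData": {
                    "metricName": name,
                    "value": value,
                    # Retrieve the timestamp per metric; don't store it in a shared variable.
                    "timestamp": time.time(),
                },
            }
        })
    return requests

requests = build_requests("Greengrass_CW_Connector", [("Count1", 1.0), ("Count2", 2.0)])
# Publish each entry to cloudwatch/metric/put, for example with iot_client.publish(...).
```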

## Licenses
<a name="cloudwatch-metrics-connector-license"></a>

The CloudWatch Metrics connector includes the following third-party software/licensing:<a name="boto-3-licenses"></a>
+ [AWS SDK for Python (Boto3)](https://pypi.org/project/boto3/)/Apache License 2.0
+ [botocore](https://pypi.org/project/botocore/)/Apache License 2.0
+ [dateutil](https://pypi.org/project/python-dateutil/1.4/)/PSF License
+ [docutils](https://pypi.org/project/docutils/)/BSD License, GNU General Public License (GPL), Python Software Foundation License, Public Domain
+ [jmespath](https://pypi.org/project/jmespath/)/MIT License
+ [s3transfer](https://pypi.org/project/s3transfer/)/Apache License 2.0
+ [urllib3](https://pypi.org/project/urllib3/)/MIT License

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="cloudwatch-metrics-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 5 | Added support for duplicate timestamps in input data. | 
| 4 | <a name="isolation-mode-changelog"></a>Added the `IsolationMode` parameter to configure the containerization mode for the connector. | 
| 3 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 2 | Fix to reduce excessive logging. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="cloudwatch-metrics-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+ [ Using Amazon CloudWatch metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) in the *Amazon CloudWatch User Guide*
+ [ PutMetricData](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html) in the *Amazon CloudWatch API Reference*

# Device Defender connector
<a name="device-defender-connector"></a>

The Device Defender [connector](connectors.md) notifies administrators of changes in the state of a Greengrass core device. This can help identify unusual behavior that might indicate a compromised device.

This connector reads system metrics from the `/proc` directory on the core device, and then publishes the metrics to AWS IoT Device Defender. For metrics reporting details, see [Device metrics document specification](https://docs.aws.amazon.com/iot/latest/developerguide/device-defender-detect.html#DetectMetricsMessagesSpec) in the *AWS IoT Developer Guide*.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 3 | `arn:aws:greengrass:region::/connectors/DeviceDefender/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/DeviceDefender/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/DeviceDefender/versions/1` | 

For information about version changes, see the [Changelog](#device-defender-connector-changelog).

## Requirements
<a name="device-defender-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 3 ]
+ <a name="conn-req-ggc-v1.9.3"></a>AWS IoT Greengrass Core software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="conn-device-defender-req-itdd-config"></a>AWS IoT Device Defender configured to use the Detect feature to keep track of violations. For more information, see [Detect](https://docs.aws.amazon.com/iot/latest/developerguide/device-defender-detect.html) in the *AWS IoT Developer Guide*.
+ <a name="conn-device-defender-req-proc-dir-resource"></a>A [local volume resource](access-local-resources.md) in the Greengrass group that points to the `/proc` directory. The resource must use the following properties:
  + Source path: `/proc`
  + Destination path: `/host_proc` (or a value that matches the [valid pattern](#param-ProcDestinationPath))
  + AutoAddGroupOwner: `true`
+ <a name="conn-device-defender-req-psutil-v3"></a>The [psutil](https://pypi.org/project/psutil/) library installed on the Greengrass core. Version 5.7.0 is the latest version that is verified to work with the connector.
+ <a name="conn-device-defender-req-cbor-v3"></a>The [cbor](https://pypi.org/project/cbor/) library installed on the Greengrass core. Version 1.0.0 is the latest version that is verified to work with the connector.

------
#### [ Versions 1 - 2 ]
+ <a name="conn-req-ggc-v1.7.0"></a>AWS IoT Greengrass Core software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-device-defender-req-itdd-config"></a>AWS IoT Device Defender configured to use the Detect feature to keep track of violations. For more information, see [Detect](https://docs.aws.amazon.com/iot/latest/developerguide/device-defender-detect.html) in the *AWS IoT Developer Guide*.
+ <a name="conn-device-defender-req-proc-dir-resource"></a>A [local volume resource](access-local-resources.md) in the Greengrass group that points to the `/proc` directory. The resource must use the following properties:
  + Source path: `/proc`
  + Destination path: `/host_proc` (or a value that matches the [valid pattern](#param-ProcDestinationPath))
  + AutoAddGroupOwner: `true`
+ <a name="conn-device-defender-req-psutil"></a>The [psutil](https://pypi.org/project/psutil/) library installed on the Greengrass core.
+ <a name="conn-device-defender-req-cbor"></a>The [cbor](https://pypi.org/project/cbor/) library installed on the Greengrass core.

------

## Connector Parameters
<a name="device-defender-connector-param"></a>

This connector provides the following parameters:

`SampleIntervalSeconds`  
The number of seconds between each cycle of gathering and reporting metrics. The minimum value is 300 seconds (5 minutes).  
Display name in the AWS IoT console: **Metrics reporting interval**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]*(?:3[0-9][0-9]|[4-9][0-9]{2}|[1-9][0-9]{3,})$`

`ProcDestinationPath-ResourceId`  
The ID of the `/proc` volume resource.  
This connector is granted read-only access to the resource.
Display name in the AWS IoT console: **Resource for /proc directory**  
Required: `true`  
Type: `string`  
Valid pattern: `[a-zA-Z0-9_-]+`

`ProcDestinationPath`  <a name="param-ProcDestinationPath"></a>
The destination path of the `/proc` volume resource.  
Display name in the AWS IoT console: **Destination path of /proc resource**  
Required: `true`  
Type: `string`  
Valid pattern: `\/[a-zA-Z0-9_-]+`
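The `SampleIntervalSeconds` pattern enforces the 300-second minimum. A quick check with Python's `re` module (for illustration only) shows how the pattern accepts or rejects candidate values:

```python
import re

# Valid pattern for SampleIntervalSeconds from the table above.
PATTERN = r"^[0-9]*(?:3[0-9][0-9]|[4-9][0-9]{2}|[1-9][0-9]{3,})$"

def is_valid_interval(value):
    """Return True if the string value matches the documented pattern."""
    return re.fullmatch(PATTERN, value) is not None

print(is_valid_interval("300"), is_valid_interval("299"))  # True False
```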

### Create Connector Example (AWS CLI)
<a name="device-defender-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the Device Defender connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyDeviceDefenderConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/DeviceDefender/versions/3",
            "Parameters": {
                "SampleIntervalSeconds": "600",
                "ProcDestinationPath": "/host_proc",
                "ProcDestinationPath-ResourceId": "my-proc-resource"
            }
        }
    ]
}'
```

**Note**  
The Lambda function in this connector has a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="device-defender-connector-data-input"></a>

This connector doesn't accept MQTT messages as input data.

## Output data
<a name="device-defender-connector-data-output"></a>

This connector publishes security metrics to AWS IoT Device Defender as output data.

<a name="topic-filter"></a>**Topic filter in subscription**  
`$aws/things/+/defender/metrics/json`  
This is the topic syntax that AWS IoT Device Defender expects. The connector replaces the `+` wildcard with the device name (for example, `$aws/things/thing-name/defender/metrics/json`).
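The substitution that the connector performs can be expressed as a simple string template. A minimal sketch (the function name is illustrative):

```python
def defender_metrics_topic(thing_name):
    """Return the AWS IoT Device Defender metrics topic for a core device."""
    return "$aws/things/{}/defender/metrics/json".format(thing_name)

print(defender_metrics_topic("thing-name"))
# $aws/things/thing-name/defender/metrics/json
```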

**Example output**  
For metrics reporting details, see [ Device metrics document specification](https://docs.aws.amazon.com/iot/latest/developerguide/device-defender-detect.html#DetectMetricsMessagesSpec) in the *AWS IoT Developer Guide*.  

```
{
    "header": {
        "report_id": 1529963534,
        "version": "1.0"
    },
    "metrics": {
        "listening_tcp_ports": {
            "ports": [
                {
                    "interface": "eth0",
                    "port": 24800
                },
                {
                    "interface": "eth0",
                    "port": 22
                },
                {
                    "interface": "eth0",
                    "port": 53
                }
            ],
            "total": 3
        },
        "listening_udp_ports": {
            "ports": [
                {
                    "interface": "eth0",
                    "port": 5353
                },
                {
                    "interface": "eth0",
                    "port": 67
                }
            ],
            "total": 2
        },
        "network_stats": {
            "bytes_in": 1157864729406,
            "bytes_out": 1170821865,
            "packets_in": 693092175031,
            "packets_out": 738917180
        },
        "tcp_connections": {
            "established_connections":{
                "connections": [
                    {
                    "local_interface": "eth0",
                    "local_port": 80,
                    "remote_addr": "192.168.0.1:8000"
                    },
                    {
                    "local_interface": "eth0",
                    "local_port": 80,
                    "remote_addr": "192.168.0.1:8000"
                    }
                ],
                "total": 2
            }
        }
    }
}
```
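If you process these reports downstream, each `total` field should agree with the length of the corresponding `ports` array. The following sketch (not part of the connector) parses a report and performs that consistency check, shown here on a shortened version of the example above:

```python
import json

def check_port_totals(report):
    """Verify that each listening-ports section's total matches its ports list."""
    metrics = report["metrics"]
    results = {}
    for key in ("listening_tcp_ports", "listening_udp_ports"):
        section = metrics.get(key)
        if section is not None:
            results[key] = section["total"] == len(section["ports"])
    return results

report = {
    "header": {"report_id": 1529963534, "version": "1.0"},
    "metrics": {
        "listening_tcp_ports": {"ports": [{"interface": "eth0", "port": 22}], "total": 1},
        "listening_udp_ports": {"ports": [], "total": 0},
    },
}
print(check_port_totals(report))  # {'listening_tcp_ports': True, 'listening_udp_ports': True}
```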

## Licenses
<a name="device-defender-connector-license"></a>

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="device-defender-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 3 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 2 | Fix to reduce excessive logging. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="device-defender-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+ [Device Defender](https://docs.aws.amazon.com/iot/latest/developerguide/device-defender.html) in the *AWS IoT Developer Guide*

# Docker application deployment connector
<a name="docker-app-connector"></a>

The Greengrass Docker application deployment connector makes it easier to run your Docker images on an AWS IoT Greengrass core. The connector uses Docker Compose to start a multi-container Docker application from a `docker-compose.yml` file. Specifically, the connector runs `docker-compose` commands to manage Docker containers on a single core device. For more information, see [Overview of Docker Compose](https://docs.docker.com/compose/) in the Docker documentation. The connector can access Docker images stored in Docker container registries, such as Amazon Elastic Container Registry (Amazon ECR), Docker Hub, and private Docker trusted registries.

After you deploy the Greengrass group, the connector pulls the latest images and starts the Docker containers. It runs the `docker-compose pull` and `docker-compose up` commands. Then, the connector publishes the status of the commands to an [output MQTT topic](#docker-app-connector-data-output). It also logs status information about running Docker containers. This makes it possible for you to monitor your application logs in Amazon CloudWatch. For more information, see [Monitoring with AWS IoT Greengrass logs](greengrass-logs-overview.md). The connector also starts Docker containers each time the Greengrass daemon restarts. The number of Docker containers that can run on the core depends on your hardware.

The Docker containers run outside of the Greengrass domain on the core device, so they can't access the core's inter-process communication (IPC). However, you can configure some communication channels with Greengrass components, such as local Lambda functions. For more information, see [Communicating with Docker containers](#docker-app-connector-communicating).

You can use the connector for scenarios such as hosting a web server or MySQL server on your core device. Local services in your Docker applications can communicate with each other, other processes in the local environment, and cloud services. For example, you can run a web server on the core that sends requests from Lambda functions to a web service in the cloud.

This connector runs in [No container](lambda-group-config.md#no-container-mode) isolation mode, so you can deploy it to a Greengrass group that runs without Greengrass containerization.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 7 | `arn:aws:greengrass:region::/connectors/DockerApplicationDeployment/versions/7` | 
| 6 | `arn:aws:greengrass:region::/connectors/DockerApplicationDeployment/versions/6` | 
| 5 | `arn:aws:greengrass:region::/connectors/DockerApplicationDeployment/versions/5` | 
| 4 | `arn:aws:greengrass:region::/connectors/DockerApplicationDeployment/versions/4` | 
| 3 | `arn:aws:greengrass:region::/connectors/DockerApplicationDeployment/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/DockerApplicationDeployment/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/DockerApplicationDeployment/versions/1` | 

For information about version changes, see the [Changelog](#docker-app-connector-changelog).

## Requirements
<a name="docker-app-connector-req"></a>

This connector has the following requirements:
+ AWS IoT Greengrass Core software v1.10 or later.
**Note**  
This connector is not supported on OpenWrt distributions.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ A minimum of 36 MB RAM on the Greengrass core for the connector to monitor running Docker containers. The total memory requirement depends on the number of Docker containers that run on the core.
+ [Docker Engine](https://docs.docker.com/install/) 1.9.1 or later installed on the Greengrass core. Version 19.0.3 is the latest version that is verified to work with the connector.

  The `docker` executable must be in the `/usr/bin` or `/usr/local/bin` directory.
**Important**  
We recommend that you install a credentials store to secure the local copies of your Docker credentials. For more information, see [Security notes](#docker-app-connector-security).

  For information about installing Docker on Amazon Linux distributions, see [Docker basics for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html) in the *Amazon Elastic Container Service Developer Guide*.
+ [Docker Compose](https://docs.docker.com/compose/install/) installed on the Greengrass core. The `docker-compose` executable must be in the `/usr/bin` or `/usr/local/bin` directory.

  The following Docker Compose versions are verified to work with the connector.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/docker-app-connector.html)
+ A single Docker Compose file (for example, `docker-compose.yml`), stored in Amazon Simple Storage Service (Amazon S3). The format must be compatible with the version of Docker Compose installed on the core. You should test the file before you use it on your core. If you edit the file after you deploy the Greengrass group, you must redeploy the group to update your local copy on the core.
+ A Linux user with permission to call the local Docker daemon and write to the directory that stores the local copy of your Compose file. For more information, see [Setting up the Docker user on the core](#docker-app-connector-linux-user).
+ The [Greengrass group role](group-role.md) configured to allow the `s3:GetObject` action on the S3 bucket that contains your Compose file. This permission is shown in the following example IAM policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "AllowAccessToComposeFileS3Bucket",
              "Action": [
                  "s3:GetObject",
                  "s3:GetObjectVersion"
              ],
              "Effect": "Allow",
              "Resource": "arn:aws:s3:::bucket-name/*" 
          }
      ]
  }
  ```

------
**Note**  
If your S3 bucket is versioning-enabled, the role must also be configured to allow the `s3:GetObjectVersion` action. For more information, see [Using versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) in the *Amazon Simple Storage Service User Guide*.

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).
+ <a name="docker-app-connector-ecr-perms"></a>If your Docker Compose file references a Docker image stored in Amazon ECR, the [Greengrass group role](group-role.md) configured to allow the following:
  + `ecr:GetDownloadUrlForLayer` and `ecr:BatchGetImage` actions on your Amazon ECR repositories that contain the Docker images.
  + `ecr:GetAuthorizationToken` action on your resources.

  Repositories must be in the same AWS account and AWS Region as the connector.
**Important**  
Permissions in the group role can be assumed by all Lambda functions and connectors in the Greengrass group. For more information, see [Security notes](#docker-app-connector-security).

  These permissions are shown in the following example policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "AllowGetEcrRepositories",
              "Effect": "Allow",
              "Action": [
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:BatchGetImage"
              ],
              "Resource": [
                  "arn:aws:ecr:us-east-1:123456789012:repository/repository-name"
              ]
          },
          {
              "Sid": "AllowGetEcrAuthToken",
              "Effect": "Allow",
              "Action": "ecr:GetAuthorizationToken",
              "Resource": "*"
          }
      ]
  }
  ```

------

  For more information, see [Amazon ECR repository policy examples](https://docs.aws.amazon.com/AmazonECR/latest/userguide/RepositoryPolicyExamples.html) in the *Amazon ECR User Guide*.

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).
+ If your Docker Compose file references a Docker image from [AWS Marketplace](https://aws.amazon.com/marketplace), the connector also has the following requirements:
  + You must be subscribed to AWS Marketplace container products. For more information, see [Finding and subscribing to container products](https://docs.aws.amazon.com/marketplace/latest/buyerguide/buyer-finding-and-subscribing-to-container-products.html) in the *AWS Marketplace Subscribers Guide*.
  + AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs). The connector uses this feature only to retrieve your secrets from AWS Secrets Manager, not to store them.
  + You must create a secret in Secrets Manager for each AWS Marketplace registry that stores a Docker image referenced in your Compose file. For more information, see [Accessing Docker images from private repositories](#access-private-repositories).
+ If your Docker Compose file references a Docker image from private repositories in registries other than Amazon ECR, such as Docker Hub, the connector also has the following requirements:
  + AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs). The connector uses this feature only to retrieve your secrets from AWS Secrets Manager, not to store them.
  + You must create a secret in Secrets Manager for each private repository that stores a Docker image referenced in your Compose file. For more information, see [Accessing Docker images from private repositories](#access-private-repositories).
+ The Docker daemon must be running when you deploy a Greengrass group that contains this connector.

### Accessing Docker images from private repositories
<a name="access-private-repositories"></a>

If your Docker images require credentials, you must give the connector access to those credentials. How you do this depends on where the Docker images are stored.

For Docker images stored in Amazon ECR, you grant permission to get your authorization token in the Greengrass group role. For more information, see [Requirements](#docker-app-connector-req).

For Docker images stored in other private repositories or registries, you must create a secret in AWS Secrets Manager to store your login information. This includes Docker images that you subscribed to in AWS Marketplace. Create one secret for each repository. If you update your secrets in Secrets Manager, the changes propagate to the core the next time that you deploy the group.

**Note**  
Secrets Manager is a service that you can use to securely store and manage your credentials, keys, and other secrets in the AWS Cloud. For more information, see [What is AWS Secrets Manager?](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) in the *AWS Secrets Manager User Guide*.

Each secret must contain the following keys:


| Key | Value | 
| --- | --- | 
| `username` | The user name used to access the repository or registry. | 
| `password` | The password used to access the repository or registry. | 
| `registryUrl` | The endpoint of the registry. This must match the corresponding registry URL in the Compose file. | 

**Note**  
To allow AWS IoT Greengrass to access a secret by default, the name of the secret must start with *greengrass-*. Otherwise, your Greengrass service role must grant access. For more information, see [Allow AWS IoT Greengrass to get secret values](secrets.md#secrets-config-service-role).
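The key-value structure described in the table above is stored as a single JSON string in Secrets Manager. The following Python sketch assembles that string; the credential values and the secret name are placeholders for illustration only:

```python
import json

# Placeholder credentials for illustration only.
secret = {
    "username": "Mary_Major",            # user name for the repository or registry
    "password": "abc123xyz456",          # password for the repository or registry
    "registryUrl": "https://docker.io",  # must match the registry URL in the Compose file
}

# Names that start with "greengrass-" are accessible to AWS IoT Greengrass by default.
secret_name = "greengrass-DockerCredentials"

# This JSON string is what you would store as the secret value.
secret_string = json.dumps(secret)
print(secret_string)
```

Storing all three keys in one JSON object (rather than separate secrets) matches the per-repository secret layout that the connector expects.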

**To get login information for Docker images from AWS Marketplace**  

1. Get your password for Docker images from AWS Marketplace by using the `aws ecr get-login-password` command. For more information, see [get-login-password](https://docs.aws.amazon.com/cli/latest/reference/ecr/get-login.html) in the *AWS CLI Command Reference*.

   ```
   aws ecr get-login-password
   ```

1. Retrieve the registry URL for the Docker image. Open the AWS Marketplace website, and open the container product launch page. Under **Container Images**, choose **View container image details** to locate the user name and registry URL.
   Use the retrieved user name, password, and registry URL to create a secret for each AWS Marketplace registry that stores Docker images referenced in your Compose file.

**To create secrets (console)**  
In the AWS Secrets Manager console, choose **Other type of secrets**. Under **Specify the key-value pairs to be stored for this secret**, add rows for `username`, `password`, and `registryUrl`. For more information, see [Creating a basic secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) in the *AWS Secrets Manager User Guide*.  

![\[Creating a secret with username, password, and registryUrl keys.\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/images/connectors/secret-docker-trusted-registry.png)


**To create secrets (CLI)**  
In the AWS CLI, use the Secrets Manager `create-secret` command, as shown in the following example. For more information, see [create-secret](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/create-secret.html) in the *AWS CLI Command Reference*.  

```
aws secretsmanager create-secret --name greengrass-MySecret --secret-string '{"username":"Mary_Major","password":"abc123xyz456","registryUrl":"https://docker.io"}'
```

**Important**  
It is your responsibility to secure the `DockerComposeFileDestinationPath` directory that stores your Docker Compose file and the credentials for your Docker images from private repositories. For more information, see [Security notes](#docker-app-connector-security).

## Parameters
<a name="docker-app-connector-param"></a>

This connector provides the following parameters:

------
#### [ Version 7 ]<a name="docker-app-connector-parameters-v1"></a>

`DockerComposeFileS3Bucket`  
The name of the S3 bucket that contains your Docker Compose file. When you create the bucket, make sure to follow the [rules for bucket names](https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html) described in the *Amazon Simple Storage Service User Guide*.  
Display name in the AWS IoT console: **Docker Compose file in S3**  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `[a-zA-Z0-9\\-\\.]{3,63}`

`DockerComposeFileS3Key`  
The object key for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Object key and metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileS3Version`  
The object version for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Using versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `false`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileDestinationPath`  
The absolute path of the local directory used to store a copy of the Docker Compose file. This must be an existing directory. The user specified for `DockerUserId` must have permission to create a file in this directory. For more information, see [Setting up the Docker user on the AWS IoT Greengrass core](#docker-app-connector-linux-user).  
This directory stores your Docker Compose file and the credentials for your Docker images from private repositories. It is your responsibility to secure this directory. For more information, see [Security notes](#docker-app-connector-security).
Display name in the AWS IoT console: **Directory path for local Compose file**  
Required: `true`  
Type: `string`  
Valid pattern `\/.*\/?`  
Example: `/home/username/myCompose`

`DockerUserId`  
The UID of the Linux user that the connector runs as. This user must belong to the `docker` Linux group on the core device and have write permissions to the `DockerComposeFileDestinationPath` directory. For more information, see [Setting up the Docker user on the core](#docker-app-connector-linux-user).  
<a name="avoid-running-as-root"></a>We recommend that you avoid running as root unless absolutely necessary. If you do specify the root user, you must allow Lambda functions to run as root on the AWS IoT Greengrass core. For more information, see [Running a Lambda function as root](lambda-group-config.md#lambda-running-as-root).
Display name in the AWS IoT console: **Docker user ID**  
Required: `false`  
Type: `string`  
Valid pattern: `^[0-9]{1,5}$`

`AWSSecretsArnList`  
The Amazon Resource Names (ARNs) of the secrets in AWS Secrets Manager that contain the login information used to access your Docker images in private repositories. For more information, see [Accessing Docker images from private repositories](#access-private-repositories).  
Display name in the AWS IoT console: **Credentials for private repositories**  
Required: `false`. This parameter is required to access Docker images stored in private repositories.  
Type: `array` of `string`  
Valid pattern: `[( ?,? ?"(arn:(aws(-[a-z]+)):secretsmanager:[a-z0-9-]+:[0-9]{12}:secret:([a-zA-Z0-9\]+/)[a-zA-Z0-9/_+=,.@-]+-[a-zA-Z0-9]+)")]`

`DockerContainerStatusLogFrequency`  
The frequency (in seconds) at which the connector logs status information about the Docker containers running on the core. The default is 300 seconds (5 minutes).  
Display name in the AWS IoT console: **Logging frequency**  
Required: `false`  
Type: `string`  
Valid pattern: `^[1-9]{1}[0-9]{0,3}$`

`ForceDeploy`  
Indicates whether to force the Docker deployment if it fails because of the improper cleanup of the last deployment. The default value is `False`.  
Display name in the AWS IoT console: **Force deployment**  
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

`DockerPullBeforeUp`  
Indicates whether the deployer should run `docker-compose pull` before running `docker-compose up` for a pull-down-up behavior. The default value is `True`.  
Display name in the AWS IoT console: **Docker Pull Before Up**  
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

`StopContainersOnNewDeployment`  
Indicates whether the connector stops the Docker containers that the Docker deployer manages when the Greengrass core software stops. The core stops when a new group deployment starts or when the device shuts down. The default value is `True`.  
Display name in the AWS IoT console: **Docker stop on new deployment**  
We recommend keeping this parameter set to its default value of `True`. Setting this parameter to `False` causes your Docker containers to continue running after the AWS IoT Greengrass core stops or a new deployment starts. If you set this parameter to `False`, you must make sure that your Docker containers are maintained as needed when a `docker-compose` service is renamed or added.  
For more information, see the Docker Compose file documentation.
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

`DockerOfflineMode`  
Indicates whether to use the existing Docker Compose file when AWS IoT Greengrass starts offline. The default value is `False`.  
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

------
#### [ Version 6 ]<a name="docker-app-connector-parameters-v1"></a>

`DockerComposeFileS3Bucket`  
The name of the S3 bucket that contains your Docker Compose file. When you create the bucket, make sure to follow the [rules for bucket names](https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html) described in the *Amazon Simple Storage Service User Guide*.  
Display name in the AWS IoT console: **Docker Compose file in S3**  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `[a-zA-Z0-9\\-\\.]{3,63}`

`DockerComposeFileS3Key`  
The object key for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Object key and metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileS3Version`  
The object version for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Using versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `false`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileDestinationPath`  
The absolute path of the local directory used to store a copy of the Docker Compose file. This must be an existing directory. The user specified for `DockerUserId` must have permission to create a file in this directory. For more information, see [Setting up the Docker user on the AWS IoT Greengrass core](#docker-app-connector-linux-user).  
This directory stores your Docker Compose file and the credentials for your Docker images from private repositories. It is your responsibility to secure this directory. For more information, see [Security notes](#docker-app-connector-security).
Display name in the AWS IoT console: **Directory path for local Compose file**  
Required: `true`  
Type: `string`  
Valid pattern `\/.*\/?`  
Example: `/home/username/myCompose`

`DockerUserId`  
The UID of the Linux user that the connector runs as. This user must belong to the `docker` Linux group on the core device and have write permissions to the `DockerComposeFileDestinationPath` directory. For more information, see [Setting up the Docker user on the core](#docker-app-connector-linux-user).  
<a name="avoid-running-as-root"></a>We recommend that you avoid running as root unless absolutely necessary. If you do specify the root user, you must allow Lambda functions to run as root on the AWS IoT Greengrass core. For more information, see [Running a Lambda function as root](lambda-group-config.md#lambda-running-as-root).
Display name in the AWS IoT console: **Docker user ID**  
Required: `false`  
Type: `string`  
Valid pattern: `^[0-9]{1,5}$`

`AWSSecretsArnList`  
The Amazon Resource Names (ARNs) of the secrets in AWS Secrets Manager that contain the login information used to access your Docker images in private repositories. For more information, see [Accessing Docker images from private repositories](#access-private-repositories).  
Display name in the AWS IoT console: **Credentials for private repositories**  
Required: `false`. This parameter is required to access Docker images stored in private repositories.  
Type: `array` of `string`  
Valid pattern: `[( ?,? ?"(arn:(aws(-[a-z]+)):secretsmanager:[a-z0-9-]+:[0-9]{12}:secret:([a-zA-Z0-9\]+/)[a-zA-Z0-9/_+=,.@-]+-[a-zA-Z0-9]+)")]`

`DockerContainerStatusLogFrequency`  
The frequency (in seconds) at which the connector logs status information about the Docker containers running on the core. The default is 300 seconds (5 minutes).  
Display name in the AWS IoT console: **Logging frequency**  
Required: `false`  
Type: `string`  
Valid pattern: `^[1-9]{1}[0-9]{0,3}$`

`ForceDeploy`  
Indicates whether to force the Docker deployment if it fails because of the improper cleanup of the last deployment. The default value is `False`.  
Display name in the AWS IoT console: **Force deployment**  
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

`DockerPullBeforeUp`  
Indicates whether the deployer should run `docker-compose pull` before running `docker-compose up` for a pull-down-up behavior. The default value is `True`.  
Display name in the AWS IoT console: **Docker Pull Before Up**  
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

`StopContainersOnNewDeployment`  
Indicates whether the connector stops the Docker containers that the Docker deployer manages when the Greengrass core software stops. The core stops when a new group deployment starts or when the device shuts down. The default value is `True`.  
Display name in the AWS IoT console: **Docker stop on new deployment**  
We recommend keeping this parameter set to its default value of `True`. Setting this parameter to `False` causes your Docker containers to continue running after the AWS IoT Greengrass core stops or a new deployment starts. If you set this parameter to `False`, you must make sure that your Docker containers are maintained as needed when a `docker-compose` service is renamed or added.  
For more information, see the Docker Compose file documentation.
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

------
#### [ Version 5 ]<a name="docker-app-connector-parameters-v1"></a>

`DockerComposeFileS3Bucket`  
The name of the S3 bucket that contains your Docker Compose file. When you create the bucket, make sure to follow the [rules for bucket names](https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html) described in the *Amazon Simple Storage Service User Guide*.  
Display name in the AWS IoT console: **Docker Compose file in S3**  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `[a-zA-Z0-9\\-\\.]{3,63}`

`DockerComposeFileS3Key`  
The object key for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Object key and metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileS3Version`  
The object version for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Using versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `false`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileDestinationPath`  
The absolute path of the local directory used to store a copy of the Docker Compose file. This must be an existing directory. The user specified for `DockerUserId` must have permission to create a file in this directory. For more information, see [Setting up the Docker user on the AWS IoT Greengrass core](#docker-app-connector-linux-user).  
This directory stores your Docker Compose file and the credentials for your Docker images from private repositories. It is your responsibility to secure this directory. For more information, see [Security notes](#docker-app-connector-security).
Display name in the AWS IoT console: **Directory path for local Compose file**  
Required: `true`  
Type: `string`  
Valid pattern `\/.*\/?`  
Example: `/home/username/myCompose`

`DockerUserId`  
The UID of the Linux user that the connector runs as. This user must belong to the `docker` Linux group on the core device and have write permissions to the `DockerComposeFileDestinationPath` directory. For more information, see [Setting up the Docker user on the core](#docker-app-connector-linux-user).  
<a name="avoid-running-as-root"></a>We recommend that you avoid running as root unless absolutely necessary. If you do specify the root user, you must allow Lambda functions to run as root on the AWS IoT Greengrass core. For more information, see [Running a Lambda function as root](lambda-group-config.md#lambda-running-as-root).
Display name in the AWS IoT console: **Docker user ID**  
Required: `false`  
Type: `string`  
Valid pattern: `^[0-9]{1,5}$`

`AWSSecretsArnList`  
The Amazon Resource Names (ARNs) of the secrets in AWS Secrets Manager that contain the login information used to access your Docker images in private repositories. For more information, see [Accessing Docker images from private repositories](#access-private-repositories).  
Display name in the AWS IoT console: **Credentials for private repositories**  
Required: `false`. This parameter is required to access Docker images stored in private repositories.  
Type: `array` of `string`  
Valid pattern: `[( ?,? ?"(arn:(aws(-[a-z]+)):secretsmanager:[a-z0-9-]+:[0-9]{12}:secret:([a-zA-Z0-9\]+/)[a-zA-Z0-9/_+=,.@-]+-[a-zA-Z0-9]+)")]`

`DockerContainerStatusLogFrequency`  
The frequency (in seconds) at which the connector logs status information about the Docker containers running on the core. The default is 300 seconds (5 minutes).  
Display name in the AWS IoT console: **Logging frequency**  
Required: `false`  
Type: `string`  
Valid pattern: `^[1-9]{1}[0-9]{0,3}$`

`ForceDeploy`  
Indicates whether to force the Docker deployment if it fails because of the improper cleanup of the last deployment. The default value is `False`.  
Display name in the AWS IoT console: **Force deployment**  
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

`DockerPullBeforeUp`  
Indicates whether the deployer should run `docker-compose pull` before running `docker-compose up` for a pull-down-up behavior. The default value is `True`.  
Display name in the AWS IoT console: **Docker Pull Before Up**  
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

------
#### [ Versions 2 - 4 ]<a name="docker-app-connector-parameters-v1"></a>

`DockerComposeFileS3Bucket`  
The name of the S3 bucket that contains your Docker Compose file. When you create the bucket, make sure to follow the [rules for bucket names](https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html) described in the *Amazon Simple Storage Service User Guide*.  
Display name in the AWS IoT console: **Docker Compose file in S3**  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `[a-zA-Z0-9\\-\\.]{3,63}`

`DockerComposeFileS3Key`  
The object key for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Object key and metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileS3Version`  
The object version for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Using versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `false`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileDestinationPath`  
The absolute path of the local directory used to store a copy of the Docker Compose file. This must be an existing directory. The user specified for `DockerUserId` must have permission to create a file in this directory. For more information, see [Setting up the Docker user on the AWS IoT Greengrass core](#docker-app-connector-linux-user).  
This directory stores your Docker Compose file and the credentials for your Docker images from private repositories. It is your responsibility to secure this directory. For more information, see [Security notes](#docker-app-connector-security).
Display name in the AWS IoT console: **Directory path for local Compose file**  
Required: `true`  
Type: `string`  
Valid pattern `\/.*\/?`  
Example: `/home/username/myCompose`

`DockerUserId`  
The UID of the Linux user that the connector runs as. This user must belong to the `docker` Linux group on the core device and have write permissions to the `DockerComposeFileDestinationPath` directory. For more information, see [Setting up the Docker user on the core](#docker-app-connector-linux-user).  
<a name="avoid-running-as-root"></a>We recommend that you avoid running as root unless absolutely necessary. If you do specify the root user, you must allow Lambda functions to run as root on the AWS IoT Greengrass core. For more information, see [Running a Lambda function as root](lambda-group-config.md#lambda-running-as-root).
Display name in the AWS IoT console: **Docker user ID**  
Required: `false`  
Type: `string`  
Valid pattern: `^[0-9]{1,5}$`

`AWSSecretsArnList`  
The Amazon Resource Names (ARNs) of the secrets in AWS Secrets Manager that contain the login information used to access your Docker images in private repositories. For more information, see [Accessing Docker images from private repositories](#access-private-repositories).  
Display name in the AWS IoT console: **Credentials for private repositories**  
Required: `false`. This parameter is required to access Docker images stored in private repositories.  
Type: `array` of `string`  
Valid pattern: `[( ?,? ?"(arn:(aws(-[a-z]+)):secretsmanager:[a-z0-9-]+:[0-9]{12}:secret:([a-zA-Z0-9\]+/)[a-zA-Z0-9/_+=,.@-]+-[a-zA-Z0-9]+)")]`

`DockerContainerStatusLogFrequency`  
The frequency (in seconds) at which the connector logs status information about the Docker containers running on the core. The default is 300 seconds (5 minutes).  
Display name in the AWS IoT console: **Logging frequency**  
Required: `false`  
Type: `string`  
Valid pattern: `^[1-9]{1}[0-9]{0,3}$`

`ForceDeploy`  
Indicates whether to force the Docker deployment if it fails because of the improper cleanup of the last deployment. The default value is `False`.  
Display name in the AWS IoT console: **Force deployment**  
Required: `false`  
Type: `string`  
Valid pattern: `^(true|false)$`

------
#### [ Version 1 ]<a name="docker-app-connector-parameters-v1"></a>

`DockerComposeFileS3Bucket`  
The name of the S3 bucket that contains your Docker Compose file. When you create the bucket, make sure to follow the [rules for bucket names](https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html) described in the *Amazon Simple Storage Service User Guide*.  
Display name in the AWS IoT console: **Docker Compose file in S3**  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `[a-zA-Z0-9\\-\\.]{3,63}`

`DockerComposeFileS3Key`  
The object key for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Object key and metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `true`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileS3Version`  
The object version for your Docker Compose file in Amazon S3. For more information, including object key naming guidelines, see [Using versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) in the *Amazon Simple Storage Service User Guide*.  
In the console, the **Docker Compose file in S3** property combines the `DockerComposeFileS3Bucket`, `DockerComposeFileS3Key`, and `DockerComposeFileS3Version` parameters.
Required: `false`  
Type: `string`  
Valid pattern `.+`

`DockerComposeFileDestinationPath`  
The absolute path of the local directory used to store a copy of the Docker Compose file. This must be an existing directory. The user specified for `DockerUserId` must have permission to create a file in this directory. For more information, see [Setting up the Docker user on the AWS IoT Greengrass core](#docker-app-connector-linux-user).  
This directory stores your Docker Compose file and the credentials for your Docker images from private repositories. It is your responsibility to secure this directory. For more information, see [Security notes](#docker-app-connector-security).
Display name in the AWS IoT console: **Directory path for local Compose file**  
Required: `true`  
Type: `string`  
Valid pattern `\/.*\/?`  
Example: `/home/username/myCompose`

`DockerUserId`  
The UID of the Linux user that the connector runs as. This user must belong to the `docker` Linux group on the core device and have write permissions to the `DockerComposeFileDestinationPath` directory. For more information, see [Setting up the Docker user on the core](#docker-app-connector-linux-user).  
<a name="avoid-running-as-root"></a>We recommend that you avoid running as root unless absolutely necessary. If you do specify the root user, you must allow Lambda functions to run as root on the AWS IoT Greengrass core. For more information, see [Running a Lambda function as root](lambda-group-config.md#lambda-running-as-root).
Display name in the AWS IoT console: **Docker user ID**  
Required: `false`  
Type: `string`  
Valid pattern: `^[0-9]{1,5}$`

`AWSSecretsArnList`  
The Amazon Resource Names (ARNs) of the secrets in AWS Secrets Manager that contain the login information used to access your Docker images in private repositories. For more information, see [Accessing Docker images from private repositories](#access-private-repositories).  
Display name in the AWS IoT console: **Credentials for private repositories**  
Required: `false`. This parameter is required to access Docker images stored in private repositories.  
Type: `array` of `string`  
Valid pattern: `[( ?,? ?"(arn:(aws(-[a-z]+)):secretsmanager:[a-z0-9-]+:[0-9]{12}:secret:([a-zA-Z0-9\]+/)[a-zA-Z0-9/_+=,.@-]+-[a-zA-Z0-9]+)")]`

`DockerContainerStatusLogFrequency`  
The frequency (in seconds) at which the connector logs status information about the Docker containers running on the core. The default is 300 seconds (5 minutes).  
Display name in the AWS IoT console: **Logging frequency**  
Required: `false`  
Type: `string`  
Valid pattern: `^[1-9]{1}[0-9]{0,3}$`

------

### Create Connector Example (AWS CLI)
<a name="docker-app-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the Greengrass Docker application deployment connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyDockerApplicationDeploymentConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/DockerApplicationDeployment/versions/5",
            "Parameters": {
                "DockerComposeFileS3Bucket": "amzn-s3-demo-bucket",
                "DockerComposeFileS3Key": "production-docker-compose.yml",
                "DockerComposeFileS3Version": "123",
                "DockerComposeFileDestinationPath": "/home/username/myCompose",
                "DockerUserId": "1000",
                "AWSSecretsArnList": "[\"arn:aws:secretsmanager:region:account-id:secret:greengrass-secret1-hash\",\"arn:aws:secretsmanager:region:account-id:secret:greengrass-secret2-hash\"]",
                "DockerContainerStatusLogFrequency": "30",
                "ForceDeploy": "True",
                "DockerPullBeforeUp": "True"
            }
        }
    ]
}'
```

**Note**  
The Lambda function in this connector has a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

## Input data
<a name="docker-app-connector-data-input"></a>

This connector doesn't require or accept input data.

## Output data
<a name="docker-app-connector-data-output"></a>

This connector publishes the status of the `docker-compose up` command as output data.

<a name="topic-filter"></a>**Topic filter in subscription**  
`dockerapplicationdeploymentconnector/message/status`

**Example output: Success**  

```
{
  "status":"success",
  "GreengrassDockerApplicationDeploymentStatus":"Successfully triggered docker-compose up", 
  "S3Bucket":"amzn-s3-demo-bucket",
  "ComposeFileName":"production-docker-compose.yml",
  "ComposeFileVersion":"123"
}
```

**Example output: Failure**  

```
{
  "status":"fail",
  "error_message":"description of error",
  "error":"InvalidParameter"
}
```
The error type can be `InvalidParameter` or `InternalError`.
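A user-defined Lambda function subscribed to the status topic can act on these messages. The following minimal Python sketch (the handler name and field checks assume the success and failure payload formats shown above) logs the result of each deployment:

```python
# Sketch of a user-defined Greengrass Lambda handler subscribed to
# dockerapplicationdeploymentconnector/message/status. The field names
# assume the output formats shown above.
def function_handler(event, context):
    if event.get("status") == "success":
        print("Deployed {ComposeFileName} (version {ComposeFileVersion}) "
              "from {S3Bucket}".format(**event))
    else:
        print("Deployment failed: {error}: {error_message}".format(**event))
    return event.get("status")
```

Subscribe the function to the topic filter shown above so it receives each status message that the connector publishes.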

## Setting up the Docker user on the AWS IoT Greengrass core
<a name="docker-app-connector-linux-user"></a>

The Greengrass Docker application deployment connector runs as the user you specify for the `DockerUserId` parameter. If you don't specify a value, the connector runs as `ggc_user`, which is the default Greengrass access identity.

To allow the connector to interact with the Docker daemon, the Docker user must belong to the `docker` Linux group on the core. The Docker user must also have write permissions to the `DockerComposeFileDestinationPath` directory. This is where the connector stores your local `docker-compose.yml` file and Docker credentials.

**Note**  
We recommend that you create a Linux user instead of using the default `ggc_user`. Otherwise, any Lambda function in the Greengrass group can access the Compose file and Docker credentials.
<a name="avoid-running-as-root"></a>We recommend that you avoid running as root unless absolutely necessary. If you do specify the root user, you must allow Lambda functions to run as root on the AWS IoT Greengrass core. For more information, see [Running a Lambda function as root](lambda-group-config.md#lambda-running-as-root).

1. Create the user. You can run the `useradd` command and include the optional `-u` option to assign a UID. For example:

   ```
   sudo useradd -u 1234 user-name
   ```

1. Add the user to the `docker` group on the core. For example:

   ```
   sudo usermod -aG docker user-name
   ```

   For more information, including how to create the `docker` group, see [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) in the Docker documentation.

1. Give the user permissions to write to the directory specified for the `DockerComposeFileDestinationPath` parameter. For example:

   1. Set the user as the owner of the directory. This example uses the UID from step 1.

      ```
      chown 1234 docker-compose-file-destination-path
      ```

   1. Give the owner read and write permissions.

      ```
      chmod 700 docker-compose-file-destination-path
      ```

      For more information, see [How To Manage File And Folder Permissions In Linux](https://www.linux.com/tutorials/how-manage-file-and-folder-permissions-linux/) in the Linux Foundation documentation.

   1. If you didn't assign a UID when you created the user, or if you used an existing user, run the `id` command to look up the UID.

      ```
      id -u user-name
      ```

      You use the UID to configure the `DockerUserId` parameter for the connector.
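After you complete these steps, you can verify the setup with a quick check. This sketch defaults to the current user so that it runs as-is; substitute the user you created by setting `DOCKER_USER`:

```shell
# Substitute the user you created; defaults to the current user.
DOCKER_USER="${DOCKER_USER:-$(id -un)}"

# Look up the UID to use for the connector's DockerUserId parameter.
DOCKER_UID="$(id -u "$DOCKER_USER")"
echo "DockerUserId: $DOCKER_UID"

# Confirm docker group membership.
if id -nG "$DOCKER_USER" | tr ' ' '\n' | grep -qx docker; then
    echo "$DOCKER_USER is in the docker group"
else
    echo "$DOCKER_USER is not in the docker group (run: sudo usermod -aG docker $DOCKER_USER)"
fi
```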

## Usage information
<a name="docker-app-connector-usage-info"></a>

When you use the Greengrass Docker application deployment connector, you should be aware of the following implementation-specific usage information.
+ **Fixed prefix for project names.** The connector prepends the `greengrassdockerapplicationdeployment` prefix to the names of the Docker containers that it starts. The connector uses this prefix as the project name in the `docker-compose` commands that it runs.
+ **Logging behavior.** The connector writes status information and troubleshooting information to a log file. You can configure AWS IoT Greengrass to send logs to CloudWatch Logs and to write logs locally. For more information, see [Logging for connectors](connectors.md#connectors-logging). This is the path to the local log for the connector:

  ```
  /greengrass-root/ggc/var/log/user/region/aws/DockerApplicationDeployment.log
  ```

  You must have root permissions to access local logs.
+ **Updating Docker images.** Docker caches images on the core device. If you update a Docker image and want to propagate the change to the core device, make sure to change the tag for the image in the Compose file. Changes take effect after the Greengrass group is deployed.
+ **10-minute timeout for cleanup operations.** When the Greengrass daemon stops during a restart, the `docker-compose down` command is initiated. All Docker containers have a maximum of 10 minutes after `docker-compose down` is initiated to perform any cleanup operations. If the cleanup isn't completed in 10 minutes, you must clean up the remaining containers manually. For more information, see [docker rm](https://docs.docker.com/engine/reference/commandline/rm/) in the Docker CLI documentation.
+ **Running Docker commands.** To troubleshoot issues, you can run Docker commands in a terminal window on the core device. For example, run the following command to see the Docker containers that were started by the connector:

  ```
  docker ps --filter name="greengrassdockerapplicationdeployment"
  ```
+ **Reserved resource ID.** The connector uses the `DOCKER_DEPLOYER_SECRET_RESOURCE_RESERVED_ID_index` ID for the Greengrass resources it creates in the Greengrass group. Resource IDs must be unique in the group, so don't assign a resource ID that might conflict with this reserved resource ID.
+ **Offline mode.** When you set the `DockerOfflineMode` configuration parameter to `True`, the Docker connector can operate in *offline mode*. This situation can occur when a Greengrass group deployment restarts while the core device is offline and the connector can't establish a connection to Amazon S3 or Amazon ECR to retrieve the Docker Compose file.

  With offline mode enabled, the connector attempts to download your Compose file, and run `docker login` commands as it would for a normal restart. If these attempts fail, then the connector looks for a locally stored Compose file in the folder that was specified using the `DockerComposeFileDestinationPath` parameter. If a local Compose file exists, then the connector follows the normal sequence of `docker-compose` commands and pulls from local images. If the Compose file or the local images are not present, then the connector fails. The behavior of the `ForceDeploy` and `StopContainersOnNewDeployment` parameters remains the same in offline mode. 
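As a concrete example of the image-update behavior described in this list, changing the image tag in the Compose file is what signals the core to pull the updated image. The service and image names below are illustrative:

```
version: '3.3'
services:
  web:
    # Changing v1.0 to v1.1 here causes the updated image to be pulled
    # the next time the Greengrass group is deployed.
    image: user-name/repo:v1.1
```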

## Communicating with Docker containers
<a name="docker-app-connector-communicating"></a>

AWS IoT Greengrass supports the following communication channels between Greengrass components and Docker containers:
+ Greengrass Lambda functions can use REST APIs to communicate with processes in Docker containers. You can set up a server in a Docker container that opens a port. Lambda functions can communicate with the container on this port.
+ Processes in Docker containers can exchange MQTT messages through the local Greengrass message broker. You can set up the Docker container as a client device in the Greengrass group and then create subscriptions to allow the container to communicate with Greengrass Lambda functions, client devices, and other connectors in the group, or with AWS IoT and the local shadow service. For more information, see [Configure MQTT communication with Docker containers](#docker-app-connector-mqtt-communication).
+ Greengrass Lambda functions can update a shared file to pass information to Docker containers. You can use the Compose file to bind mount the shared file path for a Docker container.
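The first channel in this list can be sketched end to end. In the following Python example, a small HTTP server stands in for the process running inside the Docker container, and the client call stands in for a Greengrass Lambda function on the core. The route and response payload are illustrative:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in for a REST server that a process inside the Docker container
# would expose on an open port.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        pass  # keep the example output quiet

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: pick any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A Greengrass Lambda function on the core could call the container like this.
with urlopen("http://127.0.0.1:%d/health" % port) as response:
    health = json.load(response)
print(health)  # {'status': 'ok'}

server.shutdown()
```

In a real deployment, the container's server port is published in the Compose file and the Lambda function connects to it on the core device's local network interface.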

### Configure MQTT communication with Docker containers
<a name="docker-app-connector-mqtt-communication"></a>

You can configure a Docker container as a client device and add it to a Greengrass group. Then, you can create subscriptions that allow MQTT communication between the Docker container and Greengrass components or AWS IoT. In the following procedure, you create a subscription that allows the Docker container device to receive shadow update messages from the local shadow service. You can follow this pattern to create other subscriptions.

**Note**  
This procedure assumes that you have already created a Greengrass group and a Greengrass core (v1.10 or later). For information about creating a Greengrass group and core, see [Getting started with AWS IoT Greengrass](gg-gs.md).

**To configure a Docker container as a client device and add it to a Greengrass group**

1. Create a folder on the core device to store the certificates and keys used to authenticate the Greengrass device.

   The file path must be mounted on the Docker container you want to start. The following snippet shows how to mount a file path in your Compose file. In this example, *path-to-device-certs* represents the folder you created in this step.

   ```
   version: '3.3'
   services:
     myService:
       image: user-name/repo:image-tag
       volumes:
         -  /path-to-device-certs/:/path-accessible-in-container
   ```

1. <a name="console-gg-groups"></a>In the AWS IoT console navigation pane, under **Manage**, expand **Greengrass devices**, and then choose **Groups (V1)**.

1. <a name="group-choose-target-group"></a>Choose the target group.

1. <a name="gg-group-add-device"></a>On the group configuration page, choose **Client devices**, and then choose **Associate**.

1. <a name="gg-group-create-device"></a>In the **Associate a client device with this group** modal, choose **Create new AWS IoT thing**.

   The **Create things** page opens in a new tab.

1. <a name="gg-group-create-single-thing"></a>On the **Create things** page, choose **Create single thing**, and then choose **Next**.

1. On the **Specify thing properties** page, enter a name for the device, and then choose **Next**.

1. <a name="gg-group-create-device-configure-certificate"></a>On the **Configure device certificate** page, choose **Next**.

1. <a name="gg-group-create-device-attach-policy"></a>On the **Attach policies to certificate** page, do one of the following:
   + Select an existing policy that grants permissions that client devices require, and then choose **Create thing**.

     A modal opens where you can download the certificates and keys that the device uses to connect to the AWS Cloud and the core.
   + Create and attach a new policy that grants client device permissions. Do the following:

     1. Choose **Create policy**.

        The **Create policy** page opens in a new tab.

     1. On the **Create policy** page, do the following:

        1. For **Policy name**, enter a name that describes the policy, such as **GreengrassV1ClientDevicePolicy**.

        1. On the **Policy statements** tab, under **Policy document**, choose **JSON**.

        1. Enter the following policy document. This policy allows the client device to discover Greengrass cores and communicate on all MQTT topics. For information about how to restrict this policy's access, see [Device authentication and authorization for AWS IoT Greengrass](device-auth.md).

------
#### [ JSON ]

****  

           ```
           {
             "Version":"2012-10-17",
             "Statement": [
               {
                 "Effect": "Allow",
                 "Action": [
                   "iot:Publish",
                   "iot:Subscribe",
                   "iot:Connect",
                   "iot:Receive"
                 ],
                 "Resource": [
                   "*"
                 ]
               },
               {
                 "Effect": "Allow",
                 "Action": [
                   "greengrass:*"
                 ],
                 "Resource": [
                   "*"
                 ]
               }
             ]
           }
           ```

------

        1. Choose **Create** to create the policy.

     1. Return to the browser tab with the **Attach policies to certificate** page open. Do the following:

        1. In the **Policies** list, select the policy that you created, such as **GreengrassV1ClientDevicePolicy**.

           If you don't see the policy, choose the refresh button.

        1. Choose **Create thing**.

           A modal opens where you can download the certificates and keys that the device uses to connect to the AWS Cloud and the core.

1. <a name="gg-group-create-device-download-certs"></a>In the **Download certificates and keys** modal, download the device's certificates.
**Important**  
Before you choose **Done**, download the security resources.

   Do the following:

   1. For **Device certificate**, choose **Download** to download the device certificate.

   1. For **Public key file**, choose **Download** to download the public key for the certificate.

   1. For **Private key file**, choose **Download** to download the private key file for the certificate.

   1. Review [Server Authentication](https://docs.aws.amazon.com/iot/latest/developerguide/server-authentication.html) in the *AWS IoT Developer Guide* and choose the appropriate root CA certificate. We recommend that you use Amazon Trust Services (ATS) endpoints and ATS root CA certificates. Under **Root CA certificates**, choose **Download** for a root CA certificate.

   1. Choose **Done**.

   Make a note of the certificate ID that's common in the file names for the device certificate and keys. You need it later.

1. Copy the certificates and keys into the folder that you created in step 1.

Next, create a subscription in the group. For this example, you create a subscription that allows the Docker container device to receive MQTT messages from the local shadow service.

**Note**  
The maximum size of a shadow document is 8 KB. For more information, see [AWS IoT quotas](https://docs.aws.amazon.com/iot/latest/developerguide/limits-iot.html) in the *AWS IoT Developer Guide*.

**To create a subscription that allows the Docker container device to receive MQTT messages from the local shadow service**

1. <a name="shared-subscriptions-addsubscription"></a>On the group configuration page, choose the **Subscriptions** tab, and then choose **Add Subscription**.

1. On the **Select your source and target** page, configure the source and target, as follows:

   1. For **Select a source**, choose **Services**, and then choose **Local Shadow Service**.

   1. For **Select a target**, choose **Devices**, and then choose your device.

   1. Choose **Next**.

   1. On the **Filter your data with a topic** page, for **Topic filter**, choose **$aws/things/*MyDockerDevice*/shadow/update/accepted**, and then choose **Next**. Replace *MyDockerDevice* with the name of the device that you created earlier.

   1. Choose **Finish**.

Include the following code snippet in the Docker image that you reference in your Compose file. This is the Greengrass device code. Also, add code in your Docker container that starts the Greengrass device inside the container. It can run as a separate process in the image or in a separate thread.

```
import os
import sys
import time
import uuid

from AWSIoTPythonSDK.core.greengrass.discovery.providers import DiscoveryInfoProvider
from AWSIoTPythonSDK.exception.AWSIoTExceptions import DiscoveryInvalidRequestException
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

# Replace thingName with the name you registered for the Docker device.
thingName = "MyDockerDevice"
clientId = thingName

# Replace host with the IoT endpoint for your AWS account.
host = "myPrefix.iot.region.amazonaws.com"

# Replace topic with the topic where the Docker container subscribes.
topic = "$aws/things/MyDockerDevice/shadow/update/accepted"

# Replace these paths based on the download location of the certificates for the Docker container.
rootCAPath = "/path-accessible-in-container/AmazonRootCA1.pem"
certificatePath = "/path-accessible-in-container/certId-certificate.pem.crt"
privateKeyPath = "/path-accessible-in-container/certId-private.pem.key"


# Discover Greengrass cores.
discoveryInfoProvider = DiscoveryInfoProvider()
discoveryInfoProvider.configureEndpoint(host)
discoveryInfoProvider.configureCredentials(rootCAPath, certificatePath, privateKeyPath)
discoveryInfoProvider.configureTimeout(10)  # 10 seconds.

GROUP_CA_PATH = "./groupCA/"
MQTT_QOS = 1

discovered = False
groupCA = None
coreInfo = None

try:
    # Get discovery info from AWS IoT.
    discoveryInfo = discoveryInfoProvider.discover(thingName)
    caList = discoveryInfo.getAllCas()
    coreList = discoveryInfo.getAllCores()

    # Use first discovery result.
    groupId, ca = caList[0]
    coreInfo = coreList[0]

    # Save the group CA to a local file.
    groupCA = GROUP_CA_PATH + groupId + "_CA_" + str(uuid.uuid4()) + ".crt"
    if not os.path.exists(GROUP_CA_PATH):
        os.makedirs(GROUP_CA_PATH)
    groupCAFile = open(groupCA, "w")
    groupCAFile.write(ca)
    groupCAFile.close()
    discovered = True
except DiscoveryInvalidRequestException as e:
    print("Invalid discovery request detected!")
    print("Type: %s" % str(type(e)))
    print("Error message: %s" % str(e))
    print("Stopping...")
except BaseException as e:
    print("Error in discovery!")
    print("Type: %s" % str(type(e)))
    print("Error message: %s" % str(e))
    print("Stopping...")

myAWSIoTMQTTClient = AWSIoTMQTTClient(clientId)
myAWSIoTMQTTClient.configureCredentials(groupCA, privateKeyPath, certificatePath)


# Try to connect to the Greengrass core.
connected = False
for connectivityInfo in coreInfo.connectivityInfoList:
    currentHost = connectivityInfo.host
    currentPort = connectivityInfo.port
    myAWSIoTMQTTClient.configureEndpoint(currentHost, currentPort)
    try:
        myAWSIoTMQTTClient.connect()
        connected = True
    except BaseException as e:
        print("Error in connect!")
        print("Type: %s" % str(type(e)))
        print("Error message: %s" % str(e))
    if connected:
        break

if not connected:
    print("Cannot connect to core %s. Exiting..." % coreInfo.coreThingArn)
    sys.exit(-2)

# Handle the MQTT message received from GGShadowService.
def customCallback(client, userdata, message):
    print("Received an MQTT message")
    print(message)

# Subscribe to the MQTT topic.
myAWSIoTMQTTClient.subscribe(topic, MQTT_QOS, customCallback)

# Keep the process alive to listen for messages.
while True:
    time.sleep(1)
```

## Security notes
<a name="docker-app-connector-security"></a>

When you use the Greengrass Docker application deployment connector, be aware of the following security considerations.

  
**Local storage of the Docker Compose file**  
The connector stores a copy of your Compose file in the directory specified for the `DockerComposeFileDestinationPath` parameter.  
It's your responsibility to secure this directory. You should use file system permissions to restrict access to the directory.

  
**Local storage of the Docker credentials**  
If your Docker images are stored in private repositories, the connector stores your Docker credentials in the directory specified for the `DockerComposeFileDestinationPath` parameter.  
It's your responsibility to secure these credentials. For example, you should use [credential-helper](https://docs.docker.com/engine/reference/commandline/login/#credentials-store) on the core device when you install Docker Engine.

  
**Install Docker Engine from a trusted source**  
It's your responsibility to install Docker Engine from a trusted source. This connector uses the Docker daemon on the core device to access your Docker assets and manage Docker containers.

  
**Scope of Greengrass group role permissions**  
Permissions that you add in the Greengrass group role can be assumed by all Lambda functions and connectors in the Greengrass group. This connector requires access to your Docker Compose file stored in an S3 bucket. It also requires access to your Amazon ECR authorization token if your Docker images are stored in a private repository in Amazon ECR.

## Licenses
<a name="docker-app-connector-license"></a>

The Greengrass Docker application deployment connector includes the following third-party software/licensing:<a name="boto-3-licenses"></a>
+ [AWS SDK for Python (Boto3)](https://pypi.org/project/boto3/)/Apache License 2.0
+ [botocore](https://pypi.org/project/botocore/)/Apache License 2.0
+ [dateutil](https://pypi.org/project/python-dateutil/1.4/)/PSF License
+ [docutils](https://pypi.org/project/docutils/)/BSD License, GNU General Public License (GPL), Python Software Foundation License, Public Domain
+ [jmespath](https://pypi.org/project/jmespath/)/MIT License
+ [s3transfer](https://pypi.org/project/s3transfer/)/Apache License 2.0
+ [urllib3](https://pypi.org/project/urllib3/)/MIT License

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="docker-app-connector-changelog"></a>

The following table describes the changes in each version of the connector.


|  Version  |  Changes  | 
| --- | --- | 
|  7  |  Added `DockerOfflineMode` to use an existing Docker Compose file when AWS IoT Greengrass starts offline. Implemented retries for the `docker login` command. Added support for 32-bit UIDs.  | 
|  6  |  Added `StopContainersOnNewDeployment` to override container cleanup when a new deployment is made or GGC stops. Safer shutdown and startup mechanisms. Fixed a YAML validation bug.  | 
|  5  |  Images are pulled before running `docker-compose down`.  | 
|  4  |  Added pull-before-up behavior to update Docker images.  | 
|  3  |  Fixed an issue with finding environment variables.  | 
|  2  |  Added the `ForceDeploy` parameter.  | 
|  1  |  Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="docker-app-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)

# IoT Analytics connector
<a name="iot-analytics-connector"></a>

**Warning**  <a name="connectors-extended-life-phase-warning"></a>
This connector has moved into the *extended life phase*, and AWS IoT Greengrass won't release updates that provide features, enhancements to existing features, security patches, or bug fixes. For more information, see [AWS IoT Greengrass Version 1 maintenance policy](maintenance-policy.md).

The IoT Analytics connector sends local device data to AWS IoT Analytics. You can use this connector as a central hub to collect data from sensors on the Greengrass core device and from [connected client devices](what-is-gg.md#greengrass-devices). The connector sends the data to AWS IoT Analytics channels in the current AWS account and Region. It can send data to a default destination channel and to dynamically specified channels.

**Note**  
AWS IoT Analytics is a fully managed service that allows you to collect, store, process, and query IoT data. In AWS IoT Analytics, the data can be further analyzed and processed. For example, it can be used to train ML models for monitoring machine health or to test new modeling strategies. For more information, see [What is AWS IoT Analytics?](https://docs.aws.amazon.com/iotanalytics/latest/userguide/welcome.html) in the *AWS IoT Analytics User Guide*.

The connector accepts formatted and unformatted data on [input MQTT topics](#iot-analytics-connector-data-input). It supports two predefined topics where the destination channel is specified inline. It can also receive messages on customer-defined topics that are [configured in subscriptions](connectors.md#connectors-inputs-outputs). This can be used to route messages from client devices that publish to fixed topics or handle unstructured or stack-dependent data from resource-constrained devices.

This connector uses the [BatchPutMessage](https://docs.aws.amazon.com/iotanalytics/latest/userguide/api.html#cli-iotanalytics-batchputmessage) API to send data (as a JSON or base64-encoded string) to the destination channel. The connector can process raw data into a format that conforms to API requirements. The connector buffers input messages in per-channel queues and processes the batches asynchronously. It provides parameters that you can use to control queueing and batching behavior and to restrict memory consumption. For example, you can configure the maximum queue size, batch interval, memory size, and number of active channels.
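The queueing-and-batching behavior can be pictured with a short sketch. The following Python code is illustrative only; the names, limits, and drop-oldest eviction policy are hypothetical, not the connector's implementation. Messages accumulate in bounded per-channel queues and are drained in batches, as a `BatchPutMessage` caller would consume them:

```python
from collections import defaultdict, deque

MAX_QUEUE_SIZE = 100  # hypothetical per-channel limit
BATCH_SIZE = 5        # hypothetical batch size per BatchPutMessage call

queues = defaultdict(deque)

def enqueue(channel, message):
    """Buffer a message for a channel, dropping the oldest when full."""
    queue = queues[channel]
    if len(queue) >= MAX_QUEUE_SIZE:
        queue.popleft()
    queue.append(message)

def drain(channel):
    """Take up to one batch of messages for a BatchPutMessage call."""
    queue = queues[channel]
    return [queue.popleft() for _ in range(min(BATCH_SIZE, len(queue)))]

for i in range(7):
    enqueue("channel_1", {"temperature": i})

print(len(drain("channel_1")))  # 5
print(len(drain("channel_1")))  # 2
```

The connector's actual parameters for these limits are described in the [Parameters](#iot-analytics-connector-param) section.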

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 4 | `arn:aws:greengrass:region::/connectors/IoTAnalytics/versions/4` | 
| 3 | `arn:aws:greengrass:region::/connectors/IoTAnalytics/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/IoTAnalytics/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/IoTAnalytics/versions/1` | 

For information about version changes, see the [Changelog](#iot-analytics-connector-changelog).

## Requirements
<a name="iot-analytics-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 3 - 4 ]
+ <a name="conn-req-ggc-v1.9.3"></a>AWS IoT Greengrass Core software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="conn-iot-analytics-req-regions"></a>This connector can be used only in Amazon Web Services Regions where both [AWS IoT Greengrass](https://docs.aws.amazon.com/general/latest/gr/greengrass.html) and [AWS IoT Analytics](https://docs.aws.amazon.com/general/latest/gr/iot-analytics.html) are supported.
+ <a name="conn-iot-analytics-req-ita-config"></a>All related AWS IoT Analytics entities and workflows are created and configured. The entities include channels, pipelines, data stores, and datasets. For more information, see the [AWS CLI](https://docs.aws.amazon.com/iotanalytics/latest/userguide/getting-started.html) or [console](https://docs.aws.amazon.com/iotanalytics/latest/userguide/quickstart.html) procedures in the *AWS IoT Analytics User Guide*.
**Note**  
Destination AWS IoT Analytics channels must be in the same AWS account and Region as this connector.
+ <a name="conn-iot-analytics-req-iam-policy"></a>The [Greengrass group role](group-role.md) configured to allow the `iotanalytics:BatchPutMessage` action on destination channels, as shown in the following example IAM policy. The channels must be in the current AWS account and Region.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",
      "Statement": [
          {
              "Sid": "Stmt1528133056761",
              "Action": [
                  "iotanalytics:BatchPutMessage"
              ],
              "Effect": "Allow",
              "Resource": [
                  "arn:aws:iotanalytics:us-east-1:123456789012:channel/channel_1_name",
                  "arn:aws:iotanalytics:us-east-1:123456789012:channel/channel_2_name"
              ]
          }
      ]
  }
  ```

------

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

------
#### [ Versions 1 - 2 ]
+ <a name="conn-req-ggc-v1.7.0"></a>AWS IoT Greengrass Core software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-iot-analytics-req-regions"></a>This connector can be used only in Amazon Web Services Regions where both [AWS IoT Greengrass](https://docs.aws.amazon.com/general/latest/gr/greengrass.html) and [AWS IoT Analytics](https://docs.aws.amazon.com/general/latest/gr/iot-analytics.html) are supported.
+ <a name="conn-iot-analytics-req-ita-config"></a>All related AWS IoT Analytics entities and workflows are created and configured. The entities include channels, pipelines, data stores, and datasets. For more information, see the [AWS CLI](https://docs.aws.amazon.com/iotanalytics/latest/userguide/getting-started.html) or [console](https://docs.aws.amazon.com/iotanalytics/latest/userguide/quickstart.html) procedures in the *AWS IoT Analytics User Guide*.
**Note**  
Destination AWS IoT Analytics channels must be in the same AWS account and Region as this connector.
+ <a name="conn-iot-analytics-req-iam-policy"></a>The [Greengrass group role](group-role.md) configured to allow the `iotanalytics:BatchPutMessage` action on destination channels, as shown in the following example IAM policy. The channels must be in the current AWS account and Region.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
          {
              "Sid": "Stmt1528133056761",
              "Action": [
                  "iotanalytics:BatchPutMessage"
              ],
              "Effect": "Allow",
              "Resource": [
              "arn:aws:iotanalytics:us-east-1:123456789012:channel/channel_1_name",
      "arn:aws:iotanalytics:us-east-1:123456789012:channel/channel_2_name"
              ]
          }
      ]
  }
  ```

------

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

------

## Parameters
<a name="iot-analytics-connector-param"></a>

`MemorySize`  
The amount of memory (in KB) to allocate to this connector.  
Display name in the AWS IoT console: **Memory size**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`PublishRegion`  
The AWS Region that your AWS IoT Analytics channels are created in. Use the same Region as the connector.  
This must also match the Region for the channels that are specified in the [group role](#iot-analytics-connector-req).  
Display name in the AWS IoT console: **Publish region**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|([a-z]{2}-[a-z]+-\\d{1})`

`PublishInterval`  
The interval (in seconds) for publishing a batch of received data to AWS IoT Analytics.  
Display name in the AWS IoT console: **Publish interval**  
Required: `false`  
Type: `string`  
Default value: `1`  
Valid pattern: `^$|^[0-9]+$`

`IotAnalyticsMaxActiveChannels`  
The maximum number of AWS IoT Analytics channels that the connector actively watches for. This must be greater than 0, and at least equal to the number of channels that you expect the connector to publish to at a given time.  
You can use this parameter to restrict memory consumption by limiting the total number of queues that the connector can manage at a given time. A queue is deleted when all queued messages are sent.  
Display name in the AWS IoT console: **Maximum number of active channels**  
Required: `false`  
Type: `string`  
Default value: `50`  
Valid pattern: `^$|^[1-9][0-9]*$`

`IotAnalyticsQueueDropBehavior`  
The behavior for dropping messages from a channel queue when the queue is full.  
Display name in the AWS IoT console: **Queue drop behavior**  
Required: `false`  
Type: `string`  
Valid values: `DROP_NEWEST` or `DROP_OLDEST`  
Default value: `DROP_NEWEST`  
Valid pattern: `^DROP_NEWEST$|^DROP_OLDEST$`

`IotAnalyticsQueueSizePerChannel`  
The maximum number of messages to retain in memory (per channel) before the messages are submitted or dropped. This must be greater than 0.  
Display name in the AWS IoT console: **Maximum queue size per channel**  
Required: `false`  
Type: `string`  
Default value: `2048`  
Valid pattern: `^$|^[1-9][0-9]*$`

`IotAnalyticsBatchSizePerChannel`  
The maximum number of messages to send to an AWS IoT Analytics channel in one batch request. This must be greater than 0.  
Display name in the AWS IoT console: **Maximum number of messages to batch per channel**  
Required: `false`  
Type: `string`  
Default value: `5`  
Valid pattern: `^$|^[1-9][0-9]*$`

`IotAnalyticsDefaultChannelName`  
The name of the AWS IoT Analytics channel that this connector uses for messages that are sent to a customer-defined input topic.  
Display name in the AWS IoT console: **Default channel name**  
Required: `false`  
Type: `string`  
Valid pattern: `^[a-zA-Z0-9_]+$`

`IsolationMode`  <a name="IsolationMode"></a>
The [containerization](connectors.md#connector-containerization) mode for this connector. The default is `GreengrassContainer`, which means that the connector runs in an isolated runtime environment inside the AWS IoT Greengrass container.  
The default containerization setting for the group does not apply to connectors.  
Display name in the AWS IoT console: **Container isolation mode**  
Required: `false`  
Type: `string`  
Valid values: `GreengrassContainer` or `NoContainer`  
Valid pattern: `^NoContainer$|^GreengrassContainer$`
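Taken together, `IotAnalyticsQueueSizePerChannel` and `IotAnalyticsQueueDropBehavior` bound per-channel memory use. The following sketch illustrates the two drop behaviors with a hypothetical `enqueue` helper; it is not connector source code:

```python
from collections import deque

def enqueue(queue, message, max_size, drop_behavior):
    """Add a message to a per-channel queue, dropping per the configured behavior."""
    if len(queue) < max_size:
        queue.append(message)
        return None  # nothing dropped
    if drop_behavior == "DROP_NEWEST":
        return message  # the incoming message is discarded
    # DROP_OLDEST: evict the head of the queue to make room for the new message
    dropped = queue.popleft()
    queue.append(message)
    return dropped

q = deque()
for i in range(5):
    enqueue(q, i, max_size=3, drop_behavior="DROP_OLDEST")
print(list(q))  # → [2, 3, 4]
```

With `DROP_OLDEST`, the newest three messages survive; with `DROP_NEWEST`, the oldest three would remain in the queue instead.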

### Create Connector Example (AWS CLI)
<a name="iot-analytics-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the IoT Analytics connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyIoTAnalyticsApplication",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/IoTAnalytics/versions/3",
            "Parameters": {
                "MemorySize": "65535",
                "PublishRegion": "us-west-1",
                "PublishInterval": "2",
                "IotAnalyticsMaxActiveChannels": "25",
                "IotAnalyticsQueueDropBehavior": "DROP_OLDEST",
                "IotAnalyticsQueueSizePerChannel": "1028",
                "IotAnalyticsBatchSizePerChannel": "5",
                "IotAnalyticsDefaultChannelName": "my_channel"
            }
        }
    ]
}'
```

**Note**  
The Lambda function in this connector has a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="iot-analytics-connector-data-input"></a>

This connector accepts data on predefined and customer-defined MQTT topics. Publishers can be client devices, Lambda functions, or other connectors.

Predefined topics  
The connector supports the following two structured MQTT topics that allow publishers to specify the channel name inline.  
+ A [formatted message](#iot-analytics-connector-data-input-json) on the `iotanalytics/channels/+/messages/put` topic. The IoT data in these input messages must be formatted as a JSON or base64-encoded string.
+ An unformatted message on the `iotanalytics/channels/+/messages/binary/put` topic. Input messages received on this topic are treated as binary data and can contain any data type.

  To publish to predefined topics, replace the `+` wildcard with the channel name. For example:

  ```
  iotanalytics/channels/my_channel/messages/put
  ```

Customer-defined topics  
The connector supports the `#` topic syntax, which allows it to accept input messages on any MQTT topic that you configure in a subscription. We recommend that you specify a topic path instead of using only the `#` wildcard in your subscriptions. These messages are sent to the default channel that you specify for the connector.  
Input messages on customer-defined topics are treated as binary data. They can use any message format and can contain any data type. You can use customer-defined topics to route messages from devices that publish to fixed topics. You can also use them to accept input data from client devices that can't process the data into a formatted message to send to the connector.  
For more information about subscriptions and MQTT topics, see [Inputs and outputs](connectors.md#connectors-inputs-outputs).

The group role must allow the `iotanalytics:BatchPutMessage` action on all destination channels. For more information, see [Requirements](#iot-analytics-connector-req).
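To publish to one of the predefined topics described above, the channel name simply replaces the `+` wildcard. A minimal sketch; `put_topic` is a hypothetical helper, not part of any SDK:

```python
def put_topic(channel_name, binary=False):
    """Return the predefined input topic for an AWS IoT Analytics channel."""
    suffix = "messages/binary/put" if binary else "messages/put"
    return f"iotanalytics/channels/{channel_name}/{suffix}"

print(put_topic("my_channel"))
# → iotanalytics/channels/my_channel/messages/put
print(put_topic("my_channel", binary=True))
# → iotanalytics/channels/my_channel/messages/binary/put
```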

**Topic filter:** `iotanalytics/channels/+/messages/put`  <a name="iot-analytics-connector-data-input-json"></a>
Use this topic to send formatted messages to the connector and dynamically specify a destination channel. This topic also allows you to specify an ID that's returned in the response output. The connector verifies that IDs are unique for each message in the outbound `BatchPutMessage` request that it sends to AWS IoT Analytics. A message that has a duplicate ID is dropped.  
Input data sent to this topic must use the following message format.    
**Message properties**    
`request`  
The data to send to the specified channel.  
Required: `true`  
Type: `object` that includes the following properties:    
`message`  
The device or sensor data as a JSON or base64-encoded string.  
Required: `true`  
Type: `string`  
`id`  
An arbitrary ID for the request. This property is used to map an input request to an output response. When specified, the `id` property in the response object is set to this value. If you omit this property, the connector generates an ID.  
Required: `false`  
Type: `string`  
Valid pattern: `.*`  
**Example input**  

```
{
    "request": {
        "message" : "{\"temp\":23.33}"
    },
    "id" : "req123"
}
```

**Topic filter:** `iotanalytics/channels/+/messages/binary/put`  
Use this topic to send unformatted messages to the connector and dynamically specify a destination channel.  
The connector doesn't parse the input messages received on this topic. It treats them as binary data. Before sending the messages to AWS IoT Analytics, the connector encodes and formats them to conform with `BatchPutMessage` API requirements:  
+ The connector base64-encodes the raw data and includes the encoded payload in an outbound `BatchPutMessage` request.
+ The connector generates and assigns an ID to each input message.
**Note**  
The connector's response output doesn't include an ID correlation for these input messages.  
**Message properties**  
None.

**Topic filter:** `#`  
Use this topic to send any message format to the default channel. This is especially useful when your client devices publish to fixed topics or when you want to send data to the default channel from client devices that can't process the data into the connector's [supported message format](#iot-analytics-connector-data-input-json).  
You define the topic syntax in the subscription that you create to connect this connector to the data source. We recommend that you specify a topic path instead of using only the `#` wildcard in your subscriptions.  
The connector doesn't parse the messages that are published to this input topic. All input messages are treated as binary data. Before sending the messages to AWS IoT Analytics, the connector encodes and formats them to conform with `BatchPutMessage` API requirements:  
+ The connector base64-encodes the raw data and includes the encoded payload in an outbound `BatchPutMessage` request.
+ The connector generates and assigns an ID to each input message.
**Note**  
The connector's response output doesn't include an ID correlation for these input messages.  
**Message properties**  
None.
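The two steps that the connector applies to unformatted input (base64-encode the payload, generate a message ID) can be approximated as follows. This is an illustrative sketch, not connector source, and the exact field shapes of an outbound `BatchPutMessage` entry may differ:

```python
import base64
import uuid

def make_batch_entry(raw_payload: bytes) -> dict:
    """Approximate how binary input might be shaped into a BatchPutMessage entry."""
    return {
        # Generated ID; as noted above, it is not correlated in the response output.
        "messageId": str(uuid.uuid4()),
        # Raw bytes are base64-encoded before being sent to AWS IoT Analytics.
        "payload": base64.b64encode(raw_payload).decode("ascii"),
    }

entry = make_batch_entry(b'\x00\x01temperature=23.33')
print(entry["payload"])
```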

## Output data
<a name="iot-analytics-connector-data-output"></a>

This connector publishes status information as output data on an MQTT topic. This information contains the response that AWS IoT Analytics returns for each input message that the connector receives and forwards.

<a name="topic-filter"></a>**Topic filter in subscription**  
`iotanalytics/messages/put/status`

**Example output: Success**  

```
{
    "response" : {
        "status" : "success"
    },
    "id" : "req123"
}
```

**Example output: Failure**  

```
{
    "response" : {
        "status" : "fail",
        "error" : "ResourceNotFoundException",
        "error_message" : "A resource with the specified name could not be found."
    },
    "id" : "req123"
}
```
If the connector detects a retryable error (for example, connection errors), it retries the publish in the next batch. Exponential backoff is handled by the AWS SDK. Requests with retryable errors are added back to the channel queue for further publishing according to the `IotAnalyticsQueueDropBehavior` parameter.

## Usage Example
<a name="iot-analytics-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
If you use another Python runtime, you can create a symlink from Python 3.x to Python 3.7.
The [Get started with connectors (console)](connectors-console.md) and [Get started with connectors (CLI)](connectors-cli.md) topics contain detailed steps that show you how to configure and deploy an example Twilio Notifications connector.

1. Make sure you meet the [requirements](#iot-analytics-connector-req) for the connector.

   <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#iot-analytics-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. Add the connector and configure its [parameters](#iot-analytics-connector-param).

   1. Add subscriptions that allow the connector to receive [input data](#iot-analytics-connector-data-input) and send [output data](#iot-analytics-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.

### Example
<a name="iot-analytics-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector.

```
import greengrasssdk
import time
import json
 
iot_client = greengrasssdk.client('iot-data')
send_topic = 'iotanalytics/channels/my_channel/messages/put'
 
def create_request_with_all_fields():
    return  {
        "request": {
            "message" : "{\"temp\":23.33}"
        },
        "id" : "req_123"
    }
 
def publish_basic_message():
    messageToPublish = create_request_with_all_fields()
    print("Message To Publish: ", messageToPublish)
    iot_client.publish(topic=send_topic,
        payload=json.dumps(messageToPublish))
 
publish_basic_message()
 
def lambda_handler(event, context):
    return
```

## Limits
<a name="iot-analytics-connector-limits"></a>

This connector is subject to the following limits.
+ All limits imposed by the AWS SDK for Python (Boto3) for the AWS IoT Analytics [batch_put_message](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iotanalytics.html#IoTAnalytics.Client.batch_put_message) action.
+ All quotas imposed by the AWS IoT Analytics [BatchPutMessage](https://docs.aws.amazon.com/iotanalytics/latest/userguide/api.html#cli-iotanalytics-batchputmessage) API. For more information, see [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/iot-analytics.html#limits_iot_analytics) for AWS IoT Analytics in the *AWS General Reference*.
  + 100,000 messages per second per channel.
  + 100 messages per batch.
  + 128 KB per message.

  This API uses channel names (not channel ARNs), so sending data to cross-region or cross-account channels is not supported.
+ All quotas imposed by the AWS IoT Greengrass Core. For more information, see [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/greengrass.html#limits_greengrass) for the AWS IoT Greengrass core in the *AWS General Reference*.

  The following quotas might be especially applicable:
  + Maximum size of messages sent by a device is 128 KB.
  + Maximum message queue size in the Greengrass core router is 2.5 MB.
  + Maximum length of a topic string is 256 bytes of UTF-8 encoded characters.
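To stay within the `BatchPutMessage` quotas listed above (100 messages per batch, 128 KB per message), input data can be chunked before publishing. A minimal sketch, assuming JSON string messages; `chunk_messages` is a hypothetical helper, not connector source:

```python
MAX_BATCH_MESSAGES = 100          # BatchPutMessage quota: 100 messages per batch
MAX_MESSAGE_BYTES = 128 * 1024    # BatchPutMessage quota: 128 KB per message

def chunk_messages(messages):
    """Yield batches of at most 100 messages, skipping any message over 128 KB."""
    batch = []
    for msg in messages:
        if len(msg.encode("utf-8")) > MAX_MESSAGE_BYTES:
            continue  # an oversized message would be rejected by the API
        batch.append(msg)
        if len(batch) == MAX_BATCH_MESSAGES:
            yield batch
            batch = []
    if batch:
        yield batch

batches = list(chunk_messages(['{"temp":23.33}'] * 250))
print([len(b) for b in batches])  # → [100, 100, 50]
```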

## Licenses
<a name="iot-analytics-connector-license"></a>

The IoT Analytics connector includes the following third-party software/licensing:<a name="boto-3-licenses"></a>
+ [AWS SDK for Python (Boto3)](https://pypi.org/project/boto3/)/Apache License 2.0
+ [botocore](https://pypi.org/project/botocore/)/Apache License 2.0
+ [dateutil](https://pypi.org/project/python-dateutil/1.4/)/PSF License
+ [docutils](https://pypi.org/project/docutils/)/BSD License, GNU General Public License (GPL), Python Software Foundation License, Public Domain
+ [jmespath](https://pypi.org/project/jmespath/)/MIT License
+ [s3transfer](https://pypi.org/project/s3transfer/)/Apache License 2.0
+ [urllib3](https://pypi.org/project/urllib3/)/MIT License

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="iot-analytics-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 4 | Adds the `IsolationMode` parameter to configure the containerization mode for the connector. | 
| 3 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 2 | Fix to reduce excessive logging. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="iot-analytics-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+  [What is AWS IoT Analytics?](https://docs.aws.amazon.com/iotanalytics/latest/userguide/welcome.html) in the *AWS IoT Analytics User Guide*

# IoT Ethernet IP Protocol Adapter connector
<a name="ethernet-ip-connector"></a>

The IoT Ethernet IP Protocol Adapter [connector](connectors.md) collects data from local devices using the Ethernet/IP protocol. You can use this connector to collect data from multiple devices and publish it to a `StreamManager` message stream. 

You can also use this connector with the IoT SiteWise connector and your IoT SiteWise gateway. Your gateway must supply the configuration for the connector. For more information, see [Configure an Ethernet/IP (EIP) source](http://docs.aws.amazon.com/iot-sitewise/latest/userguide/configure-eip-source.html) in the *AWS IoT SiteWise User Guide*. 

**Note**  
This connector runs in [No container](lambda-group-config.md#no-container-mode) isolation mode, so you can deploy it to an AWS IoT Greengrass group running in a Docker container. 

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 2 (recommended) | `arn:aws:greengrass:region::/connectors/IoTEIPProtocolAdaptor/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/IoTEIPProtocolAdaptor/versions/1` | 

For information about version changes, see the [Changelog](#ethernet-ip-connector-changelog).

## Requirements
<a name="ethernet-ip-connector-req"></a>

This connector has the following requirements:

------
#### [ Versions 1 and 2 ]
+ AWS IoT Greengrass Core software v1.10.2 or later.
+ Stream manager enabled on the AWS IoT Greengrass group.
+ Java 8 installed on the core device and added to the `PATH` environment variable.
+ A minimum of 256 MB additional RAM. This requirement is in addition to AWS IoT Greengrass Core memory requirements.

**Note**  
 This connector is available only in the following AWS Regions:  
cn-north-1  
ap-southeast-1  
ap-southeast-2  
eu-central-1  
eu-west-1  
us-east-1  
us-west-2

------

## Connector Parameters
<a name="ethernet-ip-connector-param"></a>

This connector supports the following parameters:

`LocalStoragePath`  
The directory on the AWS IoT Greengrass host that the connector can write persistent data to. The default directory is `/var/sitewise`.  
Display name in the AWS IoT console: **Local storage path**  
Required: `false`  
Type: `string`  
Valid pattern: `^\s*$|\/.`

`ProtocolAdapterConfiguration`  
The set of Ethernet/IP collector configurations that the connector collects data from or connects to. This can be an empty list.  
Display name in the AWS IoT console: **Protocol Adapter Configuration**  
Required: `true`  
Type: A well-formed JSON string that defines the set of supported feedback configurations.

 The following is an example of a `ProtocolAdapterConfiguration`: 

```
{
    "sources": [
        {
            "type": "EIPSource",
            "name": "TestSource",
            "endpoint": {
                "ipAddress": "52.89.2.42",
                "port": 44818
            },
            "destination": {
                "type": "StreamManager",
                "streamName": "MyOutput_Stream",
                "streamBufferSize": 10
            },
            "destinationPathPrefix": "EIPSource_Prefix",
            "propertyGroups": [
                {
                    "name": "DriveTemperatures",
                    "scanMode": {
                        "type": "POLL",
                        "rate": 10000
                    },
                    "tagPathDefinitions": [
                        {
                            "type": "EIPTagPath",
                            "path": "arrayREAL[0]",
                            "dstDataType": "double"
                        }
                    ]
                }
            ]
        }
    ]
}
```
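When you pass a configuration like the one above through the AWS CLI, the JSON object must be serialized into a single escaped string for the `Parameters` map. A sketch using a trimmed, hypothetical configuration:

```python
import json

# Trimmed example configuration (illustrative values only).
protocol_adapter_config = {
    "sources": [
        {
            "type": "EIPSource",
            "name": "TestSource",
            "endpoint": {"ipAddress": "52.89.2.42", "port": 44818},
            "destinationPathPrefix": "EIPSource_Prefix",
        }
    ]
}

# The Parameters map expects string values, so the config is embedded as escaped JSON.
parameters = {"ProtocolAdapterConfiguration": json.dumps(protocol_adapter_config)}
print(parameters["ProtocolAdapterConfiguration"])
```

Passing the resulting string avoids hand-escaping the quotes inside the CLI command.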

### Create Connector Example (AWS CLI)
<a name="eip-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the IoT Ethernet IP Protocol Adapter connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyIoTEIPProtocolConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/IoTEIPProtocolAdaptor/versions/2",
            "Parameters": {
                "ProtocolAdaptorConfiguration": "{ \"sources\": [{ \"type\": \"EIPSource\", \"name\": \"Source1\", \"endpoint\": { \"ipAddress\": \"54.245.77.218\", \"port\": 44818 }, \"destinationPathPrefix\": \"EIPConnector_Prefix\", \"propertyGroups\": [{ \"name\": \"Values\", \"scanMode\": { \"type\": \"POLL\", \"rate\": 2000 }, \"tagPathDefinitions\": [{ \"type\": \"EIPTagPath\", \"path\": \"arrayREAL[0]\", \"dstDataType\": \"double\" }]}]}]}",
                "LocalStoragePath": "/var/MyIoTEIPProtocolConnectorState"
            }
        }
    ]
}'
```

**Note**  
The Lambda function in this connector has a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

## Input data
<a name="ethernet-ip-connector-data-input"></a>

This connector doesn't accept MQTT messages as input data.

## Output data
<a name="ethernet-ip-connector-data-output"></a>

This connector publishes data to `StreamManager`. You must configure the destination message stream. Output messages have the following structure:

```
{
    "alias": "string",
    "messages": [
        {
            "name": "string",
            "value": boolean|double|integer|string,
            "timestamp": number,
            "quality": "string"
        }
    ]
}
```
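A consumer reading from the destination stream might decode messages of this shape as follows. This sketch assumes the stream payload is the UTF-8 JSON document shown above; the alias and values are illustrative:

```python
import json

# Illustrative sample message matching the structure above.
sample = json.dumps({
    "alias": "EIPSource_Prefix/TestSource/DriveTemperatures/arrayREAL[0]",
    "messages": [
        {"name": "arrayREAL[0]", "value": 71.5, "timestamp": 1639622400000, "quality": "GOOD"}
    ],
})

def latest_good_value(payload: str):
    """Return the newest value whose quality is GOOD, or None if there is none."""
    doc = json.loads(payload)
    good = [m for m in doc["messages"] if m.get("quality") == "GOOD"]
    if not good:
        return None
    return max(good, key=lambda m: m["timestamp"])["value"]

print(latest_good_value(sample))  # → 71.5
```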

## Licenses
<a name="ethernet-ip-connector-license"></a>

The IoT Ethernet IP Protocol Adapter connector includes the following third-party software/licensing:
+ [Ethernet/IP client](https://github.com/digitalpetri/ethernet-ip/blob/master/LICENSE)
+ [MapDB](https://github.com/jankotek/mapdb/blob/master/LICENSE.txt)
+ [Elsa](https://github.com/jankotek/elsa/blob/master/LICENSE.txt)

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="ethernet-ip-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | Date | 
| --- | --- | --- | 
| 2 | This version contains bug fixes. | December 23, 2021 | 
| 1 | Initial release. | December 15, 2020 | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="ethernet-ip-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)

# IoT SiteWise connector
<a name="iot-sitewise-connector"></a>

The IoT SiteWise connector sends local device and equipment data to asset properties in AWS IoT SiteWise. You can use this connector to collect data from multiple OPC-UA servers and publish it to IoT SiteWise. The connector sends the data to asset properties in the current AWS account and Region.

**Note**  
IoT SiteWise is a fully managed service that collects, processes, and visualizes data from industrial devices and equipment. You can configure asset properties that process raw data sent from this connector to your assets' measurement properties. For example, you can define a transform property that converts a device's Celsius temperature data points to Fahrenheit, or you can define a metric property that calculates the average hourly temperature. For more information, see [What is AWS IoT SiteWise?](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/) in the *AWS IoT SiteWise User Guide*.

The connector sends data to IoT SiteWise with the OPC-UA data stream paths sent from the OPC-UA servers. For example, the data stream path `/company/windfarm/3/turbine/7/temperature` might represent the temperature sensor of turbine #7 at wind farm #3. If the AWS IoT Greengrass core loses connection to the internet, the connector caches data until it can successfully connect to the AWS Cloud. You can configure the maximum disk buffer size used for caching data. If the cache size exceeds the maximum disk buffer size, the connector discards the oldest data from the queue.

After you configure and deploy the IoT SiteWise connector, you can add a gateway and OPC-UA sources in the [IoT SiteWise console](https://console.aws.amazon.com/iotsitewise/). When you configure a source in the console, you can filter or prefix the OPC-UA data stream paths sent by the IoT SiteWise connector. For instructions to finish setting up your gateway and sources, see [Adding the gateway](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/configure-gateway.html#add-gateway) in the *AWS IoT SiteWise User Guide*.

IoT SiteWise receives data only from data streams that you have mapped to the measurement properties of IoT SiteWise assets. To map data streams to asset properties, you can set a property's alias to be equivalent to an OPC-UA data stream path. To learn about defining asset models and creating assets, see [Modeling industrial assets](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/industrial-asset-models) in the *AWS IoT SiteWise User Guide*.

**Notes**  
You can use stream manager to upload data to IoT SiteWise from sources other than OPC-UA servers. Stream manager also provides customizable support for persistence and bandwidth management. For more information, see [Manage data streams on the AWS IoT Greengrass core](stream-manager.md).  
This connector runs in [No container](lambda-group-config.md#no-container-mode) isolation mode, so you can deploy it to a Greengrass group running in a Docker container.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 12 (recommended) | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/12` | 
| 11 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/11` | 
| 10 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/10` | 
| 9 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/9` | 
| 8 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/8` | 
| 7 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/7` | 
| 6 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/6` | 
| 5 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/5` | 
| 4 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/4` | 
| 3 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/1` | 

For information about version changes, see the [Changelog](#iot-sitewise-connector-changelog).

## Requirements
<a name="iot-sitewise-connector-req"></a>

This connector has the following requirements:

------
#### [ Versions 9, 10, 11, and 12 ]

**Important**  
This version introduces new requirements: AWS IoT Greengrass Core software v1.10.2 and [stream manager](stream-manager.md).
+ AWS IoT Greengrass Core software v1.10.2.
+ <a name="conn-sitewise-req-stream-manager"></a>[Stream manager](stream-manager.md) enabled on the Greengrass group.
+ <a name="conn-sitewise-req-java-8"></a>Java 8 installed on the core device and added to the PATH environment variable.
+ <a name="conn-sitewise-req-regions"></a>This connector can be used only in Amazon Web Services Regions where both [AWS IoT Greengrass](https://docs.aws.amazon.com/general/latest/gr/greengrass.html) and [IoT SiteWise](https://docs.aws.amazon.com/general/latest/gr/iot-sitewise.html) are supported.
+ <a name="conn-sitewise-req-policy-v3"></a>An IAM policy added to the Greengrass group role. This role allows the AWS IoT Greengrass group access to the `iotsitewise:BatchPutAssetPropertyValue` action on the target root asset and its children, as shown in the following example. You can remove the `Condition` from the policy to allow the connector to access all of your IoT SiteWise assets.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
               "Effect": "Allow",
               "Action": "iotsitewise:BatchPutAssetPropertyValue",
               "Resource": "*",
               "Condition": {
                   "StringLike": {
                       "iotsitewise:assetHierarchyPath": [
                           "/root node asset ID",
                           "/root node asset ID/*"
                       ]
                   }
               }
          }
      ]
  }
  ```

------

  For more information, see [Adding and removing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

------
#### [ Versions 6, 7, and 8 ]

**Important**  
These versions introduce new requirements: AWS IoT Greengrass Core software v1.10.0 and [stream manager](stream-manager.md).
+ <a name="conn-sitewise-req-ggc-1010"></a>AWS IoT Greengrass Core software v1.10.0.
+ <a name="conn-sitewise-req-stream-manager"></a>[Stream manager](stream-manager.md) enabled on the Greengrass group.
+ <a name="conn-sitewise-req-java-8"></a>Java 8 installed on the core device and added to the PATH environment variable.
+ <a name="conn-sitewise-req-regions"></a>This connector can be used only in Amazon Web Services Regions where both [AWS IoT Greengrass](https://docs.aws.amazon.com/general/latest/gr/greengrass.html) and [IoT SiteWise](https://docs.aws.amazon.com/general/latest/gr/iot-sitewise.html) are supported.
+ <a name="conn-sitewise-req-policy-v3"></a>An IAM policy added to the Greengrass group role. This policy allows the AWS IoT Greengrass group access to the `iotsitewise:BatchPutAssetPropertyValue` action on the target root asset and its children, as shown in the following example. You can remove the `Condition` from the policy to allow the connector to access all of your IoT SiteWise assets.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
               "Effect": "Allow",
               "Action": "iotsitewise:BatchPutAssetPropertyValue",
               "Resource": "*",
               "Condition": {
                   "StringLike": {
                       "iotsitewise:assetHierarchyPath": [
                           "/root node asset ID",
                           "/root node asset ID/*"
                       ]
                   }
               }
          }
      ]
  }
  ```

------

  For more information, see [Adding and removing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

------
#### [ Version 5 ]
+ <a name="conn-sitewise-req-ggc-194"></a>AWS IoT Greengrass Core software v1.9.4.
+ <a name="conn-sitewise-req-java-8"></a>Java 8 installed on the core device and added to the PATH environment variable.
+ <a name="conn-sitewise-req-regions"></a>This connector can be used only in Amazon Web Services Regions where both [AWS IoT Greengrass](https://docs.aws.amazon.com/general/latest/gr/greengrass.html) and [IoT SiteWise](https://docs.aws.amazon.com/general/latest/gr/iot-sitewise.html) are supported.
+ <a name="conn-sitewise-req-policy-v3"></a>An IAM policy added to the Greengrass group role. This policy allows the AWS IoT Greengrass group access to the `iotsitewise:BatchPutAssetPropertyValue` action on the target root asset and its children, as shown in the following example. You can remove the `Condition` from the policy to allow the connector to access all of your IoT SiteWise assets.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
               "Effect": "Allow",
               "Action": "iotsitewise:BatchPutAssetPropertyValue",
               "Resource": "*",
               "Condition": {
                   "StringLike": {
                       "iotsitewise:assetHierarchyPath": [
                           "/root node asset ID",
                           "/root node asset ID/*"
                       ]
                   }
               }
          }
      ]
  }
  ```

------

  For more information, see [Adding and removing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

------
#### [ Version 4 ]
+ <a name="conn-sitewise-req-ggc-1010"></a>AWS IoT Greengrass Core software v1.10.0.
+ <a name="conn-sitewise-req-java-8"></a>Java 8 installed on the core device and added to the PATH environment variable.
+ <a name="conn-sitewise-req-regions"></a>This connector can be used only in Amazon Web Services Regions where both [AWS IoT Greengrass](https://docs.aws.amazon.com/general/latest/gr/greengrass.html) and [IoT SiteWise](https://docs.aws.amazon.com/general/latest/gr/iot-sitewise.html) are supported.
+ <a name="conn-sitewise-req-policy-v3"></a>An IAM policy added to the Greengrass group role. This policy allows the AWS IoT Greengrass group access to the `iotsitewise:BatchPutAssetPropertyValue` action on the target root asset and its children, as shown in the following example. You can remove the `Condition` from the policy to allow the connector to access all of your IoT SiteWise assets.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
               "Effect": "Allow",
               "Action": "iotsitewise:BatchPutAssetPropertyValue",
               "Resource": "*",
               "Condition": {
                   "StringLike": {
                       "iotsitewise:assetHierarchyPath": [
                           "/root node asset ID",
                           "/root node asset ID/*"
                       ]
                   }
               }
          }
      ]
  }
  ```

------

  For more information, see [Adding and removing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

------
#### [ Version 3 ]
+ <a name="conn-sitewise-req-ggc-194"></a>AWS IoT Greengrass Core software v1.9.4.
+ <a name="conn-sitewise-req-java-8"></a>Java 8 installed on the core device and added to the PATH environment variable.
+ <a name="conn-sitewise-req-regions"></a>This connector can be used only in Amazon Web Services Regions where both [AWS IoT Greengrass](https://docs.aws.amazon.com/general/latest/gr/greengrass.html) and [IoT SiteWise](https://docs.aws.amazon.com/general/latest/gr/iot-sitewise.html) are supported.
+ <a name="conn-sitewise-req-policy-v3"></a>An IAM policy added to the Greengrass group role. This policy allows the AWS IoT Greengrass group access to the `iotsitewise:BatchPutAssetPropertyValue` action on the target root asset and its children, as shown in the following example. You can remove the `Condition` from the policy to allow the connector to access all of your IoT SiteWise assets.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
               "Effect": "Allow",
               "Action": "iotsitewise:BatchPutAssetPropertyValue",
               "Resource": "*",
               "Condition": {
                   "StringLike": {
                       "iotsitewise:assetHierarchyPath": [
                           "/root node asset ID",
                           "/root node asset ID/*"
                       ]
                   }
               }
          }
      ]
  }
  ```

------

  For more information, see [Adding and removing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

------
#### [ Versions 1 and 2 ]
+ <a name="conn-sitewise-req-ggc-194"></a>AWS IoT Greengrass Core software v1.9.4.
+ <a name="conn-sitewise-req-java-8"></a>Java 8 installed on the core device and added to the PATH environment variable.
+ <a name="conn-sitewise-req-regions"></a>This connector can be used only in Amazon Web Services Regions where both [AWS IoT Greengrass](https://docs.aws.amazon.com/general/latest/gr/greengrass.html) and [IoT SiteWise](https://docs.aws.amazon.com/general/latest/gr/iot-sitewise.html) are supported.
+ <a name="conn-sitewise-req-policy-v1"></a>An IAM policy added to the Greengrass group role that allows access to AWS IoT Core and the `iotsitewise:BatchPutAssetPropertyValue` action on the target root asset and its children, as shown in the following example. You can remove the `Condition` from the policy to allow the connector to access all of your IoT SiteWise assets.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
               "Effect": "Allow",
               "Action": "iotsitewise:BatchPutAssetPropertyValue",
               "Resource": "*",
               "Condition": {
                   "StringLike": {
                       "iotsitewise:assetHierarchyPath": [
                           "/root node asset ID",
                           "/root node asset ID/*"
                       ]
                   }
               }
          },
          {
              "Effect": "Allow",
              "Action": [
                   "iot:Connect",
                   "iot:DescribeEndpoint",
                   "iot:Publish",
                   "iot:Receive",
                   "iot:Subscribe"
              ],
              "Resource": "*"
          }
      ]
  }
  ```

------

  For more information, see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

------

## Parameters
<a name="iot-sitewise-connector-param"></a>

------
#### [ Versions 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12 ]<a name="conn-sitewise-params-v2"></a>

`SiteWiseLocalStoragePath`  
The directory on the AWS IoT Greengrass host that the IoT SiteWise connector can write persistent data to. Defaults to `/var/sitewise`.  
Display name in the AWS IoT console: **Local storage path**  
Required: `false`  
Type: `string`  
Valid pattern: `^\s*$|\/.`

`AWSSecretsArnList`  
A list of secrets in AWS Secrets Manager that each contain an OPC-UA user name and password key-value pair. Each secret must be a key-value pair type secret.  
Display name in the AWS IoT console: **List of ARNs for OPC-UA username/password secrets**  
Required: `false`  
Type: `JsonArrayOfStrings`  
Valid pattern: `\[( ?,? ?\"(arn:(aws(-[a-z]+)*):secretsmanager:[a-z0-9\\-]+:[0-9]{12}:secret:([a-zA-Z0-9\\\\]+\/)*[a-zA-Z0-9\/_+=,.@\\-]+-[a-zA-Z0-9]+)*\")*\]`

`MaximumBufferSize`  
The maximum size in GB for IoT SiteWise disk usage. Defaults to 10 GB.  
Display name in the AWS IoT console: **Maximum disk buffer size**  
Required: `false`  
Type: `string`  
Valid pattern: `^\s*$|[0-9]+`

------
#### [ Version 1 ]<a name="conn-sitewise-params-v1"></a>

`SiteWiseLocalStoragePath`  
The directory on the AWS IoT Greengrass host that the IoT SiteWise connector can write persistent data to. Defaults to `/var/sitewise`.  
Display name in the AWS IoT console: **Local storage path**  
Required: `false`  
Type: `string`  
Valid pattern: `^\s*$|\/.`

`SiteWiseOpcuaUserIdentityTokenSecretArn`  
The secret in AWS Secrets Manager that contains the OPC-UA user name and password key-value pair. This secret must be a key-value pair type secret.  
Display name in the AWS IoT console: **ARN of OPC-UA username/password secret**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|arn:(aws(-[a-z]+)*):secretsmanager:[a-z0-9\\-]+:[0-9]{12}:secret:([a-zA-Z0-9\\\\]+/)*[a-zA-Z0-9/_+=,.@\\-]+-[a-zA-Z0-9]+`

`SiteWiseOpcuaUserIdentityTokenSecretArn-ResourceId`  
The secret resource in the AWS IoT Greengrass group that references an OPC-UA user name and password secret.  
Display name in the AWS IoT console: **OPC-UA username/password secret resource**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|.+`

`MaximumBufferSize`  
The maximum size in GB for IoT SiteWise disk usage. Defaults to 10 GB.  
Display name in the AWS IoT console: **Maximum disk buffer size**  
Required: `false`  
Type: `string`  
Valid pattern: `^\s*$|[0-9]+`

------
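The **Valid pattern** entries above can be checked locally before you deploy a configuration. The following Python sketch applies the documented patterns for two of the parameters; the function and dictionary names are illustrative, not part of the connector.

```python
import re

# Valid patterns copied from the parameter reference above.
PATTERNS = {
    "SiteWiseLocalStoragePath": r"^\s*$|\/.",
    "MaximumBufferSize": r"^\s*$|[0-9]+",
}

def validate_param(name, value):
    """Return True if the value matches the documented pattern for the parameter."""
    return re.match(PATTERNS[name], value) is not None
```

For example, the default local storage path `/var/sitewise` passes validation, while a relative path such as `relative/path` does not.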

### Create Connector Example (AWS CLI)
<a name="iot-sitewise-connector-create"></a>

The following AWS CLI command creates a `ConnectorDefinition` with an initial version that contains the IoT SiteWise connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyIoTSiteWiseConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/IoTSiteWise/versions/11"
        }
    ]
}'
```

**Note**  
The Lambda functions in this connector have a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="iot-sitewise-connector-data-input"></a>

This connector doesn't accept MQTT messages as input data.

## Output data
<a name="iot-sitewise-connector-data-output"></a>

This connector doesn't publish MQTT messages as output data.

## Limits
<a name="iot-sitewise-connector-limits"></a>

This connector is subject to limits imposed by IoT SiteWise, including the following. For more information, see [AWS IoT SiteWise endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/iot-sitewise.html) in the *AWS General Reference*.
+ Maximum number of gateways per AWS account.
+ Maximum number of OPC-UA sources per gateway.
+ Maximum rate of timestamp-quality-value (TQV) data points stored per AWS account.
+ Maximum rate of TQV data points stored per asset property.

## Licenses
<a name="iot-sitewise-connector-license"></a>

------
#### [ Versions 9, 10, 11, and 12 ]

The IoT SiteWise connector includes the following third-party software/licensing:
+  [MapDB](https://github.com/jankotek/mapdb/blob/master/LICENSE.txt) 
+  [Elsa](https://github.com/jankotek/elsa/blob/master/LICENSE.txt) 
+ [Eclipse Milo](https://github.com/eclipse/milo/blob/maintenance/0.2/LICENSE)

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

------
#### [ Versions 6, 7, and 8 ]

The IoT SiteWise connector includes the following third-party software/licensing:
+ [Milo](https://github.com/eclipse/milo/) / EDL 1.0

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

------
#### [ Versions 1, 2, 3, 4, and 5 ]

The IoT SiteWise connector includes the following third-party software/licensing:
+ [Milo](https://github.com/eclipse/milo/) / EDL 1.0
+ [Chronicle-Queue](https://github.com/OpenHFT/Chronicle-Queue) / Apache License 2.0

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

------

## Changelog
<a name="iot-sitewise-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | Date | 
| --- | --- | --- | 
|  12  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/iot-sitewise-connector.html)  |  December 22, 2021  | 
|  11  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/iot-sitewise-connector.html)  |  March 24, 2021  | 
|  10  |  Configured `StreamManager` to improve handling when the source connection is lost and re-established. This version also accepts OPC-UA values with a `ServerTimestamp` when no `SourceTimestamp` is available.  |  January 22, 2021  | 
|  9  |  Support launched for custom Greengrass `StreamManager` stream destinations, OPC-UA deadbanding, custom scan mode and custom scan rate. Also includes improved performance during configuration updates made from the IoT SiteWise gateway.  |  December 15, 2020  | 
|  8  |  Improved stability when the connector experiences intermittent network connectivity.  |  November 19, 2020  | 
|  7  |  Fixed an issue with gateway metrics.  |  August 14, 2020  | 
|  6  |  Added support for CloudWatch metrics and automatic discovery of new OPC-UA tags. This version requires [stream manager](stream-manager.md) and AWS IoT Greengrass Core software v1.10.0 or higher.  |  April 29, 2020  | 
|  5  |  Fixed a compatibility issue with AWS IoT Greengrass Core software v1.9.4.  |  February 12, 2020  | 
|  4  |  Fixed an issue with OPC-UA server reconnection.  |  February 7, 2020  | 
|  3  |  Removed `iot:*` permissions requirement.  |  December 17, 2019  | 
|  2  |  Added support for multiple OPC-UA secret resources.  |  December 10, 2019  | 
|  1  |  Initial release.  |  December 2, 2019  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="iot-sitewise-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+ See the following topics in the *AWS IoT SiteWise User Guide*:
  + [What is AWS IoT SiteWise?](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/)
  + [Using a gateway](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/gateway-connector.html)
  + [Gateway CloudWatch metrics](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/monitor-cloudwatch-metrics.html#gateway-metrics)
  + [Troubleshooting an IoT SiteWise gateway](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/troubleshooting.html#troubleshooting-gateway)

# Kinesis Firehose
<a name="kinesis-firehose-connector"></a>

The Kinesis Firehose [connector](connectors.md) publishes data through an Amazon Data Firehose delivery stream to destinations such as Amazon S3, Amazon Redshift, or Amazon OpenSearch Service.

This connector is a data producer for a Kinesis delivery stream. It receives input data on an MQTT topic, and sends the data to a specified delivery stream. The delivery stream then sends the data record to the configured destination (for example, an S3 bucket).

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 5 | `arn:aws:greengrass:region::/connectors/KinesisFirehose/versions/5` | 
| 4 | `arn:aws:greengrass:region::/connectors/KinesisFirehose/versions/4` | 
| 3 | `arn:aws:greengrass:region::/connectors/KinesisFirehose/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/KinesisFirehose/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/KinesisFirehose/versions/1` | 

For information about version changes, see the [Changelog](#kinesis-firehose-connector-changelog).

## Requirements
<a name="kinesis-firehose-connector-req"></a>

This connector has the following requirements:

------
#### [ Versions 4 - 5 ]
+ <a name="conn-req-ggc-v1.9.3"></a>AWS IoT Greengrass Core software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="req-kinesis-firehose-stream"></a>A configured Kinesis delivery stream. For more information, see [Creating an Amazon Data Firehose delivery stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html) in the *Amazon Data Firehose Developer Guide*.
+ <a name="req-kinesis-firehose-iam-policy-v2"></a>The [Greengrass group role](group-role.md) configured to allow the `firehose:PutRecord` and `firehose:PutRecordBatch` actions on the target delivery stream, as shown in the following example IAM policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement":[
          {
              "Sid":"Stmt1528133056761",
              "Action":[
                  "firehose:PutRecord",
                  "firehose:PutRecordBatch"
              ],
              "Effect":"Allow",
              "Resource":[
                  "arn:aws:firehose:us-east-1:123456789012:deliverystream/stream-name"
              ]
          }
      ]
  }
  ```

------

  This connector allows you to dynamically override the default delivery stream in the input message payload. If your implementation uses this feature, the IAM policy should include all target streams as resources. You can grant granular or conditional access to resources (for example, by using a wildcard \* naming scheme).

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

------
#### [ Versions 2 - 3 ]
+ <a name="conn-req-ggc-v1.7.0"></a>AWS IoT Greengrass Core software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="req-kinesis-firehose-stream"></a>A configured Kinesis delivery stream. For more information, see [Creating an Amazon Data Firehose delivery stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html) in the *Amazon Data Firehose Developer Guide*.
+ <a name="req-kinesis-firehose-iam-policy-v2"></a>The [Greengrass group role](group-role.md) configured to allow the `firehose:PutRecord` and `firehose:PutRecordBatch` actions on the target delivery stream, as shown in the following example IAM policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement":[
          {
              "Sid":"Stmt1528133056761",
              "Action":[
                  "firehose:PutRecord",
                  "firehose:PutRecordBatch"
              ],
              "Effect":"Allow",
              "Resource":[
                  "arn:aws:firehose:us-east-1:123456789012:deliverystream/stream-name"
              ]
          }
      ]
  }
  ```

------

  This connector allows you to dynamically override the default delivery stream in the input message payload. If your implementation uses this feature, the IAM policy should include all target streams as resources. You can grant granular or conditional access to resources (for example, by using a wildcard \* naming scheme).

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

------
#### [ Version 1 ]
+ <a name="conn-req-ggc-v1.7.0"></a>AWS IoT Greengrass Core software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="req-kinesis-firehose-stream"></a>A configured Kinesis delivery stream. For more information, see [Creating an Amazon Data Firehose delivery stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html) in the *Amazon Data Firehose Developer Guide*.
+ The [Greengrass group role](group-role.md) configured to allow the `firehose:PutRecord` action on the target delivery stream, as shown in the following example IAM policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement":[
          {
              "Sid":"Stmt1528133056761",
              "Action":[
                  "firehose:PutRecord"
              ],
              "Effect":"Allow",
              "Resource":[
                  "arn:aws:firehose:us-east-1:123456789012:deliverystream/stream-name"
              ]
          }
      ]
  }
  ```

------

  <a name="role-resources"></a>This connector allows you to dynamically override the default delivery stream in the input message payload. If your implementation uses this feature, the IAM policy should include all target streams as resources. You can grant granular or conditional access to resources (for example, by using a wildcard \* naming scheme).

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

------

## Connector Parameters
<a name="kinesis-firehose-connector-param"></a>

This connector provides the following parameters:

------
#### [ Version 5 ]

`DefaultDeliveryStreamArn`  <a name="kinesis-firehose-DefaultDeliveryStreamArn"></a>
The ARN of the default Firehose delivery stream to send data to. The destination stream can be overridden by the `delivery_stream_arn` property in the input message payload.  
The group role must allow the appropriate actions on all target delivery streams. For more information, see [Requirements](#kinesis-firehose-connector-req).
Display name in the AWS IoT console: **Default delivery stream ARN**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:firehose:([a-z]{2}-[a-z]+-\d{1}):(\d{12}):deliverystream/([a-zA-Z0-9_\-.]+)$`

`DeliveryStreamQueueSize`  <a name="kinesis-firehose-DeliveryStreamQueueSize"></a>
The maximum number of records to retain in memory before new records for the same delivery stream are rejected. The minimum value is 2000.  
Display name in the AWS IoT console: **Maximum number of records to buffer (per stream)**  
Required: `true`  
Type: `string`  
Valid pattern: `^([2-9]\\d{3}|[1-9]\\d{4,})$`

`MemorySize`  <a name="kinesis-firehose-MemorySize"></a>
The amount of memory (in KB) to allocate to this connector.  
Display name in the AWS IoT console: **Memory size**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`PublishInterval`  <a name="kinesis-firehose-PublishInterval"></a>
The interval (in seconds) for publishing records to Firehose. To disable batching, set this value to 0.  
Display name in the AWS IoT console: **Publish interval**  
Required: `true`  
Type: `string`  
Valid values: `0 - 900`  
Valid pattern: `[0-9]|[1-9]\\d|[1-9]\\d\\d|900`

`IsolationMode`  <a name="IsolationMode"></a>
The [containerization](connectors.md#connector-containerization) mode for this connector. The default is `GreengrassContainer`, which means that the connector runs in an isolated runtime environment inside the AWS IoT Greengrass container.  
The default containerization setting for the group does not apply to connectors.
Display name in the AWS IoT console: **Container isolation mode**  
Required: `false`  
Type: `string`  
Valid values: `GreengrassContainer` or `NoContainer`  
Valid pattern: `^NoContainer$|^GreengrassContainer$`

------
#### [ Versions 2 - 4 ]

`DefaultDeliveryStreamArn`  <a name="kinesis-firehose-DefaultDeliveryStreamArn"></a>
The ARN of the default Firehose delivery stream to send data to. The destination stream can be overridden by the `delivery_stream_arn` property in the input message payload.  
The group role must allow the appropriate actions on all target delivery streams. For more information, see [Requirements](#kinesis-firehose-connector-req).
Display name in the AWS IoT console: **Default delivery stream ARN**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:firehose:([a-z]{2}-[a-z]+-\d{1}):(\d{12}):deliverystream/([a-zA-Z0-9_\-.]+)$`

`DeliveryStreamQueueSize`  <a name="kinesis-firehose-DeliveryStreamQueueSize"></a>
The maximum number of records to retain in memory before new records for the same delivery stream are rejected. The minimum value is 2000.  
Display name in the AWS IoT console: **Maximum number of records to buffer (per stream)**  
Required: `true`  
Type: `string`  
Valid pattern: `^([2-9]\\d{3}|[1-9]\\d{4,})$`

`MemorySize`  <a name="kinesis-firehose-MemorySize"></a>
The amount of memory (in KB) to allocate to this connector.  
Display name in the AWS IoT console: **Memory size**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`PublishInterval`  <a name="kinesis-firehose-PublishInterval"></a>
The interval (in seconds) for publishing records to Firehose. To disable batching, set this value to 0.  
Display name in the AWS IoT console: **Publish interval**  
Required: `true`  
Type: `string`  
Valid values: `0 - 900`  
Valid pattern: `[0-9]|[1-9]\\d|[1-9]\\d\\d|900`

------
#### [ Version 1 ]

`DefaultDeliveryStreamArn`  <a name="kinesis-firehose-DefaultDeliveryStreamArn"></a>
The ARN of the default Firehose delivery stream to send data to. The destination stream can be overridden by the `delivery_stream_arn` property in the input message payload.  
The group role must allow the appropriate actions on all target delivery streams. For more information, see [Requirements](#kinesis-firehose-connector-req).
Display name in the AWS IoT console: **Default delivery stream ARN**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:firehose:([a-z]{2}-[a-z]+-\d{1}):(\d{12}):deliverystream/([a-zA-Z0-9_\-.]+)$`

------
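Because a `DefaultDeliveryStreamArn` value that doesn't match the documented pattern is rejected, it can be useful to validate the ARN locally before you create the connector version. A minimal Python sketch (the helper name is illustrative):

```python
import re

# Valid pattern copied from the DefaultDeliveryStreamArn parameter above.
ARN_PATTERN = (
    r"arn:aws:firehose:([a-z]{2}-[a-z]+-\d{1}):(\d{12})"
    r":deliverystream/([a-zA-Z0-9_\-.]+)$"
)

def is_valid_delivery_stream_arn(arn):
    """Check a Firehose delivery stream ARN against the documented pattern."""
    return re.match(ARN_PATTERN, arn) is not None
```

Note that the pattern requires a 12-digit account ID and a region in the usual `us-east-1` style.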

**Example**  <a name="kinesis-firehose-connector-create"></a>
**Create Connector Example (AWS CLI)**  
The following CLI command creates a `ConnectorDefinition` with an initial version that contains the connector.  

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyKinesisFirehoseConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/KinesisFirehose/versions/5",
            "Parameters": {
                "DefaultDeliveryStreamArn": "arn:aws:firehose:region:account-id:deliverystream/stream-name",
                "DeliveryStreamQueueSize": "5000",
                "MemorySize": "65535",
                "PublishInterval": "10", 
                "IsolationMode" : "GreengrassContainer"
            }
        }
    ]
}'
```

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="kinesis-firehose-connector-data-input"></a>

This connector accepts stream content on MQTT topics, and then sends the content to the target delivery stream. It accepts two types of input data:
+ JSON data on the `kinesisfirehose/message` topic.
+ Binary data on the `kinesisfirehose/message/binary/#` topic.

------
#### [ Versions 2 - 5 ]<a name="kinesis-firehose-input-data"></a>

**Topic filter**: `kinesisfirehose/message`  
Use this topic to send a message that contains JSON data.    
**Message properties**    
`request`  
The data to send to the delivery stream, and the target delivery stream if it differs from the default stream.  
Required: `true`  
Type: `object` that includes the following properties:    
`data`  
The data to send to the delivery stream.  
Required: `true`  
Type: `string`  
`delivery_stream_arn`  
The ARN of the target Kinesis delivery stream. Include this property to override the default delivery stream.  
Required: `false`  
Type: `string`  
Valid pattern: `arn:aws:firehose:([a-z]{2}-[a-z]+-\d{1}):(\d{12}):deliverystream/([a-zA-Z0-9_\-.]+)$`  
`id`  
An arbitrary ID for the request. This property is used to map an input request to an output response. When specified, the `id` property in the response object is set to this value. If you don't use this feature, you can omit this property or specify an empty string.  
Required: `false`  
Type: `string`  
Valid pattern: `.*`  
**Example input**  

```
{
     "request": {
        "delivery_stream_arn": "arn:aws:firehose:region:account-id:deliverystream/stream2-name",
        "data": "Data to send to the delivery stream."
     },
     "id": "request123"
}
```
 

**Topic filter**: `kinesisfirehose/message/binary/#`  
Use this topic to send a message that contains binary data. The connector doesn't parse binary data. The data is streamed as is.  
To map the input request to an output response, replace the `#` wildcard in the message topic with an arbitrary request ID. For example, if you publish a message to `kinesisfirehose/message/binary/request123`, the `id` property in the response object is set to `request123`.  
If you don't want to map a request to a response, you can publish your messages to `kinesisfirehose/message/binary/`. Be sure to include the trailing slash.
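For example, a user-defined Lambda function might publish to the binary topic as in the following sketch. The payload bytes are placeholders, and the commented-out publish call assumes a `greengrasssdk` `iot-data` client like the one in the usage example later in this section.

```
# Sketch: publish binary data with a request ID embedded in the topic.
# The connector streams the payload bytes as is; only the topic carries
# the request ID used for response mapping.
request_id = "request123"
binary_topic = "kinesisfirehose/message/binary/" + request_id

payload = b"\x47\x47\x01\x02"  # placeholder bytes; not parsed by the connector

# iot_client = greengrasssdk.client('iot-data')
# iot_client.publish(topic=binary_topic, payload=payload)
```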

------
#### [ Version 1 ]<a name="kinesis-firehose-input-data"></a>

**Topic filter**: `kinesisfirehose/message`  
Use this topic to send a message that contains JSON data.    
**Message properties**    
`request`  
The data to send to the delivery stream, and the target delivery stream if it differs from the default stream.  
Required: `true`  
Type: `object` that includes the following properties:    
`data`  
The data to send to the delivery stream.  
Required: `true`  
Type: `string`  
`delivery_stream_arn`  
The ARN of the target Kinesis delivery stream. Include this property to override the default delivery stream.  
Required: `false`  
Type: `string`  
Valid pattern: `arn:aws:firehose:([a-z]{2}-[a-z]+-\d{1}):(\d{12}):deliverystream/([a-zA-Z0-9_\-.]+)$`  
`id`  
An arbitrary ID for the request. This property is used to map an input request to an output response. When specified, the `id` property in the response object is set to this value. If you don't use this feature, you can omit this property or specify an empty string.  
Required: `false`  
Type: `string`  
Valid pattern: `.*`  
**Example input**  

```
{
     "request": {
        "delivery_stream_arn": "arn:aws:firehose:region:account-id:deliverystream/stream2-name",
        "data": "Data to send to the delivery stream."
     },
     "id": "request123"
}
```
 

**Topic filter**: `kinesisfirehose/message/binary/#`  
Use this topic to send a message that contains binary data. The connector doesn't parse binary data. The data is streamed as is.  
To map the input request to an output response, replace the `#` wildcard in the message topic with an arbitrary request ID. For example, if you publish a message to `kinesisfirehose/message/binary/request123`, the `id` property in the response object is set to `request123`.  
If you don't want to map a request to a response, you can publish your messages to `kinesisfirehose/message/binary/`. Be sure to include the trailing slash.

------

## Output data
<a name="kinesis-firehose-connector-data-output"></a>

This connector publishes status information as output data on an MQTT topic.

------
#### [ Versions 2 - 5 ]

<a name="topic-filter"></a>**Topic filter in subscription**  <a name="kinesis-firehose-output-topic-status"></a>
`kinesisfirehose/message/status`

**Example output**  
The response contains the status of each data record sent in the batch.  

```
{
    "response": [
        {
            "ErrorCode": "error",
            "ErrorMessage": "test error",
            "id": "request123",
            "status": "fail"
        },
        {
            "firehose_record_id": "xyz2",
            "id": "request456",
            "status": "success"
        },
        {
            "firehose_record_id": "xyz3",
            "id": "request890",
            "status": "success"
        }
    ]
}
```
If the connector detects a retryable error (for example, connection errors), it retries the publish in the next batch. Exponential backoff is handled by the AWS SDK. Requests that fail with retryable errors are added back to the end of the queue for further publishing.
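A subscriber can correlate the batched statuses back to the original requests by `id`. The following sketch (plain Python; the message body is assumed to match the example output above) collects the IDs of records that didn't succeed:

```
import json

# Sketch: parse a batched status message received on
# kinesisfirehose/message/status and return the request IDs
# whose records were not delivered successfully.
def failed_request_ids(message_payload):
    message = json.loads(message_payload)
    return [r["id"] for r in message.get("response", [])
            if r.get("status") != "success"]
```

For the example message above, this returns `["request123"]`.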

------
#### [ Version 1 ]

<a name="topic-filter"></a>**Topic filter in subscription**  <a name="kinesis-firehose-output-topic-status"></a>
`kinesisfirehose/message/status`

**Example output: Success**  

```
{
   "response": {
       "firehose_record_id": "1lxfuuuFomkpJYzt/34ZU/r8JYPf8Wyf7AXqlXm",
       "status": "success"
    },
    "id": "request123"
}
```

**Example output: Failure**  

```
{
   "response" : {
       "error": "ResourceNotFoundException",
       "error_message": "An error occurred (ResourceNotFoundException) when calling the PutRecord operation: Firehose test1 not found under account 123456789012.",
       "status": "fail"
   },
   "id": "request123"
}
```

------

## Usage Example
<a name="kinesis-firehose-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
If you use a different Python 3.x runtime, you can create a symlink named `python3.7` that points to your Python 3.x binaries.
The [Getting started with Greengrass connectors (console)](connectors-console.md) and [Getting started with Greengrass connectors (CLI)](connectors-cli.md) topics contain detailed steps that show you how to configure and deploy an example Twilio Notifications connector.

1. Make sure you meet the [requirements](#kinesis-firehose-connector-req) for the connector.

   <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#kinesis-firehose-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. Add the connector and configure its [parameters](#kinesis-firehose-connector-param).

   1. Add subscriptions that allow the connector to receive [JSON input data](#kinesis-firehose-connector-data-input) and send [output data](#kinesis-firehose-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.

### Example
<a name="kinesis-firehose-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector. This message contains JSON data.

```
import greengrasssdk
import json

# Client for publishing MQTT messages from the Lambda function.
iot_client = greengrasssdk.client('iot-data')
send_topic = 'kinesisfirehose/message'

def create_request_with_all_fields():
    return {
        "request": {
            "data": "Message from Firehose Connector Test"
        },
        "id": "req_123"
    }

def publish_basic_message():
    messageToPublish = create_request_with_all_fields()
    print("Message To Publish: ", messageToPublish)
    iot_client.publish(topic=send_topic,
        payload=json.dumps(messageToPublish))

# Runs when the function container starts. Because the function is
# long-lived (pinned), the message is sent as soon as the group is deployed.
publish_basic_message()

def lambda_handler(event, context):
    return
```

## Licenses
<a name="kinesis-firehose-connector-license"></a>

The Kinesis Firehose connector includes the following third-party software/licensing:<a name="boto-3-licenses"></a>
+ [AWS SDK for Python (Boto3)](https://pypi.org/project/boto3/)/Apache License 2.0
+ [botocore](https://pypi.org/project/botocore/)/Apache License 2.0
+ [dateutil](https://pypi.org/project/python-dateutil/1.4/)/PSF License
+ [docutils](https://pypi.org/project/docutils/)/BSD License, GNU General Public License (GPL), Python Software Foundation License, Public Domain
+ [jmespath](https://pypi.org/project/jmespath/)/MIT License
+ [s3transfer](https://pypi.org/project/s3transfer/)/Apache License 2.0
+ [urllib3](https://pypi.org/project/urllib3/)/MIT License

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="kinesis-firehose-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 5 | <a name="isolation-mode-changelog"></a>Added the `IsolationMode` parameter to configure the containerization mode for the connector. | 
| 4 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 3 | Fix to reduce excessive logging and other minor bug fixes.  | 
| 2 | Added support for sending batched data records to Firehose at a specified interval. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/kinesis-firehose-connector.html)  | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="kinesis-firehose-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+ [What is Amazon Kinesis Data Firehose?](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html) in the *Amazon Kinesis Data Firehose Developer Guide*

# ML Feedback connector
<a name="ml-feedback-connector"></a>

**Warning**  <a name="connectors-extended-life-phase-warning"></a>
This connector has moved into the *extended life phase*, and AWS IoT Greengrass won't release updates that provide features, enhancements to existing features, security patches, or bug fixes. For more information, see [AWS IoT Greengrass Version 1 maintenance policy](maintenance-policy.md).

The ML Feedback connector makes it easier to access your machine learning (ML) model data for model retraining and analysis. The connector:
+ Uploads input data (samples) used by your ML model to Amazon S3. Model input can be in any format, such as images, JSON, or audio. After samples are uploaded to the cloud, you can use them to retrain the model to improve the accuracy and precision of its predictions. For example, you can use [SageMaker AI Ground Truth](https://docs.aws.amazon.com/sagemaker/latest/dg/sms.html) to label your samples and [SageMaker AI](https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html) to retrain the model.
+ Publishes the prediction results from the model as MQTT messages. This lets you monitor and analyze the inference quality of your model in real time. You can also store prediction results and use them to analyze trends over time.
+ Publishes metrics about sample uploads and sample data to Amazon CloudWatch.

To configure this connector, you describe your supported *feedback configurations* in JSON format. A feedback configuration defines properties such as the destination Amazon S3 bucket, content type, and [sampling strategy](#ml-feedback-connector-sampling-strategies). (A sampling strategy is used to determine which samples to upload.)

You can use the ML Feedback connector in the following scenarios:
+ With user-defined Lambda functions. Your local inference Lambda functions use the AWS IoT Greengrass Machine Learning SDK to invoke this connector and pass in the target feedback configuration, model input, and model output (prediction results). For an example, see [Usage Example](#ml-feedback-connector-usage).
+ With the [ML Image Classification connector](image-classification-connector.md) (v2). To use this connector with the ML Image Classification connector, configure the `MLFeedbackConnectorConfigId` parameter for the ML Image Classification connector.
+ With the [ML Object Detection connector](obj-detection-connector.md). To use this connector with the ML Object Detection connector, configure the `MLFeedbackConnectorConfigId` parameter for the ML Object Detection connector.

**ARN**: `arn:aws:greengrass:region::/connectors/MLFeedback/versions/1`

## Requirements
<a name="ml-feedback-connector-req"></a>

This connector has the following requirements:
+ AWS IoT Greengrass Core Software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ One or more Amazon S3 buckets. The number of buckets you use depends on your sampling strategy.
+ The [Greengrass group role](group-role.md) configured to allow the `s3:PutObject` action on objects in the destination Amazon S3 bucket, as shown in the following example IAM policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": "s3:PutObject",
              "Resource": [
                  "arn:aws:s3:::bucket-name/*"
              ]
          }
      ]
  }
  ```

------

  The policy should include all destination buckets as resources. You can grant granular or conditional access to resources (for example, by using a wildcard `*` naming scheme).

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).
+ The [CloudWatch Metrics connector](cloudwatch-metrics-connector.md) added to the Greengrass group and configured. This is required only if you want to use the metrics reporting feature.
+ [AWS IoT Greengrass Machine Learning SDK](lambda-functions.md#lambda-sdks-ml) v1.1.0 is required to interact with this connector.

## Parameters
<a name="ml-feedback-connector-param"></a>

`FeedbackConfigurationMap`  
A set of one or more feedback configurations that the connector can use to upload samples to Amazon S3. A feedback configuration defines parameters such as the destination bucket, content type, and [sampling strategy](#ml-feedback-connector-sampling-strategies). When this connector is invoked, the calling Lambda function or connector specifies a target feedback configuration.  
Display name in the AWS IoT console: **Feedback configuration map**  
Required: `true`  
Type: A well-formed JSON string that defines the set of supported feedback configurations. For an example, see [FeedbackConfigurationMap example](#ml-feedback-connector-feedbackconfigmap).    
  
The ID of a feedback configuration object has the following requirements.    
  
The ID:  
+ Must be unique across configuration objects.
+ Must begin with a letter or number. Can contain lowercase and uppercase letters, numbers, and hyphens.
+ Must be 2 - 63 characters in length.
Required: `true`  
Type: `string`  
Valid pattern: `^[a-zA-Z0-9][a-zA-Z0-9-]{1,62}$`  
Examples: `MyConfig0`, `config-a`, `12id`
The body of a feedback configuration object contains the following properties.    
`s3-bucket-name`  
The name of the destination Amazon S3 bucket.  
The group role must allow the `s3:PutObject` action on all destination buckets. For more information, see [Requirements](#ml-feedback-connector-req).
Required: `true`  
Type: `string`  
Valid pattern: `^[a-z0-9\.\-]{3,63}$`  
`content-type`  
The content type of the samples to upload. All content for an individual feedback configuration must be of the same type.  
Required: `true`  
Type: `string`  
Examples: `image/jpeg`, `application/json`, `audio/ogg`  
`s3-prefix`  
The key prefix to use for uploaded samples. A prefix is similar to a directory name. It allows you to store similar data under the same directory in a bucket. For more information, see [Object key and metadata](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingMetadata.html) in the *Amazon Simple Storage Service User Guide*.  
Required: `false`  
Type: `string`  
`file-ext`  
The file extension to use for uploaded samples. Must be a valid file extension for the content type.  
Required: `false`  
Type: `string`  
Examples: `jpg`, `json`, `ogg`  
`sampling-strategy`  
The [sampling strategy](#ml-feedback-connector-sampling-strategies) to use to filter which samples to upload. If omitted, the connector tries to upload all the samples that it receives.  
Required: `false`  
Type: A well-formed JSON string that contains the following properties.    
`strategy-name`  
The name of the sampling strategy.  
Required: `true`  
Type: `string`  
Valid values: `RANDOM_SAMPLING`, `LEAST_CONFIDENCE`, `MARGIN`, or `ENTROPY`  
`rate`  
The rate for the [Random](#ml-feedback-connector-sampling-strategies-random) sampling strategy.  
Required: `true` if `strategy-name` is `RANDOM_SAMPLING`.  
Type: `number`  
Valid values: `0.0 - 1.0`  
`threshold`  
The threshold for the [Least Confidence](#ml-feedback-connector-sampling-strategies-least-confidence), [Margin](#ml-feedback-connector-sampling-strategies-margin), or [Entropy](#ml-feedback-connector-sampling-strategies-entropy) sampling strategy.  
Required: `true` if `strategy-name` is `LEAST_CONFIDENCE`, `MARGIN`, or `ENTROPY`.  
Type: `number`  
Valid values:  
+ `0.0 - 1.0` for the `LEAST_CONFIDENCE` or `MARGIN` strategy.
+ `0.0 - no limit` for the `ENTROPY` strategy.

`RequestLimit`  
The maximum number of requests that the connector can process at a time.  
You can use this parameter to restrict memory consumption by limiting the number of requests that the connector processes at the same time. Requests that exceed this limit are ignored.  
Display name in the AWS IoT console: **Request limit**  
Required: `false`  
Type: `string`  
Valid values: `0 - 999`  
Valid pattern: `^$|^[0-9]{1,3}$`

### Create Connector Example (AWS CLI)
<a name="ml-feedback-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the ML Feedback connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyMLFeedbackConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/MLFeedback/versions/1",
            "Parameters": {
                "FeedbackConfigurationMap": "{  \"RandomSamplingConfiguration\": {  \"s3-bucket-name\": \"my-aws-bucket-random-sampling\",  \"content-type\": \"image/png\",  \"file-ext\": \"png\",  \"sampling-strategy\": {  \"strategy-name\": \"RANDOM_SAMPLING\",  \"rate\": 0.5  } },  \"LeastConfidenceConfiguration\": {  \"s3-bucket-name\": \"my-aws-bucket-least-confidence-sampling\",  \"content-type\": \"image/png\",  \"file-ext\": \"png\",  \"sampling-strategy\": {  \"strategy-name\": \"LEAST_CONFIDENCE\",  \"threshold\": 0.4  } } }", 
                "RequestLimit": "10"
            }
        }
    ]
}'
```

### FeedbackConfigurationMap example
<a name="ml-feedback-connector-feedbackconfigmap"></a>

The following is an expanded example value for the `FeedbackConfigurationMap` parameter. This example includes several feedback configurations that use different sampling strategies.

```
{
    "ConfigID1": {
        "s3-bucket-name": "my-aws-bucket-random-sampling",
        "content-type": "image/png",
        "file-ext": "png",
        "sampling-strategy": {
            "strategy-name": "RANDOM_SAMPLING",
            "rate": 0.5
        }
    },
    "ConfigID2": {
        "s3-bucket-name": "my-aws-bucket-margin-sampling",
        "content-type": "image/png",
        "file-ext": "png",
        "sampling-strategy": {
            "strategy-name": "MARGIN",
            "threshold": 0.4
        }
    },
    "ConfigID3": {
        "s3-bucket-name": "my-aws-bucket-least-confidence-sampling",
        "content-type": "image/png",
        "file-ext": "png",
        "sampling-strategy": {
            "strategy-name": "LEAST_CONFIDENCE",
            "threshold": 0.4
        }
    },
    "ConfigID4": {
        "s3-bucket-name": "my-aws-bucket-entropy-sampling",
        "content-type": "image/png",
        "file-ext": "png",
        "sampling-strategy": {
            "strategy-name": "ENTROPY",
            "threshold": 2
        }
    },
    "ConfigID5": {
        "s3-bucket-name": "my-aws-bucket-no-sampling",
        "s3-prefix": "DeviceA",
        "content-type": "application/json"
    }
}
```
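Because the `FeedbackConfigurationMap` parameter is passed as an escaped JSON string (as in the CLI example above), it can be convenient to build it programmatically. A minimal sketch, using the `ConfigID1` configuration shown above:

```
import json

# Build the FeedbackConfigurationMap value as a single JSON string
# suitable for the "Parameters" map in create-connector-definition.
feedback_config = {
    "ConfigID1": {
        "s3-bucket-name": "my-aws-bucket-random-sampling",
        "content-type": "image/png",
        "file-ext": "png",
        "sampling-strategy": {
            "strategy-name": "RANDOM_SAMPLING",
            "rate": 0.5
        }
    }
}

# json.dumps produces the flattened string; the CLI (or an SDK call)
# handles the outer quoting and escaping.
feedback_config_param = json.dumps(feedback_config)
```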

### Sampling strategies
<a name="ml-feedback-connector-sampling-strategies"></a>

The connector supports four sampling strategies that determine whether to upload samples that are passed to the connector. Samples are discrete instances of data that a model uses for a prediction. You can use sampling strategies to filter for the samples that are most likely to improve model accuracy.

`RANDOM_SAMPLING`  <a name="ml-feedback-connector-sampling-strategies-random"></a>
Randomly uploads samples based on the supplied rate. It uploads a sample if a randomly generated value is less than the rate. The higher the rate, the more samples are uploaded.  
This strategy disregards any model prediction that is supplied.

`LEAST_CONFIDENCE`  <a name="ml-feedback-connector-sampling-strategies-least-confidence"></a>
Uploads samples whose maximum confidence probability falls below the supplied threshold.    
Example scenario:  
Threshold: `.6`  
Model prediction: `[.2, .2, .4, .2]`  
Maximum confidence probability: `.4`  
Result:  
Use the sample because maximum confidence probability (`.4`) <= threshold (`.6`).

`MARGIN`  <a name="ml-feedback-connector-sampling-strategies-margin"></a>
Uploads samples if the margin between the top two confidence probabilities falls within the supplied threshold. The margin is the difference between the top two probabilities.    
Example scenario:  
Threshold: `.02`  
Model prediction: `[.3, .35, .34, .01]`  
Top two confidence probabilities: `[.35, .34]`  
Margin: `.01` (`.35 - .34`)  
Result:  
Use the sample because margin (`.01`) <= threshold (`.02`).

`ENTROPY`  <a name="ml-feedback-connector-sampling-strategies-entropy"></a>
Uploads samples whose entropy is greater than the supplied threshold. Uses the model prediction's normalized entropy.    
Example scenario:  
Threshold: `0.75`  
Model prediction: `[.5, .25, .25]`  
Entropy for prediction: `1.03972`  
Result:  
Use sample because entropy (`1.03972`) > threshold (`0.75`).
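The decisions in the example scenarios above can be sketched in a few lines of plain Python. This is illustrative only (the connector's internal implementation isn't published); the entropy calculation uses the natural log without normalization, which reproduces the example value above.

```
import math
import random

# Sketch of the four sampling decisions described above.
# prediction is a list of confidence probabilities from the model.
def should_upload(prediction, strategy, rate=None, threshold=None):
    if strategy == "RANDOM_SAMPLING":
        # Upload if a random value falls below the configured rate.
        return random.random() < rate
    if strategy == "LEAST_CONFIDENCE":
        # Upload if the maximum confidence is at or below the threshold.
        return max(prediction) <= threshold
    if strategy == "MARGIN":
        # Upload if the gap between the top two probabilities is small.
        top_two = sorted(prediction, reverse=True)[:2]
        return (top_two[0] - top_two[1]) <= threshold
    if strategy == "ENTROPY":
        # Upload if the prediction's entropy exceeds the threshold.
        entropy = -sum(p * math.log(p) for p in prediction if p > 0)
        return entropy > threshold
    raise ValueError("Unknown strategy: " + strategy)
```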

## Input data
<a name="ml-feedback-connector-data-input"></a>

User-defined Lambda functions use the `publish` function of the `feedback` client in the AWS IoT Greengrass Machine Learning SDK to invoke the connector. For an example, see [Usage Example](#ml-feedback-connector-usage).

**Note**  
This connector doesn't accept MQTT messages as input data.

The `publish` function takes the following arguments:

ConfigId  
The ID of the target feedback configuration. This must match the ID of a feedback configuration defined in the [FeedbackConfigurationMap](#ml-feedback-connector-param) parameter for the ML Feedback connector.  
Required: true  
Type: string

ModelInput  
The input data that was passed to a model for inference. This input data is uploaded using the target configuration unless it is filtered out based on the sampling strategy.  
Required: true  
Type: bytes

ModelPrediction  
The prediction results from the model. The result type can be a dictionary or a list. For example, the prediction results from the ML Image Classification connector are a list of probabilities (such as `[0.25, 0.60, 0.15]`). This data is published to the `/feedback/message/prediction` topic.  
Required: true  
Type: dictionary or list of `float` values

Metadata  
Customer-defined, application-specific metadata that is attached to the uploaded sample and published to the `/feedback/message/prediction` topic. The connector also inserts a `publish-ts` key with a timestamp value into the metadata.  
Required: false  
Type: dictionary  
Example: `{"some-key": "some value"}`
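Put together, a call from a user-defined Lambda function might look like the following sketch. The keyword arguments match the argument list above; the module name `greengrass_machine_learning_sdk` is taken from the SDK (verify against your SDK version), and the import is guarded so the snippet also illustrates the call shape off-device. The `ConfigId` and metadata values are hypothetical.

```
# Sketch: invoke the ML Feedback connector from a user-defined
# Lambda function via the feedback client's publish function.
try:
    # Available on the Greengrass core device.
    import greengrass_machine_learning_sdk as gg_ml
    feedback_client = gg_ml.client('feedback')
except ImportError:
    feedback_client = None  # not running on a Greengrass core

def send_feedback(model_input_bytes, model_prediction):
    kwargs = {
        "ConfigId": "ConfigID1",              # must exist in FeedbackConfigurationMap
        "ModelInput": model_input_bytes,      # bytes passed to the model
        "ModelPrediction": model_prediction,  # for example [0.25, 0.60, 0.15]
        "Metadata": {"camera": "entrance"},   # hypothetical app-specific metadata
    }
    if feedback_client:
        feedback_client.publish(**kwargs)
    return kwargs
```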

## Output data
<a name="ml-feedback-connector-data-output"></a>

This connector publishes data to three MQTT topics:
+ Status information from the connector on the `feedback/message/status` topic.
+ Prediction results on the `feedback/message/prediction` topic.
+ Metrics destined for CloudWatch on the `cloudwatch/metric/put` topic.

<a name="connectors-input-output-subscriptions"></a>You must configure subscriptions to allow the connector to communicate on MQTT topics. For more information, see [Inputs and outputs](connectors.md#connectors-inputs-outputs).

**Topic filter**: `feedback/message/status`  
Use this topic to monitor the status of sample uploads and dropped samples. The connector publishes to this topic every time that it receives a request.     
**Example output: Sample upload succeeded**  

```
{
  "response": {
    "status": "success",
    "s3_response": {
      "ResponseMetadata": {
        "HostId": "IOWQ4fDEXAMPLEQM+ey7N9WgVhSnQ6JEXAMPLEZb7hSQDASK+Jd1vEXAMPLEa3Km",
        "RetryAttempts": 1,
        "HTTPStatusCode": 200,
        "RequestId": "79104EXAMPLEB723",
        "HTTPHeaders": {
          "content-length": "0",
          "x-amz-id-2": "lbbqaDVFOhMlyU3gRvAX1ZIdg8P0WkGkCSSFsYFvSwLZk3j7QZhG5EXAMPLEdd4/pEXAMPLEUqU=",
          "server": "AmazonS3",
          "x-amz-expiration": "expiry-date=\"Wed, 17 Jul 2019 00:00:00 GMT\", rule-id=\"OGZjYWY3OTgtYWI2Zi00ZDllLWE4YmQtNzMyYzEXAMPLEoUw\"",
          "x-amz-request-id": "79104EXAMPLEB723",
          "etag": "\"b9c4f172e64458a5fd674EXAMPLE5628\"",
          "date": "Thu, 11 Jul 2019 00:12:50 GMT",
          "x-amz-server-side-encryption": "AES256"
        }
      },
      "bucket": "greengrass-feedback-connector-data-us-west-2",
      "ETag": "\"b9c4f172e64458a5fd674EXAMPLE5628\"",
      "Expiration": "expiry-date=\"Wed, 17 Jul 2019 00:00:00 GMT\", rule-id=\"OGZjYWY3OTgtYWI2Zi00ZDllLWE4YmQtNzMyYzEXAMPLEoUw\"",
      "key": "s3-key-prefix/UUID.file_ext",
      "ServerSideEncryption": "AES256"
    }
  },
  "id": "5aaa913f-97a3-48ac-5907-18cd96b89eeb"
}
```
The connector adds the `bucket` and `key` fields to the response from Amazon S3. For more information about the Amazon S3 response, see [PUT object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html#RESTObjectPUT-responses) in the *Amazon Simple Storage Service API Reference*.  
**Example output: Sample dropped because of the sampling strategy**  

```
{
  "response": {
    "status": "sample_dropped_by_strategy"
  },
  "id": "4bf5aeb0-d1e4-4362-5bb4-87c05de78ba3"
}
```  
**Example output: Sample upload failed**  
A failure status includes the error message as the `error_message` value and the exception class as the `error` value.  

```
{
  "response": {
    "status": "fail",
    "error_message": "[RequestId: 4bf5aeb0-d1e4-4362-5bb4-87c05de78ba3] Failed to upload model input data due to exception. Model prediction will not be published. Exception type: NoSuchBucket, error: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist",
    "error": "NoSuchBucket"
  },
  "id": "4bf5aeb0-d1e4-4362-5bb4-87c05de78ba3"
}
```  
**Example output: Request throttled because of the request limit**  

```
{
  "response": {
    "status": "fail",
    "error_message": "Request limit has been reached (max request: 10 ). Dropping request.",
    "error": "Queue.Full"
  },
  "id": "4bf5aeb0-d1e4-4362-5bb4-87c05de78ba3"
}
```

**Topic filter**: `feedback/message/prediction`  
Use this topic to listen for predictions based on uploaded sample data. This lets you analyze your model performance in real time. Model predictions are published to this topic only if data is successfully uploaded to Amazon S3. Messages published on this topic are in JSON format. They contain the link to the uploaded data object, the model's prediction, and the metadata included in the request.  
You can also store prediction results and use them to report and analyze trends over time. Trends can provide valuable insights. For example, a *decreasing accuracy over time* trend can help you to decide whether the model needs to be retrained.    
**Example output**  

```
{
  "source-ref": "s3://greengrass-feedback-connector-data-us-west-2/s3-key-prefix/UUID.file_ext",
  "model-prediction": [
    0.5,
    0.2,
    0.2,
    0.1
  ],
  "config-id": "ConfigID2",
  "metadata": {
    "publish-ts": "2019-07-11 00:12:48.816752"
  }
}
```
You can configure the [IoT Analytics connector](iot-analytics-connector.md) to subscribe to this topic and send the information to AWS IoT Analytics for further or historical analysis.
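For example, a subscriber could track the top predicted class and its confidence from each message. The following sketch is plain Python and assumes the message format shown in the example output above:

```
import json

# Sketch: extract the top class index and its confidence from a
# feedback/message/prediction payload.
def top_prediction(message_payload):
    message = json.loads(message_payload)
    probs = message["model-prediction"]
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]
```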

**Topic filter**: `cloudwatch/metric/put`  
This is the output topic used to publish metrics to CloudWatch. This feature requires that you install and configure the [CloudWatch Metrics connector](cloudwatch-metrics-connector.md).  
Metrics include:  
+ The number of uploaded samples.
+ The size of uploaded samples.
+ The number of errors from uploads to Amazon S3.
+ The number of dropped samples based on the sampling strategy.
+ The number of throttled requests.  
**Example output: Size of the data sample (published before the actual upload)**  

```
{
  "request": {
    "namespace": "GreengrassFeedbackConnector",
    "metricData": {
      "value": 47592,
      "unit": "Bytes",
      "metricName": "SampleSize"
    }
  }
}
```  
**Example output: Sample upload succeeded**  

```
{
  "request": {
    "namespace": "GreengrassFeedbackConnector",
    "metricData": {
      "value": 1,
      "unit": "Count",
      "metricName": "SampleUploadSuccess"
    }
  }
}
```  
**Example output: Sample upload succeeded and prediction result published**  

```
{
  "request": {
    "namespace": "GreengrassFeedbackConnector",
    "metricData": {
      "value": 1,
      "unit": "Count",
      "metricName": "SampleAndPredictionPublished"
    }
  }
}
```  
**Example output: Sample upload failed**  

```
{
  "request": {
    "namespace": "GreengrassFeedbackConnector",
    "metricData": {
      "value": 1,
      "unit": "Count",
      "metricName": "SampleUploadFailure"
    }
  }
}
```  
**Example output: Sample dropped because of the sampling strategy**  

```
{
  "request": {
    "namespace": "GreengrassFeedbackConnector",
    "metricData": {
      "value": 1,
      "unit": "Count",
      "metricName": "SampleNotUsed"
    }
  }
}
```  
**Example output: Request throttled because of the request limit**  

```
{
  "request": {
    "namespace": "GreengrassFeedbackConnector",
    "metricData": {
      "value": 1,
      "unit": "Count",
      "metricName": "ErrorRequestThrottled"
    }
  }
}
```

## Usage Example
<a name="ml-feedback-connector-usage"></a>

The following example is a user-defined Lambda function that uses the [AWS IoT Greengrass Machine Learning SDK](lambda-functions.md#lambda-sdks-ml) to send data to the ML Feedback connector.

**Note**  
You can download the AWS IoT Greengrass Machine Learning SDK from the AWS IoT Greengrass [downloads page](what-is-gg.md#gg-ml-sdk-download).

```
import json
import logging
import os
import sys
import greengrass_machine_learning_sdk as ml

client = ml.client('feedback')

try:
    feedback_config_id = os.environ["FEEDBACK_CONFIG_ID"]
    model_input_data_dir = os.environ["MODEL_INPUT_DIR"]
    model_prediction_str = os.environ["MODEL_PREDICTIONS"]
    model_prediction = json.loads(model_prediction_str)
except Exception as e:
    logging.info("Failed to open environment variables. Failed with exception:{}".format(e))
    sys.exit(1)

try:
    with open(os.path.join(model_input_data_dir, os.listdir(model_input_data_dir)[0]), 'rb') as f:
        content = f.read()
except Exception as e:
    logging.info("Failed to open model input directory. Failed with exception:{}".format(e))
    sys.exit(1)    

def invoke_feedback_connector():
    logging.info("Invoking feedback connector.")
    try:
        client.publish(
            ConfigId=feedback_config_id,
            ModelInput=content,
            ModelPrediction=model_prediction
        )
    except Exception as e:
        logging.info("Exception raised when invoking feedback connector:{}".format(e))
        sys.exit(1)    

invoke_feedback_connector()

def function_handler(event, context):
    return
```

## Licenses
<a name="ml-feedback-connector-license"></a>



The ML Feedback connector includes the following third-party software/licensing:<a name="boto-3-licenses"></a>
+ [AWS SDK for Python (Boto3)](https://pypi.org/project/boto3/)/Apache License 2.0
+ [botocore](https://pypi.org/project/botocore/)/Apache License 2.0
+ [dateutil](https://pypi.org/project/python-dateutil/1.4/)/PSF License
+ [docutils](https://pypi.org/project/docutils/)/BSD License, GNU General Public License (GPL), Python Software Foundation License, Public Domain
+ [jmespath](https://pypi.org/project/jmespath/)/MIT License
+ [s3transfer](https://pypi.org/project/s3transfer/)/Apache License 2.0
+ [urllib3](https://pypi.org/project/urllib3/)/MIT License
+ <a name="six-license"></a>[six](https://github.com/benjaminp/six)/MIT

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## See also
<a name="ml-feedback-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)

# ML Image Classification connector
<a name="image-classification-connector"></a>

**Warning**  <a name="connectors-extended-life-phase-warning"></a>
This connector has moved into the *extended life phase*, and AWS IoT Greengrass won't release updates that provide features, enhancements to existing features, security patches, or bug fixes. For more information, see [AWS IoT Greengrass Version 1 maintenance policy](maintenance-policy.md).

The ML Image Classification [connectors](connectors.md) provide a machine learning (ML) inference service that runs on the AWS IoT Greengrass core. This local inference service performs image classification using a model trained by the SageMaker AI image classification algorithm.

User-defined Lambda functions use the AWS IoT Greengrass Machine Learning SDK to submit inference requests to the local inference service. The service runs inference locally and returns probabilities that the input image belongs to specific categories.

AWS IoT Greengrass provides the following versions of this connector, which are available for multiple platforms.

------
#### [ Version 2 ]


| Connector | Description and ARN | 
| --- | --- | 
| ML Image Classification Aarch64 JTX2 |  Image classification inference service for NVIDIA Jetson TX2. Supports GPU acceleration. **ARN:** `arn:aws:greengrass:region::/connectors/ImageClassificationAarch64JTX2/versions/2` | 
| ML Image Classification x86-64 |  Image classification inference service for x86-64 platforms. **ARN:** `arn:aws:greengrass:region::/connectors/ImageClassificationx86-64/versions/2` | 
| ML Image Classification ARMv7 |  Image classification inference service for ARMv7 platforms. **ARN:** `arn:aws:greengrass:region::/connectors/ImageClassificationARMv7/versions/2` | 

------
#### [ Version 1 ]


| Connector | Description and ARN | 
| --- | --- | 
| ML Image Classification Aarch64 JTX2 |  Image classification inference service for NVIDIA Jetson TX2. Supports GPU acceleration. **ARN:** `arn:aws:greengrass:region::/connectors/ImageClassificationAarch64JTX2/versions/1` | 
| ML Image Classification x86-64 |  Image classification inference service for x86-64 platforms. **ARN:** `arn:aws:greengrass:region::/connectors/ImageClassificationx86-64/versions/1` | 
| ML Image Classification ARMv7 |  Image classification inference service for ARMv7 platforms. **ARN:** `arn:aws:greengrass:region::/connectors/ImageClassificationARMv7/versions/1` | 

------

For information about version changes, see the [Changelog](#image-classification-connector-changelog).

## Requirements
<a name="image-classification-connector-req"></a>

These connectors have the following requirements:

------
#### [ Version 2 ]
+ AWS IoT Greengrass Core Software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="req-image-classification-framework"></a>Dependencies for the Apache MXNet framework installed on the core device. For more information, see [Installing MXNet dependencies on the AWS IoT Greengrass core](#image-classification-connector-config).
+ <a name="req-image-classification-resource"></a>An [ML resource](ml-inference.md#ml-resources) in the Greengrass group that references an SageMaker AI model source. This model must be trained by the SageMaker AI image classification algorithm. For more information, see [Image classification algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html) in the *Amazon SageMaker AI Developer Guide*.
+ <a name="req-image-classification-feedback"></a>The [ML Feedback connector](ml-feedback-connector.md) added to the Greengrass group and configured. This is required only if you want to use the connector to upload model input data and publish predictions to an MQTT topic.
+ <a name="req-image-classification-policy"></a>The [Greengrass group role](group-role.md) configured to allow the `sagemaker:DescribeTrainingJob` action on the target training job, as shown in the following example IAM policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "sagemaker:DescribeTrainingJob"
              ],
              "Resource": "arn:aws:sagemaker:us-east-1:123456789012:training-job/training-job-name"
          }
      ]
  }
  ```

------

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

  You can grant granular or conditional access to resources (for example, by using a wildcard * naming scheme). If you change the target training job in the future, make sure to update the group role.
+ [AWS IoT Greengrass Machine Learning SDK](lambda-functions.md#lambda-sdks-ml) v1.1.0 is required to interact with this connector.

------
#### [ Version 1 ]
+ AWS IoT Greengrass Core Software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="req-image-classification-framework"></a>Dependencies for the Apache MXNet framework installed on the core device. For more information, see [Installing MXNet dependencies on the AWS IoT Greengrass core](#image-classification-connector-config).
+ <a name="req-image-classification-resource"></a>An [ML resource](ml-inference.md#ml-resources) in the Greengrass group that references an SageMaker AI model source. This model must be trained by the SageMaker AI image classification algorithm. For more information, see [Image classification algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html) in the *Amazon SageMaker AI Developer Guide*.
+ <a name="req-image-classification-policy"></a>The [Greengrass group role](group-role.md) configured to allow the `sagemaker:DescribeTrainingJob` action on the target training job, as shown in the following example IAM policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "sagemaker:DescribeTrainingJob"
              ],
              "Resource": "arn:aws:sagemaker:us-east-1:123456789012:training-job/training-job-name"
          }
      ]
  }
  ```

------

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

  You can grant granular or conditional access to resources (for example, by using a wildcard * naming scheme). If you change the target training job in the future, make sure to update the group role.
+ [AWS IoT Greengrass Machine Learning SDK](lambda-functions.md#lambda-sdks-ml) v1.0.0 or later is required to interact with this connector.

------

## Connector Parameters
<a name="image-classification-connector-param"></a>

These connectors provide the following parameters.

------
#### [ Version 2 ]

`MLModelDestinationPath`  <a name="param-image-classification-mdlpath"></a>
The absolute local path of the ML resource inside the Lambda environment. This is the destination path that's specified for the ML resource.  
If you created the ML resource in the console, this is the local path.
Display name in the AWS IoT console: **Model destination path**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`MLModelResourceId`  <a name="param-image-classification-mdlresourceid"></a>
The ID of the ML resource that references the source model.  
Display name in the AWS IoT console: **SageMaker job ARN resource**  
Required: `true`  
Type: `string`  
Valid pattern: `[a-zA-Z0-9:_-]+`

`MLModelSageMakerJobArn`  <a name="param-image-classification-mdljobarn"></a>
The ARN of the SageMaker AI training job that represents the SageMaker AI model source. The model must be trained by the SageMaker AI image classification algorithm.  
Display name in the AWS IoT console: **SageMaker job ARN**  
Required: `true`  
Type: `string`  
Valid pattern: `^arn:aws:sagemaker:[a-zA-Z0-9-]+:[0-9]+:training-job/[a-zA-Z0-9][a-zA-Z0-9-]+$`

`LocalInferenceServiceName`  <a name="param-image-classification-svcname"></a>
The name for the local inference service. User-defined Lambda functions invoke the service by passing the name to the `invoke_inference_service` function of the AWS IoT Greengrass Machine Learning SDK. For an example, see [Usage Example](#image-classification-connector-usage).  
Display name in the AWS IoT console: **Local inference service name**  
Required: `true`  
Type: `string`  
Valid pattern: `[a-zA-Z0-9][a-zA-Z0-9-]{1,62}`

`LocalInferenceServiceTimeoutSeconds`  <a name="param-image-classification-svctimeout"></a>
The amount of time (in seconds) before the inference request is terminated. The minimum value is 1.  
Display name in the AWS IoT console: **Timeout (second)**  
Required: `true`  
Type: `string`  
Valid pattern: `[1-9][0-9]*`

`LocalInferenceServiceMemoryLimitKB`  <a name="param-image-classification-svcmemorylimit"></a>
The amount of memory (in KB) that the service has access to. The minimum value is 1.  
Display name in the AWS IoT console: **Memory limit (KB)**  
Required: `true`  
Type: `string`  
Valid pattern: `[1-9][0-9]*`

`GPUAcceleration`  <a name="param-image-classification-gpuacceleration"></a>
The CPU or GPU (accelerated) computing context. This property applies to the ML Image Classification Aarch64 JTX2 connector only.  
Display name in the AWS IoT console: **GPU acceleration**  
Required: `true`  
Type: `string`  
Valid values: `CPU` or `GPU`

`MLFeedbackConnectorConfigId`  <a name="param-image-classification-feedbackconfigid"></a>
The ID of the feedback configuration to use to upload model input data. This must match the ID of a feedback configuration defined for the [ML Feedback connector](ml-feedback-connector.md).  
This parameter is required only if you want to use the ML Feedback connector to upload model input data and publish predictions to an MQTT topic.  
Display name in the AWS IoT console: **ML Feedback connector configuration ID**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|^[a-zA-Z0-9][a-zA-Z0-9-]{1,62}$`

------
#### [ Version 1 ]

`MLModelDestinationPath`  <a name="param-image-classification-mdlpath"></a>
The absolute local path of the ML resource inside the Lambda environment. This is the destination path that's specified for the ML resource.  
If you created the ML resource in the console, this is the local path.
Display name in the AWS IoT console: **Model destination path**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`MLModelResourceId`  <a name="param-image-classification-mdlresourceid"></a>
The ID of the ML resource that references the source model.  
Display name in the AWS IoT console: **SageMaker job ARN resource**  
Required: `true`  
Type: `string`  
Valid pattern: `[a-zA-Z0-9:_-]+`

`MLModelSageMakerJobArn`  <a name="param-image-classification-mdljobarn"></a>
The ARN of the SageMaker AI training job that represents the SageMaker AI model source. The model must be trained by the SageMaker AI image classification algorithm.  
Display name in the AWS IoT console: **SageMaker job ARN**  
Required: `true`  
Type: `string`  
Valid pattern: `^arn:aws:sagemaker:[a-zA-Z0-9-]+:[0-9]+:training-job/[a-zA-Z0-9][a-zA-Z0-9-]+$`

`LocalInferenceServiceName`  <a name="param-image-classification-svcname"></a>
The name for the local inference service. User-defined Lambda functions invoke the service by passing the name to the `invoke_inference_service` function of the AWS IoT Greengrass Machine Learning SDK. For an example, see [Usage Example](#image-classification-connector-usage).  
Display name in the AWS IoT console: **Local inference service name**  
Required: `true`  
Type: `string`  
Valid pattern: `[a-zA-Z0-9][a-zA-Z0-9-]{1,62}`

`LocalInferenceServiceTimeoutSeconds`  <a name="param-image-classification-svctimeout"></a>
The amount of time (in seconds) before the inference request is terminated. The minimum value is 1.  
Display name in the AWS IoT console: **Timeout (second)**  
Required: `true`  
Type: `string`  
Valid pattern: `[1-9][0-9]*`

`LocalInferenceServiceMemoryLimitKB`  <a name="param-image-classification-svcmemorylimit"></a>
The amount of memory (in KB) that the service has access to. The minimum value is 1.  
Display name in the AWS IoT console: **Memory limit (KB)**  
Required: `true`  
Type: `string`  
Valid pattern: `[1-9][0-9]*`

`GPUAcceleration`  <a name="param-image-classification-gpuacceleration"></a>
The CPU or GPU (accelerated) computing context. This property applies to the ML Image Classification Aarch64 JTX2 connector only.  
Display name in the AWS IoT console: **GPU acceleration**  
Required: `true`  
Type: `string`  
Valid values: `CPU` or `GPU`

------

### Create Connector Example (AWS CLI)
<a name="image-classification-connector-create"></a>

The following CLI commands create a `ConnectorDefinition` with an initial version that contains an ML Image Classification connector.

**Example: CPU Instance**  
This example creates an instance of the ML Image Classification ARMv7 connector.  

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyImageClassificationConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/ImageClassificationARMv7/versions/2",
            "Parameters": {
                "MLModelDestinationPath": "/path-to-model",
                "MLModelResourceId": "my-ml-resource",
                "MLModelSageMakerJobArn": "arn:aws:sagemaker:us-west-2:123456789012:training-job:MyImageClassifier",
                "LocalInferenceServiceName": "imageClassification",
                "LocalInferenceServiceTimeoutSeconds": "10",
                "LocalInferenceServiceMemoryLimitKB": "500000",
                "MLFeedbackConnectorConfigId": "MyConfig0"
            }
        }
    ]
}'
```

**Example: GPU Instance**  
This example creates an instance of the ML Image Classification Aarch64 JTX2 connector, which supports GPU acceleration on an NVIDIA Jetson TX2 board.  

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyImageClassificationConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/ImageClassificationAarch64JTX2/versions/2",
            "Parameters": {
                "MLModelDestinationPath": "/path-to-model",
                "MLModelResourceId": "my-ml-resource",
                "MLModelSageMakerJobArn": "arn:aws:sagemaker:us-west-2:123456789012:training-job:MyImageClassifier",
                "LocalInferenceServiceName": "imageClassification",
                "LocalInferenceServiceTimeoutSeconds": "10",
                "LocalInferenceServiceMemoryLimitKB": "500000",
                "GPUAcceleration": "GPU",
                "MLFeedbackConnectorConfigId": "MyConfig0"
            }
        }
    ]
}'
```

**Note**  
The Lambda functions in these connectors have a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).
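If you script group setup in Python instead of the AWS CLI, the same request can be issued through the AWS SDK for Python (Boto3) `create_connector_definition` operation. The following is a sketch under that assumption: all values are placeholders to replace with your own Region, account, and resources, and the optional `MLFeedbackConnectorConfigId` parameter is omitted.

```python
import json

# Initial version payload, equivalent to the CPU CLI example above.
# All ARNs and IDs here are placeholders.
initial_version = {
    "Connectors": [
        {
            "Id": "MyImageClassificationConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/ImageClassificationARMv7/versions/2",
            "Parameters": {
                "MLModelDestinationPath": "/path-to-model",
                "MLModelResourceId": "my-ml-resource",
                "MLModelSageMakerJobArn": "arn:aws:sagemaker:us-west-2:123456789012:training-job/MyImageClassifier",
                "LocalInferenceServiceName": "imageClassification",
                "LocalInferenceServiceTimeoutSeconds": "10",
                "LocalInferenceServiceMemoryLimitKB": "500000",
            },
        }
    ]
}

# Issue the request with Boto3 (requires AWS credentials to actually run):
# import boto3
# greengrass = boto3.client("greengrass")
# response = greengrass.create_connector_definition(
#     Name="MyGreengrassConnectors", InitialVersion=initial_version
# )
# print(response["Arn"])

print(json.dumps(initial_version, indent=2))
```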

## Input data
<a name="image-classification-connector-data-input"></a>

 These connectors accept an image file as input. Input image files must be in `jpeg` or `png` format. For more information, see [Usage Example](#image-classification-connector-usage). 

These connectors don't accept MQTT messages as input data.

## Output data
<a name="image-classification-connector-data-output"></a>

These connectors return a formatted prediction for the object identified in the input image:

```
[0.3,0.1,0.04,...]
```

The prediction contains a list of values that correspond with the categories used in the training dataset during model training. Each value represents the probability that the image falls under the corresponding category. The category with the highest probability is the dominant prediction.

These connectors don't publish MQTT messages as output data.
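A minimal sketch of turning the raw connector output into a dominant category, assuming the bracketed comma-separated format shown above. The category labels are hypothetical; in practice they follow the ordering of the categories in your training dataset.

```python
def dominant_prediction(raw, labels):
    """Parse the connector's '[p0,p1,...]' output and return (label, probability)."""
    probabilities = [float(p) for p in raw.strip("[]").split(",")]
    index = max(range(len(probabilities)), key=probabilities.__getitem__)
    return labels[index], probabilities[index]

# Hypothetical labels, ordered as in the training dataset.
labels = ["cat", "dog", "bird", "other"]
print(dominant_prediction("[0.3,0.1,0.04,0.56]", labels))  # prints ('other', 0.56)
```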

## Usage Example
<a name="image-classification-connector-usage"></a>

The following example Lambda function uses the [AWS IoT Greengrass Machine Learning SDK](lambda-functions.md#lambda-sdks-ml) to interact with an ML Image Classification connector.

**Note**  
 You can download the SDK from the [AWS IoT Greengrass Machine Learning SDK](what-is-gg.md#gg-ml-sdk-download) downloads page.

The example initializes an SDK client and synchronously calls the SDK's `invoke_inference_service` function to invoke the local inference service. It passes in the algorithm type, service name, image type, and image content. Then, the example parses the service response to get the probability results (predictions).

------
#### [ Python 3.7 ]

```
import logging
from threading import Timer

import numpy as np

import greengrass_machine_learning_sdk as ml

# We assume the inference input image is provided as a local file
# to this inference client Lambda function.
with open('/test_img/test.jpg', 'rb') as f:
    content = bytearray(f.read())

client = ml.client('inference')

def infer():
    logging.info('invoking Greengrass ML Inference service')

    try:
        resp = client.invoke_inference_service(
            AlgoType='image-classification',
            ServiceName='imageClassification',
            ContentType='image/jpeg',
            Body=content
        )
    except ml.GreengrassInferenceException as e:
        logging.info('inference exception {}("{}")'.format(e.__class__.__name__, e))
        return
    except ml.GreengrassDependencyException as e:
        logging.info('dependency exception {}("{}")'.format(e.__class__.__name__, e))
        return

    logging.info('resp: {}'.format(resp))
    predictions = resp['Body'].read().decode("utf-8")
    logging.info('predictions: {}'.format(predictions))
    
    # The connector output is in the format: [0.3,0.1,0.04,...]
    # Remove the '[' and ']' at the beginning and end.
    predictions = predictions[1:-1]
    count = len(predictions.split(','))
    predictions_arr = np.fromstring(predictions, count=count, sep=',')

    # Perform business logic that relies on the predictions_arr, which is an array
    # of probabilities.
    
    # Schedule the infer() function to run again in one second.
    Timer(1, infer).start()
    return

infer()

def function_handler(event, context):
    return
```

------
#### [ Python 2.7 ]

```
import logging
from threading import Timer

import numpy

import greengrass_machine_learning_sdk as gg_ml

# The inference input image.
with open("/test_img/test.jpg", "rb") as f:
    content = f.read()

client = gg_ml.client("inference")


def infer():
    logging.info("Invoking Greengrass ML Inference service")

    try:
        resp = client.invoke_inference_service(
            AlgoType="image-classification",
            ServiceName="imageClassification",
            ContentType="image/jpeg",
            Body=content,
        )
    except gg_ml.GreengrassInferenceException as e:
        logging.info('Inference exception %s("%s")', e.__class__.__name__, e)
        return
    except gg_ml.GreengrassDependencyException as e:
        logging.info('Dependency exception %s("%s")', e.__class__.__name__, e)
        return

    logging.info("Response: %s", resp)
    predictions = resp["Body"].read()
    logging.info("Predictions: %s", predictions)

    # The connector output is in the format: [0.3,0.1,0.04,...]
    # Remove the '[' and ']' at the beginning and end.
    predictions = predictions[1:-1]
    predictions_arr = numpy.fromstring(predictions, sep=",")
    logging.info("Split into %s predictions.", len(predictions_arr))

    # Perform business logic that relies on predictions_arr, which is an array
    # of probabilities.

    # Schedule the infer() function to run again in one second.
    Timer(1, infer).start()


infer()


# In this example, the required AWS Lambda handler is never called.
def function_handler(event, context):
    return
```

------

The `invoke_inference_service` function in the AWS IoT Greengrass Machine Learning SDK accepts the following arguments.


| Argument | Description | 
| --- | --- | 
| `AlgoType` | The name of the algorithm type to use for inference. Currently, only `image-classification` is supported. Required: `true` Type: `string` Valid values: `image-classification` | 
| `ServiceName` | The name of the local inference service. Use the name that you specified for the `LocalInferenceServiceName` parameter when you configured the connector. Required: `true` Type: `string` | 
| `ContentType` | The mime type of the input image. Required: `true` Type: `string` Valid values: `image/jpeg, image/png` | 
| `Body` | The content of the input image file. Required: `true` Type: `binary` | 
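Because `ContentType` must agree with the input image encoding, a small helper can derive it from the file extension before calling `invoke_inference_service`. This helper is hypothetical, not part of the AWS IoT Greengrass Machine Learning SDK, and covers only the two formats the connectors accept.

```python
import os

# The only input formats these connectors accept.
MIME_BY_EXTENSION = {".jpg": "image/jpeg", ".jpeg": "image/jpeg", ".png": "image/png"}

def content_type_for(path):
    """Return the ContentType for an input image, or raise for unsupported formats."""
    extension = os.path.splitext(path)[1].lower()
    try:
        return MIME_BY_EXTENSION[extension]
    except KeyError:
        raise ValueError("Unsupported image format: {}".format(path))

print(content_type_for("/test_img/test.jpg"))  # prints image/jpeg
```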

## Installing MXNet dependencies on the AWS IoT Greengrass core
<a name="image-classification-connector-config"></a>

To use an ML Image Classification connector, you must install the dependencies for the Apache MXNet framework on the core device. The connectors use the framework to serve the ML model.

**Note**  
These connectors are bundled with a precompiled MXNet library, so you don't need to install the MXNet framework on the core device. 

AWS IoT Greengrass provides scripts to install the dependencies for the following common platforms and devices (or to use as a reference for installing them). If you're using a different platform or device, see the [MXNet documentation](https://mxnet.apache.org/) for your configuration.

Before installing the MXNet dependencies, make sure that the required [system libraries](#image-classification-connector-logging) (with the specified minimum versions) are present on the device.

------
#### [ NVIDIA Jetson TX2 ]

1. Install CUDA Toolkit 9.0 and cuDNN 7.0. You can follow the instructions in [Setting up other devices](setup-filter.other.md) in the Getting Started tutorial.

1. Enable universe repositories so the connector can install community-maintained open software. For more information, see [Repositories/Ubuntu](https://help.ubuntu.com/community/Repositories/Ubuntu) in the Ubuntu documentation.

   1. Open the `/etc/apt/sources.list` file.

   1. Make sure that the following lines are uncommented.

      ```
      deb http://ports.ubuntu.com/ubuntu-ports/ xenial universe
      deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial universe
      deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates universe
      deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates universe
      ```

1. Save a copy of the following installation script to a file named `nvidiajtx2.sh` on the core device.

------
#### [ Python 3.7 ]

   ```
   #!/bin/bash
   set -e
   
   echo "Installing dependencies on the system..."
   echo 'Assuming that universe repos are enabled and checking dependencies...'
   apt-get -y update
   apt-get -y dist-upgrade
   apt-get install -y liblapack3 libopenblas-dev liblapack-dev libatlas-base-dev
   apt-get install -y python3.7 python3.7-dev
   
   python3.7 -m pip install --upgrade pip
   python3.7 -m pip install numpy==1.15.0
   python3.7 -m pip install opencv-python || echo 'Error: Unable to install OpenCV with pip on this platform. Try building the latest OpenCV from source (https://github.com/opencv/opencv).'
   
   echo 'Dependency installation/upgrade complete.'
   ```

**Note**  
<a name="opencv-build-from-source"></a>If [OpenCV](https://github.com/opencv/opencv) does not install successfully using this script, you can try building from source. For more information, see [ Installation in Linux](https://docs.opencv.org/4.1.0/d7/d9f/tutorial_linux_install.html) in the OpenCV documentation, or refer to other online resources for your platform.

------
#### [ Python 2.7 ]

   ```
   #!/bin/bash
   set -e
   
   echo "Installing dependencies on the system..."
   echo 'Assuming that universe repos are enabled and checking dependencies...'
   apt-get -y update
   apt-get -y dist-upgrade
   apt-get install -y liblapack3 libopenblas-dev liblapack-dev libatlas-base-dev python-dev
   
   echo 'Install latest pip...'
   wget https://bootstrap.pypa.io/get-pip.py
   python get-pip.py
   rm get-pip.py
   
   pip install numpy==1.15.0 scipy
   
   echo 'Dependency installation/upgrade complete.'
   ```

------

1. From the directory where you saved the file, run the following command:

   ```
   sudo bash nvidiajtx2.sh
   ```

------
#### [ x86-64 (Ubuntu or Amazon Linux) ]

1. Save a copy of the following installation script to a file named `x86_64.sh` on the core device.

------
#### [ Python 3.7 ]

   ```
   #!/bin/bash
   set -e
   
   echo "Installing dependencies on the system..."
   
   release=$(awk -F= '/^NAME/{print $2}' /etc/os-release)
   
   if [ "$release" == '"Ubuntu"' ]; then
     # Ubuntu. Supports EC2 and DeepLens. DeepLens has all the dependencies installed, so
     # this is mostly to prepare dependencies on Ubuntu EC2 instance.
     apt-get -y update
     apt-get -y dist-upgrade
   
     apt-get install -y libgfortran3 libsm6 libxext6 libxrender1
     apt-get install -y python3.7 python3.7-dev
   elif [ "$release" == '"Amazon Linux"' ]; then
     # Amazon Linux. Expect python to be installed already
     yum -y update
     yum -y upgrade
   
     yum install -y compat-gcc-48-libgfortran libSM libXrender libXext
   else
     echo "OS Release not supported: $release"
     exit 1
   fi
   
   python3.7 -m pip install --upgrade pip
   python3.7 -m pip install numpy==1.15.0
   python3.7 -m pip install opencv-python || echo 'Error: Unable to install OpenCV with pip on this platform. Try building the latest OpenCV from source (https://github.com/opencv/opencv).'
   
   echo 'Dependency installation/upgrade complete.'
   ```

**Note**  
<a name="opencv-build-from-source"></a>If [OpenCV](https://github.com/opencv/opencv) does not install successfully using this script, you can try building from source. For more information, see [ Installation in Linux](https://docs.opencv.org/4.1.0/d7/d9f/tutorial_linux_install.html) in the OpenCV documentation, or refer to other online resources for your platform.

------
#### [ Python 2.7 ]

   ```
   #!/bin/bash
   set -e
   
   echo "Installing dependencies on the system..."
   
   release=$(awk -F= '/^NAME/{print $2}' /etc/os-release)
   
   if [ "$release" == '"Ubuntu"' ]; then
     # Ubuntu. Supports EC2 and DeepLens. DeepLens has all the dependencies installed, so
     # this is mostly to prepare dependencies on Ubuntu EC2 instance.
     apt-get -y update
     apt-get -y dist-upgrade
   
     apt-get install -y libgfortran3 libsm6 libxext6 libxrender1 python-dev python-pip
   elif [ "$release" == '"Amazon Linux"' ]; then
     # Amazon Linux. Expect python to be installed already
     yum -y update
     yum -y upgrade
   
     yum install -y compat-gcc-48-libgfortran libSM libXrender libXext python-pip
   else
     echo "OS Release not supported: $release"
     exit 1
   fi
   
   pip install numpy==1.15.0 scipy opencv-python
   
   echo 'Dependency installation/upgrade complete.'
   ```

------

1. From the directory where you saved the file, run the following command:

   ```
   sudo bash x86_64.sh
   ```

------
#### [ Armv7 (Raspberry Pi) ]

1. Save a copy of the following installation script to a file named `armv7l.sh` on the core device.

------
#### [ Python 3.7 ]

   ```
   #!/bin/bash
   set -e
   
   echo "Installing dependencies on the system..."
   
   apt-get update
   apt-get -y upgrade
   
   apt-get install -y liblapack3 libopenblas-dev liblapack-dev
   apt-get install -y python3.7 python3.7-dev
   
   python3.7 -m pip install --upgrade pip
   python3.7 -m pip install numpy==1.15.0
   python3.7 -m pip install opencv-python || echo 'Error: Unable to install OpenCV with pip on this platform. Try building the latest OpenCV from source (https://github.com/opencv/opencv).'
   
   echo 'Dependency installation/upgrade complete.'
   ```

**Note**  
<a name="opencv-build-from-source"></a>If [OpenCV](https://github.com/opencv/opencv) does not install successfully using this script, you can try building from source. For more information, see [ Installation in Linux](https://docs.opencv.org/4.1.0/d7/d9f/tutorial_linux_install.html) in the OpenCV documentation, or refer to other online resources for your platform.

------
#### [ Python 2.7 ]

   ```
   #!/bin/bash
   set -e
   
   echo "Installing dependencies on the system..."
   
   apt-get update
   apt-get -y upgrade
   
   apt-get install -y liblapack3 libopenblas-dev liblapack-dev python-dev
   
   # python-opencv depends on python-numpy. The latest version in the APT repository is python-numpy-1.8.2
   # This script installs python-numpy first so that python-opencv can be installed, and then install the latest
   # numpy-1.15.x with pip
   apt-get install -y python-numpy python-opencv
   dpkg --remove --force-depends python-numpy
   
   echo 'Install latest pip...'
   wget https://bootstrap.pypa.io/get-pip.py
   python get-pip.py
   rm get-pip.py
   
   pip install --upgrade numpy==1.15.0 picamera scipy
   
   echo 'Dependency installation/upgrade complete.'
   ```

------

1. From the directory where you saved the file, run the following command:

   ```
   sudo bash armv7l.sh
   ```
**Note**  
On a Raspberry Pi, using `pip` to install machine learning dependencies is a memory-intensive operation that can cause the device to run out of memory and become unresponsive. As a workaround, you can temporarily increase the swap size:  
In `/etc/dphys-swapfile`, increase the value of the `CONF_SWAPSIZE` variable and then run the following command to restart `dphys-swapfile`.  

   ```
   /etc/init.d/dphys-swapfile restart
   ```

------

## Logging and troubleshooting
<a name="image-classification-connector-logging"></a>

Depending on your group settings, event and error logs are written to CloudWatch Logs, the local file system, or both. Logs from this connector use the prefix `LocalInferenceServiceName`. If the connector behaves unexpectedly, check the connector's logs. These usually contain useful debugging information, such as a missing ML library dependency or the cause of a connector startup failure.

If the AWS IoT Greengrass group is configured to write local logs, the connector writes log files to `greengrass-root/ggc/var/log/user/region/aws/`. For more information about Greengrass logging, see [Monitoring with AWS IoT Greengrass logs](greengrass-logs-overview.md).

Use the following information to help troubleshoot issues with the ML Image Classification connectors.

**Required system libraries**

The following tabs list the system libraries required for each ML Image Classification connector.

------
#### [ ML Image Classification Aarch64 JTX2 ]


| Library | Minimum version | 
| --- | --- | 
| ld-linux-aarch64.so.1 | GLIBC_2.17 | 
| libc.so.6 | GLIBC_2.17 | 
| libcublas.so.9.0 | not applicable | 
| libcudart.so.9.0 | not applicable | 
| libcudnn.so.7 | not applicable | 
| libcufft.so.9.0 | not applicable | 
| libcurand.so.9.0 | not applicable | 
| libcusolver.so.9.0 | not applicable | 
| libgcc_s.so.1 | GCC_4.2.0 | 
| libgomp.so.1 | GOMP_4.0, OMP_1.0 | 
| libm.so.6 | GLIBC_2.23 | 
| libpthread.so.0 | GLIBC_2.17 | 
| librt.so.1 | GLIBC_2.17 | 
| libstdc++.so.6 | GLIBCXX_3.4.21, CXXABI_1.3.8 | 

------
#### [ ML Image Classification x86_64 ]


| Library | Minimum version | 
| --- | --- | 
| ld-linux-x86-64.so.2 | GCC_4.0.0 | 
| libc.so.6 | GLIBC_2.4 | 
| libgfortran.so.3 | GFORTRAN_1.0 | 
| libm.so.6 | GLIBC_2.23 | 
| libpthread.so.0 | GLIBC_2.2.5 | 
| librt.so.1 | GLIBC_2.2.5 | 
| libstdc++.so.6 | CXXABI_1.3.8, GLIBCXX_3.4.21 | 

------
#### [ ML Image Classification Armv7 ]


| Library | Minimum version | 
| --- | --- | 
| ld-linux-armhf.so.3 | GLIBC_2.4 | 
| libc.so.6 | GLIBC_2.7 | 
| libgcc_s.so.1 | GCC_4.0.0 | 
| libgfortran.so.3 | GFORTRAN_1.0 | 
| libm.so.6 | GLIBC_2.4 | 
| libpthread.so.0 | GLIBC_2.4 | 
| librt.so.1 | GLIBC_2.4 | 
| libstdc++.so.6 | CXXABI_1.3.8, CXXABI_ARM_1.3.3, GLIBCXX_3.4.20 | 

------

**Issues**


| Symptom | Solution | 
| --- | --- | 
|  On a Raspberry Pi, the following error message is logged and you are not using the camera: `Failed to initialize libdc1394`   |  Run the following command to disable the driver: <pre>sudo ln /dev/null /dev/raw1394</pre> This operation is ephemeral. The symbolic link disappears after you reboot. Consult the manual of your OS distribution to learn how to create the link automatically upon reboot.  | 
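
One way to recreate the link automatically at boot on a systemd-based distribution is a small oneshot unit. The unit file below is an illustrative sketch, not part of the AWS documentation; the unit name and install target are assumptions.

```ini
# /etc/systemd/system/disable-raw1394.service (hypothetical unit name)
[Unit]
Description=Link /dev/raw1394 to /dev/null to disable the libdc1394 driver

[Service]
Type=oneshot
ExecStart=/bin/ln -sf /dev/null /dev/raw1394

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable disable-raw1394.service` so it runs on every boot.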

## Licenses
<a name="image-classification-connector-license"></a>

The ML Image Classification connectors include the following third-party software/licensing:<a name="boto-3-licenses"></a>
+ [AWS SDK for Python (Boto3)](https://pypi.org/project/boto3/)/Apache License 2.0
+ [botocore](https://pypi.org/project/botocore/)/Apache License 2.0
+ [dateutil](https://pypi.org/project/python-dateutil/1.4/)/PSF License
+ [docutils](https://pypi.org/project/docutils/)/BSD License, GNU General Public License (GPL), Python Software Foundation License, Public Domain
+ [jmespath](https://pypi.org/project/jmespath/)/MIT License
+ [s3transfer](https://pypi.org/project/s3transfer/)/Apache License 2.0
+ [urllib3](https://pypi.org/project/urllib3/)/MIT License
+ [Deep Neural Network Library (DNNL)](https://github.com/intel/mkl-dnn)/Apache License 2.0
+ [OpenMP* Runtime Library](https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/optimization-and-programming-guide/openmp-support/openmp-library-support/openmp-run-time-library-routines.html)/See [Intel OpenMP Runtime Library licensing](#openmp-license).
+ [mxnet](https://pypi.org/project/mxnet/)/Apache License 2.0
+ <a name="six-license"></a>[six](https://github.com/benjaminp/six)/MIT

**Intel OpenMP Runtime Library licensing**. The Intel® OpenMP* runtime is dual-licensed, with a commercial (COM) license as part of the Intel® Parallel Studio XE Suite products, and a BSD open source (OSS) license.

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="image-classification-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 2 | Added the `MLFeedbackConnectorConfigId` parameter to support the use of the [ML Feedback connector](ml-feedback-connector.md) to upload model input data, publish predictions to an MQTT topic, and publish metrics to Amazon CloudWatch.  | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="image-classification-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+ [Perform machine learning inference](ml-inference.md)
+ [Image classification algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html) in the *Amazon SageMaker AI Developer Guide*

# ML Object Detection connector
<a name="obj-detection-connector"></a>

**Warning**  <a name="connectors-extended-life-phase-warning"></a>
This connector has moved into the *extended life phase*, and AWS IoT Greengrass won't release updates that provide features, enhancements to existing features, security patches, or bug fixes. For more information, see [AWS IoT Greengrass Version 1 maintenance policy](maintenance-policy.md).

The ML Object Detection [connectors](connectors.md) provide a machine learning (ML) inference service that runs on the AWS IoT Greengrass core. This local inference service performs object detection using an object detection model compiled by the SageMaker AI Neo deep learning compiler. Two types of object detection models are supported: Single Shot Multibox Detector (SSD) and You Only Look Once (YOLO) v3. For more information, see [Object Detection Model Requirements](#obj-detection-connector-req-model).

 User-defined Lambda functions use the AWS IoT Greengrass Machine Learning SDK to submit inference requests to the local inference service. The service performs local inference on an input image and returns a list of predictions for each object detected in the image. Each prediction contains an object category, a prediction confidence score, and pixel coordinates that specify a bounding box around the predicted object. 

AWS IoT Greengrass provides ML Object Detection connectors for multiple platforms:


| Connector | Description and ARN | 
| --- | --- | 
| ML Object Detection Aarch64 JTX2 |  Object detection inference service for NVIDIA Jetson TX2. Supports GPU acceleration.  **ARN:** `arn:aws:greengrass:region::/connectors/ObjectDetectionAarch64JTX2/versions/1`   | 
| ML Object Detection x86_64 |  Object detection inference service for x86_64 platforms.  **ARN:** `arn:aws:greengrass:region::/connectors/ObjectDetectionx86-64/versions/1`   | 
| ML Object Detection ARMv7 |   Object detection inference service for ARMv7 platforms.   **ARN:** `arn:aws:greengrass:region::/connectors/ObjectDetectionARMv7/versions/1`   | 

## Requirements
<a name="obj-detection-connector-req"></a>

These connectors have the following requirements:
+ AWS IoT Greengrass Core Software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ Dependencies for the SageMaker AI Neo deep learning runtime installed on the core device. For more information, see [Installing Neo deep learning runtime dependencies on the AWS IoT Greengrass core](#obj-detection-connector-config).
+ An [ML resource](ml-inference.md#ml-resources) in the Greengrass group. The ML resource must reference an Amazon S3 bucket that contains an object detection model. For more information, see [Amazon S3 model sources](ml-inference.md#s3-ml-resources).
**Note**  
The model must be a Single Shot Multibox Detector or You Only Look Once v3 object detection model type. It must be compiled using the SageMaker AI Neo deep learning compiler. For more information, see [Object Detection Model Requirements](#obj-detection-connector-req-model).
+ <a name="req-image-classification-feedback"></a>The [ML Feedback connector](ml-feedback-connector.md) added to the Greengrass group and configured. This is required only if you want to use the connector to upload model input data and publish predictions to an MQTT topic.
+ [AWS IoT Greengrass Machine Learning SDK](lambda-functions.md#lambda-sdks-ml) v1.1.0 is required to interact with this connector.

### Object detection model requirements
<a name="obj-detection-connector-req-model"></a>

The ML Object Detection connectors support Single Shot Multibox Detector (SSD) and You Only Look Once (YOLO) v3 object detection model types. You can use the object detection components provided by [GluonCV](https://gluon-cv.mxnet.io) to train the model with your own dataset. Or, you can use pre-trained models from the GluonCV Model Zoo:
+ [Pre-trained SSD model](https://gluon-cv.mxnet.io/build/examples_detection/demo_ssd.html)
+ [Pre-trained YOLO v3 model](https://gluon-cv.mxnet.io/build/examples_detection/demo_yolo.html)

Your object detection model must be trained with 512 x 512 input images. The pre-trained models from the GluonCV Model Zoo already meet this requirement.

Trained object detection models must be compiled with the SageMaker AI Neo deep learning compiler. When compiling, make sure the target hardware matches the hardware of your Greengrass core device. For more information, see [ SageMaker AI Neo](https://docs.aws.amazon.com/sagemaker/latest/dg/neo.html) in the *Amazon SageMaker AI Developer Guide*.

The compiled model must be added as an ML resource ([Amazon S3 model source](ml-inference.md#s3-ml-resources)) to the same Greengrass group as the connector.

## Connector Parameters
<a name="obj-detection-connector-param"></a>

These connectors provide the following parameters.

`MLModelDestinationPath`  
The absolute local path where the Neo-compatible ML model is made available in the Lambda environment. This is the destination path that's specified for the ML model resource.  
Display name in the AWS IoT console: **Model destination path**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`MLModelResourceId`  
The ID of the ML resource that references the source model.  
Display name in the AWS IoT console: **Greengrass group ML resource**  
Required: `true`  
Type: `S3MachineLearningModelResource`  
Valid pattern: `^[a-zA-Z0-9:_-]+$`

`LocalInferenceServiceName`  
The name for the local inference service. User-defined Lambda functions invoke the service by passing the name to the `invoke_inference_service` function of the AWS IoT Greengrass Machine Learning SDK. For an example, see [Usage Example](#obj-detection-connector-usage).  
Display name in the AWS IoT console: **Local inference service name**  
Required: `true`  
Type: `string`  
Valid pattern: `^[a-zA-Z0-9][a-zA-Z0-9-]{1,62}$`

`LocalInferenceServiceTimeoutSeconds`  
The time (in seconds) before the inference request is terminated. The minimum value is 1. The default value is 10.  
Display name in the AWS IoT console: **Timeout (second)**  
Required: `true`  
Type: `string`  
Valid pattern: `^[1-9][0-9]*$`

`LocalInferenceServiceMemoryLimitKB`  
The amount of memory (in KB) that the service has access to. The minimum value is 1.  
Display name in the AWS IoT console: **Memory limit**  
Required: `true`  
Type: `string`  
Valid pattern: `^[1-9][0-9]*$`

`GPUAcceleration`  <a name="param-image-classification-gpuacceleration"></a>
The CPU or GPU (accelerated) computing context. This property applies to the ML Image Classification Aarch64 JTX2 connector only.  
Display name in the AWS IoT console: **GPU acceleration**  
Required: `true`  
Type: `string`  
Valid values: `CPU` or `GPU`

`MLFeedbackConnectorConfigId`  <a name="param-image-classification-feedbackconfigid"></a>
The ID of the feedback configuration to use to upload model input data. This must match the ID of a feedback configuration defined for the [ML Feedback connector](ml-feedback-connector.md).  
This parameter is required only if you want to use the ML Feedback connector to upload model input data and publish predictions to an MQTT topic.  
Display name in the AWS IoT console: **ML Feedback connector configuration ID**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|^[a-zA-Z0-9][a-zA-Z0-9-]{1,62}$`
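
The valid patterns above can be checked on the client side before you create the connector definition, which surfaces typos earlier than a failed deployment. The following sketch is illustrative (the `GPUAcceleration` regex is derived from its documented valid values, and the parameter values are sample data):

```python
import re

# "Valid pattern" regexes copied from the parameter documentation above.
PATTERNS = {
    "MLModelDestinationPath": r".+",
    "MLModelResourceId": r"^[a-zA-Z0-9:_-]+$",
    "LocalInferenceServiceName": r"^[a-zA-Z0-9][a-zA-Z0-9-]{1,62}$",
    "LocalInferenceServiceTimeoutSeconds": r"^[1-9][0-9]*$",
    "LocalInferenceServiceMemoryLimitKB": r"^[1-9][0-9]*$",
    "GPUAcceleration": r"^(CPU|GPU)$",  # from valid values CPU | GPU
    "MLFeedbackConnectorConfigId": r"^$|^[a-zA-Z0-9][a-zA-Z0-9-]{1,62}$",
}

def validate(params):
    """Return the names of parameters whose values fail their pattern."""
    return [name for name, value in params.items()
            if name in PATTERNS and not re.fullmatch(PATTERNS[name], value)]

params = {
    "MLModelDestinationPath": "/path-to-model",
    "MLModelResourceId": "my-ml-resource",
    "LocalInferenceServiceName": "objectDetection",
    "LocalInferenceServiceTimeoutSeconds": "10",
    "LocalInferenceServiceMemoryLimitKB": "500000",
}
print(validate(params))  # an empty list means every value matched
```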

### Create Connector Example (AWS CLI)
<a name="obj-detection-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains an ML Object Detection connector. This example creates an instance of the ML Object Detection ARMv7 connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyObjectDetectionConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/ObjectDetectionARMv7/versions/1",
            "Parameters": {
                "MLModelDestinationPath": "/path-to-model",
                "MLModelResourceId": "my-ml-resource",
                "LocalInferenceServiceName": "objectDetection",
                "LocalInferenceServiceTimeoutSeconds": "10",
                "LocalInferenceServiceMemoryLimitKB": "500000",
                "MLFeedbackConnectorConfigId" : "object-detector-random-sampling"
            }
        }
    ]
}'
```

**Note**  
The Lambda functions in these connectors have a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="obj-detection-connector-data-input"></a>

 These connectors accept an image file as input. Input image files must be in `jpeg` or `png` format. For more information, see [Usage Example](#obj-detection-connector-usage). 

These connectors don't accept MQTT messages as input data.

## Output data
<a name="obj-detection-connector-data-output"></a>

 These connectors return a formatted list of prediction results for the identified objects in the input image: 

```
     {
         "prediction": [
             [
                 14,
                 0.9384938478469849,
                 0.37763649225234985,
                 0.5110225081443787,
                 0.6697432398796082,
                 0.8544386029243469
             ],
             [
                 14,
                 0.8859519958496094,
                 0,
                 0.43536216020584106,
                 0.3314110040664673,
                 0.9538808465003967
             ],
             [
                 12,
                 0.04128098487854004,
                 0.5976729989051819,
                 0.5747185945510864,
                 0.704264223575592,
                 0.857937216758728
             ],
             ...
         ]
     }
```

Each prediction in the list is contained in square brackets and contains six values:
+  The first value represents the predicted object category for the identified object. Object categories and their corresponding values are determined when training your object detection machine learning model in the Neo deep learning compiler.
+ The second value is the confidence score for the object category prediction. This represents the probability that the prediction was correct. 
+ The last four values correspond to pixel dimensions that represent a bounding box around the predicted object in the image.

These connectors don't publish MQTT messages as output data.
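
The six-value prediction format described above can be unpacked into named fields before applying business logic. The helper below is a sketch: the coordinate order (xmin, ymin, xmax, ymax) and the confidence threshold are assumptions for illustration, and the sample values are abbreviated from the output example above.

```python
def parse_predictions(result, min_confidence=0.5):
    """Convert raw connector output into labeled records, dropping
    low-confidence detections. Coordinate order is assumed to be
    (xmin, ymin, xmax, ymax)."""
    detections = []
    for category, confidence, xmin, ymin, xmax, ymax in result["prediction"]:
        if confidence >= min_confidence:
            detections.append({
                "category": int(category),
                "confidence": confidence,
                "bbox": (xmin, ymin, xmax, ymax),
            })
    return detections

# Abbreviated sample matching the output format shown above.
sample = {
    "prediction": [
        [14, 0.938, 0.377, 0.511, 0.669, 0.854],
        [12, 0.041, 0.597, 0.574, 0.704, 0.857],
    ]
}
print(parse_predictions(sample))  # keeps only the high-confidence detection
```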

## Usage Example
<a name="obj-detection-connector-usage"></a>

The following example Lambda function uses the [AWS IoT Greengrass Machine Learning SDK](lambda-functions.md#lambda-sdks-ml) to interact with an ML Object Detection connector.

**Note**  
 You can download the SDK from the [AWS IoT Greengrass Machine Learning SDK](what-is-gg.md#gg-ml-sdk-download) downloads page. 

The example initializes an SDK client and synchronously calls the SDK's `invoke_inference_service` function to invoke the local inference service. It passes in the algorithm type, service name, image type, and image content. Then, the example parses the service response to get the probability results (predictions).

```
import logging
from threading import Timer

import numpy as np

import greengrass_machine_learning_sdk as ml

# We assume the inference input image is provided as a local file
# to this inference client Lambda function.
with open('/test_img/test.jpg', 'rb') as f:
    content = bytearray(f.read())

client = ml.client('inference')

def infer():
    logging.info('invoking Greengrass ML Inference service')

    try:
        resp = client.invoke_inference_service(
            AlgoType='object-detection',
            ServiceName='objectDetection',
            ContentType='image/jpeg',
            Body=content
        )
    except ml.GreengrassInferenceException as e:
        logging.info('inference exception {}("{}")'.format(e.__class__.__name__, e))
        return
    except ml.GreengrassDependencyException as e:
        logging.info('dependency exception {}("{}")'.format(e.__class__.__name__, e))
        return

    logging.info('resp: {}'.format(resp))
    predictions = resp['Body'].read().decode("utf-8")
    logging.info('predictions: {}'.format(predictions))
    predictions = eval(predictions) 

    # Perform business logic that relies on the predictions.
    
    # Schedule the infer() function to run again in ten seconds.
    Timer(10, infer).start()
    return

infer()

def function_handler(event, context):
    return
```

The `invoke_inference_service` function in the AWS IoT Greengrass Machine Learning SDK accepts the following arguments.


| Argument | Description | 
| --- | --- | 
| `AlgoType` | The name of the algorithm type to use for inference. Currently, only `object-detection` is supported. Required: `true` Type: `string` Valid values: `object-detection` | 
| `ServiceName` | The name of the local inference service. Use the name that you specified for the `LocalInferenceServiceName` parameter when you configured the connector. Required: `true` Type: `string` | 
| `ContentType` | The MIME type of the input image. Required: `true` Type: `string` Valid values: `image/jpeg, image/png` | 
| `Body` | The content of the input image file. Required: `true` Type: `binary` | 
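
The usage example above parses the response body with `eval`, which executes arbitrary code if the string is ever malformed or untrusted. A safer sketch is to parse the same literal with the standard library's `ast.literal_eval` (the `raw` string here is sample data in the response format shown earlier):

```python
import ast

# The service response body decodes to a Python/JSON-style literal.
# ast.literal_eval parses literals only and cannot execute code.
raw = '{"prediction": [[14, 0.9384, 0.3776, 0.5110, 0.6697, 0.8544]]}'
predictions = ast.literal_eval(raw)
print(predictions["prediction"][0][0])  # predicted object category
```

In the Lambda function, `raw` would be `resp['Body'].read().decode("utf-8")` from the `invoke_inference_service` call.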

## Installing Neo deep learning runtime dependencies on the AWS IoT Greengrass core
<a name="obj-detection-connector-config"></a>

The ML Object Detection connectors are bundled with the SageMaker AI Neo deep learning runtime (DLR). The connectors use the runtime to serve the ML model. To use these connectors, you must install the dependencies for the DLR on your core device. 

Before you install the DLR dependencies, make sure that the required [system libraries](#obj-detection-connector-logging) (with the specified minimum versions) are present on the device.

------
#### [ NVIDIA Jetson TX2 ]

1. Install CUDA Toolkit 9.0 and cuDNN 7.0. You can follow the instructions in [Setting up other devices](setup-filter.other.md) in the Getting Started tutorial.

1. Enable universe repositories so the connector can install community-maintained open software. For more information, see [ Repositories/Ubuntu](https://help.ubuntu.com/community/Repositories/Ubuntu) in the Ubuntu documentation.

   1. Open the `/etc/apt/sources.list` file.

   1. Make sure that the following lines are uncommented.

      ```
      deb http://ports.ubuntu.com/ubuntu-ports/ xenial universe
      deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial universe
      deb http://ports.ubuntu.com/ubuntu-ports/ xenial-updates universe
      deb-src http://ports.ubuntu.com/ubuntu-ports/ xenial-updates universe
      ```

1. Save a copy of the following installation script to a file named `nvidiajtx2.sh` on the core device.

   ```
   #!/bin/bash
   set -e
   
   echo "Installing dependencies on the system..."
   echo 'Assuming that universe repos are enabled and checking dependencies...'
   apt-get -y update
   apt-get -y dist-upgrade
   apt-get install -y liblapack3 libopenblas-dev liblapack-dev libatlas-base-dev
   apt-get install -y python3.7 python3.7-dev
   
   python3.7 -m pip install --upgrade pip
   python3.7 -m pip install numpy==1.15.0
   python3.7 -m pip install opencv-python || echo 'Error: Unable to install OpenCV with pip on this platform. Try building the latest OpenCV from source (https://github.com/opencv/opencv).'
   
   echo 'Dependency installation/upgrade complete.'
   ```
**Note**  
<a name="opencv-build-from-source"></a>If [OpenCV](https://github.com/opencv/opencv) does not install successfully using this script, you can try building from source. For more information, see [ Installation in Linux](https://docs.opencv.org/4.1.0/d7/d9f/tutorial_linux_install.html) in the OpenCV documentation, or refer to other online resources for your platform.

1. From the directory where you saved the file, run the following command:

   ```
   sudo bash nvidiajtx2.sh
   ```

------
#### [ x86_64 (Ubuntu or Amazon Linux) ]

1. Save a copy of the following installation script to a file named `x86_64.sh` on the core device.

   ```
   #!/bin/bash
   set -e
   
   echo "Installing dependencies on the system..."
   
   release=$(awk -F= '/^NAME/{print $2}' /etc/os-release)
   
   if [ "$release" == '"Ubuntu"' ]; then
     # Ubuntu. Supports EC2 and DeepLens. DeepLens has all the dependencies installed, so
     # this is mostly to prepare dependencies on Ubuntu EC2 instance.
     apt-get -y update
     apt-get -y dist-upgrade
   
     apt-get install -y libgfortran3 libsm6 libxext6 libxrender1
     apt-get install -y python3.7 python3.7-dev
   elif [ "$release" == '"Amazon Linux"' ]; then
     # Amazon Linux. Expect python to be installed already
     yum -y update
     yum -y upgrade
   
     yum install -y compat-gcc-48-libgfortran libSM libXrender libXext
   else
     echo "OS Release not supported: $release"
     exit 1
   fi
   
   python3.7 -m pip install --upgrade pip
   python3.7 -m pip install numpy==1.15.0
   python3.7 -m pip install opencv-python || echo 'Error: Unable to install OpenCV with pip on this platform. Try building the latest OpenCV from source (https://github.com/opencv/opencv).'
   
   echo 'Dependency installation/upgrade complete.'
   ```
**Note**  
<a name="opencv-build-from-source"></a>If [OpenCV](https://github.com/opencv/opencv) does not install successfully using this script, you can try building from source. For more information, see [ Installation in Linux](https://docs.opencv.org/4.1.0/d7/d9f/tutorial_linux_install.html) in the OpenCV documentation, or refer to other online resources for your platform.

1. From the directory where you saved the file, run the following command:

   ```
   sudo bash x86_64.sh
   ```

------
#### [ ARMv7 (Raspberry Pi) ]

1. Save a copy of the following installation script to a file named `armv7l.sh` on the core device.

   ```
   #!/bin/bash
   set -e
   
   echo "Installing dependencies on the system..."
   
   apt-get update
   apt-get -y upgrade
   
   apt-get install -y liblapack3 libopenblas-dev liblapack-dev
   apt-get install -y python3.7 python3.7-dev
   
   python3.7 -m pip install --upgrade pip
   python3.7 -m pip install numpy==1.15.0
   python3.7 -m pip install opencv-python || echo 'Error: Unable to install OpenCV with pip on this platform. Try building the latest OpenCV from source (https://github.com/opencv/opencv).'
   
   echo 'Dependency installation/upgrade complete.'
   ```
**Note**  
<a name="opencv-build-from-source"></a>If [OpenCV](https://github.com/opencv/opencv) does not install successfully using this script, you can try building from source. For more information, see [ Installation in Linux](https://docs.opencv.org/4.1.0/d7/d9f/tutorial_linux_install.html) in the OpenCV documentation, or refer to other online resources for your platform.

1. From the directory where you saved the file, run the following command:

   ```
   sudo bash armv7l.sh
   ```
**Note**  
On a Raspberry Pi, using `pip` to install machine learning dependencies is a memory-intensive operation that can cause the device to run out of memory and become unresponsive. As a workaround, you can temporarily increase the swap size. In `/etc/dphys-swapfile`, increase the value of the `CONF_SWAPSIZE` variable and then run the following command to restart `dphys-swapfile`.  

   ```
   /etc/init.d/dphys-swapfile restart
   ```

------

## Logging and troubleshooting
<a name="obj-detection-connector-logging"></a>

Depending on your group settings, event and error logs are written to CloudWatch Logs, the local file system, or both. Logs from this connector use the prefix `LocalInferenceServiceName`. If the connector behaves unexpectedly, check the connector's logs. These usually contain useful debugging information, such as a missing ML library dependency or the cause of a connector startup failure.

If the AWS IoT Greengrass group is configured to write local logs, the connector writes log files to `greengrass-root/ggc/var/log/user/region/aws/`. For more information about Greengrass logging, see [Monitoring with AWS IoT Greengrass logs](greengrass-logs-overview.md).

Use the following information to help troubleshoot issues with the ML Object Detection connectors.

**Required system libraries**

The following tabs list the system libraries required for each ML Object Detection connector.

------
#### [ ML Object Detection Aarch64 JTX2 ]


| Library | Minimum version | 
| --- | --- | 
| ld-linux-aarch64.so.1 | GLIBC_2.17 | 
| libc.so.6 | GLIBC_2.17 | 
| libcublas.so.9.0 | not applicable | 
| libcudart.so.9.0 | not applicable | 
| libcudnn.so.7 | not applicable | 
| libcufft.so.9.0 | not applicable | 
| libcurand.so.9.0 | not applicable | 
| libcusolver.so.9.0 | not applicable | 
| libgcc_s.so.1 | GCC_4.2.0 | 
| libgomp.so.1 | GOMP_4.0, OMP_1.0 | 
| libm.so.6 | GLIBC_2.23 | 
| libnvinfer.so.4 | not applicable | 
| libnvrm_gpu.so | not applicable | 
| libnvrm.so | not applicable | 
| libnvidia-fatbinaryloader.so.28.2.1 | not applicable | 
| libnvos.so | not applicable | 
| libpthread.so.0 | GLIBC_2.17 | 
| librt.so.1 | GLIBC_2.17 | 
| libstdc++.so.6 | GLIBCXX_3.4.21, CXXABI_1.3.8 | 

------
#### [ ML Object Detection x86_64 ]


| Library | Minimum version | 
| --- | --- | 
| ld-linux-x86-64.so.2 | GCC_4.0.0 | 
| libc.so.6 | GLIBC_2.4 | 
| libgfortran.so.3 | GFORTRAN_1.0 | 
| libm.so.6 | GLIBC_2.23 | 
| libpthread.so.0 | GLIBC_2.2.5 | 
| librt.so.1 | GLIBC_2.2.5 | 
| libstdc++.so.6 | CXXABI_1.3.8, GLIBCXX_3.4.21 | 

------
#### [ ML Object Detection ARMv7 ]


| Library | Minimum version | 
| --- | --- | 
| ld-linux-armhf.so.3 | GLIBC_2.4 | 
| libc.so.6 | GLIBC_2.7 | 
| libgcc_s.so.1 | GCC_4.0.0 | 
| libgfortran.so.3 | GFORTRAN_1.0 | 
| libm.so.6 | GLIBC_2.4 | 
| libpthread.so.0 | GLIBC_2.4 | 
| librt.so.1 | GLIBC_2.4 | 
| libstdc++.so.6 | CXXABI_1.3.8, CXXABI_ARM_1.3.3, GLIBCXX_3.4.20 | 

------

**Issues**


| Symptom | Solution | 
| --- | --- | 
|  On a Raspberry Pi, the following error message is logged and you are not using the camera: `Failed to initialize libdc1394`   |  Run the following command to disable the driver: <pre>sudo ln /dev/null /dev/raw1394</pre> This operation is ephemeral. The symbolic link disappears after you reboot. Consult the manual of your OS distribution to learn how to create the link automatically upon reboot.  | 

## Licenses
<a name="obj-detection-connector-license"></a>

The ML Object Detection connectors include the following third-party software/licensing:<a name="boto-3-licenses"></a>
+ [AWS SDK for Python (Boto3)](https://pypi.org/project/boto3/)/Apache License 2.0
+ [botocore](https://pypi.org/project/botocore/)/Apache License 2.0
+ [dateutil](https://pypi.org/project/python-dateutil/1.4/)/PSF License
+ [docutils](https://pypi.org/project/docutils/)/BSD License, GNU General Public License (GPL), Python Software Foundation License, Public Domain
+ [jmespath](https://pypi.org/project/jmespath/)/MIT License
+ [s3transfer](https://pypi.org/project/s3transfer/)/Apache License 2.0
+ [urllib3](https://pypi.org/project/urllib3/)/MIT License
+ [Deep Learning Runtime](https://github.com/neo-ai/neo-ai-dlr)/Apache License 2.0
+ <a name="six-license"></a>[six](https://github.com/benjaminp/six)/MIT

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## See also
<a name="obj-detection-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+ [Perform machine learning inference](ml-inference.md)
+ [Object detection algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection.html) in the *Amazon SageMaker AI Developer Guide*

# Modbus-RTU Protocol Adapter connector
<a name="modbus-protocol-adapter-connector"></a>

The Modbus-RTU Protocol Adapter [connector](connectors.md) polls information from Modbus RTU devices that are in the AWS IoT Greengrass group.

This connector receives parameters for a Modbus RTU request from a user-defined Lambda function. It sends the corresponding request, and then publishes the response from the target device as an MQTT message.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 3 | `arn:aws:greengrass:region::/connectors/ModbusRTUProtocolAdapter/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/ModbusRTUProtocolAdapter/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/ModbusRTUProtocolAdapter/versions/1` | 

For information about version changes, see the [Changelog](#modbus-protocol-adapter-connector-changelog).

## Requirements
<a name="modbus-protocol-adapter-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 3 ]
+ <a name="conn-req-ggc-v1.9.3"></a>AWS IoT Greengrass Core software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="conn-modbus-req-physical-connection"></a>A physical connection between the AWS IoT Greengrass core and the Modbus devices. The core must be physically connected to the Modbus RTU network through a serial port; for example, a USB port.
+ <a name="conn-modbus-req-serial-port-resource"></a>A [local device resource](access-local-resources.md) in the Greengrass group that points to the physical Modbus serial port.
+ <a name="conn-modbus-req-user-lambda"></a>A user-defined Lambda function that sends Modbus RTU request parameters to this connector. The request parameters must conform to expected patterns and include the IDs and addresses of the target devices on the Modbus RTU network. For more information, see [Input data](#modbus-protocol-adapter-connector-data-input).

------
#### [ Versions 1 - 2 ]
+ <a name="conn-req-ggc-v1.7.0"></a>AWS IoT Greengrass Core software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-modbus-req-physical-connection"></a>A physical connection between the AWS IoT Greengrass core and the Modbus devices. The core must be physically connected to the Modbus RTU network through a serial port; for example, a USB port.
+ <a name="conn-modbus-req-serial-port-resource"></a>A [local device resource](access-local-resources.md) in the Greengrass group that points to the physical Modbus serial port.
+ <a name="conn-modbus-req-user-lambda"></a>A user-defined Lambda function that sends Modbus RTU request parameters to this connector. The request parameters must conform to expected patterns and include the IDs and addresses of the target devices on the Modbus RTU network. For more information, see [Input data](#modbus-protocol-adapter-connector-data-input).

------

## Connector Parameters
<a name="modbus-protocol-adapter-connector-param"></a>

This connector supports the following parameters:

`ModbusSerialPort-ResourceId`  
The ID of the local device resource that represents the physical Modbus serial port.  
This connector is granted read-write access to the resource.
Display name in the AWS IoT console: **Modbus serial port resource**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`ModbusSerialPort`  
The absolute path to the physical Modbus serial port on the device. This is the source path that's specified for the Modbus local device resource.  
Display name in the AWS IoT console: **Source path of Modbus serial port resource**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

### Create Connector Example (AWS CLI)
<a name="modbus-protocol-adapter-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the Modbus-RTU Protocol Adapter connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyModbusRTUProtocolAdapterConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/ModbusRTUProtocolAdapter/versions/3",
            "Parameters": {
                "ModbusSerialPort-ResourceId": "MyLocalModbusSerialPort",
                "ModbusSerialPort": "/path-to-port"
            }
        }
    ]
}'
```

**Note**  
The Lambda function in this connector has a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

**Note**  
After you deploy the Modbus-RTU Protocol Adapter connector, you can use AWS IoT Things Graph to orchestrate interactions between devices in your group. For more information, see [Modbus](https://docs.aws.amazon.com/thingsgraph/latest/ug/iot-tg-protocols-modbus.html) in the *AWS IoT Things Graph User Guide*.

## Input data
<a name="modbus-protocol-adapter-connector-data-input"></a>

This connector accepts Modbus RTU request parameters from a user-defined Lambda function on an MQTT topic. Input messages must be in JSON format.

<a name="topic-filter"></a>**Topic filter in subscription**  
`modbus/adapter/request`

**Message properties**  
The request message varies based on the type of Modbus RTU request that it represents. The following properties are required for all requests:  
+ In the `request` object:
  + `operation`. The name of the operation to execute. For example, specify `"operation": "ReadCoilsRequest"` to read coils. This value must be a Unicode string. For supported operations, see [Modbus RTU requests and responses](#modbus-protocol-adapter-connector-requests-responses).
  + `device`. The target device of the request. This value must be between `0 - 247`.
+ The `id` property. An ID for the request. This value is used for data deduplication and is returned as is in the `id` property of all responses, including error responses. This value must be a Unicode string.
If your request includes an address field, you must specify the value as an integer. For example, `"address": 1`.
The other parameters to include in the request depend on the operation. All request parameters are required except the CRC, which is handled separately. For examples, see [Example requests and responses](#modbus-protocol-adapter-connector-examples).

**Example input: Read coils request**  

```
{
    "request": {
        "operation": "ReadCoilsRequest",
        "device": 1,
        "address": 1,
        "count": 1
    },
    "id": "TestRequest"
}
```

## Output data
<a name="modbus-protocol-adapter-connector-data-output"></a>

This connector publishes responses to incoming Modbus RTU requests.

<a name="topic-filter"></a>**Topic filter in subscription**  
`modbus/adapter/response`

**Message properties**  
The format of the response message varies based on the corresponding request and the response status. For examples, see [Example requests and responses](#modbus-protocol-adapter-connector-examples).  
A response for a write operation is simply an echo of the request. Although no meaningful information is returned for write responses, it's a good practice to check the status of the response.
Every response includes the following properties:  
+ In the `response` object:
  + `status`. The status of the request. The status can be one of the following values:
    + `Success`. The request was valid, sent to the Modbus RTU network, and a response was returned.
    + `Exception`. The request was valid, sent to the Modbus RTU network, and an exception response was returned. For more information, see [Response status: Exception](#modbus-protocol-adapter-connector-response-exception).
    + `No Response`. The request was invalid, and the connector caught the error before the request was sent over the Modbus RTU network. For more information, see [Response status: No response](#modbus-protocol-adapter-connector-response-noresponse).
  + `device`. The device that the request was sent to.
  + `operation`. The request type that was sent.
  + `payload`. The response content that was returned. If the `status` is `No Response`, this object contains only an `error` property with the error description (for example, `"error": "[Input/Output] No Response received from the remote unit"`).
+ The `id` property. The ID of the request, used for data deduplication.

**Example output: Success**  

```
{
    "response": {
        "status": "success",
        "device": 1,
        "operation": "ReadCoilsRequest",
        "payload": {
            "function_code": 1,
            "bits": [1]
        }
    },
    "id": "TestRequest"
}
```

**Example output: Failure**  

```
{
    "response": {
        "status": "fail",
        "error_message": "Internal Error",
        "error": "Exception",
        "device": 1,
        "operation": "ReadCoilsRequest",
        "payload": {
            "function_code": 129,
            "exception_code": 2
        }
    },
    "id": "TestRequest"
}
```
For more examples, see [Example requests and responses](#modbus-protocol-adapter-connector-examples).
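A subscriber can branch on the properties described above. The following is a minimal sketch, not part of the connector contract; the helper name and category labels are illustrative, and it follows the failure examples in this section, where `status` is `"fail"` and the `error` property distinguishes Exception from No Response results:

```python
import json

def classify_response(message_json):
    """Classify a Modbus-RTU Protocol Adapter response message.

    Returns an (id, category, detail) tuple. The category names are
    illustrative, not part of the connector contract.
    """
    message = json.loads(message_json)
    response = message["response"]

    if response["status"].lower() == "success":
        return (message["id"], "ok", response["payload"])
    # In the failure examples in this guide, status is "fail" and the
    # "error" property distinguishes Exception from No Response results.
    if response.get("error") == "Exception":
        return (message["id"], "modbus-exception", response["payload"]["exception_code"])
    return (message["id"], "no-response", response.get("error_message"))
```

For example, passing the success message above returns the request ID `TestRequest` together with the payload, while an Exception response yields its `exception_code`.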

## Modbus RTU requests and responses
<a name="modbus-protocol-adapter-connector-requests-responses"></a>

This connector accepts Modbus RTU request parameters as [input data](#modbus-protocol-adapter-connector-data-input) and publishes responses as [output data](#modbus-protocol-adapter-connector-data-output).

The following common operations are supported.


| Operation name in request | Function code in response | 
| --- | --- | 
| ReadCoilsRequest | 01 | 
| ReadDiscreteInputsRequest | 02 | 
| ReadHoldingRegistersRequest | 03 | 
| ReadInputRegistersRequest | 04 | 
| WriteSingleCoilRequest | 05 | 
| WriteSingleRegisterRequest | 06 | 
| WriteMultipleCoilsRequest | 15 | 
| WriteMultipleRegistersRequest | 16 | 
| MaskWriteRegisterRequest | 22 | 
| ReadWriteMultipleRegistersRequest | 23 | 
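
The table above can be captured as a lookup, which is handy for checking the `function_code` in a successful response against the operation that was requested. This is a sketch; the dictionary simply restates the table, and the helper name is illustrative:

```python
# Function code expected in a successful response for each request
# operation (restates the table above).
MODBUS_FUNCTION_CODES = {
    "ReadCoilsRequest": 1,
    "ReadDiscreteInputsRequest": 2,
    "ReadHoldingRegistersRequest": 3,
    "ReadInputRegistersRequest": 4,
    "WriteSingleCoilRequest": 5,
    "WriteSingleRegisterRequest": 6,
    "WriteMultipleCoilsRequest": 15,
    "WriteMultipleRegistersRequest": 16,
    "MaskWriteRegisterRequest": 22,
    "ReadWriteMultipleRegistersRequest": 23,
}

def matches_request(operation, function_code):
    """Return True if a response function code matches the operation."""
    return MODBUS_FUNCTION_CODES.get(operation) == function_code
```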

### Example requests and responses
<a name="modbus-protocol-adapter-connector-examples"></a>

The following are example requests and responses for supported operations.

Read Coils  
**Request example:**  

```
{
    "request": {
        "operation": "ReadCoilsRequest",
        "device": 1,
        "address": 1,
        "count": 1
    },
    "id": "TestRequest"
}
```
**Response example:**  

```
{
    "response": {
        "status": "success",
        "device": 1,
        "operation": "ReadCoilsRequest",
        "payload": {
            "function_code": 1,
            "bits": [1]
        }
    },
    "id": "TestRequest"
}
```

Read Discrete Inputs  
**Request example:**  

```
{
    "request": {
        "operation": "ReadDiscreteInputsRequest",
        "device": 1,
        "address": 1,
        "count": 1
    },
    "id": "TestRequest"
}
```
**Response example:**  

```
{
    "response": {
        "status": "success",
        "device": 1,
        "operation": "ReadDiscreteInputsRequest",
        "payload": {
            "function_code": 2,
            "bits": [1]
        }
    },
    "id": "TestRequest"
}
```

Read Holding Registers  
**Request example:**  

```
{
    "request": {
        "operation": "ReadHoldingRegistersRequest",
        "device": 1,
        "address": 1,
        "count": 1
    },
    "id": "TestRequest"
}
```
**Response example:**  

```
{
    "response": {
        "status": "success",
        "device": 1,
        "operation": "ReadHoldingRegistersRequest",
        "payload": {
            "function_code": 3,
            "registers": [20,30]
        }
    },
    "id": "TestRequest"
}
```

Read Input Registers  
**Request example:**  

```
{
    "request": {
        "operation": "ReadInputRegistersRequest",
        "device": 1,
        "address": 1,
        "value": 1
    },
    "id": "TestRequest"
}
```

Write Single Coil  
**Request example:**  

```
{
    "request": {
        "operation": "WriteSingleCoilRequest",
        "device": 1,
        "address": 1,
        "value": 1
    },
    "id": "TestRequest"
}
```
**Response example:**  

```
{
    "response": {
        "status": "success",
        "device": 1,
        "operation": "WriteSingleCoilRequest",
        "payload": {
            "function_code": 5,
            "address": 1,
            "value": true
        }
    },
    "id": "TestRequest"
}
```

Write Single Register  
**Request example:**  

```
{
    "request": {
        "operation": "WriteSingleRegisterRequest",
        "device": 1,
        "address": 1,
        "value": 1
    },
    "id": "TestRequest"
}
```

Write Multiple Coils  
**Request example:**  

```
{
    "request": {
        "operation": "WriteMultipleCoilsRequest",
        "device": 1,
        "address": 1,
        "values": [1,0,0,1]
    },
    "id": "TestRequest"
}
```
**Response example:**  

```
{
    "response": {
        "status": "success",
        "device": 1,
        "operation": "WriteMultipleCoilsRequest",
        "payload": {
            "function_code": 15,
            "address": 1,
            "count": 4
        }
    },
    "id": "TestRequest"
}
```

Write Multiple Registers  
**Request example:**  

```
{
    "request": {
        "operation": "WriteMultipleRegistersRequest",
        "device": 1,
        "address": 1,
        "values": [20,30,10]
    },
    "id": "TestRequest"
}
```
**Response example:**  

```
{
    "response": {
        "status": "success",
        "device": 1,
        "operation": "WriteMultipleRegistersRequest",
        "payload": {
            "function_code": 16,
            "address": 1,
            "count": 3
        }
    },
    "id": "TestRequest"
}
```

Mask Write Register  
**Request example:**  

```
{
    "request": {
        "operation": "MaskWriteRegisterRequest",
        "device": 1,
        "address": 1,
        "and_mask": 175,
        "or_mask": 1
    },
    "id": "TestRequest"
}
```
**Response example:**  

```
{
    "response": {
        "status": "success",
        "device": 1,
        "operation": "MaskWriteRegisterRequest",
        "payload": {
            "function_code": 22,
            "and_mask": 0,
            "or_mask": 8
        }
    },
    "id": "TestRequest"
}
```

Read Write Multiple Registers  
**Request example:**  

```
{
    "request": {
        "operation": "ReadWriteMultipleRegistersRequest",
        "device": 1,
        "read_address": 1,
        "read_count": 2,
        "write_address": 3,
        "write_registers": [20,30,40]
    },
    "id": "TestRequest"
}
```
**Response example:**  

```
{
    "response": {
        "status": "success",
        "device": 1,
        "operation": "ReadWriteMultipleRegistersRequest",
        "payload": {
            "function_code": 23,
            "registers": [10,20,10,20]
        }
    },
    "id": "TestRequest"
}
```
The registers returned in this response are the registers that are read from.

### Response status: Exception
<a name="modbus-protocol-adapter-connector-response-exception"></a>

Exceptions can occur when the request format is valid, but the request is not completed successfully. In this case, the response contains the following information:
+ The `status` is set to `Exception`.
+ The `function_code` equals the function code of the request + 128.
+ The `exception_code` contains the exception code. For more information, see Modbus exception codes.

**Example:**

```
{
    "response": {
        "status": "fail",
        "error_message": "Internal Error",
        "error": "Exception",
        "device": 1,
        "operation": "ReadCoilsRequest",
        "payload": {
            "function_code": 129,
            "exception_code": 2
        }
    },
    "id": "TestRequest"
}
```
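
Because the exception `function_code` is the request's function code plus 128, the failing operation can be recovered from the payload: `129 - 128 = 1`, the function code for `ReadCoilsRequest`. A minimal sketch (the helper name is illustrative):

```python
def decode_exception(payload):
    """Recover the failed operation's function code from an exception payload.

    An exception response reports the request's function code plus 128,
    so subtracting 128 yields the code of the operation that failed.
    """
    return payload["function_code"] - 128, payload["exception_code"]
```

Applied to the example above, `decode_exception({"function_code": 129, "exception_code": 2})` identifies function code 1 with exception code 2.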

### Response status: No response
<a name="modbus-protocol-adapter-connector-response-noresponse"></a>

This connector performs validation checks on the Modbus request. For example, it checks for invalid formats and missing fields. If the validation fails, the connector doesn't send the request. Instead, it returns a response that contains the following information:
+ The `status` is set to `No Response`.
+ The `error` contains the reason for the error.
+ The `error_message` contains the error message.

**Examples:**

```
{
    "response": {
        "status": "fail",
        "error_message": "Invalid address field. Expected <type 'int'>, got <type 'str'>",
        "error": "No Response",
        "device": 1,
        "operation": "ReadCoilsRequest",
        "payload": {
            "error": "Invalid address field. Expected <type 'int'>, got <type 'str'>"
        }
    },
    "id": "TestRequest"
}
```

If the request targets a nonexistent device or if the Modbus RTU network is not working, you might get a `ModbusIOException`, which uses the No Response format.

```
{
    "response": {
        "status": "fail",
        "error_message": "[Input/Output] No Response received from the remote unit",
        "error": "No Response",
        "device": 1,
        "operation": "ReadCoilsRequest",
        "payload": {
            "error": "[Input/Output] No Response received from the remote unit"
        }
    },
    "id": "TestRequest"
}
```

## Usage Example
<a name="modbus-protocol-adapter-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
If you use other Python runtimes, you can create a symlink from Python 3.x to Python 3.7.
The [Get started with connectors (console)](connectors-console.md) and [Get started with connectors (CLI)](connectors-cli.md) topics contain detailed steps that show you how to configure and deploy an example Twilio Notifications connector.

1. Make sure you meet the [requirements](#modbus-protocol-adapter-connector-req) for the connector.

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#modbus-protocol-adapter-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. <a name="connectors-setup-device-resource"></a>Add the required local device resource and grant read/write access to the Lambda function.

   1. Add the connector and configure its [parameters](#modbus-protocol-adapter-connector-param).

   1. Add subscriptions that allow the connector to receive [input data](#modbus-protocol-adapter-connector-data-input) and send [output data](#modbus-protocol-adapter-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.

### Example
<a name="modbus-protocol-adapter-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector.

```
import greengrasssdk
import json

TOPIC_REQUEST = 'modbus/adapter/request'

# Creating a greengrass core sdk client
iot_client = greengrasssdk.client('iot-data')

def create_read_coils_request():
    request = {
        "request": {
            "operation": "ReadCoilsRequest",
            "device": 1,
            "address": 1,
            "count": 1
        },
        "id": "TestRequest"
    }
    return request

def publish_basic_request():
    iot_client.publish(payload=json.dumps(create_read_coils_request()), topic=TOPIC_REQUEST)

publish_basic_request()

def lambda_handler(event, context):
    return
```
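
A companion Lambda function can process the connector's responses. The following sketch assumes a group subscription that routes the connector's output topic (`modbus/adapter/response`) to this function; the logging behavior is illustrative:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # 'event' is the JSON response that the connector publishes on
    # modbus/adapter/response, routed here by a group subscription.
    response = event["response"]
    if response["status"].lower() == "success":
        logger.info("Request %s succeeded: %s", event["id"], json.dumps(response["payload"]))
    else:
        logger.error("Request %s failed: %s", event["id"],
                     response.get("error_message", "unknown error"))
    return
```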

## Licenses
<a name="modbus-protocol-adapter-connector-license"></a>

The Modbus-RTU Protocol Adapter connector includes the following third-party software/licensing:
+ [pymodbus](https://github.com/riptideio/pymodbus/blob/master/README.rst)/BSD
+ [pyserial](https://github.com/pyserial/pyserial)/BSD

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="modbus-protocol-adapter-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 3 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 2 | Updated connector ARN for AWS Region support. Improved error logging. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="modbus-protocol-adapter-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)

# Modbus-TCP Protocol Adapter connector
<a name="modbus-tcp-connector"></a>

The Modbus-TCP Protocol Adapter [connector](connectors.md) collects data from local devices through the ModbusTCP protocol and publishes it to the selected `StreamManager` streams.

You can also use this connector with the IoT SiteWise connector and your IoT SiteWise gateway. Your gateway must supply the configuration for the connector. For more information, see [Configure a Modbus TCP source](http://docs.aws.amazon.com/iot-sitewise/latest/userguide/configure-modbus-source.html) in the IoT SiteWise user guide. 

**Note**  
This connector runs in [No container](lambda-group-config.md#no-container-mode) isolation mode, so you can deploy it to an AWS IoT Greengrass group running in a Docker container.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 3 | `arn:aws:greengrass:region::/connectors/ModbusTCPConnector/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/ModbusTCPConnector/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/ModbusTCPConnector/versions/1` | 

For information about version changes, see the [Changelog](#modbus-tcp-connector-changelog).

## Requirements
<a name="modbus-tcp-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 1 - 3 ]
+ AWS IoT Greengrass Core software v1.10.2 or later.
+ Stream manager enabled on the AWS IoT Greengrass group.
+ Java 8 installed on the core device and added to the `PATH` environment variable.

**Note**  
This connector is only available in the following AWS Regions:  
+ ap-southeast-1
+ ap-southeast-2
+ eu-central-1
+ eu-west-1
+ us-east-1
+ us-west-2
+ cn-north-1

------

## Connector Parameters
<a name="modbus-tcp-connector-param"></a>

This connector supports the following parameters:

`LocalStoragePath`  
The directory on the AWS IoT Greengrass host that the IoT SiteWise connector can write persistent data to. The default directory is `/var/sitewise`.  
Display name in the AWS IoT console: **Local storage path**  
Required: `false`  
Type: `string`  
Valid pattern: `^\s*$|\/.`

`MaximumBufferSize`  
The maximum size in GB for IoT SiteWise disk usage. The default size is 10 GB.  
Display name in the AWS IoT console: **Maximum disk buffer size**  
Required: `false`  
Type: `string`  
Valid pattern: `^\s*$|[0-9]+`

`CapabilityConfiguration`  
The set of Modbus TCP collector configurations that the connector collects data from and connects to.  
Display name in the AWS IoT console: **CapabilityConfiguration**  
Required: `false`  
Type: A well-formed JSON string that defines the set of supported feedback configurations.

The following is an example of a `CapabilityConfiguration`:

```
{
    "sources": [
        {
            "type": "ModBusTCPSource",
            "name": "SourceName1",
            "measurementDataStreamPrefix": "SourceName1_Prefix",
            "destination": {
                "type": "StreamManager",
                "streamName": "SiteWise_Stream_1",
                "streamBufferSize": 8
            },
            "endpoint": {
                "ipAddress": "127.0.0.1",
                "port": 8081,
                "unitId": 1
            },
            "propertyGroups": [
                {
                    "name": "GroupName",
                    "tagPathDefinitions": [
                        {
                            "type": "ModBusTCPAddress",
                            "tag": "TT-001",
                            "address": "30001",
                            "size": 2,
                            "srcDataType": "float",
                            "transformation": "byteWordSwap",
                            "dstDataType": "double"
                        }
                    ],
                    "scanMode": {
                        "type": "POLL",
                        "rate": 100
                    }
                }
            ]
        }
    ]
}
```

### Create Connector Example (AWS CLI)
<a name="modbus-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the Modbus-TCP Protocol Adapter connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '
{
    "Connectors": [
        {
            "Id": "MyModbusTCPConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/ModbusTCPConnector/versions/3",
            "Parameters": {
                "capability_configuration": "{\"version\":1,\"namespace\":\"iotsitewise:modbuscollector:1\",\"configuration\":\"{\"sources\":[{\"type\":\"ModBusTCPSource\",\"name\":\"SourceName1\",\"measurementDataStreamPrefix\":\"\",\"endpoint\":{\"ipAddress\":\"127.0.0.1\",\"port\":8081,\"unitId\":1},\"propertyGroups\":[{\"name\":\"PropertyGroupName\",\"tagPathDefinitions\":[{\"type\":\"ModBusTCPAddress\",\"tag\":\"TT-001\",\"address\":\"30001\",\"size\":2,\"srcDataType\":\"hexdump\",\"transformation\":\"noSwap\",\"dstDataType\":\"string\"}],\"scanMode\":{\"rate\":200,\"type\":\"POLL\"}}],\"destination\":{\"type\":\"StreamManager\",\"streamName\":\"SiteWise_Stream\",\"streamBufferSize\":10},\"minimumInterRequestDuration\":200}]}\"}"
            }
        }
    ]
}'
```
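
The doubly escaped `capability_configuration` string in the example above is easiest to produce programmatically, because the `configuration` field is itself a JSON string inside a JSON document. The following sketch serializes it with `json.dumps`; the `version` and `namespace` values mirror the CLI example, and the source field values are illustrative:

```python
import json

# Illustrative source configuration (see the CapabilityConfiguration
# example earlier in this section for the full set of fields).
sources = {
    "sources": [
        {
            "type": "ModBusTCPSource",
            "name": "SourceName1",
            "measurementDataStreamPrefix": "",
            "endpoint": {"ipAddress": "127.0.0.1", "port": 8081, "unitId": 1},
            "destination": {
                "type": "StreamManager",
                "streamName": "SiteWise_Stream",
                "streamBufferSize": 10
            }
        }
    ]
}

# The configuration field is itself a JSON string, so the document is
# serialized twice: once for the sources object, once for the wrapper.
capability_configuration = json.dumps({
    "version": 1,
    "namespace": "iotsitewise:modbuscollector:1",
    "configuration": json.dumps(sources),
})
```

The resulting string can then be passed as the parameter value in the `create-connector-definition` call.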

**Note**  
The Lambda function in this connector has a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

## Input data
<a name="modbus-tcp-connector-data-input"></a>

This connector doesn't accept MQTT messages as input data.

## Output data
<a name="modbus-tcp-connector-data-output"></a>

This connector publishes data to `StreamManager`. You must configure the destination message stream. The output messages are of the following structure:

```
{
    "alias": "string",
    "messages": [
        {
            "name": "string",
            "value": boolean|double|integer|string,
            "timestamp": number,
            "quality": "string"
        }
    ]
}
```
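
A consumer reading from the destination stream can unpack messages of this shape. A minimal sketch (the field names follow the structure above; the helper name and flattened tuple layout are illustrative):

```python
import json

def flatten_stream_message(raw):
    """Flatten one Modbus-TCP connector stream message into
    (alias, name, value, timestamp, quality) tuples."""
    record = json.loads(raw)
    return [
        (record["alias"], m["name"], m["value"], m["timestamp"], m.get("quality"))
        for m in record["messages"]
    ]
```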

## Licenses
<a name="modbus-tcp-connector-license"></a>

The Modbus-TCP Protocol Adapter connector includes the following third-party software/licensing:
+ [Digital Petri](https://github.com/digitalpetri/modbus) Modbus

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="modbus-tcp-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | Date | 
| --- | --- | --- | 
| 3 (recommended) | This version contains bug fixes. | December 22, 2021 | 
| 2 | Added support for ASCII, UTF8, and ISO8859 encoded source strings. | May 24, 2021 | 
| 1 | Initial release. | December 15, 2020 | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="modbus-tcp-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)

# Raspberry Pi GPIO connector
<a name="raspberrypi-gpio-connector"></a>

**Warning**  <a name="connectors-extended-life-phase-warning"></a>
This connector has moved into the *extended life phase*, and AWS IoT Greengrass won't release updates that provide features, enhancements to existing features, security patches, or bug fixes. For more information, see [AWS IoT Greengrass Version 1 maintenance policy](maintenance-policy.md).

The Raspberry Pi GPIO [connector](connectors.md) controls general-purpose input/output (GPIO) pins on a Raspberry Pi core device.

This connector polls input pins at a specified interval and publishes state changes to MQTT topics. It also accepts read and write requests as MQTT messages from user-defined Lambda functions. Write requests are used to set the pin to high or low voltage.

The connector provides parameters that you use to designate input and output pins. This behavior is configured before group deployment. It can't be changed at runtime.
+ Input pins can be used to receive data from peripheral devices.
+ Output pins can be used to control peripherals or send data to peripherals.

You can use this connector for many scenarios, such as:
+ Controlling green, yellow, and red LED lights for a traffic light.
+ Controlling a fan (attached to an electrical relay) based on data from a humidity sensor.
+ Alerting employees in a retail store when customers press a button.
+ Using a smart light switch to control other IoT devices.

**Note**  
This connector is not suitable for applications that have real-time requirements. Events with short durations might be missed.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 3 | `arn:aws:greengrass:region::/connectors/RaspberryPiGPIO/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/RaspberryPiGPIO/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/RaspberryPiGPIO/versions/1` | 

For information about version changes, see the [Changelog](#raspberrypi-gpio-connector-changelog).

## Requirements
<a name="raspberrypi-gpio-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 3 ]
+ <a name="conn-req-ggc-v1.9.3"></a>AWS IoT Greengrass Core software v1.9.3 or later.
+ [Python](https://www.python.org/) version 3.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-gpio-req-pin-seq"></a>Raspberry Pi 4 Model B, or Raspberry Pi 3 Model B/B+. You must know the pin sequence of your Raspberry Pi. For more information, see [GPIO Pin sequence](#raspberrypi-gpio-connector-req-pins).
+ <a name="conn-gpio-req-dev-gpiomem-resource"></a>A [local device resource](access-local-resources.md) in the Greengrass group that points to `/dev/gpiomem` on the Raspberry Pi. If you create the resource in the console, you must select the **Automatically add OS group permissions of the Linux group that owns the resource** option. In the API, set the `GroupOwnerSetting.AutoAddGroupOwner` property to `true`.
+ <a name="conn-gpio-req-rpi-gpio"></a>The [RPi.GPIO](https://sourceforge.net/p/raspberry-gpio-python/wiki/Home/) module installed on the Raspberry Pi. In Raspbian, this module is installed by default. You can use the following command to reinstall it:

  ```
  sudo pip install RPi.GPIO
  ```

------
#### [ Versions 1 - 2 ]
+ <a name="conn-req-ggc-v1.7.0"></a>AWS IoT Greengrass Core software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-gpio-req-pin-seq"></a>Raspberry Pi 4 Model B, or Raspberry Pi 3 Model B/B+. You must know the pin sequence of your Raspberry Pi. For more information, see [GPIO Pin sequence](#raspberrypi-gpio-connector-req-pins).
+ <a name="conn-gpio-req-dev-gpiomem-resource"></a>A [local device resource](access-local-resources.md) in the Greengrass group that points to `/dev/gpiomem` on the Raspberry Pi. If you create the resource in the console, you must select the **Automatically add OS group permissions of the Linux group that owns the resource** option. In the API, set the `GroupOwnerSetting.AutoAddGroupOwner` property to `true`.
+ <a name="conn-gpio-req-rpi-gpio"></a>The [RPi.GPIO](https://sourceforge.net/p/raspberry-gpio-python/wiki/Home/) module installed on the Raspberry Pi. In Raspbian, this module is installed by default. You can use the following command to reinstall it:

  ```
  sudo pip install RPi.GPIO
  ```

------

### GPIO Pin sequence
<a name="raspberrypi-gpio-connector-req-pins"></a>

The Raspberry Pi GPIO connector references GPIO pins by the numbering scheme of the underlying System on Chip (SoC), not by the physical layout of GPIO pins. The physical ordering of pins might vary across Raspberry Pi versions. For more information, see [GPIO](https://www.raspberrypi.org/documentation/usage/gpio/) in the Raspberry Pi documentation.

The connector can't validate that the input and output pins you configure map correctly to the underlying hardware of your Raspberry Pi. If the pin configuration is invalid, the connector returns a runtime error when it attempts to start on the device. To resolve this issue, reconfigure the connector and then redeploy.

**Note**  
Make sure that peripherals for GPIO pins are properly wired to prevent component damage.

## Connector Parameters
<a name="raspberrypi-gpio-connector-param"></a>

This connector provides the following parameters:

`InputGpios`  
A comma-separated list of GPIO pin numbers to configure as inputs. Optionally append `U` to set a pin's pull-up resistor, or `D` to set the pull-down resistor. Example: `"5,6U,7D"`.  
Display name in the AWS IoT console: **Input GPIO pins**  
Required: `false`. You must specify input pins, output pins, or both.  
Type: `string`  
Valid pattern: `^$|^[0-9]+[UD]?(,[0-9]+[UD]?)*$`

`InputPollPeriod`  
The interval (in milliseconds) between each polling operation, which checks input GPIO pins for state changes. The minimum value is 1.  
This value depends on your scenario and the type of devices that are polled. For example, a value of `50` should be fast enough to detect a button press.  
Display name in the AWS IoT console: **Input GPIO polling period**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|^[1-9][0-9]*$`

`OutputGpios`  
A comma-separated list of GPIO pin numbers to configure as outputs. Optionally append `H` to set a high state (1), or `L` to set a low state (0). Example: `"8H,9,27L"`.  
Display name in the AWS IoT console: **Output GPIO pins**  
Required: `false`. You must specify input pins, output pins, or both.  
Type: `string`  
Valid pattern: `^$|^[0-9]+[HL]?(,[0-9]+[HL]?)*$`

`GpioMem-ResourceId`  
The ID of the local device resource that represents `/dev/gpiomem`.  
This connector is granted read-write access to the resource.  
Display name in the AWS IoT console: **Resource for /dev/gpiomem device**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`
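As a sketch, you can pre-validate a parameter set against the patterns documented above before you create the connector definition. The `validate_parameters` helper shown here is illustrative; it is not part of the AWS CLI or any AWS SDK.

```python
import re

# The valid patterns documented for each connector parameter.
PATTERNS = {
    "InputGpios": r"^$|^[0-9]+[UD]?(,[0-9]+[UD]?)*$",
    "InputPollPeriod": r"^$|^[1-9][0-9]*$",
    "OutputGpios": r"^$|^[0-9]+[HL]?(,[0-9]+[HL]?)*$",
    "GpioMem-ResourceId": r".+",
}

def validate_parameters(params):
    """Return the names of parameters whose values fail their valid pattern."""
    return [
        name
        for name, pattern in PATTERNS.items()
        if not re.fullmatch(pattern, params.get(name, ""))
    ]

params = {
    "GpioMem-ResourceId": "my-gpio-resource",
    "InputGpios": "5,6U,7D",
    "InputPollPeriod": "50",
    "OutputGpios": "8H,9,27L",
}
print(validate_parameters(params))  # [] -- all values are valid
```

Remember that in addition to matching these patterns, you must specify input pins, output pins, or both.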

### Create Connector Example (AWS CLI)
<a name="raspberrypi-gpio-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the Raspberry Pi GPIO connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyRaspberryPiGPIOConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/RaspberryPiGPIO/versions/3",
            "Parameters": {
                "GpioMem-ResourceId": "my-gpio-resource",
                "InputGpios": "5,6U,7D",
                "InputPollPeriod": "50",
                "OutputGpios": "8H,9,27L"
            }
        }
    ]
}'
```

**Note**  
The Lambda function in this connector has a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="raspberrypi-gpio-connector-data-input"></a>

This connector accepts read or write requests for GPIO pins on two MQTT topics.
+ Read requests on the `gpio/+/+/read` topic.
+ Write requests on the `gpio/+/+/write` topic.

To publish to these topics, replace the `+` wildcards with the core thing name and the target pin number, respectively. For example:

```
gpio/core-thing-name/gpio-number/read
```

**Note**  
Currently, when you create a subscription that uses the Raspberry Pi GPIO connector, you must specify a value for at least one of the `+` wildcards in the topic.

**Topic filter:** `gpio/+/+/read`  
Use this topic to direct the connector to read the state of the GPIO pin that's specified in the topic.  
The connector publishes the response to the corresponding output topic (for example, `gpio/core-thing-name/gpio-number/state`).    
**Message properties**  
None. Messages that are sent to this topic are ignored.

**Topic filter:** `gpio/+/+/write`  
Use this topic to send write requests to a GPIO pin. This directs the connector to set the GPIO pin that's specified in the topic to a low or high voltage.  
+ `0` sets the pin to low voltage.
+ `1` sets the pin to high voltage.
The connector publishes the response to the corresponding output `/state` topic (for example, `gpio/core-thing-name/gpio-number/state`).    
**Message properties**  
The value `0` or `1`, as an integer or string.  
**Example input**  

```
0
```

## Output data
<a name="raspberrypi-gpio-connector-data-output"></a>

This connector publishes data to two topics:
+ High or low state changes on the `gpio/+/+/state` topic.
+ Errors on the `gpio/+/error` topic.

**Topic filter:** `gpio/+/+/state`  
Use this topic to listen for state changes on input pins and responses for read requests. The connector returns the string `"0"` if the pin is in a low state, or `"1"` if it's in a high state.  
When publishing to this topic, the connector replaces the `+` wildcards with the core thing name and the target pin, respectively. For example:  

```
gpio/core-thing-name/gpio-number/state
```
Currently, when you create a subscription that uses the Raspberry Pi GPIO connector, you must specify a value for at least one of the `+` wildcards in the topic.  
**Example output**  

```
0
```

**Topic filter:** `gpio/+/error`  
Use this topic to listen for errors. The connector publishes to this topic as a result of an invalid request (for example, when a state change is requested on an input pin).  
When publishing to this topic, the connector replaces the `+` wildcard with the core thing name.    
**Example output**  

```
{
   "topic": "gpio/my-core-thing/22/write",
   "error": "Invalid GPIO operation",
   "long_description": "GPIO 22 is configured as an INPUT GPIO. Write operations are not permitted."
 }
```
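A subscriber can distinguish the two output topic shapes by parsing the topic string. The following `parse_gpio_topic` helper is an illustrative sketch, not part of the connector.

```python
def parse_gpio_topic(topic):
    """Classify a Raspberry Pi GPIO connector output topic.

    Returns (core_thing_name, gpio_number, kind), where kind is
    'state' or 'error'. gpio_number is None for error topics, which
    don't name a specific pin.
    """
    parts = topic.split("/")
    if len(parts) == 4 and parts[0] == "gpio" and parts[3] == "state":
        return parts[1], int(parts[2]), "state"
    if len(parts) == 3 and parts[0] == "gpio" and parts[2] == "error":
        return parts[1], None, "error"
    raise ValueError("not a GPIO connector output topic: " + topic)

print(parse_gpio_topic("gpio/my-core-thing/22/state"))
# ('my-core-thing', 22, 'state')
```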

## Usage Example
<a name="raspberrypi-gpio-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
If you use other Python runtimes, you can create a symlink from Python 3.x to Python 3.7.
The [Get started with connectors (console)](connectors-console.md) and [Get started with connectors (CLI)](connectors-cli.md) topics contain detailed steps that show you how to configure and deploy an example Twilio Notifications connector.

1. Make sure you meet the [requirements](#raspberrypi-gpio-connector-req) for the connector.

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#raspberrypi-gpio-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. <a name="connectors-setup-device-resource"></a>Add the required local device resource and grant read/write access to the Lambda function.

   1. Add the connector and configure its [parameters](#raspberrypi-gpio-connector-param).

   1. Add subscriptions that allow the connector to receive [input data](#raspberrypi-gpio-connector-data-input) and send [output data](#raspberrypi-gpio-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.

### Example
<a name="raspberrypi-gpio-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector. This example sends read requests for a set of input GPIO pins. It shows how to construct topics using the core thing name and pin number.

```
import greengrasssdk
import json
import os

iot_client = greengrasssdk.client('iot-data')
INPUT_GPIOS = [6, 17, 22]

thingName = os.environ['AWS_IOT_THING_NAME']

def get_read_topic(gpio_num):
    return '/'.join(['gpio', thingName, str(gpio_num), 'read'])

def get_write_topic(gpio_num):
    return '/'.join(['gpio', thingName, str(gpio_num), 'write'])

def send_message_to_connector(topic, message=''):
    iot_client.publish(topic=topic, payload=str(message))

def set_gpio_state(gpio, state):
    send_message_to_connector(get_write_topic(gpio), str(state))

def read_gpio_state(gpio):
    send_message_to_connector(get_read_topic(gpio))

def publish_basic_message():
    for i in INPUT_GPIOS:
        read_gpio_state(i)

publish_basic_message()

def lambda_handler(event, context):
    return
```

## Licenses
<a name="raspberrypi-gpio-connector-license"></a>

The Raspberry Pi GPIO connector includes the following third-party software/licensing:
+ [RPi.GPIO](https://pypi.org/project/RPi.GPIO/)/MIT

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="raspberrypi-gpio-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 3 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 2 | Updated connector ARN for AWS Region support. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="raspberrypi-gpio-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+ [GPIO](https://www.raspberrypi.org/documentation/usage/gpio/) in the Raspberry Pi documentation

# Serial Stream connector
<a name="serial-stream-connector"></a>

**Warning**  <a name="connectors-extended-life-phase-warning"></a>
This connector has moved into the *extended life phase*, and AWS IoT Greengrass won't release updates that provide features, enhancements to existing features, security patches, or bug fixes. For more information, see [AWS IoT Greengrass Version 1 maintenance policy](maintenance-policy.md).

The Serial Stream [connector](connectors.md) reads and writes to a serial port on an AWS IoT Greengrass core device.

This connector supports two modes of operation:
+ **Read-On-Demand**. Receives read and write requests on MQTT topics and publishes the response of the read operation or the status of the write operation.
+ **Polling-Read**. Reads from the serial port at regular intervals. This mode also supports Read-On-Demand requests.

**Note**  
Read requests are limited to a maximum read length of 63994 bytes. Write requests are limited to a maximum data length of 128000 bytes.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 3 | `arn:aws:greengrass:region::/connectors/SerialStream/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/SerialStream/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/SerialStream/versions/1` | 

For information about version changes, see the [Changelog](#serial-stream-connector-changelog).

## Requirements
<a name="serial-stream-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 3 ]
+ <a name="conn-req-ggc-v1.9.3"></a>AWS IoT Greengrass Core software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="conn-serial-stream-req-serial-port-resource"></a>A [local device resource](access-local-resources.md) in the Greengrass group that points to the target serial port.
**Note**  
Before you deploy this connector, we recommend that you set up the serial port and verify that you can read and write to it. 

------
#### [ Versions 1 - 2 ]
+ <a name="conn-req-ggc-v1.7.0"></a>AWS IoT Greengrass Core software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-serial-stream-req-serial-port-resource"></a>A [local device resource](access-local-resources.md) in the Greengrass group that points to the target serial port.
**Note**  
Before you deploy this connector, we recommend that you set up the serial port and verify that you can read and write to it. 

------

## Connector Parameters
<a name="serial-stream-connector-param"></a>

This connector provides the following parameters:

`BaudRate`  
The baud rate of the serial connection.  
Display name in the AWS IoT console: **Baud rate**  
Required: `true`  
Type: `string`  
Valid values: `110, 300, 600, 1200, 2400, 4800, 9600, 14400, 19200, 28800, 38400, 56000, 57600, 115200, 230400`  
Valid pattern: `^110$|^300$|^600$|^1200$|^2400$|^4800$|^9600$|^14400$|^19200$|^28800$|^38400$|^56000$|^57600$|^115200$|^230400$`

`Timeout`  
The timeout (in seconds) for a read operation.  
Display name in the AWS IoT console: **Timeout**  
Required: `true`  
Type: `string`  
Valid values: `1 - 59`  
Valid pattern: `^([1-9]|[1-5][0-9])$`

`SerialPort`  
The absolute path to the physical serial port on the device. This is the source path that's specified for the local device resource.  
Display name in the AWS IoT console: **Serial port**  
Required: `true`  
Type: `string`  
Valid pattern: `[/a-zA-Z0-9_-]+`

`SerialPort-ResourceId`  
The ID of the local device resource that represents the physical serial port.  
This connector is granted read-write access to the resource.  
Display name in the AWS IoT console: **Serial port resource**  
Required: `true`  
Type: `string`  
Valid pattern: `[a-zA-Z0-9_-]+`

`PollingRead`  
Sets the read mode: Polling-Read or Read-On-Demand.  
+ For Polling-Read mode, specify `true`. In this mode, the `PollingReadInterval`, `PollingReadType`, and `PollingReadLength` properties are required.
+ For Read-On-Demand mode, specify `false`. In this mode, the type and length values are specified in the read request.
Display name in the AWS IoT console: **Read mode**  
Required: `true`  
Type: `string`  
Valid values: `true, false`  
Valid pattern: `^([Tt][Rr][Uu][Ee]|[Ff][Aa][Ll][Ss][Ee])$`

`PollingReadLength`  
The length of data (in bytes) to read in each polling read operation. This applies only when using Polling-Read mode.  
Display name in the AWS IoT console: **Polling read length**  
Required: `false`. This property is required when `PollingRead` is `true`.  
Type: `string`  
Valid pattern: `^(|[1-9][0-9]{0,3}|[1-5][0-9]{4}|6[0-2][0-9]{3}|63[0-8][0-9]{2}|639[0-8][0-9]|6399[0-4])$`

`PollingReadInterval`  
The interval (in seconds) at which the polling read takes place. This applies only when using Polling-Read mode.  
Display name in the AWS IoT console: **Polling read interval**  
Required: `false`. This property is required when `PollingRead` is `true`.  
Type: `string`  
Valid values: 1 - 999  
Valid pattern: `^(|[1-9]|[1-9][0-9]|[1-9][0-9][0-9])$`

`PollingReadType`  
The type of data that the polling thread reads. This applies only when using Polling-Read mode.  
Display name in the AWS IoT console: **Polling read type**  
Required: `false`. This property is required when `PollingRead` is `true`.  
Type: `string`  
Valid values: `ascii, hex`  
Valid pattern: `^(|[Aa][Ss][Cc][Ii][Ii]|[Hh][Ee][Xx])$`

`RtsCts`  
Indicates whether to enable the RTS/CTS flow control. The default value is `false`. For more information, see [RTS, CTS, and RTR](https://en.wikipedia.org/wiki/RS-232#RTS,_CTS,_and_RTR).   
Display name in the AWS IoT console: **RTS/CTS flow control**  
Required: `false`  
Type: `string`  
Valid values: `true, false`  
Valid pattern: `^(|[Tt][Rr][Uu][Ee]|[Ff][Aa][Ll][Ss][Ee])$`

`XonXoff`  
Indicates whether to enable the software flow control. The default value is `false`. For more information, see [Software flow control](https://en.wikipedia.org/wiki/Software_flow_control).  
Display name in the AWS IoT console: **Software flow control**  
Required: `false`  
Type: `string`  
Valid values: `true, false`  
Valid pattern: `^(|[Tt][Rr][Uu][Ee]|[Ff][Aa][Ll][Ss][Ee])$`

`Parity`  
The parity of the serial port. The default value is `N`. For more information, see [Parity](https://en.wikipedia.org/wiki/Serial_port#Parity).   
Display name in the AWS IoT console: **Serial port parity**  
Required: `false`  
Type: `string`  
Valid values: `N, E, O, S, M`  
Valid pattern: `^(|[NEOSMneosm])$`
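Because three of these properties become required only when `PollingRead` is `true`, it can help to check a parameter set before deployment. The following helper is an illustrative sketch, not part of the connector or any AWS SDK; it assumes parameter values are passed as strings, as documented above.

```python
def check_polling_config(params):
    """Return the polling properties that are missing when PollingRead is enabled."""
    required = ["PollingReadLength", "PollingReadInterval", "PollingReadType"]
    # The connector accepts true/false in any letter case.
    if params.get("PollingRead", "").lower() != "true":
        return []  # Read-On-Demand mode: no polling properties required
    return [name for name in required if not params.get(name)]

config = {
    "PollingRead": "true",
    "PollingReadLength": "30",
    "PollingReadInterval": "30",
}
print(check_polling_config(config))  # ['PollingReadType']
```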

### Create Connector Example (AWS CLI)
<a name="serial-stream-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the Serial Stream connector. It configures the connector for Polling-Read mode.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MySerialStreamConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/SerialStream/versions/3",
            "Parameters": {
                "BaudRate" : "9600",
                "Timeout" : "25",
                "SerialPort" : "/dev/serial1",
                "SerialPort-ResourceId" : "my-serial-port-resource",
                "PollingRead" : "true",
                "PollingReadLength" : "30",
                "PollingReadInterval" : "30",
                "PollingReadType" : "hex"
            }
        }
    ]
}'
```

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="serial-stream-connector-data-input"></a>

This connector accepts read or write requests for serial ports on two MQTT topics. Input messages must be in JSON format.
+ Read requests on the `serial/+/read/#` topic.
+ Write requests on the `serial/+/write/#` topic.

To publish to these topics, replace the `+` wildcard with the core thing name and the `#` wildcard with the path to the serial port. For example:

```
serial/core-thing-name/read/dev/serial-port
```

**Topic filter:** `serial/+/read/#`  
Use this topic to send on-demand read requests to a serial port. Read requests are limited to a maximum read length of 63994 bytes.    
**Message properties**    
`readLength`  
The length of data to read from the serial port.  
Required: `true`  
Type: `string`  
Valid pattern: `^[1-9][0-9]*$`  
`type`  
The type of data to read.  
Required: `true`  
Type: `string`  
Valid values: `ascii, hex`  
Valid pattern: `(?i)^(ascii|hex)$`  
`id`  
An arbitrary ID for the request. This property is used to map an input request to an output response.  
Required: `false`  
Type: `string`  
Valid pattern: `.+`  
**Example input**  

```
{
    "readLength": "30",
    "type": "ascii",
    "id": "abc123"
}
```

**Topic filter:** `serial/+/write/#`  
Use this topic to send write requests to a serial port. Write requests are limited to a maximum data length of 128000 bytes.    
**Message properties**    
`data`  
The string to write to the serial port.  
Required: `true`  
Type: `string`  
Valid pattern: `.+`  
`type`  
The type of data to write.  
Required: `true`  
Type: `string`  
Valid values: `ascii, hex`  
Valid pattern: `^(ascii|hex|ASCII|HEX)$`  
`id`  
An arbitrary ID for the request. This property is used to map an input request to an output response.  
Required: `false`  
Type: `string`  
Valid pattern: `.+`  
**Example input: ASCII request**  

```
{
    "data": "random serial data",
    "type": "ascii",
    "id": "abc123"
}
```  
**Example input: hex request**  

```
{
    "data": "base64 encoded data",
    "type": "hex",
    "id": "abc123"
}
```
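The following sketch builds both request types; note that a `hex` write carries Base64-encoded bytes in the `data` property, while an `ascii` write carries the string as-is. The `build_write_request` helper name is illustrative, not part of the connector.

```python
import base64
import json

def build_write_request(data, request_id, as_hex=False):
    """Build a Serial Stream write request payload.

    ascii requests pass the string through; hex requests expect raw
    bytes and Base64 encode them, as the connector requires.
    """
    if as_hex:
        payload = base64.b64encode(data).decode("ascii")
        return json.dumps({"data": payload, "type": "hex", "id": request_id})
    return json.dumps({"data": data, "type": "ascii", "id": request_id})

print(build_write_request("TEST", "abc123"))
print(build_write_request(b"\x01\x02", "abc124", as_hex=True))
```

Publish the resulting string to the `serial/core-thing-name/write/path-to-port` topic for your core.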

## Output data
<a name="serial-stream-connector-data-output"></a>

The connector publishes output data on two topics:
+ Status information from the connector on the `serial/+/status/#` topic.
+ Responses from read requests on the `serial/+/read_response/#` topic.

When publishing to these topics, the connector replaces the `+` wildcard with the core thing name and the `#` wildcard with the path to the serial port. For example:

```
serial/core-thing-name/status/dev/serial-port
```

**Topic filter:** `serial/+/status/#`  
Use this topic to listen for the status of read and write requests. If an `id` property is included in the request, it's returned in the response.    
**Example output: Success**  

```
{
    "response": {
        "status": "success"
    },
    "id": "abc123"
}
```  
**Example output: Failure**  
A failure response includes an `error_message` property that describes the error or timeout encountered while performing the read or write operation.  

```
{
    "response": {
        "status": "fail",
        "error_message": "Could not write to port"
    },
    "id": "abc123"
}
```

**Topic filter:** `serial/+/read_response/#`  
Use this topic to receive response data from a read operation. The response data is Base64 encoded if the type is `hex`.    
**Example output**  

```
{
    "data": "output of serial read operation",
    "id": "abc123"
}
```
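Because `hex` response data is Base64 encoded, a consumer must decode it before use. The following `decode_read_response` helper is an illustrative sketch, not part of the connector.

```python
import base64
import json

def decode_read_response(payload, request_type):
    """Extract the data from a read_response message.

    Returns raw bytes for hex requests (after Base64 decoding) and the
    string as-is for ascii requests.
    """
    message = json.loads(payload)
    if request_type == "hex":
        return base64.b64decode(message["data"])
    return message["data"]

response = '{"data": "AQI=", "id": "abc123"}'
print(decode_read_response(response, "hex"))  # b'\x01\x02'
```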

## Usage Example
<a name="serial-stream-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
If you use other Python runtimes, you can create a symlink from Python 3.x to Python 3.7.
The [Get started with connectors (console)](connectors-console.md) and [Get started with connectors (CLI)](connectors-cli.md) topics contain detailed steps that show you how to configure and deploy an example Twilio Notifications connector.

1. Make sure you meet the [requirements](#serial-stream-connector-req) for the connector.

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#serial-stream-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. <a name="connectors-setup-device-resource"></a>Add the required local device resource and grant read/write access to the Lambda function.

   1. Add the connector to your group and configure its [parameters](#serial-stream-connector-param).

   1. Add subscriptions to the group that allow the connector to receive [input data](#serial-stream-connector-data-input) and send [output data](#serial-stream-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.

### Example
<a name="serial-stream-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector.

```
import greengrasssdk
import json

TOPIC_REQUEST = 'serial/CORE_THING_NAME/write/dev/serial1'

# Create a Greengrass Core SDK client.
iot_client = greengrasssdk.client('iot-data')

def create_serial_stream_request():
    # Request to write the ASCII string "TEST" to the serial port.
    request = {
        "data": "TEST",
        "type": "ascii",
        "id": "abc123"
    }
    return request

def publish_basic_request():
    iot_client.publish(payload=json.dumps(create_serial_stream_request()), topic=TOPIC_REQUEST)

# The function is long-lived, so this runs once when the container starts.
publish_basic_request()

def lambda_handler(event, context):
    return
```

## Licenses
<a name="serial-stream-connector-license"></a>

The Serial Stream connector includes the following third-party software/licensing:
+ [pyserial](https://github.com/pyserial/pyserial)/BSD

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="serial-stream-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 3 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 2 | Updated connector ARN for AWS Region support. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="serial-stream-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)

# ServiceNow MetricBase Integration connector
<a name="servicenow-connector"></a>

**Warning**  <a name="connectors-extended-life-phase-warning"></a>
This connector has moved into the *extended life phase*, and AWS IoT Greengrass won't release updates that provide features, enhancements to existing features, security patches, or bug fixes. For more information, see [AWS IoT Greengrass Version 1 maintenance policy](maintenance-policy.md).

The ServiceNow MetricBase Integration [connector](connectors.md) publishes time series metrics from Greengrass devices to ServiceNow MetricBase. This allows you to store, analyze, and visualize time series data from the Greengrass core environment, and act on local events.

This connector receives time series data on an MQTT topic, and publishes the data to the ServiceNow API at regular intervals.

You can use this connector to support scenarios such as:
+ Create threshold-based alerts and alarms based on time series data collected from Greengrass devices.
+ Use time series data from Greengrass devices with custom applications built on the ServiceNow platform.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 4 | `arn:aws:greengrass:region::/connectors/ServiceNowMetricBaseIntegration/versions/4` | 
| 3 | `arn:aws:greengrass:region::/connectors/ServiceNowMetricBaseIntegration/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/ServiceNowMetricBaseIntegration/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/ServiceNowMetricBaseIntegration/versions/1` | 

For information about version changes, see the [Changelog](#servicenow-connector-changelog).

## Requirements
<a name="servicenow-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 3 - 4 ]
+ <a name="conn-req-ggc-v1.9.3-secrets"></a>AWS IoT Greengrass Core software v1.9.3 or later. AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs).
**Note**  
This requirement includes allowing access to your Secrets Manager secrets. If you're using the default Greengrass service role, Greengrass has permission to get the values of secrets with names that start with *greengrass-*.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="conn-servicenow-req-servicenow-account"></a>A ServiceNow account with an activated subscription to MetricBase. In addition, a metric and metric table must be created in the account. For more information, see [MetricBase](https://docs.servicenow.com/bundle/london-servicenow-platform/page/administer/metricbase/concept/metricbase.html) in the ServiceNow documentation.
+ <a name="conn-servicenow-req-secret"></a>A text type secret in AWS Secrets Manager that stores the user name and password to log in to your ServiceNow instance with basic authentication. The secret must contain "user" and "password" keys with corresponding values. For more information, see [Creating a basic secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) in the *AWS Secrets Manager User Guide*.
+ A secret resource in the Greengrass group that references the Secrets Manager secret. For more information, see [Deploy secrets to the AWS IoT Greengrass core](secrets.md).
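
The text type secret described above can be created with the AWS CLI. The following is a minimal sketch; the secret name and credential values are placeholders, and the name starts with `greengrass-` so that the default Greengrass service role can read it:

```shell
# Create a secret whose value is a JSON object with "user" and "password" keys.
aws secretsmanager create-secret \
    --name greengrass-servicenow-auth \
    --secret-string '{"user":"my-servicenow-user","password":"my-servicenow-password"}'
```

The command returns the secret ARN, which you reference from the secret resource in your Greengrass group.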

------
#### [ Versions 1 - 2 ]
+ <a name="conn-req-ggc-v1.7.0-secrets"></a>AWS IoT Greengrass Core software v1.7 or later. AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs).
**Note**  
This requirement includes allowing access to your Secrets Manager secrets. If you're using the default Greengrass service role, Greengrass has permission to get the values of secrets with names that start with *greengrass-*.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-servicenow-req-servicenow-account"></a>A ServiceNow account with an activated subscription to MetricBase. In addition, a metric and metric table must be created in the account. For more information, see [MetricBase](https://docs.servicenow.com/bundle/london-servicenow-platform/page/administer/metricbase/concept/metricbase.html) in the ServiceNow documentation.
+ <a name="conn-servicenow-req-secret"></a>A text type secret in AWS Secrets Manager that stores the user name and password to log in to your ServiceNow instance with basic authentication. The secret must contain "user" and "password" keys with corresponding values. For more information, see [Creating a basic secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) in the *AWS Secrets Manager User Guide*.
+ A secret resource in the Greengrass group that references the Secrets Manager secret. For more information, see [Deploy secrets to the AWS IoT Greengrass core](secrets.md).

------

## Connector Parameters
<a name="servicenow-connector-param"></a>

This connector provides the following parameters:

------
#### [ Version 4 ]

`PublishInterval`  <a name="service-now-PublishInterval"></a>
The maximum number of seconds to wait between publish events to ServiceNow. The maximum value is 900.  
The connector publishes to ServiceNow when `PublishBatchSize` is reached or `PublishInterval` expires.  
Display name in the AWS IoT console: **Publish interval in seconds**  
Required: `true`  
Type: `string`  
Valid values: `1 - 900`  
Valid pattern: `[1-9]|[1-9]\d|[1-9]\d\d|900`

`PublishBatchSize`  <a name="service-now-PublishBatchSize"></a>
The maximum number of metric values that can be batched before they are published to ServiceNow.  
The connector publishes to ServiceNow when `PublishBatchSize` is reached or `PublishInterval` expires.  
Display name in the AWS IoT console: **Publish batch size**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`InstanceName`  <a name="service-now-InstanceName"></a>
The name of the instance used to connect to ServiceNow.  
Display name in the AWS IoT console: **Name of ServiceNow instance**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`DefaultTableName`  <a name="service-now-DefaultTableName"></a>
The name of the table that contains the `GlideRecord` associated with the time series MetricBase database. The `table` property in the input message payload can be used to override this value.  
Display name in the AWS IoT console: **Name of the table to contain the metric**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`MaxMetricsToRetain`  <a name="service-now-MaxMetricsToRetain"></a>
The maximum number of metrics to save in memory before they are replaced with new metrics.  
This limit applies when there's no connection to the internet and the connector starts to buffer the metrics to publish later. When the buffer is full, the oldest metrics are replaced by new metrics.  
Metrics are not saved if the host process for the connector is interrupted. For example, this can happen during group deployment or when the device restarts.
This value should be greater than the batch size and large enough to hold messages based on the incoming rate of the MQTT messages.  
Display name in the AWS IoT console: **Maximum metrics to retain in memory**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`AuthSecretArn`  <a name="service-now-AuthSecretArn"></a>
The secret in AWS Secrets Manager that stores the ServiceNow user name and password. This must be a text type secret. The secret must contain "user" and "password" keys with corresponding values.  
Display name in the AWS IoT console: **ARN of auth secret**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:secretsmanager:[a-z0-9\-]+:[0-9]{12}:secret:([a-zA-Z0-9\\]+/)*[a-zA-Z0-9/_+=,.@\-]+-[a-zA-Z0-9]+`

`AuthSecretArn-ResourceId`  <a name="service-now-AuthSecretArn-ResourceId"></a>
The secret resource in the group that references the Secrets Manager secret for the ServiceNow credentials.  
Display name in the AWS IoT console: **Auth token resource**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`IsolationMode`  <a name="IsolationMode"></a>
The [containerization](connectors.md#connector-containerization) mode for this connector. The default is `GreengrassContainer`, which means that the connector runs in an isolated runtime environment inside the AWS IoT Greengrass container.  
The default containerization setting for the group does not apply to connectors.
Display name in the AWS IoT console: **Container isolation mode**  
Required: `false`  
Type: `string`  
Valid values: `GreengrassContainer` or `NoContainer`  
Valid pattern: `^NoContainer$|^GreengrassContainer$`

------
#### [ Version 1 - 3 ]

`PublishInterval`  <a name="service-now-PublishInterval"></a>
The maximum number of seconds to wait between publish events to ServiceNow. The maximum value is 900.  
The connector publishes to ServiceNow when `PublishBatchSize` is reached or `PublishInterval` expires.  
Display name in the AWS IoT console: **Publish interval in seconds**  
Required: `true`  
Type: `string`  
Valid values: `1 - 900`  
Valid pattern: `[1-9]|[1-9]\d|[1-9]\d\d|900`

`PublishBatchSize`  <a name="service-now-PublishBatchSize"></a>
The maximum number of metric values that can be batched before they are published to ServiceNow.  
The connector publishes to ServiceNow when `PublishBatchSize` is reached or `PublishInterval` expires.  
Display name in the AWS IoT console: **Publish batch size**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`InstanceName`  <a name="service-now-InstanceName"></a>
The name of the instance used to connect to ServiceNow.  
Display name in the AWS IoT console: **Name of ServiceNow instance**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`DefaultTableName`  <a name="service-now-DefaultTableName"></a>
The name of the table that contains the `GlideRecord` associated with the time series MetricBase database. The `table` property in the input message payload can be used to override this value.  
Display name in the AWS IoT console: **Name of the table to contain the metric**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`MaxMetricsToRetain`  <a name="service-now-MaxMetricsToRetain"></a>
The maximum number of metrics to save in memory before they are replaced with new metrics.  
This limit applies when there's no connection to the internet and the connector starts to buffer the metrics to publish later. When the buffer is full, the oldest metrics are replaced by new metrics.  
Metrics are not saved if the host process for the connector is interrupted. For example, this can happen during group deployment or when the device restarts.
This value should be greater than the batch size and large enough to hold messages based on the incoming rate of the MQTT messages.  
Display name in the AWS IoT console: **Maximum metrics to retain in memory**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`AuthSecretArn`  <a name="service-now-AuthSecretArn"></a>
The secret in AWS Secrets Manager that stores the ServiceNow user name and password. This must be a text type secret. The secret must contain "user" and "password" keys with corresponding values.  
Display name in the AWS IoT console: **ARN of auth secret**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:secretsmanager:[a-z0-9\-]+:[0-9]{12}:secret:([a-zA-Z0-9\\]+/)*[a-zA-Z0-9/_+=,.@\-]+-[a-zA-Z0-9]+`

`AuthSecretArn-ResourceId`  <a name="service-now-AuthSecretArn-ResourceId"></a>
The secret resource in the group that references the Secrets Manager secret for the ServiceNow credentials.  
Display name in the AWS IoT console: **Auth token resource**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

------
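
The interaction between `PublishBatchSize`, `PublishInterval`, and `MaxMetricsToRetain` can be sketched as follows. This is an illustrative model of the documented behavior, not the connector's actual implementation; all names are hypothetical:

```python
import time
from collections import deque

class MetricBuffer:
    """Illustrative model: publish when the batch size is reached or the
    interval expires; when the buffer is full, drop the oldest metrics."""

    def __init__(self, publish_batch_size, publish_interval, max_metrics_to_retain, send):
        # deque(maxlen=...) silently discards the oldest item when full,
        # which mirrors the MaxMetricsToRetain behavior.
        self.buffer = deque(maxlen=max_metrics_to_retain)
        self.batch_size = publish_batch_size
        self.interval = publish_interval
        self.send = send                      # callable that publishes one batch
        self.last_publish = time.monotonic()

    def add(self, metric):
        self.buffer.append(metric)
        self._maybe_flush()

    def _maybe_flush(self):
        elapsed = time.monotonic() - self.last_publish
        # Publish when PublishBatchSize is reached or PublishInterval expires.
        if len(self.buffer) >= self.batch_size or elapsed >= self.interval:
            batch = [self.buffer.popleft()
                     for _ in range(min(self.batch_size, len(self.buffer)))]
            if batch:
                self.send(batch)
            self.last_publish = time.monotonic()
```

This model shows why `MaxMetricsToRetain` should be larger than `PublishBatchSize`: the buffer must hold at least one full batch, plus headroom for metrics that arrive while the device is offline.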

### Create Connector Example (AWS CLI)
<a name="servicenow-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the ServiceNow MetricBase Integration connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyServiceNowMetricBaseIntegrationConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/ServiceNowMetricBaseIntegration/versions/4",
            "Parameters": {
                "PublishInterval" : "10",
                "PublishBatchSize" : "50",
                "InstanceName" : "myinstance",
                "DefaultTableName" : "u_greengrass_app",
                "MaxMetricsToRetain" : "20000",
                "AuthSecretArn" : "arn:aws:secretsmanager:region:account-id:secret:greengrass-secret-hash",
                "AuthSecretArn-ResourceId" : "MySecretResource", 
                "IsolationMode" : "GreengrassContainer"
            }
        }
    ]
}'
```

**Note**  
The Lambda function in this connector has a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="servicenow-connector-data-input"></a>

This connector accepts time series metrics on an MQTT topic and publishes the metrics to ServiceNow. Input messages must be in JSON format.

<a name="topic-filter"></a>**Topic filter in subscription**  
`servicenow/metricbase/metric`

**Message properties**    
`request`  
Information about the table, record, and metric. This request represents the `seriesRef` object in a time series POST request. For more information, see [ Clotho Time Series API - POST](https://docs.servicenow.com/bundle/london-application-development/page/integrate/inbound-rest/concept/Clotho-Time-Series-API.html#clotho-POST-put).  
  
Required: `true`  
Type: `object` that includes the following properties:    
`subject`  
The `sys_id` of the specific record in the table.  
Required: `true`  
Type: `string`  
`metric_name`  
The metric field name.  
Required: `true`  
Type: `string`  
`table`  
The name of the table to store the record in. Specify this value to override the `DefaultTableName` parameter.  
Required: `false`  
Type: `string`  
`value`  
The value of the individual data point.  
Required: `true`  
Type: `float`  
`timestamp`  
The timestamp of the individual data point. The default value is the current time.  
Required: `false`  
Type: `string`

**Example input**  

```
{
    "request": {
        "subject":"ef43c6d40a0a0b5700c77f9bf387afe3",
        "metric_name":"u_count",
        "table": "u_greengrass_app"
        "value": 1.0,
        "timestamp": "2018-10-14T10:30:00"
    }
}
```

## Output data
<a name="servicenow-connector-data-output"></a>

This connector publishes status information as output data on an MQTT topic.

<a name="topic-filter"></a>**Topic filter in subscription**  
`servicenow/metricbase/metric/status`

**Example output: Success**  

```
{
    "response": {
        "metric_name": "Errors",
        "table_name": "GliderProd",
        "processed_on": "2018-10-14T10:35:00",
        "response_id": "khjKSkj132qwr23fcba",
        "status": "success",
        "values": [
            {
                "timestamp": "2016-10-14T10:30:00",
                "value": 1.0
            },
            {
                "timestamp": "2016-10-14T10:31:00",
                "value": 1.1
            }
        ]
    }
}
```

**Example output: Failure**  

```
{
    "response": {
        "error": "InvalidInputException",
        "error_message": "metric value is invalid",
        "status": "fail"
    }
}
```
If the connector detects a retryable error (for example, connection errors), it retries the publish in the next batch.
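
The retry behavior can be modeled as carrying failed values over into the next batch. The following is an illustrative sketch under that assumption, not the connector's source; the function and parameter names are hypothetical:

```python
def publish_with_retry(pending, new_values, send):
    """Illustrative sketch: values that fail with a retryable error are
    kept and retried together with the next batch."""
    batch = pending + new_values
    try:
        send(batch)              # e.g. an HTTP POST to the ServiceNow API
        return []                # nothing to carry over
    except ConnectionError:      # a retryable error, such as a connection failure
        return batch             # retried with the next batch
```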

## Usage Example
<a name="servicenow-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
If you use other Python runtimes, you can create a symlink from Python 3.x to Python 3.7.
The [Get started with connectors (console)](connectors-console.md) and [Get started with connectors (CLI)](connectors-cli.md) topics contain detailed steps that show you how to configure and deploy an example Twilio Notifications connector.

1. Make sure you meet the [requirements](#servicenow-connector-req) for the connector.

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#servicenow-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. <a name="connectors-setup-secret-resource"></a>Add the required secret resource and grant read access to the Lambda function.

   1. Add the connector and configure its [parameters](#servicenow-connector-param).

   1. Add subscriptions that allow the connector to receive [input data](#servicenow-connector-data-input) and send [output data](#servicenow-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.

### Example
<a name="servicenow-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector.

```
import greengrasssdk
import json

iot_client = greengrasssdk.client('iot-data')
SEND_TOPIC = 'servicenow/metricbase/metric'

def create_request_with_all_fields():
    return {
        "request": {
             "subject": '2efdf6badbd523803acfae441b961961',
             "metric_name": 'u_count',
             "value": 1234,
             "timestamp": '2018-10-20T20:22:20',
             "table": 'u_greengrass_metricbase_test'
        }
    }

def publish_basic_message():
    messageToPublish = create_request_with_all_fields()
    print("Message To Publish: ", messageToPublish)
    iot_client.publish(topic=SEND_TOPIC,
        payload=json.dumps(messageToPublish))

publish_basic_message()

def lambda_handler(event, context):
    return
```

## Licenses
<a name="servicenow-connector-license"></a>

The ServiceNow MetricBase Integration connector includes the following third-party software/licensing:
+ [pysnow](https://github.com/rbw/pysnow)/MIT

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="servicenow-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 4 | <a name="isolation-mode-changelog"></a>Added the `IsolationMode` parameter to configure the containerization mode for the connector. | 
| 3 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 2 | Fix to reduce excessive logging. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="servicenow-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)

# SNS connector
<a name="sns-connector"></a>

The SNS [connector](connectors.md) publishes messages to an Amazon SNS topic. This enables web servers, email addresses, and other message subscribers to respond to events in the Greengrass group.

This connector receives SNS message information on an MQTT topic, and then sends the message to a specified SNS topic. You can optionally use custom Lambda functions to implement filtering or formatting logic on messages before they are published to this connector.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 4 | `arn:aws:greengrass:region::/connectors/SNS/versions/4` | 
| 3 | `arn:aws:greengrass:region::/connectors/SNS/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/SNS/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/SNS/versions/1` | 

For information about version changes, see the [Changelog](#sns-connector-changelog).

## Requirements
<a name="sns-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 3 - 4 ]
+ <a name="conn-req-ggc-v1.9.3"></a>AWS IoT Greengrass Core software v1.9.3 or later.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="conn-sns-req-sns-config"></a>A configured SNS topic. For more information, see [Creating an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-tutorial-create-topic.html) in the *Amazon Simple Notification Service Developer Guide*.
+ <a name="conn-sns-req-iam-policy"></a>The [Greengrass group role](group-role.md) configured to allow the `sns:Publish` action on the target Amazon SNStopic, as shown in the following example IAM policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
          {
              "Sid": "Stmt1528133056761",
              "Action": [
                  "sns:Publish"
              ],
              "Effect": "Allow",
              "Resource": [
              "arn:aws:sns:us-east-1:123456789012:topic-name"
              ]
          }
      ]
  }
  ```

------

  This connector allows you to dynamically override the default topic in the input message payload. If your implementation uses this feature, the IAM policy must allow `sns:Publish` permission on all target topics. You can grant granular or conditional access to resources (for example, by using a wildcard `*` naming scheme).

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

------
#### [ Versions 1 - 2 ]
+ <a name="conn-req-ggc-v1.7.0"></a>AWS IoT Greengrass Core software v1.7 or later.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-sns-req-sns-config"></a>A configured SNS topic. For more information, see [Creating an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-tutorial-create-topic.html) in the *Amazon Simple Notification Service Developer Guide*.
+ <a name="conn-sns-req-iam-policy"></a>The [Greengrass group role](group-role.md) configured to allow the `sns:Publish` action on the target Amazon SNStopic, as shown in the following example IAM policy.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
          {
              "Sid": "Stmt1528133056761",
              "Action": [
                  "sns:Publish"
              ],
              "Effect": "Allow",
              "Resource": [
              "arn:aws:sns:us-east-1:123456789012:topic-name"
              ]
          }
      ]
  }
  ```

------

  This connector allows you to dynamically override the default topic in the input message payload. If your implementation uses this feature, the IAM policy must allow `sns:Publish` permission on all target topics. You can grant granular or conditional access to resources (for example, by using a wildcard `*` naming scheme).

  <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

------

## Connector Parameters
<a name="sns-connector-param"></a>

This connector provides the following parameters:

------
#### [ Version 4 ]

`DefaultSNSArn`  <a name="sns-DefaultSNSArn"></a>
The ARN of the default SNS topic to publish messages to. The destination topic can be overridden by the `sns_topic_arn` property in the input message payload.  
The group role must allow `sns:Publish` permission to all target topics. For more information, see [Requirements](#sns-connector-req).
Display name in the AWS IoT console: **Default SNS topic ARN**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:sns:([a-z]{2}-[a-z]+-\d{1}):(\d{12}):([a-zA-Z0-9-_]+)$`

`IsolationMode`  <a name="IsolationMode"></a>
The [containerization](connectors.md#connector-containerization) mode for this connector. The default is `GreengrassContainer`, which means that the connector runs in an isolated runtime environment inside the AWS IoT Greengrass container.  
The default containerization setting for the group does not apply to connectors.
Display name in the AWS IoT console: **Container isolation mode**  
Required: `false`  
Type: `string`  
Valid values: `GreengrassContainer` or `NoContainer`  
Valid pattern: `^NoContainer$|^GreengrassContainer$`

------
#### [ Versions 1 - 3 ]

`DefaultSNSArn`  <a name="sns-DefaultSNSArn"></a>
The ARN of the default SNS topic to publish messages to. The destination topic can be overridden by the `sns_topic_arn` property in the input message payload.  
The group role must allow `sns:Publish` permission to all target topics. For more information, see [Requirements](#sns-connector-req).
Display name in the AWS IoT console: **Default SNS topic ARN**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:sns:([a-z]{2}-[a-z]+-\d{1}):(\d{12}):([a-zA-Z0-9-_]+)$`

------

### Create Connector Example (AWS CLI)
<a name="sns-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the SNS connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MySNSConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/SNS/versions/4",
            "Parameters": {
                "DefaultSNSArn": "arn:aws:sns:region:account-id:topic-name",
                "IsolationMode" : "GreengrassContainer"
            }
        }
    ]
}'
```

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="sns-connector-data-input"></a>

This connector accepts SNS message information on an MQTT topic, and then publishes the message as is to the target SNS topic. Input messages must be in JSON format.

<a name="topic-filter"></a>**Topic filter in subscription**  
`sns/message`

**Message properties**    
`request`  
Information about the message to send to the SNS topic.  
Required: `true`  
Type: `object` that includes the following properties:    
`message`  
The content of the message as a string or in JSON format. For examples, see [Example input](#sns-connector-data-input-example).  
To send JSON, the `message_structure` property must be set to `json` and the message must be a string-encoded JSON object that contains a `default` key.  
Required: `true`  
Type: `string`  
Valid pattern: `.*`  
`subject`  
The subject of the message.  
Required: `false`  
Type: ASCII text, up to 100 characters. The subject must begin with a letter, number, or punctuation mark, and must not include line breaks or control characters.  
Valid pattern: `.*`  
`sns_topic_arn`  
The ARN of the SNS topic to publish messages to. If specified, the connector publishes to this topic instead of the default topic.  
The group role must allow `sns:Publish` permission to any target topics. For more information, see [Requirements](#sns-connector-req).
Required: `false`  
Type: `string`  
Valid pattern: `arn:aws:sns:([a-z]{2}-[a-z]+-\d{1}):(\d{12}):([a-zA-Z0-9-_]+)$`  
`message_structure`  
The structure of the message.  
Required: `false`. This must be specified to send a JSON message.  
Type: `string`  
Valid values: `json`  
`id`  
An arbitrary ID for the request. This property is used to map an input request to an output response. When specified, the `id` property in the response object is set to this value. If you don't use this feature, you can omit this property or specify an empty string.  
Required: `false`  
Type: `string`  
Valid pattern: `.*`

**Limits**  
The message size is limited by the maximum SNS message size of 256 KB.
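Because SNS rejects requests that exceed this limit, it can be useful to pre-check payloads on the core before publishing to the connector. A minimal sketch (the function name is illustrative, and SNS still enforces the real limit server-side):

```python
SNS_MAX_MESSAGE_BYTES = 256 * 1024  # 256 KB SNS message size limit

def is_within_sns_limit(payload):
    """Return True if the request's message fits the SNS size limit.

    Expects a connector input payload with the shape shown in the
    example inputs: {"request": {"message": ...}, ...}.
    """
    message = payload["request"]["message"]
    return len(message.encode("utf-8")) <= SNS_MAX_MESSAGE_BYTES
```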

**Example input: String message**  <a name="sns-connector-data-input-example"></a>
This example sends a string message. It specifies the optional `sns_topic_arn` property, which overrides the default destination topic.  

```
{
    "request": {
        "subject": "Message subject",
        "message": "Message data",
        "sns_topic_arn": "arn:aws:sns:region:account-id:topic2-name"
    },
    "id": "request123"
}
```

**Example input: JSON message**  
This example sends a message as a string-encoded JSON object that includes the `default` key.  

```
{
    "request": {
        "subject": "Message subject",
        "message": "{ \"default\": \"Message data\" }",
        "message_structure": "json"
    },
    "id": "request123"
}
```
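When building this payload in a Lambda function, `json.dumps` produces the required string encoding of the inner object with its `default` key. A minimal sketch, assuming a hypothetical helper name:

```python
import json

def build_json_sns_request(data, request_id="request123"):
    """Build a connector input payload whose message is a
    string-encoded JSON object containing the required 'default' key."""
    return {
        "request": {
            "subject": "Message subject",
            "message": json.dumps({"default": data}),
            "message_structure": "json",  # required when sending JSON
        },
        "id": request_id,
    }
```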

## Output data
<a name="sns-connector-data-output"></a>

This connector publishes status information as output data on an MQTT topic.

<a name="topic-filter"></a>**Topic filter in subscription**  
`sns/message/status`

**Example output: Success**  

```
{
    "response": {
        "sns_message_id": "f80a81bc-f44c-56f2-a0f0-d5af6a727c8a",
        "status": "success"
    },
    "id": "request123"
}
```

**Example output: Failure**  

```
{
   "response" : {
        "error": "InvalidInputException",
        "error_message": "SNS Topic Arn is invalid",
        "status": "fail"
   },
   "id": "request123"
}
```

## Usage Example
<a name="sns-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
If you use another Python runtime, you can create a symbolic link from Python 3.x to Python 3.7.
The [Get started with connectors (console)](connectors-console.md) and [Get started with connectors (CLI)](connectors-cli.md) topics contain detailed steps that show you how to configure and deploy an example Twilio Notifications connector.

1. Make sure you meet the [requirements](#sns-connector-req) for the connector.

   <a name="set-up-group-role"></a>For the group role requirement, you must configure the role to grant the required permissions and make sure the role has been added to the group. For more information, see [Managing the Greengrass group role (console)](group-role.md#manage-group-role-console) or [Managing the Greengrass group role (CLI)](group-role.md#manage-group-role-cli).

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#sns-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. Add the connector and configure its [parameters](#sns-connector-param).

   1. Add subscriptions that allow the connector to receive [input data](#sns-connector-data-input) and send [output data](#sns-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.
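The output subscription from step 3 delivers response objects like those shown in [Output data](#sns-connector-data-output). A minimal sketch of matching a status message back to a pending request by its `id` (the `pending_requests` store and function name are illustrative):

```python
import json

# Illustrative in-memory store of requests awaiting a status response,
# keyed by the arbitrary id sent in the input message.
pending_requests = {"request123": "first notification"}

def handle_sns_status(payload_bytes):
    """Match a connector status message back to its pending request.

    Returns (request_id, status) on a match, or (None, None) if the
    id is unknown or was omitted from the request.
    """
    message = json.loads(payload_bytes)
    request_id = message.get("id")
    status = message["response"]["status"]
    if request_id in pending_requests:
        pending_requests.pop(request_id)
        return request_id, status
    return None, None
```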

### Example
<a name="sns-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector.

```
import greengrasssdk
import time
import json

iot_client = greengrasssdk.client('iot-data')
send_topic = 'sns/message'

def create_request_with_all_fields():
    return  {
        "request": {
            "message": "Message from SNS Connector Test"
        },
        "id" : "req_123"
    }

def publish_basic_message():
    messageToPublish = create_request_with_all_fields()
    print("Message To Publish: ", messageToPublish)
    iot_client.publish(topic=send_topic,
        payload=json.dumps(messageToPublish))

publish_basic_message()

def lambda_handler(event, context):
    return
```

## Licenses
<a name="sns-connector-license"></a>

The SNS connector includes the following third-party software/licensing:<a name="boto-3-licenses"></a>
+ [AWS SDK for Python (Boto3)](https://pypi.org/project/boto3/)/Apache License 2.0
+ [botocore](https://pypi.org/project/botocore/)/Apache License 2.0
+ [dateutil](https://pypi.org/project/python-dateutil/1.4/)/PSF License
+ [docutils](https://pypi.org/project/docutils/)/BSD License, GNU General Public License (GPL), Python Software Foundation License, Public Domain
+ [jmespath](https://pypi.org/project/jmespath/)/MIT License
+ [s3transfer](https://pypi.org/project/s3transfer/)/Apache License 2.0
+ [urllib3](https://pypi.org/project/urllib3/)/MIT License

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="sns-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 4 | <a name="isolation-mode-changelog"></a>Added the `IsolationMode` parameter to configure the containerization mode for the connector. | 
| 3 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 2 | Fix to reduce excessive logging. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="sns-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+ [Publish action](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sns.html#SNS.Client.publish) in the Boto 3 documentation
+ [What is Amazon Simple Notification Service?](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) in the *Amazon Simple Notification Service Developer Guide*

# Splunk Integration connector
<a name="splunk-connector"></a>

**Warning**  <a name="connectors-extended-life-phase-warning"></a>
This connector has moved into the *extended life phase*, and AWS IoT Greengrass won't release updates that provide features, enhancements to existing features, security patches, or bug fixes. For more information, see [AWS IoT Greengrass Version 1 maintenance policy](maintenance-policy.md).

The Splunk Integration [connector](connectors.md) publishes data from Greengrass devices to Splunk. This allows you to use Splunk to monitor and analyze the Greengrass core environment, and act on local events. The connector integrates with HTTP Event Collector (HEC). For more information, see [Introduction to Splunk HTTP Event Collector](https://dev.splunk.com/view/event-collector/SP-CAAAE6M) in the Splunk documentation.

This connector receives logging and event data on an MQTT topic and publishes the data as is to the Splunk API.

You can use this connector to support industrial scenarios, such as:
+ Operators can use periodic data from actuators and sensors (for example, temperature, pressure, and water readings) to initiate alarms when values exceed certain thresholds.
+ Developers can use data collected from industrial machinery to build ML models that monitor the equipment for potential issues.

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 4 | `arn:aws:greengrass:region::/connectors/SplunkIntegration/versions/4` | 
| 3 | `arn:aws:greengrass:region::/connectors/SplunkIntegration/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/SplunkIntegration/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/SplunkIntegration/versions/1` | 

For information about version changes, see the [Changelog](#splunk-connector-changelog).

## Requirements
<a name="splunk-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 3 - 4 ]
+ <a name="conn-req-ggc-v1.9.3-secrets"></a>AWS IoT Greengrass Core software v1.9.3 or later. AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs).
**Note**  
This requirement includes allowing access to your Secrets Manager secrets. If you're using the default Greengrass service role, Greengrass has permission to get the values of secrets with names that start with *greengrass-*.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ <a name="conn-splunk-req-http-event-collector"></a>The HTTP Event Collector functionality must be enabled in Splunk. For more information, see [Set up and use HTTP Event Collector in Splunk Web](https://docs.splunk.com/Documentation/Splunk/7.2.0/Data/UsetheHTTPEventCollector) in the Splunk documentation.
+ <a name="conn-splunk-req-secret"></a>A text type secret in AWS Secrets Manager that stores your Splunk HTTP Event Collector token. For more information, see [About event collector tokens](https://docs.splunk.com/Documentation/Splunk/7.2.0/Data/UsetheHTTPEventCollector#About_Event_Collector_tokens) in the Splunk documentation and [Creating a basic secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) in the *AWS Secrets Manager User Guide*.
**Note**  
To create the secret in the Secrets Manager console, enter your token on the **Plaintext** tab. Don't include quotation marks or other formatting. In the API, specify the token as the value for the `SecretString` property.
+ A secret resource in the Greengrass group that references the Secrets Manager secret. For more information, see [Deploy secrets to the AWS IoT Greengrass core](secrets.md).

------
#### [ Versions 1 - 2 ]
+ <a name="conn-req-ggc-v1.7.0-secrets"></a>AWS IoT Greengrass Core software v1.7 or later. AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs).
**Note**  
This requirement includes allowing access to your Secrets Manager secrets. If you're using the default Greengrass service role, Greengrass has permission to get the values of secrets with names that start with *greengrass-*.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ <a name="conn-splunk-req-http-event-collector"></a>The HTTP Event Collector functionality must be enabled in Splunk. For more information, see [Set up and use HTTP Event Collector in Splunk Web](https://docs.splunk.com/Documentation/Splunk/7.2.0/Data/UsetheHTTPEventCollector) in the Splunk documentation.
+ <a name="conn-splunk-req-secret"></a>A text type secret in AWS Secrets Manager that stores your Splunk HTTP Event Collector token. For more information, see [About event collector tokens](https://docs.splunk.com/Documentation/Splunk/7.2.0/Data/UsetheHTTPEventCollector#About_Event_Collector_tokens) in the Splunk documentation and [Creating a basic secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) in the *AWS Secrets Manager User Guide*.
**Note**  
To create the secret in the Secrets Manager console, enter your token on the **Plaintext** tab. Don't include quotation marks or other formatting. In the API, specify the token as the value for the `SecretString` property.
+ A secret resource in the Greengrass group that references the Secrets Manager secret. For more information, see [Deploy secrets to the AWS IoT Greengrass core](secrets.md).

------
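For either version range, one way to create the token secret is with the Secrets Manager API. The following sketch builds a `create_secret` request; the secret name is a placeholder, chosen to start with *greengrass-* so the default Greengrass service role can read it without extra permissions.

```python
def build_secret_request(hec_token):
    """Build a create_secret request for the Splunk HEC token.

    The name is a placeholder; the greengrass- prefix lets the default
    Greengrass service role read the secret without extra permissions.
    """
    return {
        "Name": "greengrass-splunk-hec-token",
        "SecretString": hec_token,  # the raw token, with no extra formatting
    }

def create_hec_token_secret(hec_token, region):
    """Call Secrets Manager. Requires AWS credentials and Boto3."""
    import boto3  # imported here so the builder above stays dependency-free

    client = boto3.client("secretsmanager", region_name=region)
    return client.create_secret(**build_secret_request(hec_token))
```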

## Connector Parameters
<a name="splunk-connector-param"></a>

This connector provides the following parameters:

------
#### [ Version 4 ]

`SplunkEndpoint`  <a name="splunk-SplunkEndpoint"></a>
The endpoint of your Splunk instance. This value must contain the protocol, hostname, and port.  
Display name in the AWS IoT console: **Splunk endpoint**  
Required: `true`  
Type: `string`  
Valid pattern: `^(http:\/\/|https:\/\/)?[a-z0-9]+([-.]{1}[a-z0-9]+)*.[a-z]{2,5}(:[0-9]{1,5})?(\/.*)?$`

`MemorySize`  <a name="splunk-MemorySize"></a>
The amount of memory (in KB) to allocate to the connector.  
Display name in the AWS IoT console: **Memory size**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`SplunkQueueSize`  <a name="splunk-SplunkQueueSize"></a>
The maximum number of items to save in memory before the items are submitted or discarded. When this limit is met, the oldest items in the queue are replaced with newer items. This limit typically applies when there's no connection to the internet.  
Display name in the AWS IoT console: **Maximum items to retain**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`SplunkFlushIntervalSeconds`  <a name="splunk-SplunkFlushIntervalSeconds"></a>
The interval (in seconds) for publishing received data to Splunk HEC. The maximum value is 900. To configure the connector to publish items as they are received (without batching), specify 0.  
Display name in the AWS IoT console: **Splunk publish interval**  
Required: `true`  
Type: `string`  
Valid pattern: `[0-9]|[1-9]\d|[1-9]\d\d|900`

`SplunkTokenSecretArn`  <a name="splunk-SplunkTokenSecretArn"></a>
The secret in AWS Secrets Manager that stores the Splunk token. This must be a text type secret.  
Display name in the AWS IoT console: **ARN of Splunk auth token secret**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:secretsmanager:[a-z]{2}-[a-z]+-\d{1}:\d{12}?:secret:[a-zA-Z0-9-_]+-[a-zA-Z0-9-_]+`

`SplunkTokenSecretArn-ResourceId`  <a name="splunk-SplunkTokenSecretArn-ResourceId"></a>
The secret resource in the Greengrass group that references the Splunk secret.  
Display name in the AWS IoT console: **Splunk auth token resource**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`SplunkCustomCALocation`  <a name="splunk-SplunkCustomCALocation"></a>
The file path of the custom certificate authority (CA) for Splunk (for example, `/etc/ssl/certs/splunk.crt`).  
Display name in the AWS IoT console: **Splunk custom certificate authority location**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|/.*`

`IsolationMode`  <a name="IsolationMode"></a>
The [containerization](connectors.md#connector-containerization) mode for this connector. The default is `GreengrassContainer`, which means that the connector runs in an isolated runtime environment inside the AWS IoT Greengrass container.  
The default containerization setting for the group does not apply to connectors.
Display name in the AWS IoT console: **Container isolation mode**  
Required: `false`  
Type: `string`  
Valid values: `GreengrassContainer` or `NoContainer`  
Valid pattern: `^NoContainer$|^GreengrassContainer$`

------
#### [ Version 1 - 3 ]

`SplunkEndpoint`  <a name="splunk-SplunkEndpoint"></a>
The endpoint of your Splunk instance. This value must contain the protocol, hostname, and port.  
Display name in the AWS IoT console: **Splunk endpoint**  
Required: `true`  
Type: `string`  
Valid pattern: `^(http:\/\/|https:\/\/)?[a-z0-9]+([-.]{1}[a-z0-9]+)*.[a-z]{2,5}(:[0-9]{1,5})?(\/.*)?$`

`MemorySize`  <a name="splunk-MemorySize"></a>
The amount of memory (in KB) to allocate to the connector.  
Display name in the AWS IoT console: **Memory size**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`SplunkQueueSize`  <a name="splunk-SplunkQueueSize"></a>
The maximum number of items to save in memory before the items are submitted or discarded. When this limit is met, the oldest items in the queue are replaced with newer items. This limit typically applies when there's no connection to the internet.  
Display name in the AWS IoT console: **Maximum items to retain**  
Required: `true`  
Type: `string`  
Valid pattern: `^[0-9]+$`

`SplunkFlushIntervalSeconds`  <a name="splunk-SplunkFlushIntervalSeconds"></a>
The interval (in seconds) for publishing received data to Splunk HEC. The maximum value is 900. To configure the connector to publish items as they are received (without batching), specify 0.  
Display name in the AWS IoT console: **Splunk publish interval**  
Required: `true`  
Type: `string`  
Valid pattern: `[0-9]|[1-9]\d|[1-9]\d\d|900`

`SplunkTokenSecretArn`  <a name="splunk-SplunkTokenSecretArn"></a>
The secret in AWS Secrets Manager that stores the Splunk token. This must be a text type secret.  
Display name in the AWS IoT console: **ARN of Splunk auth token secret**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:secretsmanager:[a-z]{2}-[a-z]+-\d{1}:\d{12}?:secret:[a-zA-Z0-9-_]+-[a-zA-Z0-9-_]+`

`SplunkTokenSecretArn-ResourceId`  <a name="splunk-SplunkTokenSecretArn-ResourceId"></a>
The secret resource in the Greengrass group that references the Splunk secret.  
Display name in the AWS IoT console: **Splunk auth token resource**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`SplunkCustomCALocation`  <a name="splunk-SplunkCustomCALocation"></a>
The file path of the custom certificate authority (CA) for Splunk (for example, `/etc/ssl/certs/splunk.crt`).  
Display name in the AWS IoT console: **Splunk custom certificate authority location**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|/.*`

------

### Create Connector Example (AWS CLI)
<a name="splunk-connector-create"></a>

The following CLI command creates a `ConnectorDefinition` with an initial version that contains the Splunk Integration connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MySplunkIntegrationConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/SplunkIntegration/versions/4",
            "Parameters": {
                "SplunkEndpoint": "https://myinstance.cloud.splunk.com:8088",
                "MemorySize": "200000",
                "SplunkQueueSize": "10000",
                "SplunkFlushIntervalSeconds": "5",
                "SplunkTokenSecretArn":"arn:aws:secretsmanager:region:account-id:secret:greengrass-secret-hash",
                "SplunkTokenSecretArn-ResourceId": "MySplunkResource", 
                "IsolationMode" : "GreengrassContainer"
            }
        }
    ]
}'
```

**Note**  
The Lambda function in this connector has a [long-lived](lambda-functions.md#lambda-lifecycle) lifecycle.

In the AWS IoT Greengrass console, you can add a connector from the group's **Connectors** page. For more information, see [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="splunk-connector-data-input"></a>

This connector accepts logging and event data on an MQTT topic and publishes the received data as is to the Splunk API. Input messages must be in JSON format.

<a name="topic-filter"></a>**Topic filter in subscription**  
`splunk/logs/put`

**Message properties**    
`request`  
The event data to send to the Splunk API. Events must meet the specifications of the [services/collector](https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTinput#services.2Fcollector) API.  
Required: `true`  
Type: `object`. Only the `event` property is required.  
`id`  
An arbitrary ID for the request. This property is used to map an input request to an output status.  
Required: `false`  
Type: `string`

**Limits**  
All limits that are imposed by the Splunk API apply when using this connector. For more information, see [services/collector](https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTinput#services.2Fcollector).

**Example input**  

```
{
    "request": {
        "event": "some event",
        "fields": {
            "severity": "INFO",
            "category": [
                "value1",
                "value2"
            ]
        }
    },
    "id": "request123"
}
```
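A Lambda function can assemble this payload from an event string plus optional `fields` metadata. A minimal sketch, with an illustrative helper name:

```python
def build_splunk_request(event, request_id, **fields):
    """Build a connector input payload for the splunk/logs/put topic.

    Only 'event' is required by the services/collector API; any extra
    keyword arguments become the optional 'fields' metadata.
    """
    request = {"event": event}
    if fields:
        request["fields"] = fields
    return {"request": request, "id": request_id}
```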

## Output data
<a name="splunk-connector-data-output"></a>

This connector publishes output data on two topics:
+ Status information on the `splunk/logs/put/status` topic.
+ Errors on the `splunk/logs/put/error` topic.

**Topic filter:** `splunk/logs/put/status`  
Use this topic to listen for the status of the requests. Each time that the connector sends a batch of received data to the Splunk API, it publishes a list of the IDs of the requests that succeeded and failed.    
**Example output**  

```
{
    "response": {
        "succeeded": [
            "request123",
            ...
        ],
        "failed": [
            "request789",
            ...
        ]
    }
}
```
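Because the connector batches requests, a single status message can acknowledge several pending IDs at once. A minimal sketch of partitioning pending requests with such a message (the function name is illustrative):

```python
import json

def summarize_batch_status(payload_bytes, pending_ids):
    """Partition pending request ids using a connector status message.

    Returns three sets: succeeded, failed, and still pending (ids not
    mentioned in this batch's response).
    """
    response = json.loads(payload_bytes)["response"]
    succeeded = set(response.get("succeeded", []))
    failed = set(response.get("failed", []))
    still_pending = set(pending_ids) - succeeded - failed
    return succeeded, failed, still_pending
```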

**Topic filter:** `splunk/logs/put/error`  
Use this topic to listen for errors from the connector. The `error_message` property describes the error or timeout encountered while processing the request.    
**Example output**  

```
{
    "response": {
        "error": "UnauthorizedException",
        "error_message": "invalid splunk token",
        "status": "fail"
    }
}
```
If the connector detects a retryable error (for example, connection errors), it retries the publish in the next batch.

## Usage Example
<a name="splunk-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
If you use another Python runtime, you can create a symbolic link from Python 3.x to Python 3.7.
The [Get started with connectors (console)](connectors-console.md) and [Get started with connectors (CLI)](connectors-cli.md) topics contain detailed steps that show you how to configure and deploy an example Twilio Notifications connector.

1. Make sure you meet the [requirements](#splunk-connector-req) for the connector.

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#splunk-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. <a name="connectors-setup-secret-resource"></a>Add the required secret resource and grant read access to the Lambda function.

   1. Add the connector and configure its [parameters](#splunk-connector-param).

   1. Add subscriptions that allow the connector to receive [input data](#splunk-connector-data-input) and send [output data](#splunk-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.

### Example
<a name="splunk-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector.

```
import greengrasssdk
import time
import json

iot_client = greengrasssdk.client('iot-data')
send_topic = 'splunk/logs/put'

def create_request_with_all_fields():
    return {
        "request": {
            "event": "Access log test message."
        },
        "id" : "req_123"
    }

def publish_basic_message():
    messageToPublish = create_request_with_all_fields()
    print("Message To Publish: ", messageToPublish)
    iot_client.publish(topic=send_topic,
        payload=json.dumps(messageToPublish))

publish_basic_message()

def lambda_handler(event, context):
    return
```

## Licenses
<a name="splunk-connector-license"></a>

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="splunk-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 4 | <a name="isolation-mode-changelog"></a>Added the `IsolationMode` parameter to configure the containerization mode for the connector. | 
| 3 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 2 | Fix to reduce excessive logging. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="splunk-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)

# Twilio Notifications connector
<a name="twilio-notifications-connector"></a>

**Warning**  <a name="connectors-extended-life-phase-warning"></a>
This connector has moved into the *extended life phase*, and AWS IoT Greengrass won't release updates that provide features, enhancements to existing features, security patches, or bug fixes. For more information, see [AWS IoT Greengrass Version 1 maintenance policy](maintenance-policy.md).

The Twilio Notifications [connector](connectors.md) makes automated phone calls or sends text messages through Twilio. You can use this connector to send notifications in response to events in the Greengrass group. For phone calls, the connector can forward a voice message to the recipient.

This connector receives Twilio message information on an MQTT topic, and then triggers a Twilio notification.

**Note**  
For a tutorial that shows how to use the Twilio Notifications connector, see [Getting started with Greengrass connectors (console)](connectors-console.md) or [Getting started with Greengrass connectors (CLI)](connectors-cli.md).

This connector has the following versions.


| Version | ARN | 
| --- | --- | 
| 5 | `arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/5` | 
| 4 | `arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/4` | 
| 3 | `arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/3` | 
| 2 | `arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/2` | 
| 1 | `arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/1` | 

For information about version changes, see the [Changelog](#twilio-notifications-connector-changelog).

## Requirements
<a name="twilio-notifications-connector-req"></a>

This connector has the following requirements:

------
#### [ Version 4 - 5 ]
+ <a name="conn-req-ggc-v1.9.3-secrets"></a>AWS IoT Greengrass Core software v1.9.3 or later. AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs).
**Note**  
This requirement includes allowing access to your Secrets Manager secrets. If you're using the default Greengrass service role, Greengrass has permission to get the values of secrets with names that start with *greengrass-*.
+ <a name="conn-req-py-3.7-and-3.8"></a>[Python](https://www.python.org/) version 3.7 or 3.8 installed on the core device and added to the PATH environment variable.
**Note**  <a name="use-runtime-py3.8"></a>
To use Python 3.8, run the following command to create a symbolic link from the default Python 3.7 installation folder to the installed Python 3.8 binaries.  

  ```
  sudo ln -s path-to-python-3.8/python3.8 /usr/bin/python3.7
  ```
This configures your device to meet the Python requirement for AWS IoT Greengrass.
+ A Twilio account SID, auth token, and Twilio-enabled phone number. After you create a Twilio project, these values are available on the project dashboard.
**Note**  
You can use a Twilio trial account. If you're using a trial account, you must add non-Twilio recipient phone numbers to a list of verified phone numbers. For more information, see [How to Work with your Free Twilio Trial Account](https://www.twilio.com/docs/usage/tutorials/how-to-use-your-free-trial-account).
+ <a name="conn-twilio-req-secret"></a>A text type secret in AWS Secrets Manager that stores the Twilio auth token. For more information, see [Creating a basic secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) in the *AWS Secrets Manager User Guide*.
**Note**  
To create the secret in the Secrets Manager console, enter your token on the **Plaintext** tab. Don't include quotation marks or other formatting. In the API, specify the token as the value for the `SecretString` property.
+ A secret resource in the Greengrass group that references the Secrets Manager secret. For more information, see [Deploy secrets to the AWS IoT Greengrass core](secrets.md).

------
#### [ Versions 1 - 3 ]
+ <a name="conn-req-ggc-v1.7.0-secrets"></a>AWS IoT Greengrass Core software v1.7 or later. AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs).
**Note**  
This requirement includes allowing access to your Secrets Manager secrets. If you're using the default Greengrass service role, Greengrass has permission to get the values of secrets with names that start with *greengrass-*.
+ [Python](https://www.python.org/) version 2.7 installed on the core device and added to the PATH environment variable.
+ A Twilio account SID, auth token, and Twilio-enabled phone number. After you create a Twilio project, these values are available on the project dashboard.
**Note**  
You can use a Twilio trial account. If you're using a trial account, you must add non-Twilio recipient phone numbers to a list of verified phone numbers. For more information, see [How to Work with your Free Twilio Trial Account](https://www.twilio.com/docs/usage/tutorials/how-to-use-your-free-trial-account).
+ <a name="conn-twilio-req-secret"></a>A text type secret in AWS Secrets Manager that stores the Twilio auth token. For more information, see [Creating a basic secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) in the *AWS Secrets Manager User Guide*.
**Note**  
To create the secret in the Secrets Manager console, enter your token on the **Plaintext** tab. Don't include quotation marks or other formatting. In the API, specify the token as the value for the `SecretString` property.
+ A secret resource in the Greengrass group that references the Secrets Manager secret. For more information, see [Deploy secrets to the AWS IoT Greengrass core](secrets.md).

------

## Connector Parameters
<a name="twilio-notifications-connector-param"></a>

This connector provides the following parameters.

------
#### [ Version 5 ]

`TWILIO_ACCOUNT_SID`  <a name="twilio-TWILIO_ACCOUNT_SID"></a>
The Twilio account SID that's used to invoke the Twilio API.  
Display name in the AWS IoT console: **Twilio account SID**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`TwilioAuthTokenSecretArn`  <a name="twilio-TwilioAuthTokenSecretArn"></a>
The ARN of the Secrets Manager secret that stores the Twilio auth token.  
This is used to access the value of the local secret on the core.
Display name in the AWS IoT console: **ARN of Twilio auth token secret**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:secretsmanager:[a-z0-9\-]+:[0-9]{12}:secret:([a-zA-Z0-9\\]+/)*[a-zA-Z0-9/_+=,.@\-]+-[a-zA-Z0-9]+`

`TwilioAuthTokenSecretArn-ResourceId`  <a name="twilio-TwilioAuthTokenSecretArn-ResourceId"></a>
The ID of the secret resource in the Greengrass group that references the secret for the Twilio auth token.  
Display name in the AWS IoT console: **Twilio auth token resource**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`DefaultFromPhoneNumber`  <a name="twilio-DefaultFromPhoneNumber"></a>
The default Twilio-enabled phone number that Twilio uses to send messages. Twilio uses this number to initiate the text or call.  
+ If you don't configure a default phone number, you must specify a phone number in the `from_number` property in the input message body.
+ If you do configure a default phone number, you can optionally override the default by specifying the `from_number` property in the input message body.
Display name in the AWS IoT console: **Default from phone number**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|\+[0-9]+`

`IsolationMode`  <a name="IsolationMode"></a>
The [containerization](connectors.md#connector-containerization) mode for this connector. The default is `GreengrassContainer`, which means that the connector runs in an isolated runtime environment inside the AWS IoT Greengrass container.  
The default containerization setting for the group does not apply to connectors.
Display name in the AWS IoT console: **Container isolation mode**  
Required: `false`  
Type: `string`  
Valid values: `GreengrassContainer` or `NoContainer`  
Valid pattern: `^NoContainer$|^GreengrassContainer$`

------
#### [ Versions 1 - 4 ]

`TWILIO_ACCOUNT_SID`  <a name="twilio-TWILIO_ACCOUNT_SID"></a>
The Twilio account SID that's used to invoke the Twilio API.  
Display name in the AWS IoT console: **Twilio account SID**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`TwilioAuthTokenSecretArn`  <a name="twilio-TwilioAuthTokenSecretArn"></a>
The ARN of the Secrets Manager secret that stores the Twilio auth token.  
This is used to access the value of the local secret on the core.
Display name in the AWS IoT console: **ARN of Twilio auth token secret**  
Required: `true`  
Type: `string`  
Valid pattern: `arn:aws:secretsmanager:[a-z0-9\-]+:[0-9]{12}:secret:([a-zA-Z0-9\\]+/)*[a-zA-Z0-9/_+=,.@\-]+-[a-zA-Z0-9]+`

`TwilioAuthTokenSecretArn-ResourceId`  <a name="twilio-TwilioAuthTokenSecretArn-ResourceId"></a>
The ID of the secret resource in the Greengrass group that references the secret for the Twilio auth token.  
Display name in the AWS IoT console: **Twilio auth token resource**  
Required: `true`  
Type: `string`  
Valid pattern: `.+`

`DefaultFromPhoneNumber`  <a name="twilio-DefaultFromPhoneNumber"></a>
The default Twilio-enabled phone number that Twilio uses to send messages. Twilio uses this number to initiate the text or call.  
+ If you don't configure a default phone number, you must specify a phone number in the `from_number` property in the input message body.
+ If you do configure a default phone number, you can optionally override the default by specifying the `from_number` property in the input message body.
Display name in the AWS IoT console: **Default from phone number**  
Required: `false`  
Type: `string`  
Valid pattern: `^$|\+[0-9]+`

------

### Create Connector Example (AWS CLI)
<a name="twilio-notifications-connector-create"></a>

The following example CLI command creates a `ConnectorDefinition` with an initial version that contains the Twilio Notifications connector.

```
aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
    "Connectors": [
        {
            "Id": "MyTwilioNotificationsConnector",
            "ConnectorArn": "arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/5",
            "Parameters": {
                "TWILIO_ACCOUNT_SID": "abcd12345xyz",
                "TwilioAuthTokenSecretArn": "arn:aws:secretsmanager:region:account-id:secret:greengrass-secret-hash",
                "TwilioAuthTokenSecretArn-ResourceId": "MyTwilioSecret",
                "DefaultFromPhoneNumber": "+19999999999",
                "IsolationMode" : "GreengrassContainer"
            }
        }
    ]
}'
```

For tutorials that show how to add the Twilio Notifications connector to a group, see [Getting started with Greengrass connectors (CLI)](connectors-cli.md) and [Getting started with Greengrass connectors (console)](connectors-console.md).

## Input data
<a name="twilio-notifications-connector-data-input"></a>

This connector accepts Twilio message information on two MQTT topics. Input messages must be in JSON format.
+ Text message information on the `twilio/txt` topic.
+ Phone message information on the `twilio/call` topic.

**Note**  
The input message payload can include a text message (`message`) or voice message (`voice_message_location`), but not both.

**Topic filter: `twilio/txt`**    
**Message properties**    
`request`  
Information about the Twilio notification.  
Required: `true`  
Type: `object` that includes the following properties:    
`recipient`  
The message recipient. Only one recipient is supported.  
Required: `true`  
Type: `object` that includes the following properties:    
`name`  
The name of the recipient.  
Required: `true`  
Type: `string`  
Valid pattern: `.*`  
`phone_number`  
The phone number of the recipient.  
Required: `true`  
Type: `string`  
Valid pattern: `\+[1-9]+`  
`message`  
The text content of the text message. Only text messages are supported on this topic. For voice messages, use `twilio/call`.  
Required: `true`  
Type: `string`  
Valid pattern: `.+`  
`from_number`  
The phone number of the sender. Twilio uses this phone number to initiate the message. This property is required if the `DefaultFromPhoneNumber` parameter isn't configured. If `DefaultFromPhoneNumber` is configured, you can use this property to override the default.  
Required: `false`  
Type: `string`  
Valid pattern: `\+[1-9]+`  
`retries`  
The number of retries. The default is 0.  
Required: `false`  
Type: `integer`  
`id`  
An arbitrary ID for the request. This property is used to map an input request to an output response.   
Required: `true`  
Type: `string`  
Valid pattern: `.+`  
**Example input**  

```
{
    "request": {
        "recipient": {
            "name": "Darla",
            "phone_number": "+12345000000",
            "message": "Hello from the edge"
        },
        "from_number": "+19999999999",
        "retries": 3
    },
    "id": "request123"
}
```

**Topic filter: `twilio/call`**    
**Message properties**    
`request`  
Information about the Twilio notification.  
Required: `true`  
Type: `object` that includes the following properties:    
`recipient`  
The message recipient. Only one recipient is supported.  
Required: `true`  
Type: `object` that includes the following properties:    
`name`  
The name of the recipient.  
Required: `true`  
Type: `string`  
Valid pattern: `.+`  
`phone_number`  
The phone number of the recipient.  
Required: `true`  
Type: `string`  
Valid pattern: `\+[1-9]+`  
`voice_message_location`  
The URL of the audio content for the voice message. This must be in TwiML format. Only voice messages are supported on this topic. For text messages, use `twilio/txt`.  
Required: `true`  
Type: `string`  
Valid pattern: `.+`  
`from_number`  
The phone number of the sender. Twilio uses this phone number to initiate the message. This property is required if the `DefaultFromPhoneNumber` parameter isn't configured. If `DefaultFromPhoneNumber` is configured, you can use this property to override the default.  
Required: `false`  
Type: `string`  
Valid pattern: `\+[1-9]+`  
`retries`  
The number of retries. The default is 0.  
Required: `false`  
Type: `integer`  
`id`  
An arbitrary ID for the request. This property is used to map an input request to an output response.   
Required: `true`  
Type: `string`  
Valid pattern: `.+`  
**Example input**  

```
{
    "request": {
        "recipient": {
            "name": "Darla",
            "phone_number": "+12345000000",
            "voice_message_location": "https://some-public-TwiML"
        },
        "from_number": "+19999999999",
        "retries": 3
    },
    "id": "request123"
}
```
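Because the connector rejects malformed input at run time, it can help to check a payload against the schemas above before publishing. The following is a minimal sketch (the helper is illustrative and not part of the connector; it treats the documented phone-number pattern as a prefix match, which is an assumption):

```python
import re

# Illustrative helper (not part of the connector): checks an input payload
# against the required fields documented above. The documented pattern
# \+[1-9]+ is applied as a prefix match here.
PHONE_PATTERN = re.compile(r"\+[1-9]+")

def validate_request(payload, topic):
    """Return a list of problems found in a twilio/txt or twilio/call payload."""
    problems = []
    recipient = payload.get("request", {}).get("recipient", {})
    if not payload.get("id"):
        problems.append("missing id")
    if not recipient.get("name"):
        problems.append("missing recipient.name")
    if not PHONE_PATTERN.match(recipient.get("phone_number", "")):
        problems.append("invalid recipient.phone_number")
    if topic == "twilio/txt" and not recipient.get("message"):
        problems.append("missing recipient.message")
    if topic == "twilio/call" and not recipient.get("voice_message_location"):
        problems.append("missing recipient.voice_message_location")
    return problems
```

For example, the `twilio/txt` payload shown earlier validates cleanly, while a payload that omits `message` is reported before it ever reaches the connector.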

## Output data
<a name="twilio-notifications-connector-data-output"></a>

This connector publishes status information as output data on an MQTT topic.

<a name="topic-filter"></a>**Topic filter in subscription**  
`twilio/message/status`

**Example output: Success**  

```
{
    "response": {
        "status": "success",
        "payload": {
            "from_number": "+19999999999",
            "messages": {
                "message_status": "queued",
                "to_number": "+12345000000",
                "name": "Darla"
            }
        }
    },
    "id": "request123"
}
```

**Example output: Failure**  

```
{
    "response": {
        "status": "fail",
        "error_message": "Recipient name cannot be None",
        "error": "InvalidParameter",
        "payload": None
        }
    },
    "id": "request123"
}
```
The `payload` property in the output is the response from the Twilio API when the message is sent. If the connector detects that the input data is invalid (for example, it doesn't specify a required input field), the connector returns an error and sets the value to `None`. The following are example payloads:  

```
{
    "from_number": "+19999999999",
    "messages": {
        "name": "Darla",
        "to_number": "+12345000000",
        "message_status": "undelivered"
    }
}
```

```
{
    "from_number": "+19999999999",
    "messages": {
        "name": "Darla",
        "to_number": "+12345000000",
        "message_status": "queued"
    }
}
```
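A subscriber on the `twilio/message/status` topic can use the `id` property to correlate each status message with the request that produced it. The following sketch shows one way to do that (the `pending` bookkeeping is an application-side assumption, not part of the connector):

```python
import json

def handle_status(raw_payload, pending):
    """Correlate a connector status message with a pending request id.

    Returns (id, "success", message_status) or (id, "fail", error code),
    or None if the id isn't one we're waiting on. Field names follow the
    output examples above.
    """
    message = json.loads(raw_payload)
    request_id = message.get("id")
    if request_id not in pending:
        return None
    pending.discard(request_id)
    response = message.get("response", {})
    if response.get("status") == "success":
        messages = response.get("payload", {}).get("messages", {})
        return (request_id, "success", messages.get("message_status"))
    return (request_id, "fail", response.get("error"))
```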

## Usage Example
<a name="twilio-notifications-connector-usage"></a>

<a name="connectors-setup-intro"></a>Use the following high-level steps to set up an example Python 3.7 Lambda function that you can use to try out the connector.

**Note**  <a name="connectors-setup-get-started-topics"></a>
The [Getting started with Greengrass connectors (console)](connectors-console.md) and [Getting started with Greengrass connectors (CLI)](connectors-cli.md) topics contain end-to-end steps that show how to set up, deploy, and test the Twilio Notifications connector.

1. Make sure you meet the [requirements](#twilio-notifications-connector-req) for the connector.

1. <a name="connectors-setup-function"></a>Create and publish a Lambda function that sends input data to the connector.

   Save the [example code](#twilio-notifications-connector-usage-example) as a PY file. <a name="connectors-setup-function-sdk"></a>Download and unzip the [AWS IoT Greengrass Core SDK for Python](lambda-functions.md#lambda-sdks-core). Then, create a zip package that contains the PY file and the `greengrasssdk` folder at the root level. This zip package is the deployment package that you upload to AWS Lambda.

   <a name="connectors-setup-function-publish"></a>After you create the Python 3.7 Lambda function, publish a function version and create an alias.

1. Configure your Greengrass group.

   1. <a name="connectors-setup-gg-function"></a>Add the Lambda function by its alias (recommended). Configure the Lambda lifecycle as long-lived (or `"Pinned": true` in the CLI).

   1. <a name="connectors-setup-secret-resource"></a>Add the required secret resource and grant read access to the Lambda function.

   1. Add the connector and configure its [parameters](#twilio-notifications-connector-param).

   1. Add subscriptions that allow the connector to receive [input data](#twilio-notifications-connector-data-input) and send [output data](#twilio-notifications-connector-data-output) on supported topic filters.
      + <a name="connectors-setup-subscription-input-data"></a>Set the Lambda function as the source, the connector as the target, and use a supported input topic filter.
      + <a name="connectors-setup-subscription-output-data"></a>Set the connector as the source, AWS IoT Core as the target, and use a supported output topic filter. You use this subscription to view status messages in the AWS IoT console.

1. <a name="connectors-setup-deploy-group"></a>Deploy the group.

1. <a name="connectors-setup-test-sub"></a>In the AWS IoT console, on the **Test** page, subscribe to the output data topic to view status messages from the connector. The example Lambda function is long-lived and starts sending messages immediately after the group is deployed.

   When you're finished testing, you can set the Lambda lifecycle to on-demand (or `"Pinned": false` in the CLI) and deploy the group. This stops the function from sending messages.
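If you script the group instead of configuring it in the console, the two subscriptions described above can be expressed as entries in a subscription definition version. A sketch of that structure, in the shape used by the Greengrass V1 `CreateSubscriptionDefinition` API (the function alias and connector version ARNs are placeholders for your own values):

```python
# Sketch of the two subscriptions from the steps above. All ARNs are
# hypothetical placeholders; "cloud" is the keyword for AWS IoT Core.
function_arn = "arn:aws:lambda:region:account-id:function:TempMonitor:GG_TempMonitor"
connector_arn = "arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/5"

subscriptions = [
    {   # Lambda function -> connector (input data)
        "Id": "TriggerNotification",
        "Source": function_arn,
        "Subject": "twilio/txt",
        "Target": connector_arn,
    },
    {   # connector -> AWS IoT Core (output data)
        "Id": "NotificationStatus",
        "Source": connector_arn,
        "Subject": "twilio/message/status",
        "Target": "cloud",
    },
]
```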

### Example
<a name="twilio-notifications-connector-usage-example"></a>

The following example Lambda function sends an input message to the connector. This example triggers a text message.

```
import greengrasssdk
import json

client = greengrasssdk.client('iot-data')
TXT_INPUT_TOPIC = 'twilio/txt'
CALL_INPUT_TOPIC = 'twilio/call'

def publish_basic_message():

    txt = {
        "request": {
            "recipient" : {
                "name": "Darla",
                "phone_number": "+12345000000",
                "message": 'Hello from the edge'
            },
            "from_number" : "+19999999999"
        },
        "id" : "request123"
    }
    
    print("Message To Publish: ", txt)

    client.publish(topic=TXT_INPUT_TOPIC,
                   payload=json.dumps(txt))

publish_basic_message()

def lambda_handler(event, context):
    return
```
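The `greengrasssdk` import above only resolves in the Greengrass core runtime, so the script can't run as-is on a workstation. For a quick local check of the message shape, you can substitute a stub client (the stub is illustrative and not part of the SDK):

```python
import json

class StubIotClient:
    """Stand-in for greengrasssdk.client('iot-data'): records publishes locally."""
    def __init__(self):
        self.published = []

    def publish(self, topic, payload):
        # The real client forwards to the local MQTT broker; here we just
        # record the topic and the decoded payload for inspection.
        self.published.append((topic, json.loads(payload)))

client = StubIotClient()
txt = {
    "request": {
        "recipient": {
            "name": "Darla",
            "phone_number": "+12345000000",
            "message": "Hello from the edge"
        },
        "from_number": "+19999999999"
    },
    "id": "request123"
}
client.publish(topic="twilio/txt", payload=json.dumps(txt))
```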

## Licenses
<a name="twilio-notifications-connector-license"></a>

The Twilio Notifications connector includes the following third-party software/licensing:
+ [twilio-python](https://github.com/twilio/twilio-python)/MIT

This connector is released under the [Greengrass Core Software License Agreement](https://greengrass-release-license.s3.us-west-2.amazonaws.com/greengrass-license-v1.pdf).

## Changelog
<a name="twilio-notifications-connector-changelog"></a>

The following table describes the changes in each version of the connector.


| Version | Changes | 
| --- | --- | 
| 5 | <a name="isolation-mode-changelog"></a>Added the `IsolationMode` parameter to configure the containerization mode for the connector. | 
| 4 | <a name="upgrade-runtime-py3.7"></a>Upgraded the Lambda runtime to Python 3.7, which changes the runtime requirement. | 
| 3 | Fix to reduce excessive logging. | 
| 2 | Minor bug fixes and improvements. | 
| 1 | Initial release.  | 

<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## See also
<a name="twilio-notifications-connector-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [Getting started with Greengrass connectors (CLI)](connectors-cli.md)
+ [Twilio API Reference](https://www.twilio.com/docs/api)

# Getting started with Greengrass connectors (console)
<a name="connectors-console"></a>

This feature is available for AWS IoT Greengrass Core v1.7 and later.

This tutorial shows how to use the AWS Management Console to work with connectors.

Use connectors to accelerate your development life cycle. Connectors are prebuilt, reusable modules that can make it easier to interact with services, protocols, and resources. They can help you deploy business logic to Greengrass devices more quickly. For more information, see [Integrate with services and protocols using Greengrass connectors](connectors.md).

In this tutorial, you configure and deploy the [Twilio Notifications](twilio-notifications-connector.md) connector. The connector receives Twilio message information as input data, and then triggers a Twilio text message. The data flow is shown in the following diagram.

![\[Data flow from Lambda function to Twilio Notifications connector to Twilio.\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/images/connectors/twilio-solution.png)


After you configure the connector, you create a Lambda function and a subscription.
+ The function evaluates simulated data from a temperature sensor. It conditionally publishes the Twilio message information to an MQTT topic. This is the topic that the connector subscribes to.
+ The subscription allows the function to publish to the topic and the connector to receive data from the topic.

The Twilio Notifications connector requires a Twilio auth token to interact with the Twilio API. The token is a text type secret created in AWS Secrets Manager and referenced from a group resource. This enables AWS IoT Greengrass to create a local copy of the secret on the Greengrass core, where it is encrypted and made available to the connector. For more information, see [Deploy secrets to the AWS IoT Greengrass core](secrets.md).

The tutorial contains the following high-level steps:

1. [Create a Secrets Manager secret](#connectors-console-create-secret)

1. [Add a secret resource to a group](#connectors-console-create-resource)

1. [Add a connector to the group](#connectors-console-create-connector)

1. [Create a Lambda function deployment package](#connectors-console-create-deployment-package)

1. [Create a Lambda function](#connectors-console-create-function)

1. [Add a function to the group](#connectors-console-create-gg-function)

1. [Add subscriptions to the group](#connectors-console-create-subscription)

1. [Deploy the group](#connectors-console-create-deployment)

1. [Test the solution](#connectors-console-test-solution)

The tutorial should take about 20 minutes to complete.

## Prerequisites
<a name="connectors-console-prerequisites"></a>

To complete this tutorial, you need:
+ A Greengrass group and a Greengrass core (v1.9.3 or later). To learn how to create a Greengrass group and core, see [Getting started with AWS IoT Greengrass](gg-gs.md). The Getting Started tutorial also includes steps for installing the AWS IoT Greengrass Core software.
+ Python 3.7 installed on the AWS IoT Greengrass core device.
+  AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs).
**Note**  
This requirement includes allowing access to your Secrets Manager secrets. If you're using the default Greengrass service role, Greengrass has permission to get the values of secrets with names that start with *greengrass-*.
+ A Twilio account SID, auth token, and Twilio-enabled phone number. After you create a Twilio project, these values are available on the project dashboard.
**Note**  
You can use a Twilio trial account. If you're using a trial account, you must add non-Twilio recipient phone numbers to a list of verified phone numbers. For more information, see [How to Work with your Free Twilio Trial Account](https://www.twilio.com/docs/usage/tutorials/how-to-use-your-free-trial-account).

## Step 1: Create a Secrets Manager secret
<a name="connectors-console-create-secret"></a>

In this step, you use the AWS Secrets Manager console to create a text type secret for your Twilio auth token.

1. <a name="create-secret-step-signin"></a>Sign in to the [AWS Secrets Manager console](https://console.aws.amazon.com/secretsmanager/).
**Note**  
For more information about this process, see [Step 1: Create and store your secret in AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html) in the *AWS Secrets Manager User Guide*.

1. <a name="create-secret-step-create"></a>Choose **Store a new secret**.

1. <a name="create-secret-step-othertype"></a>Under **Choose secret type**, choose **Other type of secret**.

1. Under **Specify the key/value pairs to be stored for this secret**, on the **Plaintext** tab, enter your Twilio auth token. Remove all of the JSON formatting and enter only the token value.

1. <a name="create-secret-step-encryption"></a>Keep **aws/secretsmanager** selected for the encryption key, and then choose **Next**.
**Note**  
You aren't charged by AWS KMS if you use the default AWS managed key that Secrets Manager creates in your account.

1. For **Secret name**, enter **greengrass-TwilioAuthToken**, and then choose **Next**.
**Note**  
By default, the Greengrass service role allows AWS IoT Greengrass to get the value of secrets with names that start with *greengrass-*. For more information, see [secrets requirements](secrets.md#secrets-reqs).

1. <a name="create-secret-step-rotation"></a>This tutorial doesn't require rotation, so choose **Disable automatic rotation**, and then choose **Next**.

1. <a name="create-secret-step-review"></a>On the **Review** page, review your settings, and then choose **Store**.

   Next, you create a secret resource in your Greengrass group that references the secret.

## Step 2: Add a secret resource to a Greengrass group
<a name="connectors-console-create-resource"></a>

In this step, you add a *secret resource* to the Greengrass group. This resource is a reference to the secret that you created in the previous step.

1. <a name="console-gg-groups"></a>In the AWS IoT console navigation pane, under **Manage**, expand **Greengrass devices**, and then choose **Groups (V1)**.

1. <a name="create-secret-resource-step-choosegroup"></a>Choose the group that you want to add the secret resource to.

1. <a name="create-secret-resource-step-secretstab"></a>On the group configuration page, choose the **Resources** tab, and then scroll down to the **Secrets** section. The **Secrets** section displays the secret resources that belong to the group. You can add, edit, and remove secret resources from this section.
**Note**  
Alternatively, the console allows you to create a secret and secret resource when you configure a connector or Lambda function. You can do this from the connector's **Configure parameters** page or the Lambda function's **Resources** page.

1. <a name="create-secret-resource-step-addsecretresource"></a>Choose **Add** under the **Secrets** section.

1. On the **Add a secret resource** page, enter **MyTwilioAuthToken** for the **Resource name**.

1. For the **Secret**, choose **greengrass-TwilioAuthToken**.

1. <a name="create-secret-resource-step-selectlabels"></a>In the **Select labels (Optional)** section, the AWSCURRENT staging label represents the latest version of the secret. This label is always included in a secret resource.
**Note**  
This tutorial requires the AWSCURRENT label only. You can optionally include labels that are required by your Lambda function or connector.

1. Choose **Add resource**.

## Step 3: Add a connector to the Greengrass group
<a name="connectors-console-create-connector"></a>

In this step, you configure parameters for the [Twilio Notifications connector](twilio-notifications-connector.md) and add it to the group.

1. On the group configuration page, choose **Connectors**, and then choose **Add a connector**.

1. On the **Add connector** page, choose **Twilio Notifications**.

1. Choose the version.

1. In the **Configuration** section:
   + For **Twilio auth token resource**, enter the resource that you created in the previous step.
**Note**  
When you enter the resource, the **ARN of Twilio auth token secret** property is populated for you.
   + For **Default from phone number**, enter your Twilio-enabled phone number.
   + For **Twilio account SID**, enter your Twilio account SID.

1. Choose **Add resource**.

## Step 4: Create a Lambda function deployment package
<a name="connectors-console-create-deployment-package"></a>

To create a Lambda function, you must first create a Lambda function *deployment package* that contains the function code and dependencies. Greengrass Lambda functions require the [AWS IoT Greengrass Core SDK](lambda-functions.md#lambda-sdks-core) for tasks such as communicating with MQTT messages in the core environment and accessing local secrets. This tutorial creates a Python function, so you use the Python version of the SDK in the deployment package.

1. <a name="download-ggc-sdk"></a> From the [AWS IoT Greengrass Core SDK](what-is-gg.md#gg-core-sdk-download) downloads page, download the AWS IoT Greengrass Core SDK for Python to your computer.

1. <a name="unzip-ggc-sdk"></a>Unzip the downloaded package to get the SDK. The SDK is the `greengrasssdk` folder.

1. Save the following Python code function in a local file named `temp_monitor.py`.

   ```
   import greengrasssdk
   import json
   import random
   
   client = greengrasssdk.client('iot-data')
   
   # publish to the Twilio Notifications connector through the twilio/txt topic
   def function_handler(event, context):
       temp = event['temperature']
       
       # check the temperature
       # if greater than 30C, send a notification
       if temp > 30:
           data = build_request(event)
           client.publish(topic='twilio/txt', payload=json.dumps(data))
           print('published:' + str(data))
           
       print('temperature:' + str(temp))
       return
   
   # build the Twilio request from the input data
   def build_request(event):
       to_name = event['to_name']
       to_number = event['to_number']
       temp_report = 'temperature:' + str(event['temperature'])
   
       return {
           "request": {
               "recipient": {
                   "name": to_name,
                   "phone_number": to_number,
                   "message": temp_report
               }
           },
           "id": "request_" + str(random.randint(1,101))
       }
   ```

1. Zip the following items into a file named `temp_monitor_python.zip`. When creating the ZIP file, include only the code and dependencies, not the containing folder.
   + **temp\_monitor.py**. App logic.
   + **greengrasssdk**. Required library for Python Greengrass Lambda functions that publish MQTT messages.

   This is your Lambda function deployment package.

Now, create a Lambda function that uses the deployment package.
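Before you upload the package, you can check the payload that `build_request` produces without deploying anything, because it is a pure function. A minimal sketch (the copy below mirrors `build_request` from `temp_monitor.py`; only the random id suffix varies between runs):

```python
import random

# Mirrors build_request from temp_monitor.py above.
def build_request(event):
    return {
        "request": {
            "recipient": {
                "name": event['to_name'],
                "phone_number": event['to_number'],
                "message": 'temperature:' + str(event['temperature'])
            }
        },
        "id": "request_" + str(random.randint(1, 101))
    }

# A simulated sensor event like the one the deployed function receives.
sample_event = {"to_name": "Darla", "to_number": "+12345000000", "temperature": 35}
payload = build_request(sample_event)
```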

## Step 5: Create a Lambda function in the AWS Lambda console
<a name="connectors-console-create-function"></a>

In this step, you use the AWS Lambda console to create a Lambda function and configure it to use your deployment package. Then, you publish a function version and create an alias.

1. First, create the Lambda function.

   1. <a name="lambda-console-open"></a>In the AWS Management Console, choose **Services**, and open the AWS Lambda console.

   1. <a name="lambda-console-create-function"></a>Choose **Create function** and then choose **Author from scratch**.

   1. In the **Basic information** section, use the following values:
      + For **Function name**, enter **TempMonitor**.
      + For **Runtime**, choose **Python 3.7**.
      + For **Permissions**, keep the default setting. This creates an execution role that grants basic Lambda permissions. This role isn't used by AWS IoT Greengrass.

   1. <a name="lambda-console-save-function"></a>At the bottom of the page, choose **Create function**.

1. Next, register the handler and upload your Lambda function deployment package.

   1. <a name="lambda-console-upload"></a>On the **Code** tab, under **Code source**, choose **Upload from**. From the dropdown, choose **.zip file**.  
![\[The Upload from dropdown with .zip file highlighted.\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/images/lra-console/upload-deployment-package.png)

   1. Choose **Upload**, and then choose your `temp_monitor_python.zip` deployment package. Then, choose **Save**.

   1. <a name="lambda-console-runtime-settings-para"></a>On the **Code** tab for the function, under **Runtime settings**, choose **Edit**, and then enter the following values.
      + For **Runtime**, choose **Python 3.7**.
      + For **Handler**, enter **temp\_monitor.function\_handler**

   1. <a name="lambda-console-save-config"></a>Choose **Save**.
**Note**  
The **Test** button on the AWS Lambda console doesn't work with this function. The AWS IoT Greengrass Core SDK doesn't contain modules that are required to run your Greengrass Lambda functions independently in the AWS Lambda console. These modules (for example, `greengrass_common`) are supplied to the functions after they are deployed to your Greengrass core.

1. Now, publish the first version of your Lambda function and create an [alias for the version](https://docs.aws.amazon.com/lambda/latest/dg/versioning-aliases.html).
**Note**  
Greengrass groups can reference a Lambda function by alias (recommended) or by version. Using an alias makes it easier to manage code updates because you don't have to change your subscription table or group definition when the function code is updated. Instead, you just point the alias to the new function version.

   1. <a name="shared-publish-function-version"></a>From the **Actions** menu, choose **Publish new version**.

   1. <a name="shared-publish-function-version-description"></a>For **Version description**, enter **First version**, and then choose **Publish**.

   1. On the **TempMonitor: 1** configuration page, from the **Actions** menu, choose **Create alias**.

   1. On the **Create a new alias** page, use the following values:
      + For **Name**, enter **GG\_TempMonitor**.
      + For **Version**, choose **1**.
**Note**  
AWS IoT Greengrass doesn't support Lambda aliases for **$LATEST** versions.

   1. Choose **Create**.

Now you're ready to add the Lambda function to your Greengrass group.

## Step 6: Add a Lambda function to the Greengrass group
<a name="connectors-console-create-gg-function"></a>

In this step, you add the Lambda function to the group and then configure its lifecycle and environment variables. For more information, see [Controlling execution of Greengrass Lambda functions by using group-specific configuration](lambda-group-config.md).

1. <a name="choose-add-lambda"></a>On the group configuration page, choose the **Lambda functions** tab.

1. Under **My Lambda functions**, choose **Add**.

1. On the **Add Lambda function** page, choose **TempMonitor** for your Lambda function.

1. For **Lambda function version**, choose **Alias: GG\_TempMonitor**.

1. Choose **Add Lambda function**.

## Step 7: Add subscriptions to the Greengrass group
<a name="connectors-console-create-subscription"></a>

<a name="connectors-how-to-add-subscriptions-p1"></a>In this step, you add a subscription that enables the Lambda function to send input data to the connector. The connector defines the MQTT topics that it subscribes to, so this subscription uses one of the topics. This is the same topic that the example function publishes to.

<a name="connectors-how-to-add-subscriptions-p2"></a>For this tutorial, you also create subscriptions that allow the function to receive simulated temperature readings from AWS IoT and allow AWS IoT to receive status information from the connector.
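The **Topic filter** values in this step are exact MQTT topics, but topic filters in general may also use the `+` (single-level) and `#` (multi-level) wildcards. As background, here is a minimal sketch of filter matching; it is not part of the tutorial and is simplified from the full MQTT rules:

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Check whether an MQTT topic matches a topic filter.

    '+' matches exactly one topic level; '#' matches all
    remaining levels.
    """
    filter_levels = topic_filter.split('/')
    topic_levels = topic.split('/')
    for i, level in enumerate(filter_levels):
        if level == '#':
            return True  # matches everything from this level on
        if i >= len(topic_levels):
            return False
        if level != '+' and level != topic_levels[i]:
            return False
    return len(filter_levels) == len(topic_levels)

print(topic_matches('twilio/txt', 'twilio/txt'))             # exact match
print(topic_matches('twilio/message/status', 'twilio/txt'))  # no match
print(topic_matches('twilio/#', 'twilio/message/status'))    # wildcard match
```

Because the subscriptions in this tutorial use exact topics, a message is delivered only when the published topic matches the filter level for level.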

1. <a name="shared-subscriptions-addsubscription"></a>On the group configuration page, choose the **Subscriptions** tab, and then choose **Add Subscription**.

1. On the **Create a subscription** page, configure the source and target, as follows:

   1. For **Source type**, choose **Lambda function**, and then choose **TempMonitor**.

   1. For **Target type**, choose **Connector**, and then choose **Twilio Notifications**.

1. For the **Topic filter**, choose **twilio/txt**.

1. Choose **Create subscription**.

1. Repeat steps 1 - 4 to create a subscription that allows AWS IoT to publish messages to the function.

   1. For **Source type**, choose **Service**, and then choose **IoT Cloud**.

   1. For **Select a target**, choose **Lambda function**, and then choose **TempMonitor**.

   1. For **Topic filter**, enter **temperature/input**.

1. Repeat steps 1 - 4 to create a subscription that allows the connector to publish messages to AWS IoT.

   1. For **Source type**, choose **Connector**, and then choose **Twilio Notifications**.

   1. For **Target type**, choose **Service**, and then choose **IoT Cloud**.

   1. For **Topic filter**, **twilio/message/status** is entered for you. This is the predefined topic that the connector publishes to.

## Step 8: Deploy the Greengrass group
<a name="connectors-console-create-deployment"></a>

Deploy the group to the core device.

1. <a name="shared-deploy-group-checkggc"></a>Make sure that the AWS IoT Greengrass core is running. Run the following commands in your Raspberry Pi terminal, as needed.

   1. To check whether the daemon is running:

      ```
      ps aux | grep -E 'greengrass.*daemon'
      ```

      If the output contains a `root` entry for `/greengrass/ggc/packages/ggc-version/bin/daemon`, then the daemon is running.
**Note**  
The version in the path depends on the AWS IoT Greengrass Core software version that's installed on your core device.

   1. To start the daemon:

      ```
      cd /greengrass/ggc/core/
      sudo ./greengrassd start
      ```

1. <a name="shared-deploy-group-deploy"></a>On the group configuration page, choose **Deploy**.

1. <a name="shared-deploy-group-ipconfig"></a>Configure how devices discover the core:

   1. In the **Lambda functions** tab, under the **System Lambda functions** section, select **IP detector** and choose **Edit**.

   1. In the **Edit IP detector settings** dialog box, select **Automatically detect and override MQTT broker endpoints**.

   1. Choose **Save**.

      This enables devices to automatically acquire connectivity information for the core, such as IP address, DNS, and port number. Automatic detection is recommended, but AWS IoT Greengrass also supports manually specified endpoints. You're only prompted for the discovery method the first time that the group is deployed.
**Note**  
If prompted, grant permission to create the [Greengrass service role](service-role.md) and associate it with your AWS account in the current AWS Region. This role allows AWS IoT Greengrass to access your resources in AWS services.

      The **Deployments** page shows the deployment timestamp, version ID, and status. When completed, the status displayed for the deployment should be **Completed**.

      For troubleshooting help, see [Troubleshooting AWS IoT Greengrass](gg-troubleshooting.md).

**Note**  
<a name="one-conn-version"></a>A Greengrass group can contain only one version of the connector at a time. For information about upgrading a connector version, see [Upgrading connector versions](connectors.md#upgrade-connector-versions).

## Test the solution
<a name="connectors-console-test-solution"></a>

1. <a name="choose-test-page"></a>On the AWS IoT console home page, choose **Test**.

1. For **Subscribe to topic**, use the following values, and then choose **Subscribe**. The Twilio Notifications connector publishes status information to this topic.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-console.html)

1. For **Publish to topic**, use the following values, and then choose **Publish** to invoke the function.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-console.html)

   If successful, the recipient receives the text message and the console displays the `success` status from the [output data](twilio-notifications-connector.md#twilio-notifications-connector-data-output).

   Now, change the `temperature` in the input message to **29** and publish. Because this doesn't exceed the threshold of 30, the TempMonitor function doesn't trigger a Twilio message.
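That behavior follows from the function's guard, which builds a Twilio request only when `temperature` exceeds 30. The logic can be sketched standalone; the sample names and numbers below are hypothetical:

```python
import json

def build_notification(event):
    """Mirror of the tutorial function's logic: return the Twilio
    request payload when the temperature exceeds 30, else None."""
    if event['temperature'] > 30:
        return {
            "request": {
                "recipient": {
                    "name": event['to_name'],
                    "phone_number": event['to_number'],
                    "message": 'temperature:' + str(event['temperature']),
                }
            },
            "id": "request_1",  # the real function uses a random ID
        }
    return None

hot = {"to_name": "Test User", "to_number": "+19999999999", "temperature": 31}
cool = dict(hot, temperature=29)

print(json.dumps(build_notification(hot)))  # payload published to twilio/txt
print(build_notification(cool))             # None: no message sent
```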

## See also
<a name="connectors-console-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [AWS-provided Greengrass connectors](connectors-list.md)

# Getting started with Greengrass connectors (CLI)
<a name="connectors-cli"></a>

This feature is available for AWS IoT Greengrass Core v1.7 and later.

This tutorial shows how to use the AWS CLI to work with connectors.

Use connectors to accelerate your development life cycle. Connectors are prebuilt, reusable modules that can make it easier to interact with services, protocols, and resources. They can help you deploy business logic to Greengrass devices more quickly. For more information, see [Integrate with services and protocols using Greengrass connectors](connectors.md).

In this tutorial, you configure and deploy the [Twilio Notifications](twilio-notifications-connector.md) connector. The connector receives Twilio message information as input data, and then triggers a Twilio text message. The data flow is shown in the following diagram.

![\[Data flow from Lambda function to Twilio Notifications connector to Twilio.\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/images/connectors/twilio-solution.png)


After you configure the connector, you create a Lambda function and a subscription.
+ The function evaluates simulated data from a temperature sensor. It conditionally publishes the Twilio message information to an MQTT topic. This is the topic that the connector subscribes to.
+ The subscription allows the function to publish to the topic and the connector to receive data from the topic.

The Twilio Notifications connector requires a Twilio auth token to interact with the Twilio API. The token is a text type secret created in AWS Secrets Manager and referenced from a group resource. This enables AWS IoT Greengrass to create a local copy of the secret on the Greengrass core, where it is encrypted and made available to the connector. For more information, see [Deploy secrets to the AWS IoT Greengrass core](secrets.md).

The tutorial contains the following high-level steps:

1. [Create a Secrets Manager secret](#connectors-cli-create-secret)

1. [Create a resource definition and version](#connectors-cli-create-resource-definition)

1. [Create a connector definition and version](#connectors-cli-create-connector-definition)

1. [Create a Lambda function deployment package](#connectors-cli-create-deployment-package)

1. [Create a Lambda function](#connectors-cli-create-function)

1. [Create a function definition and version](#connectors-cli-create-function-definition)

1. [Create a subscription definition and version](#connectors-cli-create-subscription-definition)

1. [Create a group version](#connectors-cli-create-group-version)

1. [Create a deployment](#connectors-cli-create-deployment)

1. [Test the solution](#connectors-cli-test-solution)

The tutorial should take about 30 minutes to complete.

**Using the AWS IoT Greengrass API**

It's helpful to understand the following patterns when you work with Greengrass groups and group components (for example, the connectors, functions, and resources in the group).
+ At the top of the hierarchy, a component has a *definition* object that is a container for *version* objects. In turn, a version is a container for the connectors, functions, or other component types.
+ When you deploy to the Greengrass core, you deploy a specific group version. A group version can contain one version of each type of component. A core is required, but the others are included as needed.
+ Versions are immutable, so you must create new versions when you want to make changes. 

**Tip**  
If you receive an error when you run an AWS CLI command, add the `--debug` parameter and then rerun the command to get more information about the error.

The AWS IoT Greengrass API lets you create multiple definitions for a component type. For example, you can create a `FunctionDefinition` object every time that you create a `FunctionDefinitionVersion`, or you can add new versions to an existing definition. This flexibility allows you to customize your version management system.
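The definition-and-version relationship can be modeled abstractly as follows. This is plain Python, not the AWS SDK, and the class and field names are illustrative only: a definition is a mutable container, each version is an immutable snapshot, and "changing" a component means appending a new version.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass(frozen=True)
class FunctionDefinitionVersion:
    # Immutable snapshot: the function ARNs captured by this version.
    functions: Tuple[str, ...]

@dataclass
class FunctionDefinition:
    # Mutable container: versions are only ever appended, never edited.
    name: str
    versions: List[FunctionDefinitionVersion] = field(default_factory=list)

    def create_version(self, functions):
        version = FunctionDefinitionVersion(tuple(functions))
        self.versions.append(version)
        return version

    @property
    def latest_version(self):
        return self.versions[-1]

definition = FunctionDefinition("MyGreengrassFunctions")
definition.create_version(["alias-arn-1"])
# "Updating" the component means appending a new version; the first
# version is untouched and can still be referenced by a group version.
definition.create_version(["alias-arn-1", "alias-arn-2"])
print(len(definition.versions), definition.latest_version.functions)
```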

## Prerequisites
<a name="connectors-cli-prerequisites"></a>

To complete this tutorial, you need:
+ A Greengrass group and a Greengrass core (v1.9.3 or later). To learn how to create a Greengrass group and core, see [Getting started with AWS IoT Greengrass](gg-gs.md). The Getting Started tutorial also includes steps for installing the AWS IoT Greengrass Core software.
+ Python 3.7 installed on the AWS IoT Greengrass core device.
+  AWS IoT Greengrass must be configured to support local secrets, as described in [Secrets Requirements](secrets.md#secrets-reqs).
**Note**  
This requirement includes allowing access to your Secrets Manager secrets. If you're using the default Greengrass service role, Greengrass has permission to get the values of secrets with names that start with *greengrass-*.
+ A Twilio account SID, auth token, and Twilio-enabled phone number. After you create a Twilio project, these values are available on the project dashboard.
**Note**  
You can use a Twilio trial account. If you're using a trial account, you must add non-Twilio recipient phone numbers to a list of verified phone numbers. For more information, see [ How to Work with your Free Twilio Trial Account](https://www.twilio.com/docs/usage/tutorials/how-to-use-your-free-trial-account).
+ AWS CLI installed and configured on your computer. For more information, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) and [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) in the *AWS Command Line Interface User Guide*.

   

  The examples in this tutorial are written for Linux and other Unix-based systems. If you're using Windows, see [Specifying parameter values for the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html) to learn about differences in syntax.

  If the command contains a JSON string, the tutorial provides an example that has the JSON on a single line. On some systems, it might be easier to edit and run commands using this format.
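If you want to collapse one of the expanded JSON examples to the single-line form yourself, round-tripping it through any JSON parser works. A small sketch; the sample document here is an abbreviated version of one used later:

```python
import json

expanded = '''
{
    "Resources": [
        {
            "Id": "TwilioAuthToken",
            "Name": "MyTwilioAuthToken"
        }
    ]
}
'''

# Round-trip through the parser to collapse insignificant whitespace.
single_line = json.dumps(json.loads(expanded))
print(single_line)
```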

## Step 1: Create a Secrets Manager secret
<a name="connectors-cli-create-secret"></a>

In this step, you use the AWS Secrets Manager API to create a secret for your Twilio auth token.

1. First, create the secret.
   + Replace *twilio-auth-token* with your Twilio auth token.

   ```
   aws secretsmanager create-secret --name greengrass-TwilioAuthToken --secret-string twilio-auth-token
   ```
**Note**  
By default, the Greengrass service role allows AWS IoT Greengrass to get the value of secrets with names that start with *greengrass-*. For more information, see [secrets requirements](secrets.md#secrets-reqs).

1. Copy the `ARN` of the secret from the output. You use this to create the secret resource and to configure the Twilio Notifications connector.

## Step 2: Create a resource definition and version
<a name="connectors-cli-create-resource-definition"></a>

In this step, you use the AWS IoT Greengrass API to create a secret resource for your Secrets Manager secret.

1. Create a resource definition that includes an initial version.
   + Replace *secret-arn* with the `ARN` of the secret that you copied in the previous step.

    

------
#### [ JSON Expanded ]

   ```
   aws greengrass create-resource-definition --name MyGreengrassResources --initial-version '{
       "Resources": [
           {
               "Id": "TwilioAuthToken",
               "Name": "MyTwilioAuthToken",
               "ResourceDataContainer": {
                   "SecretsManagerSecretResourceData": {
                       "ARN": "secret-arn"
                   }
               }
           }
       ]
   }'
   ```

------
#### [ JSON Single-line ]

   ```
   aws greengrass create-resource-definition \
   --name MyGreengrassResources \
   --initial-version '{"Resources": [{"Id": "TwilioAuthToken", "Name": "MyTwilioAuthToken", "ResourceDataContainer": {"SecretsManagerSecretResourceData": {"ARN": "secret-arn"}}}]}'
   ```

------

1. Copy the `LatestVersionArn` of the resource definition from the output. You use this value to add the resource definition version to the group version that you deploy to the core.

## Step 3: Create a connector definition and version
<a name="connectors-cli-create-connector-definition"></a>

In this step, you configure parameters for the Twilio Notifications connector.

1. Create a connector definition with an initial version.
   + Replace *account-sid* with your Twilio account SID.
   + Replace *secret-arn* with the `ARN` of your Secrets Manager secret. The connector uses this to get the value of the local secret.
   + Replace *phone-number* with your Twilio-enabled phone number. Twilio uses this to initiate the text message. This can be overridden in the input message payload. Use the following format: `+19999999999`.

    

------
#### [ JSON Expanded ]

   ```
   aws greengrass create-connector-definition --name MyGreengrassConnectors --initial-version '{
       "Connectors": [
           {
               "Id": "MyTwilioNotificationsConnector",
               "ConnectorArn": "arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/4",
               "Parameters": {
                   "TWILIO_ACCOUNT_SID": "account-sid",
                   "TwilioAuthTokenSecretArn": "secret-arn",
                   "TwilioAuthTokenSecretArn-ResourceId": "TwilioAuthToken",
                   "DefaultFromPhoneNumber": "phone-number"
               }
           }
       ]
   }'
   ```

------
#### [ JSON Single-line ]

   ```
   aws greengrass create-connector-definition \
   --name MyGreengrassConnectors \
   --initial-version '{"Connectors": [{"Id": "MyTwilioNotificationsConnector", "ConnectorArn": "arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/4", "Parameters": {"TWILIO_ACCOUNT_SID": "account-sid", "TwilioAuthTokenSecretArn": "secret-arn", "TwilioAuthTokenSecretArn-ResourceId": "TwilioAuthToken", "DefaultFromPhoneNumber": "phone-number"}}]}'
   ```

------
**Note**  
`TwilioAuthToken` is the ID that you used in the previous step to create the secret resource.

1. Copy the `LatestVersionArn` of the connector definition from the output. You use this value to add the connector definition version to the group version that you deploy to the core.

## Step 4: Create a Lambda function deployment package
<a name="connectors-cli-create-deployment-package"></a>

To create a Lambda function, you must first create a Lambda function *deployment package* that contains the function code and dependencies. Greengrass Lambda functions require the [AWS IoT Greengrass Core SDK](lambda-functions.md#lambda-sdks-core) for tasks such as exchanging MQTT messages in the core environment and accessing local secrets. This tutorial creates a Python function, so you use the Python version of the SDK in the deployment package.

1. <a name="download-ggc-sdk"></a> From the [AWS IoT Greengrass Core SDK](what-is-gg.md#gg-core-sdk-download) downloads page, download the AWS IoT Greengrass Core SDK for Python to your computer.

1. <a name="unzip-ggc-sdk"></a>Unzip the downloaded package to get the SDK. The SDK is the `greengrasssdk` folder.

1. Save the following Python code function in a local file named `temp_monitor.py`.

   ```
   import greengrasssdk
   import json
   import random
   
   client = greengrasssdk.client('iot-data')
   
   # publish to the Twilio Notifications connector through the twilio/txt topic
   def function_handler(event, context):
       temp = event['temperature']
       
       # check the temperature
       # if greater than 30C, send a notification
       if temp > 30:
           data = build_request(event)
           client.publish(topic='twilio/txt', payload=json.dumps(data))
           print('published:' + str(data))
           
       print('temperature:' + str(temp))
       return
   
   # build the Twilio request from the input data
   def build_request(event):
       to_name = event['to_name']
       to_number = event['to_number']
       temp_report = 'temperature:' + str(event['temperature'])
   
       return {
           "request": {
               "recipient": {
                   "name": to_name,
                   "phone_number": to_number,
                   "message": temp_report
               }
           },
           "id": "request_" + str(random.randint(1,101))
       }
   ```

1. Zip the following items into a file named `temp_monitor_python.zip`. When creating the ZIP file, include only the code and dependencies, not the containing folder.
   + **temp\_monitor.py**. App logic.
   + **greengrasssdk**. Required library for Python Greengrass Lambda functions that publish MQTT messages.

   This is your Lambda function deployment package.
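If you prefer to script this packaging step, the same archive layout can be produced with Python's `zipfile` module. This is a convenience sketch, not part of the tutorial; it assumes `temp_monitor.py` and the `greengrasssdk` folder sit in the current directory:

```python
import os
import zipfile

def make_deployment_package(zip_name, py_file, sdk_dir):
    """Zip the function code and the SDK folder with no containing
    folder, matching the layout the deployment package requires."""
    with zipfile.ZipFile(zip_name, 'w', zipfile.ZIP_DEFLATED) as zf:
        zf.write(py_file, arcname=os.path.basename(py_file))
        for root, _dirs, files in os.walk(sdk_dir):
            for name in files:
                path = os.path.join(root, name)
                # Keep paths relative to the parent of sdk_dir so the
                # archive contains greengrasssdk/... at the top level.
                arcname = os.path.relpath(path, os.path.dirname(sdk_dir) or '.')
                zf.write(path, arcname=arcname)

# Example (run from the directory that contains the two items):
# make_deployment_package('temp_monitor_python.zip',
#                         'temp_monitor.py', 'greengrasssdk')
```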

## Step 5: Create a Lambda function
<a name="connectors-cli-create-function"></a>

Now, create a Lambda function that uses the deployment package.

1. <a name="cli-create-empty-lambda-role"></a>Create an IAM role so you can pass in the role ARN when you create the function.

------
#### [ JSON Expanded ]

   ```
   aws iam create-role --role-name Lambda_empty --assume-role-policy '{
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "lambda.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }'
   ```

------
#### [ JSON Single-line ]

   ```
   aws iam create-role --role-name Lambda_empty --assume-role-policy '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
   ```

------
**Note**  
AWS IoT Greengrass doesn't use this role because permissions for your Greengrass Lambda functions are specified in the Greengrass group role. For this tutorial, you create an empty role.

1. <a name="cli-copy-lambda-role-arn"></a>Copy the `Arn` from the output.

1. Use the AWS Lambda API to create the TempMonitor function. The following command assumes that the zip file is in the current directory.
   + Replace *role-arn* with the `Arn` that you copied.

   ```
   aws lambda create-function \
   --function-name TempMonitor \
   --zip-file fileb://temp_monitor_python.zip \
   --role role-arn \
   --handler temp_monitor.function_handler \
   --runtime python3.7
   ```

1. Publish a version of the function.

   ```
   aws lambda publish-version --function-name TempMonitor --description 'First version'
   ```

1. Create an alias for the published version.

   Greengrass groups can reference a Lambda function by alias (recommended) or by version. Using an alias makes it easier to manage code updates because you don't have to change your subscription table or group definition when the function code is updated. Instead, you just point the alias to the new function version.
**Note**  
AWS IoT Greengrass doesn't support Lambda aliases for **$LATEST** versions.

   ```
   aws lambda create-alias --function-name TempMonitor --name GG_TempMonitor --function-version 1
   ```

1. Copy the `AliasArn` from the output. You use this value when you configure the function for AWS IoT Greengrass and when you create a subscription.

Now you're ready to configure the function for AWS IoT Greengrass.

## Step 6: Create a function definition and version
<a name="connectors-cli-create-function-definition"></a>

To use a Lambda function on an AWS IoT Greengrass core, you create a function definition version that references the Lambda function by alias and defines the group-level configuration. For more information, see [Controlling execution of Greengrass Lambda functions by using group-specific configuration](lambda-group-config.md).

1. Create a function definition that includes an initial version.
   + Replace *alias-arn* with the `AliasArn` that you copied when you created the alias.

    

------
#### [ JSON Expanded ]

   ```
   aws greengrass create-function-definition --name MyGreengrassFunctions --initial-version '{
       "Functions": [
           {
               "Id": "TempMonitorFunction",
               "FunctionArn": "alias-arn",
               "FunctionConfiguration": {
                   "Executable": "temp_monitor.function_handler",
                   "MemorySize": 16000,
                   "Timeout": 5
               }
           }
       ]
   }'
   ```

------
#### [ JSON Single-line ]

   ```
   aws greengrass create-function-definition \
   --name MyGreengrassFunctions \
   --initial-version '{"Functions": [{"Id": "TempMonitorFunction", "FunctionArn": "alias-arn", "FunctionConfiguration": {"Executable": "temp_monitor.function_handler", "MemorySize": 16000,"Timeout": 5}}]}'
   ```

------

1. Copy the `LatestVersionArn` from the output. You use this value to add the function definition version to the group version that you deploy to the core.

1. Copy the `Id` from the output. You use this value later when you update the function.

## Step 7: Create a subscription definition and version
<a name="connectors-cli-create-subscription-definition"></a>

<a name="connectors-how-to-add-subscriptions-p1"></a>In this step, you add a subscription that enables the Lambda function to send input data to the connector. The connector defines the MQTT topics that it subscribes to, so this subscription uses one of the topics. This is the same topic that the example function publishes to.

<a name="connectors-how-to-add-subscriptions-p2"></a>For this tutorial, you also create subscriptions that allow the function to receive simulated temperature readings from AWS IoT and allow AWS IoT to receive status information from the connector.
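Conceptually, the three subscriptions form a routing table: a message is delivered only if a row matches its source and topic. A minimal model of that behavior, in plain Python with illustrative short names in place of the real ARNs:

```python
subscriptions = [
    # (source, topic, target), one row per subscription in this step
    ("TempMonitor",          "twilio/txt",            "TwilioNotifications"),
    ("cloud",                "temperature/input",     "TempMonitor"),
    ("TwilioNotifications",  "twilio/message/status", "cloud"),
]

def allowed_targets(source, topic):
    """Return the targets that receive a message published by `source`
    on `topic`; an empty list means the message is dropped."""
    return [t for (s, subj, t) in subscriptions if s == source and subj == topic]

print(allowed_targets("TempMonitor", "twilio/txt"))       # routed to the connector
print(allowed_targets("TempMonitor", "some/other/topic")) # no subscription: dropped
```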

1. Create a subscription definition that contains an initial version that includes the subscriptions.
   + Replace *alias-arn* with the `AliasArn` that you copied when you created the alias for the function. Use this ARN for both subscriptions that use it.

    

------
#### [ JSON Expanded ]

   ```
   aws greengrass create-subscription-definition --initial-version '{
       "Subscriptions": [
           {
               "Id": "TriggerNotification",
               "Source": "alias-arn",
               "Subject": "twilio/txt",
               "Target": "arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/4"
           },        
           {
               "Id": "TemperatureInput",
               "Source": "cloud",
               "Subject": "temperature/input",
               "Target": "alias-arn"
           },
           {
               "Id": "OutputStatus",
               "Source": "arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/4",
               "Subject": "twilio/message/status",
               "Target": "cloud"
           }
       ]
   }'
   ```

------
#### [ JSON Single-line ]

   ```
   aws greengrass create-subscription-definition \
   --initial-version '{"Subscriptions": [{"Id": "TriggerNotification", "Source": "alias-arn", "Subject": "twilio/txt", "Target": "arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/4"},{"Id": "TemperatureInput", "Source": "cloud", "Subject": "temperature/input", "Target": "alias-arn"},{"Id": "OutputStatus", "Source": "arn:aws:greengrass:region::/connectors/TwilioNotifications/versions/4", "Subject": "twilio/message/status", "Target": "cloud"}]}'
   ```

------

1. Copy the `LatestVersionArn` from the output. You use this value to add the subscription definition version to the group version that you deploy to the core.

## Step 8: Create a group version
<a name="connectors-cli-create-group-version"></a>

Now, you're ready to create a group version that contains all of the items that you want to deploy. You do this by creating a group version that references the target version of each component type.

First, get the group ID and the ARN of the core definition version. These values are required to create the group version.

1. Get the ID of the group and latest group version:

   1. <a name="get-group-id-latestversion"></a>Get the IDs of the target Greengrass group and group version. This procedure assumes that this is the latest group and group version. The following query returns the most recently created group.

      ```
      aws greengrass list-groups --query "reverse(sort_by(Groups, &CreationTimestamp))[0]"
      ```

      Or, you can query by name. Group names are not required to be unique, so multiple groups might be returned.

      ```
      aws greengrass list-groups --query "Groups[?Name=='MyGroup']"
      ```
**Note**  
<a name="find-group-ids-console"></a>You can also find these values in the AWS IoT console. The group ID is displayed on the group's **Settings** page. Group version IDs are displayed on the group's **Deployments** tab.

   1. <a name="copy-target-group-id"></a>Copy the `Id` of the target group from the output. You use this to get the core definition version and when you deploy the group.

   1. <a name="copy-latest-group-version-id"></a>Copy the `LatestVersion` from the output, which is the ID of the last version added to the group. You use this to get the core definition version.

1. Get the ARN of the core definition version:

   1. Get the group version. For this step, we assume that the latest group version includes a core definition version.
      + Replace *group-id* with the `Id` that you copied for the group.
      + Replace *group-version-id* with the `LatestVersion` that you copied for the group.

      ```
      aws greengrass get-group-version \
      --group-id group-id \
      --group-version-id group-version-id
      ```

   1. Copy the `CoreDefinitionVersionArn` from the output.

1. Create a group version.
   + Replace *group-id* with the `Id` that you copied for the group.
   + Replace *core-definition-version-arn* with the `CoreDefinitionVersionArn` that you copied for the core definition version.
   + Replace *resource-definition-version-arn* with the `LatestVersionArn` that you copied for the resource definition.
   + Replace *connector-definition-version-arn* with the `LatestVersionArn` that you copied for the connector definition.
   + Replace *function-definition-version-arn* with the `LatestVersionArn` that you copied for the function definition.
   + Replace *subscription-definition-version-arn* with the `LatestVersionArn` that you copied for the subscription definition.

   ```
   aws greengrass create-group-version \
   --group-id group-id \
   --core-definition-version-arn core-definition-version-arn \
   --resource-definition-version-arn resource-definition-version-arn \
   --connector-definition-version-arn connector-definition-version-arn \
   --function-definition-version-arn function-definition-version-arn \
   --subscription-definition-version-arn subscription-definition-version-arn
   ```

1. Copy the value of `Version` from the output. This is the ID of the group version. You use this value to deploy the group version.

## Step 9: Create a deployment
<a name="connectors-cli-create-deployment"></a>

Deploy the group to the core device.

1. <a name="check-gg-daemon-is-running"></a>In a core device terminal, make sure that the AWS IoT Greengrass daemon is running.

   1. To check whether the daemon is running:

      ```
      ps aux | grep -E 'greengrass.*daemon'
      ```

      If the output contains a `root` entry for `/greengrass/ggc/packages/1.11.6/bin/daemon`, then the daemon is running.

   1. To start the daemon:

      ```
      cd /greengrass/ggc/core/
      sudo ./greengrassd start
      ```

1. <a name="create-deployment"></a>Create a deployment.
   + Replace *group-id* with the `Id` that you copied for the group.
   + Replace *group-version-id* with the `Version` that you copied for the new group version.

   ```
   aws greengrass create-deployment \
   --deployment-type NewDeployment \
   --group-id group-id \
   --group-version-id group-version-id
   ```

1. <a name="copy-deployment-id"></a>Copy the `DeploymentId` from the output.

1. <a name="get-deployment-status"></a>Get the deployment status.
   + Replace *group-id* with the `Id` that you copied for the group.
   + Replace *deployment-id* with the `DeploymentId` that you copied for the deployment.

   ```
   aws greengrass get-deployment-status \
   --group-id group-id \
   --deployment-id deployment-id
   ```

   If the status is `Success`, the deployment was successful. For troubleshooting help, see [Troubleshooting AWS IoT Greengrass](gg-troubleshooting.md).
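   Deployments take a short while to complete, so in practice you poll until the status settles. The loop below is a sketch of that polling pattern; the `aws_status` stub stands in for the real `aws greengrass get-deployment-status` call (with canned, hypothetical responses) so the loop shape is clear. Swap the stub for the actual command, and uncomment the `sleep` when polling the live API.

   ```shell
   # Stub in place of `aws greengrass get-deployment-status ...`:
   # two in-progress polls, then success (canned, hypothetical responses).
   aws_status() {
     case "$1" in
       1|2) echo '{"DeploymentStatus":"InProgress"}' ;;
       *)   echo '{"DeploymentStatus":"Success"}' ;;
     esac
   }

   attempt=1
   while true; do
     STATUS=$(aws_status "$attempt" | python3 -c 'import json,sys; print(json.load(sys.stdin)["DeploymentStatus"])')
     # Stop once the deployment reaches a terminal state.
     [ "$STATUS" = "Success" ] || [ "$STATUS" = "Failure" ] && break
     attempt=$((attempt + 1))
     # sleep 5   # pause between polls against the real API
   done
   echo "$STATUS"
   ```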

## Test the solution
<a name="connectors-cli-test-solution"></a>

1. <a name="choose-test-page"></a>On the AWS IoT console home page, choose **Test**.

1. For **Subscribe to topic**, use the following values, and then choose **Subscribe**. The Twilio Notifications connector publishes status information to this topic.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-cli.html)

1. For **Publish to topic**, use the following values, and then choose **Publish** to invoke the function.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/greengrass/v1/developerguide/connectors-cli.html)

   If successful, the recipient receives the text message and the console displays the `success` status from the [output data](twilio-notifications-connector.md#twilio-notifications-connector-data-output).

   Now, change the `temperature` in the input message to **29** and publish. Because this is less than 30, the TempMonitor function doesn't trigger a Twilio message.
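   The trigger behavior described above can be sketched as a simple threshold check (illustrative only; the tutorial's actual TempMonitor Lambda function implements this logic in Python and forwards qualifying messages to the Twilio Notifications connector):

   ```shell
   # Illustrative check mirroring the TempMonitor threshold: temperatures of
   # 30 or above trigger a Twilio notification, lower values do not.
   check_temperature() {
     if [ "$1" -ge 30 ]; then
       echo "notify"
     else
       echo "skip"
     fi
   }

   check_temperature 31
   check_temperature 29
   ```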

## See also
<a name="connectors-cli-see-also"></a>
+ [Integrate with services and protocols using Greengrass connectors](connectors.md)
+ [AWS-provided Greengrass connectors](connectors-list.md)
+ [Getting started with Greengrass connectors (console)](connectors-console.md)
+ [AWS Secrets Manager commands](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager) in the *AWS CLI Command Reference*
+ <a name="see-also-iam-cli"></a>[AWS Identity and Access Management (IAM) commands](https://docs.aws.amazon.com/cli/latest/reference/iam) in the *AWS CLI Command Reference*
+ <a name="see-also-lambda-cli"></a>[AWS Lambda commands](https://docs.aws.amazon.com/cli/latest/reference/lambda) in the *AWS CLI Command Reference*
+ <a name="see-also-gg-cli"></a>[AWS IoT Greengrass commands](https://docs.aws.amazon.com/cli/latest/reference/greengrass/index.html) in the *AWS CLI Command Reference*