

# Perform machine learning inference
<a name="perform-machine-learning-inference"></a>

With AWS IoT Greengrass, you can perform machine learning (ML) inference on your edge devices on locally generated data using cloud-trained models. You benefit from the low latency and cost savings of running local inference, yet still take advantage of cloud computing power for training models and complex processing.

AWS IoT Greengrass makes the steps required to perform inference more efficient. You can train your inference models anywhere and deploy them locally as *machine learning components*. For example, you can build and train deep-learning models in [Amazon SageMaker AI](https://console.aws.amazon.com/sagemaker). Then, you can store these models in an [Amazon S3](https://console.aws.amazon.com/s3) bucket, so you can use these models as artifacts in your components to perform inference on your core devices.

**Topics**
+ [How AWS IoT Greengrass ML inference works](#how-ml-inference-works)
+ [What's different in AWS IoT Greengrass Version 2?](#ml-differences)
+ [Requirements](#ml-requirements)
+ [Supported model sources](#ml-model-sources)
+ [Supported machine learning runtimes](#ml-runtime-libraries)
+ [AWS-provided machine learning components](#ml-components)
+ [Use Amazon SageMaker AI Edge Manager on Greengrass core devices](use-sagemaker-edge-manager.md)
+ [Customize your machine learning components](ml-customization.md)
+ [Troubleshooting machine learning inference](ml-troubleshooting.md)

## How AWS IoT Greengrass ML inference works
<a name="how-ml-inference-works"></a>

AWS provides [machine learning components](#ml-components) that you can use to create one-step deployments to perform machine learning inference on your device. You can also use these components as templates to create custom components to meet your specific requirements.<a name="ml-component-types"></a>

AWS provides the following categories of machine learning components:
+ **Model component**—Contains machine learning models as Greengrass artifacts.
+ **Runtime component**—Contains the script that installs the machine learning framework and its dependencies on the Greengrass core device.
+ **Inference component**—Contains the inference code and includes component dependencies to install the machine learning framework and download pre-trained machine learning models.

Each deployment that you create to perform machine learning inference consists of at least one component that runs your inference application, installs the machine learning framework, and downloads your machine learning models. To perform sample inference with AWS-provided components, you deploy an inference component to your core device, which automatically includes the corresponding model and runtime components as dependencies. To customize your deployments, you can plug in or swap out the sample model components with custom model components, or you can use the component recipes for the AWS-provided components as templates to create your own custom inference, model, and runtime components. 

To perform machine learning inference by using custom components:

1. Create a model component. This component contains the machine learning models that you want to use to perform inference. AWS provides sample pre-trained DLR and TensorFlow Lite models. To use a custom model, create your own model component.

1. Create a runtime component. This component contains the scripts required to install the machine learning runtime for your models. AWS provides sample runtime components for [Deep Learning Runtime](https://github.com/neo-ai/neo-ai-dlr) (DLR) and [TensorFlow Lite](https://www.tensorflow.org/lite/guide/python). To use other runtimes with your custom models and inference code, create your own runtime components.

1. Create an inference component. This component contains your inference code, and includes your model and runtime components as dependencies. AWS provides sample inference components for image classification and object detection using DLR and TensorFlow Lite. To perform other types of inference, or to use custom models and runtimes, create your own inference component.

1. Deploy the inference component. When you deploy this component, AWS IoT Greengrass also automatically deploys the model and runtime component dependencies.
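The dependency relationship in these steps is expressed in the inference component's recipe. As an illustration only, with hypothetical component names and versions, the relevant section of a custom inference component recipe might look like the following:

```
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.example.MyInferenceComponent",
  "ComponentVersion": "1.0.0",
  "ComponentDependencies": {
    "com.example.MyModelComponent": {
      "VersionRequirement": ">=1.0.0 <2.0.0",
      "DependencyType": "HARD"
    },
    "com.example.MyRuntimeComponent": {
      "VersionRequirement": ">=1.0.0 <2.0.0",
      "DependencyType": "HARD"
    }
  }
}
```

When you deploy `com.example.MyInferenceComponent`, AWS IoT Greengrass resolves and deploys compatible versions of the model and runtime dependencies automatically.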

To get started with AWS-provided components, see [Tutorial: Perform sample image classification inference using TensorFlow Lite](ml-tutorial-image-classification.md).

For information about creating custom machine learning components, see [Customize your machine learning components](ml-customization.md).

## What's different in AWS IoT Greengrass Version 2?
<a name="ml-differences"></a>

AWS IoT Greengrass consolidates functional units for machine learning—such as models, runtimes, and inference code—into components that enable you to use a one-step process to install the machine learning runtime, download your trained models, and perform inference on your device.

By using the AWS-provided machine learning components, you have the flexibility to start performing machine learning inference with sample inference code and pre-trained models. You can plug in custom model components to use your own custom-trained models with the inference and runtime components that AWS provides. For a completely customized machine learning solution, you can use the public components as templates to create custom components and use any runtime, model, or inference type that you want.

## Requirements
<a name="ml-requirements"></a>

To create and use machine learning components, you must have the following:
+ A Greengrass core device. If you don't have one, see [Tutorial: Getting started with AWS IoT Greengrass V2](getting-started.md).
+ A minimum of 500 MB of local storage space to use the AWS-provided sample machine learning components.

## Supported model sources
<a name="ml-model-sources"></a>

AWS IoT Greengrass supports using custom-trained machine learning models that are stored in Amazon S3. You can also use Amazon SageMaker AI edge packaging jobs to directly create model components for your SageMaker AI Neo-compiled models. For information about using SageMaker AI Edge Manager with AWS IoT Greengrass, see [Use Amazon SageMaker AI Edge Manager on Greengrass core devices](use-sagemaker-edge-manager.md).

The S3 buckets that contain your models must meet the following requirements:
+ They must not be encrypted using SSE-C. For buckets that use server-side encryption, AWS IoT Greengrass machine learning inference currently supports the SSE-S3 or SSE-KMS encryption options only. For more information about server-side encryption options, see [Protecting data using server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html) in the *Amazon Simple Storage Service User Guide*.
+ Their names must not include periods (`.`). For more information, see the rule about using virtual hosted-style buckets with SSL in [Rules for bucket naming](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules) in the *Amazon Simple Storage Service User Guide*.
+ <a name="sr-artifacts-req"></a>The S3 buckets that store your model sources must be in the same AWS account and AWS Region as your machine learning components.
+ AWS IoT Greengrass must have `read` permission to the model source. To enable AWS IoT Greengrass to access the S3 buckets, the [Greengrass device role](device-service-role.md) must allow the `s3:GetObject` action. For more information about the device role, see [Authorize core devices to interact with AWS services](device-service-role.md).
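For example, a [Greengrass device role](device-service-role.md) policy statement that grants read access to models in a bucket might look like the following, where `amzn-s3-demo-bucket` is a placeholder for your bucket name:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```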

## Supported machine learning runtimes
<a name="ml-runtime-libraries"></a>

AWS IoT Greengrass enables you to create custom components to use any machine learning runtime of your choice to perform machine learning inference with your custom-trained models. For information about creating custom machine learning components, see [Customize your machine learning components](ml-customization.md).

To make the process of getting started with machine learning more efficient, AWS IoT Greengrass provides sample inference, model, and runtime components that use the following machine learning runtimes: 
+  [Deep Learning Runtime](https://github.com/neo-ai/neo-ai-dlr) (DLR) v1.6.0 and v1.3.0
+  [TensorFlow Lite](https://www.tensorflow.org/lite/guide/python) v2.5.0 
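To use another runtime, a custom runtime component typically installs the framework in its recipe's install lifecycle step. The following is a sketch only; the package name is hypothetical, and this is not the recipe of the AWS-provided runtime components:

```
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      Install:
        RequiresPrivilege: true
        Script: |
          pip3 install example-runtime-package==1.0.0
```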

## AWS-provided machine learning components
<a name="ml-components"></a>

The following table lists the AWS-provided components used for machine learning. 

**Note**  <a name="component-nucleus-dependency-update-note"></a>
Several AWS-provided components depend on specific minor versions of the Greengrass nucleus. Because of this dependency, you need to update these components when you update the Greengrass nucleus to a new minor version. For information about the specific versions of the nucleus that each component depends on, see the corresponding component topic. For more information about updating the nucleus, see [Update the AWS IoT Greengrass Core software (OTA)](update-greengrass-core-v2.md).


| Component | Description | [Component type](develop-greengrass-components.md#component-types) | Supported OS | [Open source](open-source.md) | 
| --- | --- | --- | --- | --- | 
| [SageMaker AI Edge Manager](sagemaker-edge-manager-component.md) | Deploys the Amazon SageMaker AI Edge Manager agent on the Greengrass core device. | Generic | Linux, Windows | No | 
| [DLR image classification](dlr-image-classification-component.md) | Inference component that uses the DLR image classification model store and the DLR runtime component as dependencies to install DLR, download sample image classification models, and perform image classification inference on supported devices. | Generic | Linux, Windows | No | 
| [DLR object detection](dlr-object-detection-component.md) | Inference component that uses the DLR object detection model store and the DLR runtime component as dependencies to install DLR, download sample object detection models, and perform object detection inference on supported devices. | Generic | Linux, Windows | No | 
| [DLR image classification model store](dlr-image-classification-model-store-component.md) | Model component that contains sample ResNet-50 image classification models as Greengrass artifacts. | Generic | Linux, Windows | No | 
| [DLR object detection model store](dlr-object-detection-model-store-component.md) | Model component that contains sample YOLOv3 object detection models as Greengrass artifacts. | Generic | Linux, Windows | No | 
| [DLR runtime](dlr-component.md) | Runtime component that contains an installation script that is used to install DLR and its dependencies on the Greengrass core device. | Generic | Linux, Windows | No | 
| [TensorFlow Lite image classification](tensorflow-lite-image-classification-component.md) | Inference component that uses the TensorFlow Lite image classification model store and the TensorFlow Lite runtime component as dependencies to install TensorFlow Lite, download sample image classification models, and perform image classification inference on supported devices. | Generic | Linux, Windows | No | 
| [TensorFlow Lite object detection](tensorflow-lite-object-detection-component.md) | Inference component that uses the TensorFlow Lite object detection model store and the TensorFlow Lite runtime component as dependencies to install TensorFlow Lite, download sample object detection models, and perform object detection inference on supported devices. | Generic | Linux, Windows | No | 
| [TensorFlow Lite image classification model store](tensorflow-lite-image-classification-model-store-component.md) | Model component that contains a sample MobileNet v1 model as a Greengrass artifact. | Generic | Linux, Windows | No | 
| [TensorFlow Lite object detection model store](tensorflow-lite-object-detection-model-store-component.md) | Model component that contains a sample Single Shot Detection (SSD) MobileNet model as a Greengrass artifact. | Generic | Linux, Windows | No | 
| [TensorFlow Lite runtime](tensorflow-lite-component.md) | Runtime component that contains an installation script that is used to install TensorFlow Lite and its dependencies on the Greengrass core device. | Generic | Linux, Windows | No | 

# Use Amazon SageMaker AI Edge Manager on Greengrass core devices
<a name="use-sagemaker-edge-manager"></a>

**Important**  
SageMaker AI Edge Manager was discontinued on April 26th, 2024. For more information about continuing to deploy your models to edge devices, see [SageMaker AI Edge Manager end of life](https://docs.aws.amazon.com/sagemaker/latest/dg/edge-eol.html).

Amazon SageMaker AI Edge Manager is a software agent that runs on edge devices. SageMaker AI Edge Manager provides model management for edge devices so that you can package and use Amazon SageMaker AI Neo-compiled models directly on Greengrass core devices. By using SageMaker AI Edge Manager, you can also sample model input and output data from your core devices, and send that data to the AWS Cloud for monitoring and analysis. Because SageMaker AI Edge Manager uses SageMaker AI Neo to optimize your models for your target hardware, you don't need to install the DLR runtime directly on your device. On Greengrass devices, SageMaker AI Edge Manager doesn't load local AWS IoT certificates or call the AWS IoT credential provider endpoint directly. Instead, SageMaker AI Edge Manager uses the [token exchange service](token-exchange-service-component.md) to fetch temporary credentials from a TES endpoint.

This section describes how SageMaker AI Edge Manager works on Greengrass core devices.



## How SageMaker AI Edge Manager works on Greengrass devices
<a name="how-to-use-sdge-manager-with-greengrass"></a>

To deploy the SageMaker AI Edge Manager agent to your core devices, create a deployment that includes the `aws.greengrass.SageMakerEdgeManager` component. AWS IoT Greengrass manages the installation and lifecycle of the Edge Manager agent on your devices. When a new version of the agent binary is available, deploy the updated version of the `aws.greengrass.SageMakerEdgeManager` component to upgrade the version of the agent that is installed on your device. 

When you use SageMaker AI Edge Manager with AWS IoT Greengrass, your workflow includes the following high-level steps:

1. Compile models with SageMaker AI Neo.

1. Package your SageMaker AI Neo-compiled models using SageMaker AI edge packaging jobs. When you run an edge packaging job for your model, you can choose to create a model component with the packaged model as an artifact that can be deployed to your Greengrass core device. 

1. Create a custom inference component. You use this inference component to interact with the Edge Manager agent to perform inference on the core device. These operations include loading models, invoking prediction requests to run inference, and unloading models when the component shuts down.

1. Deploy the SageMaker AI Edge Manager component, the packaged model component, and the inference component to run your model on the SageMaker AI inference engine (Edge Manager agent) on your device.

For more information about creating edge packaging jobs and inference components that work with SageMaker AI Edge Manager, see [Deploy Model Package and Edge Manager Agent with AWS IoT Greengrass](https://docs.aws.amazon.com/sagemaker/latest/dg/edge-greengrass.html) in the *Amazon SageMaker AI Developer Guide*.

[Tutorial: Get started with SageMaker AI Edge Manager](get-started-with-edge-manager-on-greengrass.md) shows you how to set up and use the SageMaker AI Edge Manager agent on an existing Greengrass core device, using AWS-provided example code that you can use to create sample inference and model components.

When you use SageMaker AI Edge Manager on Greengrass core devices, you can also use the capture data feature to upload sample data to the AWS Cloud. Capture data is a SageMaker AI feature that you use to upload inference input, inference results, and additional inference data to an S3 bucket or a local directory for future analysis. For more information about using capture data with SageMaker AI Edge Manager, see [Manage Model](https://docs.aws.amazon.com/sagemaker/latest/dg/edge-manage-model.html#edge-manage-model-capturedata) in the *Amazon SageMaker AI Developer Guide*.

## Requirements
<a name="greengrass-edge-manager-agent-requirements"></a>

You must meet the following requirements to use the SageMaker AI Edge Manager agent on Greengrass core devices.<a name="sm-edge-manager-component-reqs"></a>
+ <a name="sm-req-core-device"></a>A Greengrass core device running on Amazon Linux 2, a Debian-based Linux platform (x86_64 or Armv8), or Windows (x86_64). If you don't have one, see [Tutorial: Getting started with AWS IoT Greengrass V2](getting-started.md).
+ <a name="sm-req-python"></a>[Python](https://www.python.org/downloads/) 3.6 or later, including `pip` for your version of Python, installed on your core device.
+ The [Greengrass device role](device-service-role.md) configured with the following: 
  + <a name="sm-req-iam-trust-relationship"></a>A trust relationship that allows `credentials.iot.amazonaws.com` and `sagemaker.amazonaws.com` to assume the role, as shown in the following IAM policy example.

    ```
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "credentials.iot.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        },
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "sagemaker.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    ```
  + <a name="sm-req-iam-sagemanakeredgedevicefleetpolicy"></a>The [AmazonSageMakerEdgeDeviceFleetPolicy](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/service-role/AmazonSageMakerEdgeDeviceFleetPolicy) IAM managed policy.
  + <a name="sm-req-iam-s3-putobject"></a>The `s3:PutObject` action, as shown in the following IAM policy example.

    ```
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "s3:PutObject"
          ],
          "Resource": [
            "*"
          ],
          "Effect": "Allow"
        }
      ]
    }
    ```
+ <a name="sm-req-s3-bucket"></a>An Amazon S3 bucket created in the same AWS account and AWS Region as your Greengrass core device. SageMaker AI Edge Manager requires an S3 bucket to create an edge device fleet, and to store sample data from running inference on your device. For information about creating S3 buckets, see [Getting started with Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html).
+ <a name="sm-req-edge-device-fleet"></a>A SageMaker AI edge device fleet that uses the same AWS IoT role alias as your Greengrass core device. For more information, see [Create an edge device fleet](get-started-with-edge-manager-on-greengrass.md#create-edge-device-fleet-for-greengrass).
+ <a name="sm-req-edge-device"></a>Your Greengrass core device registered as an edge device in your SageMaker AI Edge device fleet. The edge device name must match the AWS IoT thing name for your core device. For more information, see [Register your Greengrass core device](get-started-with-edge-manager-on-greengrass.md#register-greengrass-core-device-in-sme).

## Get started with SageMaker AI Edge Manager
<a name="use-sm-edge-manager"></a>

You can complete a tutorial to get started with SageMaker AI Edge Manager. The tutorial shows you how to use AWS-provided sample components on an existing core device. These sample components use the SageMaker AI Edge Manager component as a dependency to deploy the Edge Manager agent, and perform inference using pre-trained models that were compiled using SageMaker AI Neo. For more information, see [Tutorial: Get started with SageMaker AI Edge Manager](get-started-with-edge-manager-on-greengrass.md).

# Customize your machine learning components
<a name="ml-customization"></a>

In AWS IoT Greengrass, you can configure sample [machine learning components](perform-machine-learning-inference.md#ml-components) to customize how you perform machine learning inference on your devices, with the inference, model, and runtime components as the building blocks. AWS IoT Greengrass also gives you the flexibility to use the sample components as templates and create your own custom components as needed. You can mix and match these modular building blocks to customize your machine learning inference components in the following ways:

**Using sample inference components**  
+ Modify the configuration of inference components when you deploy them.
+ Use a custom model with the sample inference component by replacing the sample model store component with a custom model component. Your custom model must be trained using the same runtime as the sample model.

**Using custom inference components**  
+ Use custom inference code with the sample models and runtimes by adding public model components and runtime components as dependencies of custom inference components.
+ Create and add custom model components or runtime components as dependencies of custom inference components. You must use custom components if you want to use custom inference code or a runtime for which AWS IoT Greengrass doesn't provide a sample component. 

**Topics**
+ [Modify the configuration of a public inference component](#modify-ml-component-config)
+ [Use a custom model with the sample inference component](#override-public-model-store)
+ [Create custom machine learning components](#create-private-ml-components)
+ [Create a custom inference component](#create-inference-component)

## Modify the configuration of a public inference component
<a name="modify-ml-component-config"></a>

In the [AWS IoT Greengrass console](https://console.aws.amazon.com/greengrass), the component page displays the default configuration of that component. For example, the default configuration of the TensorFlow Lite image classification component looks like the following:

```
{
  "accessControl": {
    "aws.greengrass.ipc.mqttproxy": {
      "aws.greengrass.TensorFlowLiteImageClassification:mqttproxy:1": {
        "policyDescription": "Allows access to publish via topic ml/tflite/image-classification.",
        "operations": [
          "aws.greengrass#PublishToIoTCore"
        ],
        "resources": [
          "ml/tflite/image-classification"
        ]
      }
    }
  },
  "PublishResultsOnTopic": "ml/tflite/image-classification",
  "ImageName": "cat.jpeg",
  "InferenceInterval": 3600,
  "ModelResourceKey": {
    "model": "TensorFlowLite-Mobilenet"
  }
}
```

When you deploy a public inference component, you can modify the default configuration to customize your deployment. For information about the available configuration parameters for each public inference component, see the component topic in [AWS-provided machine learning components](perform-machine-learning-inference.md#ml-components).

This section describes how to deploy a modified component from the AWS IoT Greengrass console. For information about deploying components using the AWS CLI, see [Create deployments](create-deployments.md).<a name="modify-ml-component-config-console"></a>

**To deploy a modified public inference component (console)**

1. Sign in to the [AWS IoT Greengrass console](https://console.aws.amazon.com/greengrass).

1. In the navigation menu, choose **Components**.

1. On the **Components** page, on the **Public components** tab, choose the component you want to deploy.

1. On the component page, choose **Deploy**.

1. <a name="add-deployment"></a>From **Add to deployment**, choose one of the following:

   1. To merge this component to an existing deployment on your target device, choose **Add to existing deployment**, and then select the deployment that you want to revise.

   1. To create a new deployment on your target device, choose **Create new deployment**. If you have an existing deployment on your device, choosing this option replaces the existing deployment.

1. <a name="specify-deployment-target"></a>On the **Specify target** page, do the following: 

   1. Under **Deployment** information, enter or modify the friendly name for your deployment.

   1. Under **Deployment targets**, select a target for your deployment, and choose **Next**. You cannot change the deployment target if you are revising an existing deployment.

1. On the **Select components** page, under **Public components**, verify that the inference component with your modified configuration is selected, and choose **Next**.

1. On the **Configure components** page, do the following: 

   1. Select the inference component, and choose **Configure component**.

   1. Under **Configuration update**, enter the configuration values that you want to update. For example, enter the following configuration update in the **Configuration to merge** box to change the inference interval to 15 seconds, and instruct the component to look for the image named `custom.jpg` in the `/custom-ml-inference/images/` folder. 

      ```
      {
        "InferenceInterval": "15",
        "ImageName": "custom.jpg",
        "ImageDirectory": "/custom-ml-inference/images/"
      }
      ```

      To reset a component's entire configuration to its default values, specify a single empty string `""` in the **Reset paths** box. 

   1. Choose **Confirm**, and then choose **Next**.

1. On the **Configure advanced settings** page, keep the default configuration settings, and choose **Next**.

1. On the **Review** page, choose **Deploy**.
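If you deploy with the AWS CLI instead, the same configuration update goes in the deployment's component map. The following Python sketch builds that payload; the component version shown is hypothetical, and note that the `merge` value must be a JSON document encoded as a string, not a nested object:

```
import json

# Configuration values to merge, mirroring the console example above.
merge_config = {
    "InferenceInterval": "15",
    "ImageName": "custom.jpg",
    "ImageDirectory": "/custom-ml-inference/images/",
}

# Sketch of the "components" section of an `aws greengrassv2 create-deployment`
# request. The "merge" field takes a JSON document serialized as a string.
# To reset the entire configuration to its defaults, you would instead
# supply "reset": [""].
components = {
    "aws.greengrass.TensorFlowLiteImageClassification": {
        "componentVersion": "2.1.0",  # hypothetical version
        "configurationUpdate": {
            "merge": json.dumps(merge_config)
        },
    }
}

print(json.dumps(components, indent=2))
```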

## Use a custom model with the sample inference component
<a name="override-public-model-store"></a>

If you want to use the sample inference component with your own machine learning models for a runtime for which AWS IoT Greengrass provides a sample runtime component, you must override the public model components with components that use those models as artifacts. At a high level, you complete the following steps to use a custom model with the sample inference component:

1. Create a model component that uses a custom model in an S3 bucket as an artifact. Your custom model must be trained using the same runtime as the model that you want to replace.

1. Modify the `ModelResourceKey` configuration parameter in the inference component to use the custom model. For information about updating the configuration of the inference component, see [Modify the configuration of a public inference component](#modify-ml-component-config).

When you deploy the inference component, AWS IoT Greengrass looks for the latest version of its component dependencies. It overrides the dependent public model component if a later custom version of the component exists in the same AWS account and AWS Region. 
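For example, if your custom model component stores its model artifact under a folder named `custom-model` (a hypothetical name), the configuration merge for the inference component might look like the following:

```
{
  "ModelResourceKey": {
    "model": "custom-model"
  }
}
```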

### Create a custom model component (console)
<a name="create-model-store-component-console"></a>

1. Upload your model to an S3 bucket. For information about uploading your models to an S3 bucket, see [Working with Amazon S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html) in the *Amazon Simple Storage Service User Guide*.
**Note**  <a name="s3-artifacts-note"></a>
<a name="sr-artifacts-req"></a>You must store your artifacts in S3 buckets that are in the same AWS account and AWS Region as the components. To enable AWS IoT Greengrass to access these artifacts, the [Greengrass device role](device-service-role.md) must allow the `s3:GetObject` action. For more information about the device role, see [Authorize core devices to interact with AWS services](device-service-role.md).

1. In the [AWS IoT Greengrass console](https://console.aws.amazon.com/greengrass) navigation menu, choose **Components**.

1. Retrieve the component recipe for the public model store component.

   1. On the **Components** page, on the **Public components** tab, look for and choose the public model component for which you want to create a new version. For example, `variant.DLR.ImageClassification.ModelStore`.

   1. On the component page, choose **View recipe** and copy the displayed JSON recipe.

1. On the **Components** page, on the **My components** tab, choose **Create component**.

1. On the **Create component** page, under **Component information**, select **Enter recipe as JSON** as your component source.

1. In the **Recipe** box, paste the component recipe that you previously copied.

1. <a name="override-model-recipe-config"></a>In the recipe, update the following values:
   + `ComponentVersion`: Increment the patch version of the component.

     When you create a custom component to override a public model component, update only the last (patch) part of the existing component version. For example, if the public component version is `2.1.0`, you can create a custom component with version `2.1.1`.
   + `Manifests.Artifacts.Uri`: Update each URI value to the Amazon S3 URI of the model that you want to use.
**Note**  
Do not change the name of the component.

1. Choose **Create component**.

### Create a custom model component (AWS CLI)
<a name="create-model-store-component-cli"></a>

1. Upload your model to an S3 bucket. For information about uploading your models to an S3 bucket, see [Working with Amazon S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html) in the *Amazon Simple Storage Service User Guide*.
**Note**  <a name="s3-artifacts-note"></a>
<a name="sr-artifacts-req"></a>You must store your artifacts in S3 buckets that are in the same AWS account and AWS Region as the components. To enable AWS IoT Greengrass to access these artifacts, the [Greengrass device role](device-service-role.md) must allow the `s3:GetObject` action. For more information about the device role, see [Authorize core devices to interact with AWS services](device-service-role.md).

1. Run the following command to retrieve the component recipe of the public component. This command writes the component recipe to the output file that you provide in your command. The recipe is returned as a base64-encoded string, which the command decodes into the JSON or YAML format that you request.

------
#### [ Linux, macOS, or Unix ]

   ```
   aws greengrassv2 get-component \
       --arn <arn> \
       --recipe-output-format <recipe-format> \
       --query recipe \
       --output text | base64 --decode > <recipe-file>
   ```

------
#### [ Windows Command Prompt (CMD) ]

   ```
   aws greengrassv2 get-component ^
       --arn <arn> ^
       --recipe-output-format <recipe-format> ^
       --query recipe ^
       --output text > <recipe-file>.base64
   
   certutil -decode <recipe-file>.base64 <recipe-file>
   ```

------
#### [ PowerShell ]

   ```
   aws greengrassv2 get-component `
       --arn <arn> `
       --recipe-output-format <recipe-format> `
       --query recipe `
       --output text > <recipe-file>.base64
   
   certutil -decode <recipe-file>.base64 <recipe-file>
   ```

------

1. Update the name of the recipe file to `<component-name>-<component-version>`, where *component-version* is the target version of the new component. For example, `variant.DLR.ImageClassification.ModelStore-2.1.1.yaml`.

1. <a name="override-model-recipe-config"></a>In the recipe, update the following values:
   + `ComponentVersion`: Increment the patch version of the component.

     When you create a custom component to override a public model component, update only the last (patch) part of the existing component version. For example, if the public component version is `2.1.0`, you can create a custom component with version `2.1.1`.
   + `Manifests.Artifacts.Uri`: Update each URI value to the Amazon S3 URI of the model that you want to use.
**Note**  
Do not change the name of the component.

1. Run the following command to create a new component using the recipe you retrieved and modified.

   ```
   aws greengrassv2 create-component-version \
       --inline-recipe fileb://path/to/component/recipe
   ```
**Note**  
This step creates the component in the AWS IoT Greengrass service in the AWS Cloud. You can use the Greengrass CLI to develop, test, and deploy your component locally before you upload it to the cloud. For more information, see [Develop AWS IoT Greengrass components](develop-greengrass-components.md).

For more information about creating components, see [Develop AWS IoT Greengrass components](develop-greengrass-components.md).
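The override workflow above can also be sketched programmatically. The following is a minimal sketch that bumps the patch version and swaps the artifact URIs in a recipe that you have already retrieved and decoded to JSON; `bump_patch_version` and `override_model_recipe` are illustrative helper names, not part of any AWS SDK.

```python
import json

def bump_patch_version(version: str) -> str:
    # Increment only the last segment, for example "2.1.0" -> "2.1.1".
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"

def override_model_recipe(recipe: dict, model_uri: str) -> dict:
    # Keep the component name unchanged, bump only the version, and point
    # every artifact at the custom model in your S3 bucket.
    updated = json.loads(json.dumps(recipe))  # deep copy
    updated["ComponentVersion"] = bump_patch_version(recipe["ComponentVersion"])
    for manifest in updated.get("Manifests", []):
        for artifact in manifest.get("Artifacts", []):
            artifact["URI"] = model_uri
    return updated

# Hypothetical recipe excerpt standing in for the decoded CLI output.
recipe = {
    "ComponentName": "variant.DLR.ImageClassification.ModelStore",
    "ComponentVersion": "2.1.0",
    "Manifests": [{"Artifacts": [{"URI": "s3://aws-sample/model.tar.gz"}]}],
}
new_recipe = override_model_recipe(recipe, "s3://my-bucket/my-model.tar.gz")
```

You can then write `new_recipe` to a file and pass it to `create-component-version` as shown above.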

## Create custom machine learning components
<a name="create-private-ml-components"></a>

You must create custom components if you want to use custom inference code or a runtime for which AWS IoT Greengrass doesn't provide a sample component. You can use your custom inference code with the AWS-provided sample machine learning models and runtimes, or you can develop a completely customized machine learning inference solution with your own models and runtime. If your models use a runtime for which AWS IoT Greengrass provides a sample runtime component, then you can use that runtime component, and you need to create custom components only for your inference code and the models you want to use. 

**Topics**
+ [Retrieve the recipe for a public component](#get-ml-component-recipes)
+ [Retrieve sample component artifacts](#get-ml-component-artifacts)
+ [Upload component artifacts to an S3 bucket](#upload-ml-component-artifacts)
+ [Create custom components](#create-ml-components)

### Retrieve the recipe for a public component
<a name="get-ml-component-recipes"></a>

You can use the recipe of an existing public machine learning component as a template to create a custom component. To view the component recipe for the latest version of a public component, use the console or the AWS CLI as follows:
+ **Using the console**

  1. On the **Components** page, on the **Public components** tab, look for and choose the public component.

  1. On the component page, choose **View recipe**.
+ **Using AWS CLI**

  Run the following command to retrieve the component recipe of the public variant component. This command writes the component recipe to the JSON or YAML recipe file that you provide in your command. 

------
#### [ Linux, macOS, or Unix ]

  ```
  aws greengrassv2 get-component \
      --arn <arn> \
      --recipe-output-format <recipe-format> \
      --query recipe \
      --output text | base64 --decode > <recipe-file>
  ```

------
#### [ Windows Command Prompt (CMD) ]

  ```
  aws greengrassv2 get-component ^
      --arn <arn> ^
      --recipe-output-format <recipe-format> ^
      --query recipe ^
      --output text > <recipe-file>.base64
  
  certutil -decode <recipe-file>.base64 <recipe-file>
  ```

------
#### [ PowerShell ]

  ```
  aws greengrassv2 get-component `
      --arn <arn> `
      --recipe-output-format <recipe-format> `
      --query recipe `
      --output text > <recipe-file>.base64
  
  certutil -decode <recipe-file>.base64 <recipe-file>
  ```

------

  Replace the values in your command as follows:
  + `<arn>`. The Amazon Resource Name (ARN) of the public component. 
  + `<recipe-format>`. The format in which you want to create the recipe file. Supported values are `JSON` and `YAML`.
  + `<recipe-file>`. The name of the recipe in the format `<component-name>-<component-version>`. 
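  If `base64` or `certutil` isn't available on your system, you can decode the retrieved recipe with a few lines of Python instead of piping through platform-specific tools. A minimal sketch; the encoded string and file name below are placeholders that stand in for the actual CLI output.

  ```python
  import base64
  import os
  import tempfile
  from pathlib import Path

  def decode_recipe(encoded_text: str, output_path: str) -> None:
      # The AWS CLI returns the recipe as a base64-encoded string;
      # decode it and write the JSON or YAML bytes to a file.
      Path(output_path).write_bytes(base64.b64decode(encoded_text))

  # Stand-in for the output of: aws greengrassv2 get-component ... --output text
  encoded = base64.b64encode(b'{"ComponentName": "demo"}').decode()
  out_file = os.path.join(tempfile.gettempdir(), "recipe.json")
  decode_recipe(encoded, out_file)
  ```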

### Retrieve sample component artifacts
<a name="get-ml-component-artifacts"></a>

You can use the artifacts used by the public machine learning components as templates to create your custom component artifacts, such as inference code or runtime installation scripts. 

To view the sample artifacts that are included in the public machine learning components, deploy the public inference component and then view the artifacts on your device in the `/greengrass/v2/packages/artifacts-unarchived/component-name/component-version/` folder. 

### Upload component artifacts to an S3 bucket
<a name="upload-ml-component-artifacts"></a>

Before you can create a custom component, you must upload the component artifacts to an S3 bucket and use the S3 URIs in your component recipe. For example, to use custom inference code in your inference component, upload the code to an S3 bucket. You can then use the Amazon S3 URI of your inference code as an artifact in your component. 

For information about uploading content to an S3 bucket, see [Working with Amazon S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html) in the *Amazon Simple Storage Service User Guide*.

**Note**  <a name="s3-artifacts-note"></a>
<a name="sr-artifacts-req"></a>You must store your artifacts in S3 buckets that are in the same AWS account and AWS Region as the components. To enable AWS IoT Greengrass to access these artifacts, the [Greengrass device role](device-service-role.md) must allow the `s3:GetObject` action. For more information about the device role, see [Authorize core devices to interact with AWS services](device-service-role.md).
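After you upload an artifact (for example, with `aws s3 cp`), the recipe references it by its Amazon S3 URI. The following sketch builds a recipe artifact entry from a bucket and key; `artifact_entry` is an illustrative helper, not an AWS API.

```python
def artifact_entry(bucket: str, key: str, unarchive=None) -> dict:
    # Greengrass recipes reference S3 artifacts by URI; the bucket must be
    # in the same AWS account and Region as the component.
    entry = {"URI": f"s3://{bucket}/{key}"}
    if unarchive:
        entry["Unarchive"] = unarchive  # for example, "ZIP" for zipped artifacts
    return entry

code = artifact_entry("my-bucket", "artifacts/image_classification.zip", "ZIP")
```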

### Create custom components
<a name="create-ml-components"></a>

You can use the artifacts and recipes that you retrieved to create your custom machine learning components. For an example, see [Create a custom inference component](#create-inference-component).

For detailed information about creating and deploying components to Greengrass devices, see [Develop AWS IoT Greengrass components](develop-greengrass-components.md) and [Deploy AWS IoT Greengrass components to devices](manage-deployments.md).

## Create a custom inference component
<a name="create-inference-component"></a>

This section shows you how to create a custom inference component using the DLR image classification component as a template.

**Topics**
+ [Upload your inference code to an Amazon S3 bucket](#create-inference-code)
+ [Create a recipe for your inference component](#create-inference-component-recipe)
+ [Create the inference component](#create-private-inference-component)

### Upload your inference code to an Amazon S3 bucket
<a name="create-inference-code"></a>

Create your inference code and then upload it to an S3 bucket. For information about uploading content to an S3 bucket, see [Working with Amazon S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html) in the *Amazon Simple Storage Service User Guide*.

**Note**  <a name="s3-artifacts-note"></a>
<a name="sr-artifacts-req"></a>You must store your artifacts in S3 buckets that are in the same AWS account and AWS Region as the components. To enable AWS IoT Greengrass to access these artifacts, the [Greengrass device role](device-service-role.md) must allow the `s3:GetObject` action. For more information about the device role, see [Authorize core devices to interact with AWS services](device-service-role.md).

### Create a recipe for your inference component
<a name="create-inference-component-recipe"></a>

1. Run the following command to retrieve the component recipe of the DLR image classification component. This command writes the component recipe to the JSON or YAML recipe file that you provide in your command. 

------
#### [ Linux, macOS, or Unix ]

   ```
   aws greengrassv2 get-component \
       --arn arn:aws:greengrass:region:aws:components:aws.greengrass.DLRImageClassification:versions:version \
       --recipe-output-format JSON | YAML \
       --query recipe \
       --output text | base64 --decode > <recipe-file>
   ```

------
#### [ Windows Command Prompt (CMD) ]

   ```
   aws greengrassv2 get-component ^
       --arn arn:aws:greengrass:region:aws:components:aws.greengrass.DLRImageClassification:versions:version ^
       --recipe-output-format JSON | YAML ^
       --query recipe ^
       --output text > <recipe-file>.base64
   
   certutil -decode <recipe-file>.base64 <recipe-file>
   ```

------
#### [ PowerShell ]

   ```
   aws greengrassv2 get-component `
       --arn arn:aws:greengrass:region:aws:components:aws.greengrass.DLRImageClassification:versions:version `
       --recipe-output-format JSON | YAML `
       --query recipe `
       --output text > <recipe-file>.base64
   
   certutil -decode <recipe-file>.base64 <recipe-file>
   ```

------

   Replace *<recipe-file>* with the name of the recipe in the format `<component-name>-<component-version>`. 

1. In the `ComponentDependencies` object in your recipe, do one or more of the following depending on the model and runtime components that you want to use:
   + Keep the DLR component dependency if you want to use DLR-compiled models. You can also replace it with a dependency on a custom runtime component, as shown in the following example.

     **Runtime component**

------
#### [ JSON ]

     ```
     { 
         "<runtime-component>": {
             "VersionRequirement": "<version>",
             "DependencyType": "HARD"
         }
     }
     ```

------
#### [ YAML ]

     ```
     <runtime-component>:
         VersionRequirement: "<version>"
         DependencyType: HARD
     ```

------
   + Keep the DLR image classification model store dependency to use the pre-trained ResNet-50 models that AWS provides, or modify it to use a custom model component. If you include a dependency on a public model component and a later custom version of that component exists in the same AWS account and AWS Region, then the inference component uses that custom version. Specify the model component dependency as shown in the following examples.

     **Public model component**

------
#### [ JSON ]

     ```
     {
         "variant.DLR.ImageClassification.ModelStore": {
             "VersionRequirement": "<version>",
             "DependencyType": "HARD"
         }
     }
     ```

------
#### [ YAML ]

     ```
     variant.DLR.ImageClassification.ModelStore:
         VersionRequirement: "<version>"
         DependencyType: HARD
     ```

------

     **Custom model component**

------
#### [ JSON ]

     ```
     {
         "<custom-model-component>": {
             "VersionRequirement": "<version>",
             "DependencyType": "HARD"
         }
     }
     ```

------
#### [ YAML ]

     ```
     <custom-model-component>:
         VersionRequirement: "<version>"
         DependencyType: HARD
     ```

------

1. In the `ComponentConfiguration` object, add the default configuration for this component. You can later modify this configuration when you deploy the component. The following excerpt shows the component configuration for the DLR image classification component. 

   For example, if you use a custom model component as a dependency for your custom inference component, then modify `ModelResourceKey` to provide the names of the models that you are using.

------
#### [ JSON ]

   ```
   {
     "accessControl": {
       "aws.greengrass.ipc.mqttproxy": {
         "aws.greengrass.ImageClassification:mqttproxy:1": {
           "policyDescription": "Allows access to publish via topic ml/dlr/image-classification.",
           "operations": [
             "aws.greengrass#PublishToIoTCore"
           ],
           "resources": [
             "ml/dlr/image-classification"
           ]
         }
       }
     },
     "PublishResultsOnTopic": "ml/dlr/image-classification",
     "ImageName": "cat.jpeg",
     "InferenceInterval": 3600,
     "ModelResourceKey": {
       "armv7l": "DLR-resnet50-armv7l-cpu-ImageClassification",
       "x86_64": "DLR-resnet50-x86_64-cpu-ImageClassification",
       "aarch64": "DLR-resnet50-aarch64-cpu-ImageClassification"
     }
   }
   ```

------
#### [ YAML ]

   ```
   accessControl:
       aws.greengrass.ipc.mqttproxy:
           'aws.greengrass.ImageClassification:mqttproxy:1':
               policyDescription: 'Allows access to publish via topic ml/dlr/image-classification.'
               operations:
                   - 'aws.greengrass#PublishToIoTCore'
               resources:
                   - ml/dlr/image-classification
   PublishResultsOnTopic: ml/dlr/image-classification
   ImageName: cat.jpeg
   InferenceInterval: 3600
   ModelResourceKey:
       armv7l: "DLR-resnet50-armv7l-cpu-ImageClassification"
       x86_64: "DLR-resnet50-x86_64-cpu-ImageClassification"
       aarch64: "DLR-resnet50-aarch64-cpu-ImageClassification"
   ```

------

1. In the `Manifests` object, provide information about the artifacts and configuration that this component uses on each supported platform, along with any other information that the component needs to run successfully. The following excerpt shows the `Manifests` object for the Linux platform in the DLR image classification component.

------
#### [ JSON ]

   ```
   {
     "Manifests": [
       {
         "Platform": {
           "os": "linux",
           "architecture": "arm"
         },
         "Name": "32-bit armv7l - Linux (raspberry pi)",
         "Artifacts": [
           {
             "URI": "s3://SAMPLE-BUCKET/sample-artifacts-directory/image_classification.zip",
             "Unarchive": "ZIP"
           }
         ],
         "Lifecycle": {
           "Setenv": {
             "DLR_IC_MODEL_DIR": "{variant.DLR.ImageClassification.ModelStore:artifacts:decompressedPath}/{configuration:/ModelResourceKey/armv7l}",
             "DEFAULT_DLR_IC_IMAGE_DIR": "{artifacts:decompressedPath}/image_classification/sample_images/"
           },
           "Run": {
             "RequiresPrivilege": true,
             "script": ". {variant.DLR:configuration:/MLRootPath}/greengrass_ml_dlr_venv/bin/activate\npython3 {artifacts:decompressedPath}/image_classification/inference.py"
           }
         }
       }
     ]
   }
   ```

------
#### [ YAML ]

   ```
   Manifests:
     - Platform:
         os: linux
         architecture: arm
       Name: 32-bit armv7l - Linux (raspberry pi)
       Artifacts:
         - URI: s3://SAMPLE-BUCKET/sample-artifacts-directory/image_classification.zip
           Unarchive: ZIP
       Lifecycle:
          Setenv:
           DLR_IC_MODEL_DIR: "{variant.DLR.ImageClassification.ModelStore:artifacts:decompressedPath}/{configuration:/ModelResourceKey/armv7l}"
           DEFAULT_DLR_IC_IMAGE_DIR: "{artifacts:decompressedPath}/image_classification/sample_images/"
         Run:
           RequiresPrivilege: true
           script: |-
             . {variant.DLR:configuration:/MLRootPath}/greengrass_ml_dlr_venv/bin/activate
             python3 {artifacts:decompressedPath}/image_classification/inference.py
   ```

------

 For detailed information about creating component recipes, see [AWS IoT Greengrass component recipe reference](component-recipe-reference.md).
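Before you register the component, it can help to sanity-check that the recipe contains the pieces that the steps above assemble. The following is a minimal, illustrative validator; the required-key list is an assumption based on the excerpts above, not the full recipe schema.

```python
def check_recipe(recipe: dict) -> list:
    # Collect human-readable problems instead of failing on the first one.
    problems = []
    for key in ("RecipeFormatVersion", "ComponentName", "ComponentVersion", "Manifests"):
        if key not in recipe:
            problems.append(f"missing {key}")
    for i, manifest in enumerate(recipe.get("Manifests", [])):
        if "Platform" not in manifest:
            problems.append(f"manifest {i}: missing Platform")
        for j, artifact in enumerate(manifest.get("Artifacts", [])):
            if not artifact.get("URI", "").startswith("s3://"):
                problems.append(f"manifest {i} artifact {j}: URI is not an S3 URI")
    return problems

# Hypothetical minimal recipe for a custom inference component.
recipe = {
    "RecipeFormatVersion": "2020-01-25",
    "ComponentName": "com.example.ImageClassification",
    "ComponentVersion": "1.0.0",
    "Manifests": [
        {"Platform": {"os": "linux"}, "Artifacts": [{"URI": "s3://my-bucket/code.zip"}]}
    ],
}
issues = check_recipe(recipe)
```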

### Create the inference component
<a name="create-private-inference-component"></a>

Use the AWS IoT Greengrass console or the AWS CLI to create a component using the recipe you just defined. After you create the component, you can deploy it to perform inference on your device. For an example of how to deploy an inference component, see [Tutorial: Perform sample image classification inference using TensorFlow Lite](ml-tutorial-image-classification.md).

#### Create custom inference component (console)
<a name="create-inference-component-console"></a>

1. Sign in to the [AWS IoT Greengrass console](https://console.aws.amazon.com/greengrass).

1. In the navigation menu, choose **Components**.

1. On the **Components** page, on the **My components** tab, choose **Create component**.

1. On the **Create component** page, under **Component information**, select either **Enter recipe as JSON** or **Enter recipe as YAML** as your component source.

1. In the **Recipe** box, enter the custom recipe that you created. 

1. Choose **Create component**.

#### Create custom inference component (AWS CLI)
<a name="create-inference-component-cli"></a>

Run the following command to create a new custom component using the recipe that you created.

```
aws greengrassv2 create-component-version \
    --inline-recipe fileb://path/to/recipe/file
```

**Note**  
This step creates the component in the AWS IoT Greengrass service in the AWS Cloud. You can use the Greengrass CLI to develop, test, and deploy your component locally before you upload it to the cloud. For more information, see [Develop AWS IoT Greengrass components](develop-greengrass-components.md).

# Troubleshooting machine learning inference
<a name="ml-troubleshooting"></a>

Use the troubleshooting information and solutions in this section to help resolve issues with your machine learning components. For the public machine learning inference components, see the error messages in the following component logs:

------
#### [ Linux or Unix ]
+ `/greengrass/v2/logs/aws.greengrass.DLRImageClassification.log`
+ `/greengrass/v2/logs/aws.greengrass.DLRObjectDetection.log`
+ `/greengrass/v2/logs/aws.greengrass.TensorFlowLiteImageClassification.log`
+ `/greengrass/v2/logs/aws.greengrass.TensorFlowLiteObjectDetection.log`

------
#### [ Windows ]
+ `C:\greengrass\v2\logs\aws.greengrass.DLRImageClassification.log`
+ `C:\greengrass\v2\logs\aws.greengrass.DLRObjectDetection.log`
+ `C:\greengrass\v2\logs\aws.greengrass.TensorFlowLiteImageClassification.log`
+ `C:\greengrass\v2\logs\aws.greengrass.TensorFlowLiteObjectDetection.log`

------

If a component is installed correctly, then the component log contains the location of the library that it uses for inference.

**Topics**
+ [Failed to fetch library](#rpi-update-error)
+ [Cannot open shared object file](#rpi-import-cv-error)
+ [Error: ModuleNotFoundError: No module named '<library>'](#troubleshooting-venv-errors-not-found)
+ [No CUDA-capable device is detected](#troubleshooting-cuda-error)
+ [No such file or directory](#troubleshooting-venv-errors-no-such-file)
+ [RuntimeError: module compiled against API version 0xf but this version of NumPy is <version>](#troubleshooting-rpi-numpy-version-error)
+ [picamera.exc.PiCameraError: Camera is not enabled](#troubleshooting-rpi-camera-stack-error)
+ [Memory errors](#troubleshooting-memory-errors)
+ [Disk space errors](#troubleshooting-disk-space-errors)
+ [Timeout errors](#troubleshooting-timeout-errors)

## Failed to fetch library
<a name="rpi-update-error"></a>

The following error occurs when the installer script fails to download a required library during deployment on a Raspberry Pi device.

```
Err:2 http://raspbian.raspberrypi.org/raspbian buster/main armhf python3.7-dev armhf 3.7.3-2+deb10u1
404 Not Found [IP: 93.93.128.193 80] 
E: Failed to fetch http://raspbian.raspberrypi.org/raspbian/pool/main/p/python3.7/libpython3.7-dev_3.7.3-2+deb10u1_armhf.deb 404 Not Found [IP: 93.93.128.193 80]
```

Run `sudo apt-get update` and deploy your component again.

## Cannot open shared object file
<a name="rpi-import-cv-error"></a>

You might see errors similar to the following when the installer script fails to download a required dependency for `opencv-python` during deployment on a Raspberry Pi device.

```
ImportError: libopenjp2.so.7: cannot open shared object file: No such file or directory
```

Run the following command to manually install the dependencies for `opencv-python`:

```
sudo apt-get install libopenjp2-7 libilmbase23 libopenexr-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libgtk-3-0 libwebp-dev
```

## Error: ModuleNotFoundError: No module named '<library>'
<a name="troubleshooting-venv-errors-not-found"></a>

You might see this error in the ML runtime component logs (`variant.DLR.log` or `variant.TensorFlowLite.log`) when the ML runtime library or its dependencies aren't installed correctly. This error can occur in the following cases:
+ If you use the `UseInstaller` option, which is enabled by default, this error indicates that the ML runtime component failed to install the runtime or its dependencies. Do the following:

  1. Configure the ML runtime component to disable the `UseInstaller` option.

  1. Install the ML runtime and its dependencies, and make them available to the system user that runs the ML components. For more information, see the following:
     + [DLR runtime UseInstaller option](dlr-component.md#dlr-component-config-useinstaller-term)
     + [TensorFlow Lite runtime UseInstaller option](tensorflow-lite-component.md#tensorflow-lite-component-config-useinstaller-term)
+ If you don't use the `UseInstaller` option, this error indicates that the ML runtime or its dependencies aren't installed for the system user that runs the ML components. Do the following:

  1. Check that the library is installed for the system user that runs the ML components. Replace *ggc_user* with the name of the system user, and replace *tflite_runtime* with the name of the library to check.

------
#### [ Linux or Unix ]

     ```
     sudo -H -u ggc_user bash -c "python3 -c 'import tflite_runtime'"
     ```

------
#### [ Windows ]

     ```
     runas /user:ggc_user "py -3 -c \"import tflite_runtime\""
     ```

------

  1. If the library isn't installed, install it for that user. Replace *ggc_user* with the name of the system user, and replace *tflite_runtime* with the name of the library.

------
#### [ Linux or Unix ]

     ```
     sudo -H -u ggc_user bash -c "python3 -m pip install --user tflite_runtime"
     ```

------
#### [ Windows ]

     ```
     runas /user:ggc_user "py -3 -m pip install --user tflite_runtime"
     ```

------

     For more information about the dependencies for each ML runtime, see the following:
     + [DLR runtime UseInstaller option](dlr-component.md#dlr-component-config-useinstaller-term)
     + [TensorFlow Lite runtime UseInstaller option](tensorflow-lite-component.md#tensorflow-lite-component-config-useinstaller-term)

  1. If the issue persists, install the library for another user to confirm whether this device can install the library. The user could be, for example, your user, the root user, or an administrator user. If you can't install the library successfully for any user, your device might not support the library. Consult the library's documentation to review requirements and troubleshoot installation issues.
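The per-user check in step 1 can also be done without spawning a shell. The following is a small sketch using the standard `importlib` module; run it as the system user in question. `tflite_runtime` is just the example library name from above.

```python
import importlib.util

def library_available(module_name: str) -> bool:
    # find_spec returns None when the module can't be located
    # on the current user's Python path.
    return importlib.util.find_spec(module_name) is not None

# Run this as the system user that runs the ML components, for example:
#   sudo -H -u ggc_user python3 check_library.py
available = library_available("tflite_runtime")
```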

## No CUDA-capable device is detected
<a name="troubleshooting-cuda-error"></a>

You might see the `no CUDA-capable device is detected` error when you use GPU acceleration. Run the following command to add the Greengrass system user to the `video` group, which enables GPU access.

```
sudo usermod -a -G video ggc_user
```

## No such file or directory
<a name="troubleshooting-venv-errors-no-such-file"></a>

The following errors indicate that the runtime component was unable to set up the virtual environment correctly:
+ `MLRootPath/greengrass_ml_dlr_conda/bin/conda: No such file or directory`
+ `MLRootPath/greengrass_ml_dlr_venv/bin/activate: No such file or directory`
+ `MLRootPath/greengrass_ml_tflite_conda/bin/conda: No such file or directory`
+ `MLRootPath/greengrass_ml_tflite_venv/bin/activate: No such file or directory`

Check the logs to make sure that all runtime dependencies were installed correctly. For more information about the libraries installed by the installer script, see the following topics:
+ [DLR runtime](dlr-component.md)
+ [TensorFlow Lite runtime](tensorflow-lite-component.md)

By default, *MLRootPath* is set to `/greengrass/v2/work/component-name/greengrass_ml`. To change this location, include the [DLR runtime](dlr-component.md) or [TensorFlow Lite runtime](tensorflow-lite-component.md) component directly in your deployment, and specify a modified value for the `MLRootPath` parameter in a configuration merge update. For more information about configuring components, see [Update component configurations](update-component-configurations.md).

**Note**  
For the DLR component v1.3.x, you set the `MLRootPath` parameter in the configuration of the inference component, and the default value is `$HOME/greengrass_ml`.
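For example, a deployment document that merges a custom `MLRootPath` into the DLR runtime component's configuration might look like the following sketch. The component version and path are placeholders, and note that in Greengrass deployment documents the `merge` value is a JSON-encoded string.

```json
{
  "components": {
    "variant.DLR": {
      "componentVersion": "1.6.4",
      "configurationUpdate": {
        "merge": "{\"MLRootPath\":\"/custom/ml/root/path\"}"
      }
    }
  }
}
```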

## RuntimeError: module compiled against API version 0xf but this version of NumPy is <version>
<a name="troubleshooting-rpi-numpy-version-error"></a>

You might see the following errors when you run machine learning inference on a Raspberry Pi running Raspberry Pi OS Bullseye.

```
RuntimeError: module compiled against API version 0xf but this version of numpy is 0xd
ImportError: numpy.core.multiarray failed to import
```

This error occurs because Raspberry Pi OS Bullseye includes an earlier version of NumPy than the version that OpenCV requires. To fix this issue, run the following command to upgrade NumPy to the latest version.

```
pip3 install --upgrade numpy
```

## picamera.exc.PiCameraError: Camera is not enabled
<a name="troubleshooting-rpi-camera-stack-error"></a>

You might see the following error when you run machine learning inference on a Raspberry Pi running Raspberry Pi OS Bullseye.

```
picamera.exc.PiCameraError: Camera is not enabled. Try running 'sudo raspi-config' and ensure that the camera has been enabled.
```

This error occurs because Raspberry Pi OS Bullseye includes a new camera stack that isn't compatible with the ML components. To fix this issue, enable the legacy camera stack.<a name="raspberry-pi-bullseye-enable-legacy-camera-stack"></a>

**To enable the legacy camera stack**

1. Run the following command to open the Raspberry Pi configuration tool.

   ```
   sudo raspi-config
   ```

1. Select **Interface Options**.

1. Select **Legacy camera** to enable the legacy camera stack.

1. Reboot the Raspberry Pi.

## Memory errors
<a name="troubleshooting-memory-errors"></a>

The following errors typically occur when the device doesn't have enough memory and the operating system terminates the component process.
+ `stderr. Killed.`
+ `exitCode=137`

We recommend a minimum of 500 MB of memory to deploy a public machine learning inference component.

## Disk space errors
<a name="troubleshooting-disk-space-errors"></a>

The `no space left on device` error typically occurs when a device does not have enough storage. Make sure that there is enough disk space available on your device before you deploy the component again. We recommend a minimum of 500 MB of free disk space to deploy a public machine learning inference component.
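You can check available space before you retry the deployment. The following is a minimal sketch using the Python standard library; the 500 MB threshold comes from the recommendation above, and the path is a placeholder for your Greengrass root folder.

```python
import shutil

def has_free_space(path: str, required_mb: int = 500) -> bool:
    # shutil.disk_usage reports total/used/free bytes for the
    # filesystem that contains the given path.
    free_mb = shutil.disk_usage(path).free / (1024 * 1024)
    return free_mb >= required_mb

# Check the filesystem that holds the Greengrass root folder
# before retrying the deployment.
ok = has_free_space("/", required_mb=500)
```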

## Timeout errors
<a name="troubleshooting-timeout-errors"></a>

The public machine learning components download model files that are larger than 200 MB. If the download times out during deployment, check your internet connection speed and retry the deployment.