DLR image classification
The DLR image classification component (aws.greengrass.DLRImageClassification) contains sample inference code that performs image classification using Deep Learning Runtime (DLR) and ResNet-50 models. This component uses the variant DLR image classification model store and the DLR runtime components as dependencies to download DLR and the sample models.
To use this inference component with a custom-trained DLR model, create a custom version of the dependent model store component. To use your own custom inference code, you can use this component's recipe as a template to create a custom inference component.
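For example, you could start from the public component's recipe by downloading it with the AWS CLI. The Region and component version below are examples; substitute your own values.

```shell
# Download the recipe of the public component to use as a template.
# The recipe is returned as a base64-encoded blob, so decode it to a file.
aws greengrassv2 get-component \
  --arn arn:aws:greengrass:us-east-1:aws:components:aws.greengrass.DLRImageClassification:versions:2.1.14 \
  --recipe-output-format JSON \
  --query recipe --output text | base64 --decode > my-inference-recipe.json
```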
Versions
This component has the following versions:
- 2.1.x
- 2.0.x
Type
This component is a generic component (aws.greengrass.generic). The Greengrass nucleus runs the component's lifecycle scripts.
For more information, see Component types.
Operating system
This component can be installed on core devices that run the following operating systems:
- Linux
- Windows
Requirements
This component has the following requirements:
- On Greengrass core devices running Amazon Linux 2 or Ubuntu 18.04, GNU C Library (glibc) version 2.27 or later installed on the device.
- On Armv7l devices, such as Raspberry Pi, dependencies for OpenCV-Python installed on the device. Run the following command to install the dependencies.
  sudo apt-get install libopenjp2-7 libilmbase23 libopenexr-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libgtk-3-0 libwebp-dev
- Raspberry Pi devices that run Raspberry Pi OS Bullseye must meet the following requirements:
  - NumPy 1.22.4 or later installed on the device. Raspberry Pi OS Bullseye includes an earlier version of NumPy, so you can run the following command to upgrade NumPy on the device.
    pip3 install --upgrade numpy
  - The legacy camera stack enabled on the device. Raspberry Pi OS Bullseye includes a new camera stack that is enabled by default and isn't compatible, so you must enable the legacy camera stack.
    To enable the legacy camera stack
    1. Run the following command to open the Raspberry Pi configuration tool.
       sudo raspi-config
    2. Select Interface Options.
    3. Select Legacy camera to enable the legacy camera stack.
    4. Reboot the Raspberry Pi.
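As a quick sanity check on a Raspberry Pi, you can confirm that both requirements above are met after rebooting. These commands are a suggestion, not part of the official requirements; vcgencmd ships with Raspberry Pi OS.

```shell
# Confirm the NumPy upgrade took effect (expect 1.22.4 or later).
python3 -c "import numpy; print(numpy.__version__)"

# Confirm the legacy camera stack detects the camera module
# (expect output like "supported=1 detected=1").
vcgencmd get_camera
```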
Dependencies
When you deploy a component, AWS IoT Greengrass also deploys compatible versions of its dependencies. This means that you must meet the requirements for the component and all of its dependencies to successfully deploy the component. This section lists the dependencies for the released versions of this component and the semantic version constraints that define the component versions for each dependency. You can also view the dependencies for each version of the component in the AWS IoT Greengrass console. On the component details page, look for the Dependencies list.
- 2.1.13 and 2.1.14
-
The following table lists the dependencies for versions 2.1.13 and 2.1.14 of this component.
- 2.1.12
-
The following table lists the dependencies for version 2.1.12 of this
component.
- 2.1.11
-
The following table lists the dependencies for version 2.1.11 of this
component.
- 2.1.10
-
The following table lists the dependencies for version 2.1.10 of this
component.
- 2.1.9
-
The following table lists the dependencies for version 2.1.9 of this
component.
- 2.1.8
-
The following table lists the dependencies for version 2.1.8 of this
component.
- 2.1.7
-
The following table lists the dependencies for version 2.1.7 of this
component.
- 2.1.6
-
The following table lists the dependencies for version 2.1.6 of this
component.
- 2.1.4 - 2.1.5
-
The following table lists the dependencies for versions 2.1.4 to 2.1.5 of this
component.
- 2.1.3
-
The following table lists the dependencies for version 2.1.3 of this
component.
- 2.1.2
-
The following table lists the dependencies for version 2.1.2 of this
component.
- 2.1.1
-
The following table lists the dependencies for version 2.1.1 of this
component.
- 2.0.x
-
The following table lists the dependencies for version 2.0.x of this
component.
| Dependency | Compatible versions | Dependency type |
| --- | --- | --- |
| Greengrass nucleus | ~2.0.0 | Soft |
| DLR image classification model store | ~2.0.0 | Hard |
| DLR | ~1.3.0 | Soft |
Configuration
This component provides the following configuration parameters that you can
customize when you deploy the component.
- 2.1.x
-
accessControl
-
(Optional) The object that contains the authorization policy that allows the
component to publish messages to the default notifications topic.
Default:
{
"aws.greengrass.ipc.mqttproxy": {
"aws.greengrass.DLRImageClassification:mqttproxy:1": {
"policyDescription": "Allows access to publish via topic ml/dlr/image-classification.",
"operations": [
"aws.greengrass#PublishToIoTCore"
],
"resources": [
"ml/dlr/image-classification"
]
}
}
}
PublishResultsOnTopic
-
(Optional) The topic on which you want to publish the inference results. If you modify this value, then you must also modify the value of resources in the accessControl parameter to match your custom topic name.
Default: ml/dlr/image-classification
Accelerator
-
The accelerator that you want to use. Supported values are cpu and gpu.
The sample models in the dependent model component support only CPU acceleration. To use GPU acceleration with a different custom model, create a custom model component to override the public model component.
Default: cpu
ImageDirectory
-
(Optional) The path of the folder on the device where inference components read images. You can modify this value to any location on your device to which you have read/write access.
Default: /greengrass/v2/packages/artifacts-unarchived/component-name/image_classification/sample_images/
If you set the value of UseCamera to true, then this configuration parameter is ignored.
ImageName
-
(Optional) The name of the image that the inference component uses as an input to make a prediction. The component looks for the image in the folder specified in ImageDirectory. By default, the component uses the sample image in the default image directory. AWS IoT Greengrass supports the following image formats: jpeg, jpg, png, and npy.
Default: cat.jpeg
If you set the value of UseCamera to true, then this configuration parameter is ignored.
InferenceInterval
-
(Optional) The time in seconds between each prediction made by the inference code. The
sample inference code runs indefinitely and repeats its predictions at the specified time
interval. For example, you can change this to a shorter interval if you want to use images
taken by a camera for real-time prediction.
Default: 3600
ModelResourceKey
-
(Optional) The models that are used in the
dependent public model component. Modify this parameter only if you override the public
model component with a custom component.
Default:
{
"armv7l": "DLR-resnet50-armv7l-cpu-ImageClassification",
"aarch64": "DLR-resnet50-aarch64-cpu-ImageClassification",
"x86_64": "DLR-resnet50-x86_64-cpu-ImageClassification",
"windows": "DLR-resnet50-win-cpu-ImageClassification"
}
UseCamera
-
(Optional) String value that defines whether to use images from a camera connected to the Greengrass core device. Supported values are true and false.
When you set this value to true, the sample inference code accesses the camera on your device and runs inference locally on the captured image. The values of the ImageName and ImageDirectory parameters are ignored. Make sure that the user running this component has read/write access to the location where the camera stores captured images.
Default: false
When you view the recipe of this component, the UseCamera configuration parameter doesn't appear in the default configuration. However, you can modify the value of this parameter in a configuration merge update when you deploy the component.
When you set UseCamera to true, you must also create a symlink to enable the inference component to access your camera from the virtual environment that is created by the runtime component. For more information about using a camera with the sample inference components, see Update component configurations.
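As a sketch of how these parameters are set in practice, the components section of a deployment document might merge new values like the following. The component version and parameter values here are examples; note that the merge value is a JSON string.

```json
{
  "components": {
    "aws.greengrass.DLRImageClassification": {
      "componentVersion": "2.1.14",
      "configurationUpdate": {
        "merge": "{\"InferenceInterval\":\"60\",\"UseCamera\":\"true\"}"
      }
    }
  }
}
```

If you also change PublishResultsOnTopic, remember to merge a matching resources entry in the accessControl parameter so that the component can still publish to AWS IoT Core.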
- 2.0.x
-
MLRootPath
-
(Optional) The path of the folder on Linux core devices where inference components read images and write inference results. You can modify this value to any location on your device to which the user running this component has read/write access.
Default: /greengrass/v2/work/variant.DLR/greengrass_ml
Accelerator
-
The accelerator that you want to use. Supported values are cpu and gpu.
The sample models in the dependent model component support only CPU acceleration. To use GPU acceleration with a different custom model, create a custom model component to override the public model component.
Default: cpu
ImageName
-
(Optional) The name of the image that the inference component uses as an input to make a prediction. The component looks for the image in the folder specified in ImageDirectory. The default location is MLRootPath/images. AWS IoT Greengrass supports the following image formats: jpeg, jpg, png, and npy.
Default: cat.jpeg
InferenceInterval
-
(Optional) The time in seconds between each prediction made by the inference code. The
sample inference code runs indefinitely and repeats its predictions at the specified time
interval. For example, you can change this to a shorter interval if you want to use images
taken by a camera for real-time prediction.
Default: 3600
ModelResourceKey
-
(Optional) The models that are used in the
dependent public model component. Modify this parameter only if you override the public
model component with a custom component.
Default:
armv7l: "DLR-resnet50-armv7l-cpu-ImageClassification"
x86_64: "DLR-resnet50-x86_64-cpu-ImageClassification"
Local log file
This component uses the following log file.
- Linux
-
/greengrass/v2/logs/aws.greengrass.DLRImageClassification.log
- Windows
-
C:\greengrass\v2\logs\aws.greengrass.DLRImageClassification.log
To view this component's logs
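For example, on a Linux core device you might follow the log output with tail. The path below assumes the default /greengrass/v2 root folder.

```shell
# Follow the component log on a Linux core device (default root folder).
sudo tail -f /greengrass/v2/logs/aws.greengrass.DLRImageClassification.log
```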
Changelog
The following table describes the changes in each version of the component.
| Version | Changes |
| --- | --- |
| 2.1.14 | Version updated for Greengrass nucleus version 2.12.5 release. |
| 2.1.13 | Version updated for Greengrass nucleus version 2.12.0 release. |
| 2.1.12 | Version updated for Greengrass nucleus version 2.11.0 release. |
| 2.1.11 | Version updated for Greengrass nucleus version 2.10.0 release. |
| 2.1.10 | Version updated for Greengrass nucleus version 2.9.0 release. |
| 2.1.9 | Version updated for Greengrass nucleus version 2.8.0 release. |
| 2.1.8 | Version updated for Greengrass nucleus version 2.7.0 release. |
| 2.1.7 | Version updated for Greengrass nucleus version 2.6.0 release. |
| 2.1.6 | Version updated for Greengrass nucleus version 2.5.0 release. |
| 2.1.5 | Component released in all AWS Regions. |
| 2.1.4 | Version updated for Greengrass nucleus version 2.4.0 release. This version isn't available in Europe (London) (eu-west-2). |
| 2.1.3 | Version updated for Greengrass nucleus version 2.3.0 release. |
| 2.1.2 | Version updated for Greengrass nucleus version 2.2.0 release. |
| 2.1.1 | New features: Use Deep Learning Runtime v1.6.0. Add support for sample image classification on Armv8 (AArch64) platforms, which extends machine learning support to Greengrass core devices running NVIDIA Jetson, such as the Jetson Nano. Enable camera integration for sample inference: use the new UseCamera configuration parameter to enable the sample inference code to access the camera on your Greengrass core device and run inference locally on the captured image. Add support for publishing inference results to the AWS Cloud: use the new PublishResultsOnTopic configuration parameter to specify the topic on which you want to publish results. Add the new ImageDirectory configuration parameter that enables you to specify a custom directory for the image on which you want to perform inference. Bug fixes and improvements: Write inference results to the component log file instead of a separate inference file. Use the AWS IoT Greengrass Core software logging module to log component output. Use the AWS IoT Device SDK to read the component configuration and apply configuration changes. |
| 2.0.4 | Initial version. |