End of support notice: On October 31, 2025, AWS will discontinue support for Amazon Lookout for Vision. After October 31, 2025, you will no longer be able to access the Lookout for Vision console or Lookout for Vision resources. For more information, visit this blog post.

Getting started with Amazon Lookout for Vision

Before starting these Getting started instructions, we recommend that you read Understanding Amazon Lookout for Vision.

The Getting Started instructions show you how to create an example image segmentation model. If you want to create an example image classification model, see Image classification dataset.

If you want to quickly try an example model, we provide example training images and mask images. We also provide a Python script that creates an image segmentation manifest file. You use the manifest file to create a dataset for your project and you don't need to label the images in the dataset. When you create a model with your own images, you must label the images in the dataset. For more information, see Creating your dataset.
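
If you want to inspect the manifest file that the script produces before you use it, a few lines of Python are enough. The following is a minimal sketch, assuming you have a local copy of train.manifest and that it uses the SageMaker Ground Truth style field names (source-ref for the image location and anomaly-label for the classification); check the file itself if your field names differ.

    import json

    # Minimal sketch: print the image location and classification for each entry
    # in a Lookout for Vision manifest file (a JSON Lines file, one JSON object per line).
    # The field names source-ref and anomaly-label follow the SageMaker Ground Truth
    # format and are assumptions here; inspect your manifest if they differ.
    with open("train.manifest") as manifest:
        for line in manifest:
            entry = json.loads(line)
            print(entry.get("source-ref"), "->", entry.get("anomaly-label"))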

The images we provide are of normal and anomalous cookies. An anomalous cookie has a crack across the cookie shape. The model you train with the images predicts a classification (normal or anomalous) and finds the area (mask) of cracks in an anomalous cookie, as shown in the following example.

Chocolate chip cookie with a visible crack across its surface on a green background.

Step 1: Create the manifest file and upload images

In this procedure, you clone the Amazon Lookout for Vision documentation repository to your computer. You then use a Python (version 3.7 or higher) script to create a manifest file and upload the training images and mask images to an Amazon S3 location that you specify. You use the manifest file to create your model. Later, you use test images in the local repository to try your model.

To create the manifest file and upload images
  1. Set up Amazon Lookout for Vision by following the instructions at Setup Amazon Lookout for Vision. Be sure to install the AWS SDK for Python.

  2. In the AWS Region in which you want to use Lookout for Vision, create an S3 bucket.

  3. In the Amazon S3 bucket, create a folder named getting-started.

  4. Note the Amazon S3 URI and Amazon Resource Name (ARN) for the folder. You use them to set up permissions and to run the script.

  5. Make sure that the user calling the script has permissions to call the s3:PutObject operation. You can use the following policy. To assign permissions, see Assigning permissions.

    { "Version": "2012-10-17", "Statement": [{ "Sid": "Statement1", "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": [ "arn:aws:s3::: ARN for S3 folder in step 4/*" ] }] }
  6. Make sure that you have a local profile named lookoutvision-access and that the profile user has the permission from the previous step. For more information, see Using a profile on your local computer.

  7. Download the zip file, getting-started.zip. The zip file contains the getting started dataset and setup script.

  8. Unzip the file getting-started.zip.

  9. At the command prompt, do the following:

    1. Navigate to the getting-started folder.

    2. Run the following command to create a manifest file and upload the training images and image masks to the Amazon S3 path you noted in step 4.

      python getting_started.py S3-URI-from-step-4
    3. When the script completes, note the path to the train.manifest file that the script displays after Create dataset using manifest file:. The path should be similar to s3://path to getting started folder/manifests/train.manifest. (An optional check that the upload succeeded follows this procedure.)
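
If you want to confirm that the script uploaded the manifest file before you create the dataset, you can check for it with the AWS SDK for Python. The following is a sketch only; the bucket name and object key are placeholders that you replace with the values from the S3 URI that the script printed, and it assumes the lookoutvision-access profile from step 6.

    import boto3

    # Sketch: verify that the getting started script uploaded the training manifest.
    # Replace the bucket and key with the values from the S3 URI that the script printed.
    session = boto3.Session(profile_name="lookoutvision-access")
    s3 = session.client("s3")

    # head_object raises a ClientError if the object does not exist.
    response = s3.head_object(
        Bucket="your-bucket-name",
        Key="getting-started/manifests/train.manifest",
    )
    print("Manifest found, size in bytes:", response["ContentLength"])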

Step 2: Create the model

In this procedure, you create a project and dataset using the images and manifest file that you previously uploaded to your Amazon S3 bucket. You then create the model and view the evaluation results from model training.

Because you create the dataset from the getting started manifest file, you don't need to label the dataset's images. When you create a dataset with your own images, you do need to label images. For more information, see Labeling images.

Important

You are charged for a successful training of a model.

To create a model
  1. Open the Amazon Lookout for Vision console at https://console.aws.amazon.com/lookoutvision/.

  2. Make sure you are in the same AWS Region in which you created the Amazon S3 bucket in Step 1: Create the manifest file and upload images. To change the Region, choose the name of the currently displayed Region in the navigation bar. Then select the Region to which you want to switch.

  3. Choose Get started.

    Amazon Lookout for Vision service description and Get started button highlighted.
  4. In the Projects section, choose Create project.

    Dashboard overview with empty statistics and a "Create project" button highlighted.
  5. On the Create project page, do the following:

    1. In Project name, enter getting-started.

    2. Choose Create project.

    Project creation interface for anomaly detection model with project name input field.
  6. On the project page, in the How it works section, choose Create dataset.

    Getting-started info page showing steps to prepare dataset and train model.
  7. On the Create dataset page, do the following:

    1. Choose Create a single dataset.

    2. In the Image source configuration section, choose Import images labeled by SageMaker Ground Truth.

    3. For .manifest file location, enter the Amazon S3 location of the manifest file that you noted in step 9.c of Step 1: Create the manifest file and upload images. The Amazon S3 location should be similar to s3://path to getting started folder/manifests/train.manifest.

    4. Choose Create dataset.

    Dataset configuration options with single dataset creation selected and image import methods.
  8. On the project details page, in the Images section, view the dataset images. You can view the classification and image segmentation information (mask and anomaly labels) for each dataset image. You can also search for images, filter images by labeling status (labeled/unlabeled), or filter images by the anomaly labels assigned to them.

    Image labeling interface showing three chocolate chip cookies with cracks, labeled as anomalies.
  9. On the project details page, choose Train model.

    Getting-started page with instructions to prepare datasets and a Train model button.
  10. On the Train model details page, choose Train model.

  11. In the Do you want to train your model? dialog box, choose Train model.

  12. On the project's Models page, you can see that training has started. Check the current status by viewing the Status column for the model version. Training the model takes at least 30 minutes to complete. Training has successfully finished when the status changes to Training complete.

  13. When training finishes, choose the model Model 1 on the Models page.

    Models page showing one model named Model 1 with Training complete status.
  14. On the model's details page, view the evaluation results on the Performance metrics tab. There are metrics for the following:

    • Overall model performance metrics (precision, recall, and F1 score) for the classification predictions made by the model.

      Model performance metrics showing 100% precision, recall, and F1 score for 20 test images.
    • Performance metrics for anomaly labels found in the test images (Average IoU, F1 score)

      Table showing performance metrics for "cracked" label with 10 test images, 86.1% F1 score, and 74.53% Average IoU.
    • Predictions for test images (classification, segmentation masks, and anomaly labels)

      Three chocolate chip cookies on dark surfaces, two with green anomalies labeled as "cracked".

    As model training is non-deterministic, your evaluation results might differ from the results shown on this page. For more information, see Improving your Amazon Lookout for Vision model.
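
The console steps in this procedure also have AWS SDK equivalents. The following Python sketch shows the general shape of creating the project, the dataset, and the model with boto3; it assumes the lookoutvision-access profile, a placeholder bucket name, and the manifest location from Step 1, and it is not a substitute for the full SDK instructions in Creating your project.

    import boto3

    # Sketch of the SDK calls that correspond to the console steps above.
    # Assumes a local profile named lookoutvision-access; the bucket name and
    # output prefix are placeholders.
    session = boto3.Session(profile_name="lookoutvision-access")
    lookoutvision = session.client("lookoutvision")

    # Create the project.
    lookoutvision.create_project(ProjectName="getting-started")

    # Create a single (train) dataset from the manifest file that Step 1 produced.
    lookoutvision.create_dataset(
        ProjectName="getting-started",
        DatasetType="train",
        DatasetSource={
            "GroundTruthManifest": {
                "S3Object": {
                    "Bucket": "your-bucket-name",
                    "Key": "getting-started/manifests/train.manifest",
                }
            }
        },
    )

    # Start training. Training results are written to the S3 location you specify.
    response = lookoutvision.create_model(
        ProjectName="getting-started",
        OutputConfig={
            "S3Location": {
                "Bucket": "your-bucket-name",
                "Prefix": "getting-started/output/",
            }
        },
    )
    print("Training model version:", response["ModelMetadata"]["ModelVersion"])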

Step 3: Start the model

In this step, you start hosting the model so that it is ready to analyze images. For more information, see Running your trained Amazon Lookout for Vision model.

Note

You are charged for the amount of time that your model runs. You stop your model in Step 5: Stop the model.

To start the model
  1. On the model's details page, choose Use model and then choose Integrate API to the cloud.

    Model 1 page with "Use model" button and dropdown option "Integrate API to the cloud".
  2. In the AWS CLI commands section, copy the start-model AWS CLI command.

    AWS CLI command to start a Lookout for Vision model with project and version details.
  3. Make sure that the AWS CLI is configured to run in the same AWS Region in which you are using the Amazon Lookout for Vision console. To change the AWS Region that the AWS CLI uses, see Install the AWS SDKs.

  4. At the command prompt, start the model by entering the start-model command. If you are using the lookoutvision-access profile to get credentials, add the --profile lookoutvision-access parameter. For example:

    aws lookoutvision start-model \
      --project-name getting-started \
      --model-version 1 \
      --min-inference-units 1 \
      --profile lookoutvision-access

    If the call is successful, the following output is displayed:

    { "Status": "STARTING_HOSTING" }
  5. Back in the console, choose Models in the navigation pane.

    AWS Lookout for Vision console showing CLI commands to start model and detect anomalies.
  6. Wait until the status of the model (Model 1) in the Status column displays Hosted. If you've previously trained a model in the project, wait for the latest model version to complete.

    Model 1 with Hosted status, 100% precision and recall, created on September 21st, 2022.
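
If you prefer the AWS SDK for Python to the CLI, the following sketch is the boto3 equivalent of the start-model command above. It assumes the lookoutvision-access profile and polls DescribeModel until hosting has started.

    import time

    import boto3

    # Sketch: start hosting the model with boto3 and wait until it is ready.
    # Assumes a local profile named lookoutvision-access. You are charged while
    # the model is running.
    session = boto3.Session(profile_name="lookoutvision-access")
    lookoutvision = session.client("lookoutvision")

    lookoutvision.start_model(
        ProjectName="getting-started",
        ModelVersion="1",
        MinInferenceUnits=1,
    )

    # Poll until the status is no longer STARTING_HOSTING (HOSTED means success).
    while True:
        status = lookoutvision.describe_model(
            ProjectName="getting-started", ModelVersion="1"
        )["ModelDescription"]["Status"]
        print("Model status:", status)
        if status != "STARTING_HOSTING":
            break
        time.sleep(30)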

Step 4: Analyze an image

In this step, you analyze an image with your model. We provide example images that you can use in the getting started test-images folder in the Lookout for Vision documentation repository on your computer. For more information, see Detecting anomalies in an image.

To analyze an image
  1. On the Models page, choose the model Model 1.

    Models table showing Model 1 with Hosted status, creation date, and 100% precision and recall.
  2. On the model's details page, choose Use model and then choose Integrate API to the cloud.

    Model 1 page with "Use model" button and dropdown option "Integrate API to the cloud".
  3. In the AWS CLI commands section, copy the detect-anomalies AWS CLI command.

    AWS CLI command for detect-anomalies with parameters for project, model version, and image file.
  4. At the command prompt, analyze an anomalous image by entering the detect-anomalies command from the previous step. For the --body parameter, specify an anomalous image from the getting started test-images folder on your computer. If you are using the lookoutvision-access profile to get credentials, add the --profile lookoutvision-access parameter. For example:

    aws lookoutvision detect-anomalies \
      --project-name getting-started \
      --model-version 1 \
      --content-type image/jpeg \
      --body /path/to/test-images/test-anomaly-1.jpg \
      --profile lookoutvision-access

    The output should look similar to the following:

    { "DetectAnomalyResult": { "Source": { "Type": "direct" }, "IsAnomalous": true, "Confidence": 0.983975887298584, "Anomalies": [ { "Name": "background", "PixelAnomaly": { "TotalPercentageArea": 0.9818974137306213, "Color": "#FFFFFF" } }, { "Name": "cracked", "PixelAnomaly": { "TotalPercentageArea": 0.018102575093507767, "Color": "#23A436" } } ], "AnomalyMask": "iVBORw0KGgoAAAANSUhEUgAAAkAAAAMACA......" } }
  5. In the output, note the following:

    • IsAnomalous is a Boolean for the predicted classification. true if the image is anomalous, otherwise false.

    • Confidence is a float value representing the confidence that Amazon Lookout for Vision has in the prediction. 0 is the lowest confidence, 1 is the highest confidence.

    • Anomalies is a list of anomalies found in the image. Name is the anomaly label. PixelAnomaly includes the total percentage area of the anomaly (TotalPercentageArea) and a color (Color) for the anomaly label. The list also includes a "background" anomaly that covers the area outside of anomalies found on the image.

    • AnomalyMask is a mask image that shows the location of the anomalies on the analyzed image.

    You can use information in the response to display a blend of the analyzed image and anomaly mask, as shown in the following example. For example code, see Showing classification and segmentation information.

    Chocolate chip cookie with green segmentation highlighting cracked areas, labeled as anomalous.
  6. At the command prompt, analyze a normal image from the getting started test-images folder. If you are using the lookoutvision-access profile to get credentials, add the --profile lookoutvision-access parameter. For example:

    aws lookoutvision detect-anomalies \
      --project-name getting-started \
      --model-version 1 \
      --content-type image/jpeg \
      --body /path/to/test-images/test-normal-1.jpg \
      --profile lookoutvision-access

    The output should look similar to the following:

    { "DetectAnomalyResult": { "Source": { "Type": "direct" }, "IsAnomalous": false, "Confidence": 0.9916400909423828, "Anomalies": [ { "Name": "background", "PixelAnomaly": { "TotalPercentageArea": 1.0, "Color": "#FFFFFF" } } ], "AnomalyMask": "iVBORw0KGgoAAAANSUhEUgAAAkAAAA....." } }
  7. In the output, note that the false value for IsAnomalous classifies the image as having no anomalies. Use Confidence to help decide your confidence in the classification. Also, the Anomalies array only has the background anomaly label.
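
You can run the same analysis with the AWS SDK for Python. The following sketch is the boto3 equivalent of the detect-anomalies command above; it assumes the lookoutvision-access profile and a local test image, and it prints the classification and the anomaly labels from the response.

    import boto3

    # Sketch: analyze an image with boto3 (the SDK equivalent of detect-anomalies).
    # Assumes the lookoutvision-access profile and an image from the getting
    # started test-images folder.
    session = boto3.Session(profile_name="lookoutvision-access")
    lookoutvision = session.client("lookoutvision")

    with open("test-images/test-anomaly-1.jpg", "rb") as image:
        response = lookoutvision.detect_anomalies(
            ProjectName="getting-started",
            ModelVersion="1",
            ContentType="image/jpeg",
            Body=image.read(),
        )

    result = response["DetectAnomalyResult"]
    print("IsAnomalous:", result["IsAnomalous"], "Confidence:", result["Confidence"])
    for anomaly in result["Anomalies"]:
        print(anomaly["Name"], anomaly["PixelAnomaly"]["TotalPercentageArea"])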

Step 5: Stop the model

In this step, you stop hosting the model. You are charged for the amount of time your model is running. If you aren't using the model, you should stop it. You can restart the model when you next need it. For more information, see Starting your Amazon Lookout for Vision model.

To stop the model
  1. Choose Models in the navigation pane.

    AWS Lookout for Vision console showing CLI commands to start model and detect anomalies.
  2. On the Models page, choose the model Model 1.

    Models table showing Model 1 with Hosted status, creation date, and 100% precision and recall.
  3. On the model's details page, choose Use model and then choose Integrate API to the cloud.

    Model 1 page with "Use model" button and dropdown option "Integrate API to the cloud".
  4. In the AWS CLI commands section, copy the stop-model AWS CLI command.

    Copy button icon next to AWS CLI command for stopping a Lookout for Vision model.
  5. At the command prompt, stop the model by entering the stop-model AWS CLI command from the previous step. If you are using the lookoutvision-access profile to get credentials, add the --profile lookoutvision-access parameter. For example:

    aws lookoutvision stop-model \
      --project-name getting-started \
      --model-version 1 \
      --profile lookoutvision-access

    If the call is successful, the following output is displayed:

    { "Status": "STOPPING_HOSTING" }
  6. Back in the console, choose Models in the left navigation pane.

  7. The model has stopped when the status of the model in the Status column is Training complete.
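
You can also stop the model with the AWS SDK for Python. The following sketch is the boto3 equivalent of the stop-model command above, again assuming the lookoutvision-access profile.

    import boto3

    # Sketch: stop hosting the model with boto3 (the SDK equivalent of stop-model).
    # Assumes a local profile named lookoutvision-access.
    session = boto3.Session(profile_name="lookoutvision-access")
    lookoutvision = session.client("lookoutvision")

    response = lookoutvision.stop_model(ProjectName="getting-started", ModelVersion="1")
    print("Status:", response["Status"])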

Next steps

When you are ready to create a model with your own images, start by following the instructions in Creating your project. The instructions include steps for creating a model with the Amazon Lookout for Vision console and with the AWS SDK.

If you want to try other example datasets, see Example code and datasets.