Starting an Amazon Rekognition Custom Labels model



You can start running an Amazon Rekognition Custom Labels model by using the console or by using the StartProjectVersion operation.

Important

You are charged for the number of hours that your model runs and for the number of inference units that your model uses while it runs. For more information, see Running a trained Amazon Rekognition Custom Labels model.

Starting a model might take a few minutes to complete. To check the current readiness status of the model, check the details page for the project or use DescribeProjectVersions.
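That status check can also be scripted with the AWS SDK for Python (Boto3). The following is a minimal sketch, not part of the official sample code: the helper names are hypothetical, and rek_client is assumed to be a Boto3 Rekognition client created elsewhere (for example, with boto3.client("rekognition")).

```python
def version_name_from_arn(model_arn):
    """Extract the version name from a model (project version) ARN.
    ARN layout: ...:project/<project_name>/version/<version_name>/<timestamp>
    """
    return model_arn.split("version/", 1)[1].rpartition("/")[0]


def get_model_status(rek_client, project_arn, model_arn):
    """Return the status of one model version (for example, STARTING,
    RUNNING, or TRAINING_COMPLETED) by calling DescribeProjectVersions."""
    version_name = version_name_from_arn(model_arn)
    response = rek_client.describe_project_versions(
        ProjectArn=project_arn, VersionNames=[version_name])
    descriptions = response["ProjectVersionDescriptions"]
    if not descriptions:
        raise ValueError(f"Model {model_arn} not found.")
    return descriptions[0]["Status"]
```

The version-name extraction mirrors the parsing used in the full Python example later in this topic.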

After the model is started, you can use DetectCustomLabels to analyze images with the model. For more information, see Analyzing an image with a trained model. The console also provides example code for calling DetectCustomLabels.
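As a sketch of what that analysis call looks like with Boto3 (the helper name, bucket, and image key below are placeholder assumptions; the model must be in the RUNNING state):

```python
def analyze_image(rek_client, model_arn, bucket, image_key, min_confidence=50):
    """Detect custom labels in one S3-hosted image with a running model.
    Returns (label name, confidence) pairs above the confidence threshold."""
    response = rek_client.detect_custom_labels(
        ProjectVersionArn=model_arn,
        Image={"S3Object": {"Bucket": bucket, "Name": image_key}},
        MinConfidence=min_confidence)
    return [(label["Name"], label["Confidence"])
            for label in response["CustomLabels"]]
```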

Starting an Amazon Rekognition Custom Labels model (console)

Follow these steps to start running an Amazon Rekognition Custom Labels model with the console. You can start the model directly from the console or use the AWS SDK code that the console provides.

To start a model (console)
  1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/

  2. Choose Use Custom Labels.

  3. Choose Get started.

  4. In the left navigation pane, choose Projects.

  5. On the Projects resources page, choose the project that contains the trained model that you want to start.

  6. In the Models section, choose the model that you want to start.

  7. Choose the Use model tab.

  8. Do one of the following:

    Start model using the console

    In the Start or stop model section, do the following:

    1. Choose the number of inference units that you want to use. For more information, see Running a trained Amazon Rekognition Custom Labels model.

    2. Choose Start.

    3. In the Start model dialog box, choose Start.

    Start model using the AWS SDK

    In the Use model section, do the following:

    1. Choose API Code.

    2. Choose either AWS CLI or Python.

    3. Under Start model, copy the example code.

    4. Use the example code to start your model. For more information, see Starting an Amazon Rekognition Custom Labels model (SDK).

  9. To go back to the project overview page, choose the project name at the top of the page.

  10. In the Model section, check the status of the model. When the model status is RUNNING, you can use the model to analyze images. For more information, see Analyzing an image with a trained model.

Starting an Amazon Rekognition Custom Labels model (SDK)

You start a model by calling the StartProjectVersion API and passing the Amazon Resource Name (ARN) of the model in the ProjectVersionArn input parameter. You also specify the number of inference units that you want to use. For more information, see Running a trained Amazon Rekognition Custom Labels model.
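Stripped to its essentials, the call looks like the following sketch (the helper name is hypothetical, and rek_client is an assumed Boto3 Rekognition client; the complete sample later in this topic adds waiting and error handling):

```python
def start_model_version(rek_client, model_arn, min_units, max_units=None):
    """Request hosting for a trained model version and return the initial
    status reported by StartProjectVersion (typically STARTING).
    MaxInferenceUnits is sent only when auto-scaling is wanted."""
    kwargs = {"ProjectVersionArn": model_arn,
              "MinInferenceUnits": int(min_units)}
    if max_units is not None:
        kwargs["MaxInferenceUnits"] = int(max_units)
    response = rek_client.start_project_version(**kwargs)
    return response["Status"]
```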

The model might take a while to start. The Python and Java examples in this topic use waiters to wait for the model to start. A waiter is a utility method that polls for a particular state to occur. Alternatively, you can check the current status by calling DescribeProjectVersions.
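If you prefer not to use a waiter, the polling alternative mentioned above can be hand-rolled along these lines (a sketch only; the function name, poll interval, and terminal-status handling are assumptions):

```python
import time


def wait_for_running(rek_client, project_arn, version_name,
                     poll_seconds=30, timeout_seconds=1800):
    """Poll DescribeProjectVersions until the model version is RUNNING.
    Raises if the model fails to start or the timeout elapses."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        response = rek_client.describe_project_versions(
            ProjectArn=project_arn, VersionNames=[version_name])
        status = response["ProjectVersionDescriptions"][0]["Status"]
        if status == "RUNNING":
            return status
        if status in ("FAILED", "STOPPED", "TRAINING_FAILED"):
            raise RuntimeError(f"Model did not start: {status}")
        time.sleep(poll_seconds)
    raise TimeoutError("Model did not reach RUNNING before the timeout.")
```

The built-in project_version_running waiter used by the samples below does essentially this for you, with retry limits configured by the SDK.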

To start a model (SDK)
  1. If you haven't already, install and configure the AWS CLI and the AWS SDKs. For more information, see Step 4: Set up the AWS CLI and AWS SDKs.

  2. Use the following example code to start a model.

    CLI

    Change the value of project-version-arn to the ARN of the model that you want to start. Change the value of --min-inference-units to the number of inference units that you want to use. Optionally, change --max-inference-units to the maximum number of inference units that Amazon Rekognition Custom Labels can use to automatically scale the model.

    aws rekognition start-project-version --project-version-arn model_arn \
      --min-inference-units minimum number of units \
      --max-inference-units maximum number of units \
      --profile custom-labels-access
    Python

    Supply the following command line parameters:

    • project_arn — The ARN of the project that contains the model that you want to start.

    • model_arn — The ARN of the model that you want to start.

    • min_inference_units — The number of inference units that you want to use.

    • (Optional) --max_inference_units — The maximum number of inference units that Amazon Rekognition Custom Labels can use to automatically scale the model.

    # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
    # SPDX-License-Identifier: Apache-2.0
    """
    Purpose
    Shows how to start running an Amazon Rekognition Custom Labels model.
    """
    import argparse
    import logging
    import boto3
    from botocore.exceptions import ClientError

    logger = logging.getLogger(__name__)


    def get_model_status(rek_client, project_arn, model_arn):
        """
        Gets the current status of an Amazon Rekognition Custom Labels model.
        :param rek_client: The Amazon Rekognition Custom Labels Boto3 client.
        :param project_arn: The ARN of the project that you want to use.
        :param model_arn: The ARN of the model that you want the status for.
        :return: The model status.
        """
        logger.info("Getting status for %s.", model_arn)

        # Extract the model version from the model ARN.
        version_name = (model_arn.split("version/", 1)[1]).rpartition('/')[0]

        models = rek_client.describe_project_versions(ProjectArn=project_arn,
                                                      VersionNames=[version_name])

        for model in models['ProjectVersionDescriptions']:
            logger.info("Status: %s", model['StatusMessage'])
            return model["Status"]

        error_message = f"Model {model_arn} not found."
        logger.exception(error_message)
        raise Exception(error_message)


    def start_model(rek_client, project_arn, model_arn, min_inference_units,
                    max_inference_units=None):
        """
        Starts the hosting of an Amazon Rekognition Custom Labels model.
        :param rek_client: The Amazon Rekognition Custom Labels Boto3 client.
        :param project_arn: The ARN of the project that contains the model
        that you want to start hosting.
        :param model_arn: The ARN of the model that you want to start hosting.
        :param min_inference_units: The number of inference units to use for hosting.
        :param max_inference_units: The number of inference units to use for
        auto-scaling the model. If not supplied, auto-scaling doesn't happen.
        """
        try:
            # Start the model.
            logger.info("Starting model: %s. Please wait....", model_arn)
            if max_inference_units is None:
                rek_client.start_project_version(
                    ProjectVersionArn=model_arn,
                    MinInferenceUnits=int(min_inference_units))
            else:
                rek_client.start_project_version(
                    ProjectVersionArn=model_arn,
                    MinInferenceUnits=int(min_inference_units),
                    MaxInferenceUnits=int(max_inference_units))

            # Wait for the model to be in the running state.
            version_name = (model_arn.split("version/", 1)[1]).rpartition('/')[0]
            project_version_running_waiter = rek_client.get_waiter(
                'project_version_running')
            project_version_running_waiter.wait(ProjectArn=project_arn,
                                                VersionNames=[version_name])

            # Get the running status.
            return get_model_status(rek_client, project_arn, model_arn)

        except ClientError as err:
            logger.exception("Client error: Problem starting model: %s", err)
            raise


    def add_arguments(parser):
        """
        Adds command line arguments to the parser.
        :param parser: The command line parser.
        """
        parser.add_argument(
            "project_arn",
            help="The ARN of the project that contains the model that you want to start."
        )
        parser.add_argument(
            "model_arn",
            help="The ARN of the model that you want to start."
        )
        parser.add_argument(
            "min_inference_units",
            help="The minimum number of inference units to use."
        )
        parser.add_argument(
            "--max_inference_units",
            help="The maximum number of inference units to use for auto-scaling the model.",
            required=False
        )


    def main():
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        try:
            # Get command line arguments.
            parser = argparse.ArgumentParser(usage=argparse.SUPPRESS)
            add_arguments(parser)
            args = parser.parse_args()

            # Start the model.
            session = boto3.Session(profile_name='custom-labels-access')
            rekognition_client = session.client("rekognition")

            status = start_model(rekognition_client,
                                 args.project_arn, args.model_arn,
                                 args.min_inference_units,
                                 args.max_inference_units)
            print(f"Finished starting model: {args.model_arn}")
            print(f"Status: {status}")

        except ClientError as err:
            error_message = f"Client error: Problem starting model: {err}"
            logger.exception(error_message)
            print(error_message)
        except Exception as err:
            error_message = f"Problem starting model: {err}"
            logger.exception(error_message)
            print(error_message)


    if __name__ == "__main__":
        main()
    Java V2

    Supply the following command line parameters:

    • project_arn — The ARN of the project that contains the model that you want to start.

    • model_arn — The ARN of the model that you want to start.

    • min_inference_units — The number of inference units that you want to use.

    • (Optional) max_inference_units — The maximum number of inference units that Amazon Rekognition Custom Labels can use to automatically scale the model. If you don't specify a value, automatic scaling doesn't happen.

    /*
      Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
      SPDX-License-Identifier: Apache-2.0
    */
    package com.example.rekognition;

    import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
    import software.amazon.awssdk.core.waiters.WaiterResponse;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.rekognition.RekognitionClient;
    import software.amazon.awssdk.services.rekognition.model.DescribeProjectVersionsRequest;
    import software.amazon.awssdk.services.rekognition.model.DescribeProjectVersionsResponse;
    import software.amazon.awssdk.services.rekognition.model.ProjectVersionDescription;
    import software.amazon.awssdk.services.rekognition.model.ProjectVersionStatus;
    import software.amazon.awssdk.services.rekognition.model.RekognitionException;
    import software.amazon.awssdk.services.rekognition.model.StartProjectVersionRequest;
    import software.amazon.awssdk.services.rekognition.model.StartProjectVersionResponse;
    import software.amazon.awssdk.services.rekognition.waiters.RekognitionWaiter;

    import java.util.Optional;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class StartModel {

        public static final Logger logger = Logger.getLogger(StartModel.class.getName());

        // Finds the index of the nth forward slash in a model ARN.
        public static int findForwardSlash(String modelArn, int n) {
            int start = modelArn.indexOf('/');
            while (start >= 0 && n > 1) {
                start = modelArn.indexOf('/', start + 1);
                n -= 1;
            }
            return start;
        }

        public static void startMyModel(RekognitionClient rekClient, String projectArn,
                String modelArn, Integer minInferenceUnits, Integer maxInferenceUnits)
                throws Exception, RekognitionException {

            try {
                logger.log(Level.INFO, "Starting model: {0}", modelArn);

                StartProjectVersionRequest startProjectVersionRequest = null;

                if (maxInferenceUnits == null) {
                    startProjectVersionRequest = StartProjectVersionRequest.builder()
                            .projectVersionArn(modelArn)
                            .minInferenceUnits(minInferenceUnits)
                            .build();
                } else {
                    startProjectVersionRequest = StartProjectVersionRequest.builder()
                            .projectVersionArn(modelArn)
                            .minInferenceUnits(minInferenceUnits)
                            .maxInferenceUnits(maxInferenceUnits)
                            .build();
                }

                StartProjectVersionResponse response =
                        rekClient.startProjectVersion(startProjectVersionRequest);
                logger.log(Level.INFO, "Status: {0}", response.statusAsString());

                // Get the model version from the model ARN.
                int start = findForwardSlash(modelArn, 3) + 1;
                int end = findForwardSlash(modelArn, 4);
                String versionName = modelArn.substring(start, end);

                // Wait until the model starts.
                DescribeProjectVersionsRequest describeProjectVersionsRequest =
                        DescribeProjectVersionsRequest.builder()
                                .versionNames(versionName)
                                .projectArn(projectArn)
                                .build();

                RekognitionWaiter waiter = rekClient.waiter();

                WaiterResponse<DescribeProjectVersionsResponse> waiterResponse = waiter
                        .waitUntilProjectVersionRunning(describeProjectVersionsRequest);

                Optional<DescribeProjectVersionsResponse> optionalResponse =
                        waiterResponse.matched().response();
                DescribeProjectVersionsResponse describeProjectVersionsResponse =
                        optionalResponse.get();

                for (ProjectVersionDescription projectVersionDescription : describeProjectVersionsResponse
                        .projectVersionDescriptions()) {
                    if (projectVersionDescription.status() == ProjectVersionStatus.RUNNING) {
                        logger.log(Level.INFO, "Model is running");
                    } else {
                        String error = "Model training failed: "
                                + projectVersionDescription.statusAsString() + " "
                                + projectVersionDescription.statusMessage() + " " + modelArn;
                        logger.log(Level.SEVERE, error);
                        throw new Exception(error);
                    }
                }

            } catch (RekognitionException e) {
                logger.log(Level.SEVERE, "Could not start model: {0}", e.getMessage());
                throw e;
            }
        }

        public static void main(String[] args) {

            String modelArn = null;
            String projectArn = null;
            Integer minInferenceUnits = null;
            Integer maxInferenceUnits = null;

            final String USAGE = "\n" + "Usage: "
                    + "<project_arn> <model_arn> <min_inference_units> <max_inference_units>\n\n"
                    + "Where:\n"
                    + "   project_arn - The ARN of the project that contains the model that you want to start. \n\n"
                    + "   model_arn - The ARN of the model version that you want to start.\n\n"
                    + "   min_inference_units - The number of inference units to start the model with.\n\n"
                    + "   max_inference_units - The maximum number of inference units that Custom Labels can use to "
                    + "automatically scale the model. If the value is null, automatic scaling doesn't happen.\n\n";

            if (args.length < 3 || args.length > 4) {
                System.out.println(USAGE);
                System.exit(1);
            }

            projectArn = args[0];
            modelArn = args[1];
            minInferenceUnits = Integer.parseInt(args[2]);

            if (args.length == 4) {
                maxInferenceUnits = Integer.parseInt(args[3]);
            }

            try {
                // Get the Rekognition client.
                RekognitionClient rekClient = RekognitionClient.builder()
                        .credentialsProvider(ProfileCredentialsProvider.create("custom-labels-access"))
                        .region(Region.US_WEST_2)
                        .build();

                // Start the model.
                startMyModel(rekClient, projectArn, modelArn, minInferenceUnits, maxInferenceUnits);

                System.out.println(String.format("Model started: %s", modelArn));
                rekClient.close();

            } catch (RekognitionException rekError) {
                logger.log(Level.SEVERE, "Rekognition client error: {0}", rekError.getMessage());
                System.exit(1);
            } catch (Exception rekError) {
                logger.log(Level.SEVERE, "Error: {0}", rekError.getMessage());
                System.exit(1);
            }
        }
    }