
Starting an Amazon Rekognition Custom Labels model


This page is a machine-translated version. If this translation conflicts with the English original, the English original prevails.


You can start running an Amazon Rekognition Custom Labels model by using the console or by using the StartProjectVersion operation.

Important

You are charged for the number of hours that your model runs and for the number of inference units that your model uses while it's running. For more information, see Running a trained Amazon Rekognition Custom Labels model.

Starting a model might take a few minutes to complete. To check the current readiness of a model, view the project's details page or use DescribeProjectVersions.
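As a sketch of that status check with Boto3 (the helper names here are illustrative, not from this page), the version name can be parsed out of the model ARN and passed to DescribeProjectVersions:

```python
def model_version_name(model_arn):
    # A model ARN has the form:
    # arn:aws:rekognition:region:account:project/project_name/version/version_name/timestamp
    # The version name is the path segment that follows "version/".
    return model_arn.split("version/", 1)[1].rpartition("/")[0]

def model_status(rek_client, project_arn, model_arn):
    # Ask DescribeProjectVersions for just this version and return its
    # status, for example TRAINING_COMPLETED, STARTING, or RUNNING.
    response = rek_client.describe_project_versions(
        ProjectArn=project_arn,
        VersionNames=[model_version_name(model_arn)])
    return response["ProjectVersionDescriptions"][0]["Status"]

# Usage (requires boto3 and valid credentials):
# import boto3
# client = boto3.client("rekognition")
# print(model_status(client, project_arn, model_arn))
```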

After the model starts, you can use DetectCustomLabels to analyze images with the model. For more information, see Analyzing an image with a trained model. The console also provides example code for calling DetectCustomLabels.
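As a minimal sketch of that call (the image file name and confidence threshold below are illustrative, not from this page), an image can be sent to a running model as raw bytes:

```python
def detect_labels(model_arn, image_path, min_confidence=50):
    # Imported inside the function so the helper can be defined
    # even where boto3 isn't installed.
    import boto3

    client = boto3.client("rekognition")
    with open(image_path, "rb") as image_file:
        response = client.detect_custom_labels(
            ProjectVersionArn=model_arn,
            Image={"Bytes": image_file.read()},
            MinConfidence=min_confidence)
    # Each detected custom label carries a name and a confidence score.
    return [(label["Name"], label["Confidence"])
            for label in response["CustomLabels"]]
```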

Starting an Amazon Rekognition Custom Labels model (console)

Use the following procedure to start running an Amazon Rekognition Custom Labels model with the console. You can start the model directly from the console, or you can use the AWS SDK code that the console provides.

To start a model (console)
  1. Open the Amazon Rekognition console at https://console.aws.amazon.com/rekognition/

  2. Choose Use Custom Labels.

  3. Choose Get started.

  4. In the left navigation pane, choose Projects.

  5. On the Projects resources page, choose the project that contains the trained model that you want to start.

  6. In the Models section, choose the model that you want to start.

  7. Choose the Use model tab.

  8. Do one of the following:

    Start model using the console

    In the Start or stop model section, do the following:

    1. Choose the number of inference units that you want to use. For more information, see Running a trained Amazon Rekognition Custom Labels model.

    2. Choose Start.

    3. In the Start model dialog box, choose Start.

    Start model using the AWS SDK

    In the Use your model section, do the following:

    1. Choose API Code.

    2. Choose either AWS CLI or Python.

    3. In Start model, copy the example code.

    4. Use the example code to start your model. For more information, see Starting an Amazon Rekognition Custom Labels model (SDK).


  9. To go back to the project overview page, choose the project name at the top of the page.

  10. In the Models section, check the status of the model. When the model status is RUNNING, you can use the model to analyze images. For more information, see Analyzing an image with a trained model.

Starting an Amazon Rekognition Custom Labels model (SDK)

You start a model by calling the StartProjectVersion API and passing the Amazon Resource Name (ARN) of the model in the ProjectVersionArn input parameter. You also specify the number of inference units that you want to use. For more information, see Running a trained Amazon Rekognition Custom Labels model.

A model might take a while to start. The Python and Java examples in this topic use waiters to wait for the model to start. A waiter is a utility method that polls for a particular state to occur. Alternatively, you can check the current status by calling DescribeProjectVersions.
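The start-then-wait sequence just described can be outlined as follows (a minimal sketch with simplified parameter handling; the helper name is illustrative):

```python
def start_and_wait(rek_client, project_arn, model_arn,
                   min_inference_units, max_inference_units=None):
    # Build the StartProjectVersion parameters. MaxInferenceUnits is
    # optional and enables auto-scaling only when it's supplied.
    params = {"ProjectVersionArn": model_arn,
              "MinInferenceUnits": min_inference_units}
    if max_inference_units is not None:
        params["MaxInferenceUnits"] = max_inference_units
    rek_client.start_project_version(**params)

    # The waiter repeatedly calls DescribeProjectVersions until the
    # model version reaches the RUNNING state.
    version_name = model_arn.split("version/", 1)[1].rpartition("/")[0]
    waiter = rek_client.get_waiter("project_version_running")
    waiter.wait(ProjectArn=project_arn, VersionNames=[version_name])
```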

To start a model (SDK)
  1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see Step 4: Set up the AWS CLI and AWS SDKs.

  2. Use the following example code to start a model.

    CLI

    Change the value of project-version-arn to the ARN of the model that you want to start. Change the value of --min-inference-units to the number of inference units that you want to use. Optionally, change --max-inference-units to the maximum number of inference units that Amazon Rekognition Custom Labels can use to auto-scale the model.

    aws rekognition start-project-version --project-version-arn model_arn \
      --min-inference-units minimum number of units \
      --max-inference-units maximum number of units \
      --profile custom-labels-access
    Python

    Supply the following command line parameters:

    • project_arn – the ARN of the project that contains the model that you want to start.

    • model_arn – the ARN of the model that you want to start.

    • min_inference_units – the number of inference units that you want to use.

    • (Optional) --max_inference_units – the maximum number of inference units that Amazon Rekognition Custom Labels can use to auto-scale the model.

    # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
    # SPDX-License-Identifier: Apache-2.0
    """
    Purpose
    Shows how to start running an Amazon Rekognition Custom Labels model.
    """
    import argparse
    import logging
    import boto3
    from botocore.exceptions import ClientError

    logger = logging.getLogger(__name__)


    def get_model_status(rek_client, project_arn, model_arn):
        """
        Gets the current status of an Amazon Rekognition Custom Labels model.
        :param rek_client: The Amazon Rekognition Custom Labels Boto3 client.
        :param project_arn: The ARN of the project that you want to use.
        :param model_arn: The ARN of the model that you want the status for.
        :return: The model status.
        """
        logger.info("Getting status for %s.", model_arn)

        # Extract the model version from the model ARN.
        version_name = (model_arn.split("version/", 1)[1]).rpartition('/')[0]

        models = rek_client.describe_project_versions(ProjectArn=project_arn,
                                                      VersionNames=[version_name])

        for model in models['ProjectVersionDescriptions']:
            logger.info("Status: %s", model['StatusMessage'])
            return model["Status"]

        error_message = f"Model {model_arn} not found."
        logger.exception(error_message)
        raise Exception(error_message)


    def start_model(rek_client, project_arn, model_arn, min_inference_units,
                    max_inference_units=None):
        """
        Starts the hosting of an Amazon Rekognition Custom Labels model.
        :param rek_client: The Amazon Rekognition Custom Labels Boto3 client.
        :param project_arn: The ARN of the project that contains the model that
        you want to start hosting.
        :param model_arn: The ARN of the model version that you want to start hosting.
        :param min_inference_units: The number of inference units to use for hosting.
        :param max_inference_units: The number of inference units to use for
        auto-scaling the model. If not supplied, auto-scaling doesn't happen.
        """
        try:
            # Start the model.
            logger.info("Starting model: %s. Please wait....", model_arn)
            if max_inference_units is None:
                rek_client.start_project_version(
                    ProjectVersionArn=model_arn,
                    MinInferenceUnits=int(min_inference_units))
            else:
                rek_client.start_project_version(
                    ProjectVersionArn=model_arn,
                    MinInferenceUnits=int(min_inference_units),
                    MaxInferenceUnits=int(max_inference_units))

            # Wait for the model to be in the running state.
            version_name = (model_arn.split("version/", 1)[1]).rpartition('/')[0]
            project_version_running_waiter = rek_client.get_waiter(
                'project_version_running')
            project_version_running_waiter.wait(ProjectArn=project_arn,
                                                VersionNames=[version_name])

            # Get the running status.
            return get_model_status(rek_client, project_arn, model_arn)

        except ClientError as err:
            logger.exception("Client error: Problem starting model: %s", err)
            raise


    def add_arguments(parser):
        """
        Adds command line arguments to the parser.
        :param parser: The command line parser.
        """
        parser.add_argument(
            "project_arn",
            help="The ARN of the project that contains the model that you want to start."
        )
        parser.add_argument(
            "model_arn",
            help="The ARN of the model that you want to start."
        )
        parser.add_argument(
            "min_inference_units",
            help="The minimum number of inference units to use."
        )
        parser.add_argument(
            "--max_inference_units",
            help="The maximum number of inference units to use for auto-scaling the model.",
            required=False
        )


    def main():
        logging.basicConfig(level=logging.INFO,
                            format="%(levelname)s: %(message)s")

        try:
            # Get command line arguments.
            parser = argparse.ArgumentParser(usage=argparse.SUPPRESS)
            add_arguments(parser)
            args = parser.parse_args()

            # Start the model.
            session = boto3.Session(profile_name='custom-labels-access')
            rekognition_client = session.client("rekognition")

            status = start_model(rekognition_client,
                                 args.project_arn,
                                 args.model_arn,
                                 args.min_inference_units,
                                 args.max_inference_units)

            print(f"Finished starting model: {args.model_arn}")
            print(f"Status: {status}")

        except ClientError as err:
            error_message = f"Client error: Problem starting model: {err}"
            logger.exception(error_message)
            print(error_message)
        except Exception as err:
            error_message = f"Problem starting model: {err}"
            logger.exception(error_message)
            print(error_message)


    if __name__ == "__main__":
        main()
    Java V2

    Supply the following command line parameters:

    • project_arn – the ARN of the project that contains the model that you want to start.

    • model_arn – the ARN of the model that you want to start.

    • min_inference_units – the number of inference units that you want to use.

    • (Optional) max_inference_units – the maximum number of inference units that Amazon Rekognition Custom Labels can use to auto-scale the model. If you don't specify a value, auto-scaling doesn't happen.

    /*
       Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
       SPDX-License-Identifier: Apache-2.0
    */
    package com.example.rekognition;

    import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
    import software.amazon.awssdk.core.waiters.WaiterResponse;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.rekognition.RekognitionClient;
    import software.amazon.awssdk.services.rekognition.model.DescribeProjectVersionsRequest;
    import software.amazon.awssdk.services.rekognition.model.DescribeProjectVersionsResponse;
    import software.amazon.awssdk.services.rekognition.model.ProjectVersionDescription;
    import software.amazon.awssdk.services.rekognition.model.ProjectVersionStatus;
    import software.amazon.awssdk.services.rekognition.model.RekognitionException;
    import software.amazon.awssdk.services.rekognition.model.StartProjectVersionRequest;
    import software.amazon.awssdk.services.rekognition.model.StartProjectVersionResponse;
    import software.amazon.awssdk.services.rekognition.waiters.RekognitionWaiter;

    import java.util.Optional;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class StartModel {

        public static final Logger logger = Logger.getLogger(StartModel.class.getName());

        public static int findForwardSlash(String modelArn, int n) {
            int start = modelArn.indexOf('/');
            while (start >= 0 && n > 1) {
                start = modelArn.indexOf('/', start + 1);
                n -= 1;
            }
            return start;
        }

        public static void startMyModel(RekognitionClient rekClient, String projectArn,
                String modelArn, Integer minInferenceUnits, Integer maxInferenceUnits)
                throws Exception, RekognitionException {

            try {
                logger.log(Level.INFO, "Starting model: {0}", modelArn);

                StartProjectVersionRequest startProjectVersionRequest = null;

                if (maxInferenceUnits == null) {
                    startProjectVersionRequest = StartProjectVersionRequest.builder()
                            .projectVersionArn(modelArn)
                            .minInferenceUnits(minInferenceUnits)
                            .build();
                } else {
                    startProjectVersionRequest = StartProjectVersionRequest.builder()
                            .projectVersionArn(modelArn)
                            .minInferenceUnits(minInferenceUnits)
                            .maxInferenceUnits(maxInferenceUnits)
                            .build();
                }

                StartProjectVersionResponse response = rekClient.startProjectVersion(startProjectVersionRequest);
                logger.log(Level.INFO, "Status: {0}", response.statusAsString());

                // Get the model version.
                int start = findForwardSlash(modelArn, 3) + 1;
                int end = findForwardSlash(modelArn, 4);
                String versionName = modelArn.substring(start, end);

                // Wait until the model starts.
                DescribeProjectVersionsRequest describeProjectVersionsRequest = DescribeProjectVersionsRequest.builder()
                        .versionNames(versionName)
                        .projectArn(projectArn)
                        .build();

                RekognitionWaiter waiter = rekClient.waiter();
                WaiterResponse<DescribeProjectVersionsResponse> waiterResponse = waiter
                        .waitUntilProjectVersionRunning(describeProjectVersionsRequest);

                Optional<DescribeProjectVersionsResponse> optionalResponse = waiterResponse.matched().response();
                DescribeProjectVersionsResponse describeProjectVersionsResponse = optionalResponse.get();

                for (ProjectVersionDescription projectVersionDescription : describeProjectVersionsResponse
                        .projectVersionDescriptions()) {
                    if (projectVersionDescription.status() == ProjectVersionStatus.RUNNING) {
                        logger.log(Level.INFO, "Model is running");
                    } else {
                        String error = "Model training failed: " + projectVersionDescription.statusAsString()
                                + " " + projectVersionDescription.statusMessage() + " " + modelArn;
                        logger.log(Level.SEVERE, error);
                        throw new Exception(error);
                    }
                }

            } catch (RekognitionException e) {
                logger.log(Level.SEVERE, "Could not start model: {0}", e.getMessage());
                throw e;
            }
        }

        public static void main(String[] args) {

            String modelArn = null;
            String projectArn = null;
            Integer minInferenceUnits = null;
            Integer maxInferenceUnits = null;

            final String USAGE = "\n" + "Usage: "
                    + "<project_arn> <model_arn> <min_inference_units> <max_inference_units>\n\n"
                    + "Where:\n"
                    + "   project_arn - The ARN of the project that contains the model that you want to start. \n\n"
                    + "   model_arn - The ARN of the model version that you want to start.\n\n"
                    + "   min_inference_units - The number of inference units to start the model with.\n\n"
                    + "   max_inference_units - The maximum number of inference units that Custom Labels can use to "
                    + "   automatically scale the model. If the value is null, automatic scaling doesn't happen.\n\n";

            if (args.length < 3 || args.length > 4) {
                System.out.println(USAGE);
                System.exit(1);
            }

            projectArn = args[0];
            modelArn = args[1];
            minInferenceUnits = Integer.parseInt(args[2]);

            if (args.length == 4) {
                maxInferenceUnits = Integer.parseInt(args[3]);
            }

            try {
                // Get the Rekognition client.
                RekognitionClient rekClient = RekognitionClient.builder()
                        .credentialsProvider(ProfileCredentialsProvider.create("custom-labels-access"))
                        .region(Region.US_WEST_2)
                        .build();

                // Start the model.
                startMyModel(rekClient, projectArn, modelArn, minInferenceUnits, maxInferenceUnits);

                System.out.println(String.format("Model started: %s", modelArn));

                rekClient.close();

            } catch (RekognitionException rekError) {
                logger.log(Level.SEVERE, "Rekognition client error: {0}", rekError.getMessage());
                System.exit(1);
            } catch (Exception rekError) {
                logger.log(Level.SEVERE, "Error: {0}", rekError.getMessage());
                System.exit(1);
            }
        }
    }

© 2025, Amazon Web Services, Inc. or its affiliates. All rights reserved.