Starting your Amazon Lookout for Vision model - Amazon Lookout for Vision

End of support notice: On October 31, 2025, AWS will discontinue support for Amazon Lookout for Vision. After October 31, 2025, you will no longer be able to access the Lookout for Vision console or Lookout for Vision resources. For more information, visit this blog post.


Starting your Amazon Lookout for Vision model

Before you can use an Amazon Lookout for Vision model to detect anomalies, you must first start the model. You start a model by calling the StartModel API and passing the following:

  • ProjectName – The name of the project that contains the model that you want to start.

  • ModelVersion – The version of the model that you want to start.

  • MinInferenceUnits – The minimum number of inference units. For more information, see Inference units.

  • (Optional) MaxInferenceUnits – The maximum number of inference units that Amazon Lookout for Vision can use to automatically scale the model. For more information, see Auto-scaling inference units.
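The parameters above map directly onto the SDK request. The following is a minimal sketch of assembling them before a Boto3 `start_model` call; the helper name and the project/version values are illustrative placeholders, not part of the API:

```python
def build_start_model_request(project_name, model_version,
                              min_inference_units, max_inference_units=None):
    """Assemble StartModel parameters; MaxInferenceUnits is optional."""
    params = {
        "ProjectName": project_name,
        "ModelVersion": model_version,
        "MinInferenceUnits": min_inference_units,
    }
    if max_inference_units is not None:
        params["MaxInferenceUnits"] = max_inference_units
    return params

# With AWS credentials configured, the call itself would look like:
# import boto3
# client = boto3.client("lookoutvision")
# client.start_model(**build_start_model_request("my-project", "1", 1, 2))
```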

The Amazon Lookout for Vision console provides example code that you can use to start and stop a model.

Note

You are charged for the amount of time that your model runs. To stop a running model, see Stopping your Amazon Lookout for Vision model.

You can use the AWS SDK to view running models across all AWS Regions in which Lookout for Vision is available. For example code, see find_running_models.py.
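At its core, a cross-Region check like find_running_models.py describes each model in each project and keeps the ones whose status is HOSTED. The sketch below shows only that filtering step; the function name is illustrative (not taken from the sample), and the Region loop is commented out because it needs AWS credentials:

```python
def filter_running(model_descriptions):
    """Keep only models whose hosting status is HOSTED (that is, running)."""
    return [m for m in model_descriptions if m.get("Status") == "HOSTED"]

# The full script loops over Regions, roughly like this:
# import boto3
# for region in boto3.session.Session().get_available_regions("lookoutvision"):
#     client = boto3.client("lookoutvision", region_name=region)
#     for project in client.list_projects()["Projects"]:
#         models = client.list_models(ProjectName=project["ProjectName"])["Models"]
#         # Describe each model version, then apply filter_running to the results.
```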

Starting your model (console)

The Amazon Lookout for Vision console provides an AWS CLI command that you can use to start a model. After the model starts, you can begin detecting anomalies in images. For more information, see Detecting anomalies in an image.

To start your model (console)
  1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see Step 4: Set up the AWS CLI and AWS SDKs.

  2. Open the Amazon Lookout for Vision console at https://console.aws.amazon.com/lookoutvision/.

  3. Choose Get started.

  4. In the left navigation pane, choose Projects.

  5. On the Projects resources page, choose the project that contains the trained model that you want to start.

  6. In the Models section, choose the model that you want to start.

  7. On the model's details page, choose Use model, and then choose Integrate API to the cloud.

    Tip

    If you want to deploy your model to an edge device, choose Create model packaging job. For more information, see Packaging your Amazon Lookout for Vision model.

  8. Under AWS CLI commands, copy the AWS CLI command that calls start-model.

  9. At the command prompt, enter the start-model command that you copied in the previous step. If you're using the lookoutvision profile to get credentials, add the --profile lookoutvision-access parameter.

  10. In the console's left navigation pane, choose Models.

  11. Check the Status column for the current status of the model. When the status is Hosted, you can use the model to detect anomalies in images. For more information, see Detecting anomalies in an image.

Starting your Amazon Lookout for Vision model (SDK)

You start a model by calling the StartModel operation.

A model might take a while to start. You can check the current status by calling DescribeModel. For more information, see Viewing your models.
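The status that DescribeModel returns is what drives the wait loops in the SDK examples in this section. A minimal sketch of that status handling follows; the status strings are the ones the API uses, but the helper name is illustrative:

```python
def hosting_poll_action(status):
    """Map a DescribeModel status to the next action for a start-model wait loop."""
    if status == "STARTING_HOSTING":
        return "wait"  # Keep polling; the model is still starting.
    if status == "HOSTED":
        return "done"  # The model is ready for use.
    return "fail"      # HOSTING_FAILED or any unexpected status.

# A caller would sleep and re-call describe_model while this returns "wait":
# desc = client.describe_model(ProjectName="my-project", ModelVersion="1")
# action = hosting_poll_action(desc["ModelDescription"]["Status"])
```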

To start your model (SDK)
  1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see Step 4: Set up the AWS CLI and AWS SDKs.

  2. Use the following example code to start a model.

    CLI

    Change the following values:

    • project-name to the name of the project that contains the model that you want to start.

    • model-version to the version of the model that you want to start.

    • --min-inference-units to the number of inference units that you want to use.

    • (Optional) --max-inference-units to the maximum number of inference units that Amazon Lookout for Vision can use to automatically scale the model.

    aws lookoutvision start-model --project-name "project name"\
      --model-version model version\
      --min-inference-units minimum number of units\
      --max-inference-units max number of units\
      --profile lookoutvision-access
    Python

    This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.

    # Requires logging, time, and botocore.exceptions.ClientError; see the full
    # example for the surrounding class and imports.
    @staticmethod
    def start_model(
            lookoutvision_client, project_name, model_version,
            min_inference_units, max_inference_units=None):
        """
        Starts the hosting of a Lookout for Vision model.

        :param lookoutvision_client: A Boto3 Lookout for Vision client.
        :param project_name: The name of the project that contains the version of
                             the model that you want to start hosting.
        :param model_version: The version of the model that you want to start hosting.
        :param min_inference_units: The number of inference units to use for hosting.
        :param max_inference_units: (Optional) The maximum number of inference units
                                    that Lookout for Vision can use to automatically
                                    scale the model.
        """
        try:
            logger.info(
                "Starting model version %s for project %s", model_version, project_name)
            if max_inference_units is None:
                lookoutvision_client.start_model(
                    ProjectName=project_name,
                    ModelVersion=model_version,
                    MinInferenceUnits=min_inference_units)
            else:
                lookoutvision_client.start_model(
                    ProjectName=project_name,
                    ModelVersion=model_version,
                    MinInferenceUnits=min_inference_units,
                    MaxInferenceUnits=max_inference_units)
            print("Starting hosting...")

            status = ""
            finished = False

            # Wait until hosted or failed.
            while finished is False:
                model_description = lookoutvision_client.describe_model(
                    ProjectName=project_name, ModelVersion=model_version)
                status = model_description["ModelDescription"]["Status"]
                if status == "STARTING_HOSTING":
                    logger.info("Host starting in progress...")
                    time.sleep(10)
                    continue
                if status == "HOSTED":
                    logger.info("Model is hosted and ready for use.")
                    finished = True
                    continue
                logger.info("Model hosting failed and the model can't be used.")
                finished = True

            if status != "HOSTED":
                logger.error("Error hosting model: %s", status)
                raise Exception(f"Error hosting model: {status}")
        except ClientError:
            logger.exception("Couldn't host model.")
            raise
    Java V2

    This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.

    /**
     * Starts hosting an Amazon Lookout for Vision model. Returns when the model has
     * started or if hosting fails. You are charged for the amount of time that a
     * model is hosted. To stop hosting a model, use the StopModel operation.
     *
     * @param lfvClient         An Amazon Lookout for Vision client.
     * @param projectName       The name of the project that contains the model that
     *                          you want to host.
     * @param modelVersion      The version of the model that you want to host.
     * @param minInferenceUnits The number of inference units to use for hosting.
     * @param maxInferenceUnits The maximum number of inference units that Lookout for
     *                          Vision can use for automatically scaling the model. If
     *                          the value is null, automatic scaling doesn't happen.
     * @return ModelDescription The description of the model, which includes the
     *         model hosting status.
     */
    public static ModelDescription startModel(LookoutVisionClient lfvClient, String projectName,
            String modelVersion, Integer minInferenceUnits, Integer maxInferenceUnits)
            throws LookoutVisionException, InterruptedException {

        logger.log(Level.INFO, "Starting Model version {0} for project {1}.",
                new Object[] { modelVersion, projectName });

        StartModelRequest startModelRequest = null;

        if (maxInferenceUnits == null) {
            startModelRequest = StartModelRequest.builder().projectName(projectName)
                    .modelVersion(modelVersion).minInferenceUnits(minInferenceUnits).build();
        } else {
            startModelRequest = StartModelRequest.builder().projectName(projectName)
                    .modelVersion(modelVersion).minInferenceUnits(minInferenceUnits)
                    .maxInferenceUnits(maxInferenceUnits).build();
        }

        // Start hosting the model.
        lfvClient.startModel(startModelRequest);

        DescribeModelRequest describeModelRequest = DescribeModelRequest.builder()
                .projectName(projectName).modelVersion(modelVersion).build();

        ModelDescription modelDescription = null;

        boolean finished = false;
        // Wait until model is hosted or failure occurs.
        do {
            modelDescription = lfvClient.describeModel(describeModelRequest).modelDescription();

            switch (modelDescription.status()) {
                case HOSTED:
                    logger.log(Level.INFO, "Model version {0} for project {1} is running.",
                            new Object[] { modelVersion, projectName });
                    finished = true;
                    break;
                case STARTING_HOSTING:
                    logger.log(Level.INFO, "Model version {0} for project {1} is starting.",
                            new Object[] { modelVersion, projectName });
                    TimeUnit.SECONDS.sleep(60);
                    break;
                case HOSTING_FAILED:
                    logger.log(Level.SEVERE, "Hosting failed for model version {0} for project {1}.",
                            new Object[] { modelVersion, projectName });
                    finished = true;
                    break;
                default:
                    logger.log(Level.SEVERE,
                            "Unexpected error when hosting model version {0} for project {1}: {2}.",
                            new Object[] { projectName, modelVersion, modelDescription.status() });
                    finished = true;
                    break;
            }
        } while (!finished);

        logger.log(Level.INFO, "Finished starting model version {0} for project {1} status: {2}",
                new Object[] { modelVersion, projectName, modelDescription.statusMessage() });

        return modelDescription;
    }
  3. If the output of the code is Model is hosted and ready for use, you can use the model to detect anomalies in images. For more information, see Detecting anomalies in an image.
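Once the model is hosted, the next step is a DetectAnomalies call. The sketch below shows one way to interpret the response, assuming the documented DetectAnomalyResult shape; the helper name, image path, and project values are placeholders:

```python
def is_anomalous(detect_response, min_confidence=0.5):
    """Interpret a DetectAnomalies response: anomalous, with enough confidence?"""
    result = detect_response["DetectAnomalyResult"]
    return result["IsAnomalous"] and result["Confidence"] >= min_confidence

# With a hosted model and credentials configured:
# with open("image.jpg", "rb") as image:
#     response = client.detect_anomalies(
#         ProjectName="my-project", ModelVersion="1",
#         ContentType="image/jpeg", Body=image.read())
# print(is_anomalous(response))
```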