End of support notice: On October 31, 2025, AWS will discontinue support for Amazon Lookout for Vision. After October 31, 2025, you will no longer be able to access the Lookout for Vision console or Lookout for Vision resources. For more information, visit this blog post.
Starting your Amazon Lookout for Vision model
Before you can use your Amazon Lookout for Vision model to detect anomalies, you must first start the model. You start a model by calling the StartModel API and passing the following, as shown in the sketch after this list:
ProjectName – The name of the project that contains the model that you want to start.
ModelVersion – The version of the model that you want to start.
MinInferenceUnits – The minimum number of inference units. For more information, see Inference units.
(Optional) MaxInferenceUnits – The maximum number of inference units that Amazon Lookout for Vision can use to automatically scale the model. For more information, see Auto scaling inference units.
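For example, a minimal Python (Boto3) sketch of the call might look like the following. The project name, model version, and inference unit counts are placeholder values for illustration, not values from this guide.

import boto3

# Hypothetical values -- replace the project name, model version, and
# inference unit counts with your own.
lookoutvision_client = boto3.client("lookoutvision")

lookoutvision_client.start_model(
    ProjectName="my-project",      # placeholder project name
    ModelVersion="1",              # placeholder model version
    MinInferenceUnits=1,           # minimum number of inference units
    MaxInferenceUnits=2)           # optional upper bound for automatic scaling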
The Amazon Lookout for Vision console provides example code that you can use to start and stop a model.
Starting your model (console)
The Amazon Lookout for Vision console provides an AWS CLI command that you can use to start a model. After the model starts, you can start detecting anomalies in images. For more information, see Detecting anomalies in an image.
To start your model (console)
1. If you haven't already, install and configure the AWS CLI and the AWS SDKs. For more information, see Step 4: Set up the AWS CLI and AWS SDKs.
2. Open the Amazon Lookout for Vision console at https://console.aws.amazon.com/lookoutvision/.
3. Choose Get started.
4. In the left navigation pane, choose Projects.
5. On the Projects resources page, choose the project that contains the trained model that you want to start.
6. In the Models section, choose the model that you want to start.
7. On the model's details page, choose Use model, and then choose Integrate API to the cloud.
8. Under AWS CLI commands, copy the AWS CLI command that calls start-model.
9. At the command prompt, enter the start-model command that you copied in the previous step. If you are using the lookoutvision profile to get credentials, add the --profile lookoutvision-access parameter.
10. In the console, choose Models in the left navigation page.
11. Check the Status column for the current status of the model. When the status is Hosted, you can use the model to detect anomalies in images. For more information, see Detecting anomalies in an image.
Starting your Amazon Lookout for Vision model (SDK)
You start a model by calling the StartModel operation.
A model might take a while to start. You can check the current status by calling DescribeModel. For more information, see Viewing your models.
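For example, a minimal Boto3 sketch of the status check might look like the following. The project name and model version are placeholder values for illustration.

import boto3

lookoutvision_client = boto3.client("lookoutvision")

# Hypothetical project name and model version -- replace with your own.
response = lookoutvision_client.describe_model(
    ProjectName="my-project", ModelVersion="1")

# The model is ready to detect anomalies when the status is HOSTED.
print(response["ModelDescription"]["Status"])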
To start your model (SDK)
1. If you haven't already, install and configure the AWS CLI and the AWS SDKs. For more information, see Step 4: Set up the AWS CLI and AWS SDKs.
2. Use the following example code to start a model.
- CLI
Change the following values:
project-name to the name of the project that contains the model that you want to start.
model-version to the version of the model that you want to start.
--min-inference-units to the number of inference units that you want to use.
(Optional) --max-inference-units to the maximum number of inference units that Amazon Lookout for Vision can use to automatically scale the model.
aws lookoutvision start-model --project-name "project name" \
  --model-version model version \
  --min-inference-units minimum number of units \
  --max-inference-units max number of units \
  --profile lookoutvision-access
- Python
This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.
@staticmethod
def start_model(
        lookoutvision_client, project_name, model_version, min_inference_units, max_inference_units=None):
    """
    Starts the hosting of a Lookout for Vision model.

    :param lookoutvision_client: A Boto3 Lookout for Vision client.
    :param project_name: The name of the project that contains the version of the
                         model that you want to start hosting.
    :param model_version: The version of the model that you want to start hosting.
    :param min_inference_units: The number of inference units to use for hosting.
    :param max_inference_units: (Optional) The maximum number of inference units that
                                Lookout for Vision can use to automatically scale the model.
    """
    try:
        logger.info(
            "Starting model version %s for project %s", model_version, project_name)

        if max_inference_units is None:
            lookoutvision_client.start_model(
                ProjectName=project_name,
                ModelVersion=model_version,
                MinInferenceUnits=min_inference_units)
        else:
            lookoutvision_client.start_model(
                ProjectName=project_name,
                ModelVersion=model_version,
                MinInferenceUnits=min_inference_units,
                MaxInferenceUnits=max_inference_units)

        print("Starting hosting...")

        status = ""
        finished = False

        # Wait until hosted or failed.
        while finished is False:
            model_description = lookoutvision_client.describe_model(
                ProjectName=project_name, ModelVersion=model_version)
            status = model_description["ModelDescription"]["Status"]

            if status == "STARTING_HOSTING":
                logger.info("Host starting in progress...")
                time.sleep(10)
                continue

            if status == "HOSTED":
                logger.info("Model is hosted and ready for use.")
                finished = True
                continue

            logger.info("Model hosting failed and the model can't be used.")
            finished = True

        if status != "HOSTED":
            logger.error("Error hosting model: %s", status)
            raise Exception(f"Error hosting model: {status}")

    except ClientError:
        logger.exception("Couldn't host model.")
        raise
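A hypothetical call to this function might look like the following sketch. The enclosing class name (Hosting), project name, and model version are assumptions made for illustration; they are not taken from this guide.

import boto3

lookoutvision_client = boto3.client("lookoutvision")

# Hypothetical values -- the class name, project name, and model version
# below are assumptions for illustration only.
Hosting.start_model(
    lookoutvision_client, "my-project", "1", min_inference_units=1)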
- Java V2
This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.
/**
 * Starts hosting an Amazon Lookout for Vision model. Returns when the model has
 * started or if hosting fails. You are charged for the amount of time that a
 * model is hosted. To stop hosting a model, use the StopModel operation.
 *
 * @param lfvClient         An Amazon Lookout for Vision client.
 * @param projectName       The name of the project that contains the model that you
 *                          want to host.
 * @param modelVersion      The version of the model that you want to host.
 * @param minInferenceUnits The number of inference units to use for hosting.
 * @param maxInferenceUnits The maximum number of inference units that Lookout for
 *                          Vision can use for automatically scaling the model. If the
 *                          value is null, automatic scaling doesn't happen.
 * @return ModelDescription The description of the model, which includes the
 *         model hosting status.
 */
public static ModelDescription startModel(LookoutVisionClient lfvClient, String projectName, String modelVersion,
        Integer minInferenceUnits, Integer maxInferenceUnits) throws LookoutVisionException, InterruptedException {

    logger.log(Level.INFO, "Starting Model version {0} for project {1}.",
            new Object[] { modelVersion, projectName });

    StartModelRequest startModelRequest = null;

    if (maxInferenceUnits == null) {
        startModelRequest = StartModelRequest.builder().projectName(projectName).modelVersion(modelVersion)
                .minInferenceUnits(minInferenceUnits).build();
    } else {
        startModelRequest = StartModelRequest.builder().projectName(projectName).modelVersion(modelVersion)
                .minInferenceUnits(minInferenceUnits).maxInferenceUnits(maxInferenceUnits).build();
    }

    // Start hosting the model.
    lfvClient.startModel(startModelRequest);

    DescribeModelRequest describeModelRequest = DescribeModelRequest.builder().projectName(projectName)
            .modelVersion(modelVersion).build();

    ModelDescription modelDescription = null;
    boolean finished = false;

    // Wait until model is hosted or failure occurs.
    do {
        modelDescription = lfvClient.describeModel(describeModelRequest).modelDescription();

        switch (modelDescription.status()) {

        case HOSTED:
            logger.log(Level.INFO, "Model version {0} for project {1} is running.",
                    new Object[] { modelVersion, projectName });
            finished = true;
            break;

        case STARTING_HOSTING:
            logger.log(Level.INFO, "Model version {0} for project {1} is starting.",
                    new Object[] { modelVersion, projectName });
            TimeUnit.SECONDS.sleep(60);
            break;

        case HOSTING_FAILED:
            logger.log(Level.SEVERE, "Hosting failed for model version {0} for project {1}.",
                    new Object[] { modelVersion, projectName });
            finished = true;
            break;

        default:
            logger.log(Level.SEVERE, "Unexpected error when hosting model version {0} for project {1}: {2}.",
                    new Object[] { modelVersion, projectName, modelDescription.status() });
            finished = true;
            break;
        }

    } while (!finished);

    logger.log(Level.INFO, "Finished starting model version {0} for project {1} status: {2}",
            new Object[] { modelVersion, projectName, modelDescription.statusMessage() });

    return modelDescription;
}
If the output of the code is Model is hosted and ready for use, you can use the model to detect anomalies in images. For more information, see Detecting anomalies in an image.