End of support notice: On October 31, 2025, AWS will discontinue support for Amazon Lookout for Vision. After October 31, 2025, you will no longer be able to access the Lookout for Vision console or Lookout for Vision resources. For more information, visit this blog post.
Starting your Amazon Lookout for Vision model
Before you can use an Amazon Lookout for Vision model to detect anomalies, you must first start the model. You start a model by calling the StartModel API and passing the following:
ProjectName – The name of the project that contains the model that you want to start.
ModelVersion – The version of the model that you want to start.
MinInferenceUnits – The minimum number of inference units. For more information, see Inference units.
(Optional) MaxInferenceUnits – The maximum number of inference units that Amazon Lookout for Vision can use to automatically scale the model. For more information, see Auto-scaling inference units.
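Assuming you call the API with the AWS SDK for Python (Boto3), the parameters above map directly onto keyword arguments of `start_model`. Because MaxInferenceUnits is optional, a small helper that builds the request and omits the parameter when no value is supplied keeps the call in one place (the helper name and the example values are illustrative, not part of the Lookout for Vision API):

```python
def build_start_model_request(project_name, model_version,
                              min_inference_units, max_inference_units=None):
    """Build the keyword arguments for lookoutvision_client.start_model().

    MaxInferenceUnits is optional, so it is included only when a value
    is supplied, matching the branching shown in the SDK examples below.
    """
    request = {
        "ProjectName": project_name,
        "ModelVersion": model_version,
        "MinInferenceUnits": min_inference_units,
    }
    if max_inference_units is not None:
        request["MaxInferenceUnits"] = max_inference_units
    return request


# With a Boto3 client, the call would then be:
#   client = boto3.client("lookoutvision")
#   client.start_model(**build_start_model_request("my-project", "1", 1, 3))
print(build_start_model_request("my-project", "1", 1))
```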
The Amazon Lookout for Vision console provides example code that you can use to start and stop a model.
Starting a model (console)
The Amazon Lookout for Vision console provides an AWS CLI command that you can use to start a model. After the model starts, you can begin detecting anomalies in images. For more information, see Detecting anomalies in an image.
Starting your Amazon Lookout for Vision model (SDK)
You start a model by calling the StartModel operation.
A model might take a while to start. You can check the current status by calling DescribeModel. For more information, see Viewing your models.
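The wait for startup can be expressed as a small polling loop around DescribeModel. The sketch below separates the loop from the API call so it can be exercised with any status source; the function name is illustrative, and a real caller would pass a closure that reads Status from a `describe_model` response:

```python
import time


def wait_for_model(get_status, poll_seconds=10, max_polls=180):
    """Poll a status source until hosting succeeds or fails.

    get_status is any zero-argument callable that returns the model's
    current Status string, for example a wrapper around DescribeModel.
    Returns the final status once it is no longer STARTING_HOSTING.
    """
    for _ in range(max_polls):
        status = get_status()
        if status == "STARTING_HOSTING":
            time.sleep(poll_seconds)
            continue
        return status
    raise TimeoutError("Model did not finish starting in time.")


# Simulated status sequence, standing in for repeated DescribeModel calls.
statuses = iter(["STARTING_HOSTING", "STARTING_HOSTING", "HOSTED"])
print(wait_for_model(lambda: next(statuses), poll_seconds=0))  # → HOSTED
```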
To start your model (SDK)
-
If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see Step 4: Set up the AWS CLI and AWS SDKs.
Use the following example code to start a model.
- CLI
-
Change the following values:
project-name
To the name of the project that contains the model that you want to start.
model-version
To the version of the model that you want to start.
--min-inference-units
To the number of inference units that you want to use.
(Optional) --max-inference-units
To the maximum number of inference units that Amazon Lookout for Vision can use to automatically scale the model.
aws lookoutvision start-model --project-name "project name" \
  --model-version model version \
  --min-inference-units minimum number of units \
  --max-inference-units max number of units \
  --profile lookoutvision-access
- Python
-
This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.
@staticmethod
def start_model(
        lookoutvision_client, project_name, model_version,
        min_inference_units, max_inference_units=None):
    """
    Starts the hosting of a Lookout for Vision model.

    :param lookoutvision_client: A Boto3 Lookout for Vision client.
    :param project_name: The name of the project that contains the version of the
                         model that you want to start hosting.
    :param model_version: The version of the model that you want to start hosting.
    :param min_inference_units: The number of inference units to use for hosting.
    :param max_inference_units: (Optional) The maximum number of inference units that
                                Lookout for Vision can use to automatically scale the
                                model.
    """
    try:
        logger.info(
            "Starting model version %s for project %s", model_version, project_name)
        if max_inference_units is None:
            lookoutvision_client.start_model(
                ProjectName=project_name,
                ModelVersion=model_version,
                MinInferenceUnits=min_inference_units)
        else:
            lookoutvision_client.start_model(
                ProjectName=project_name,
                ModelVersion=model_version,
                MinInferenceUnits=min_inference_units,
                MaxInferenceUnits=max_inference_units)
        print("Starting hosting...")
        status = ""
        finished = False
        # Wait until hosted or failed.
        while finished is False:
            model_description = lookoutvision_client.describe_model(
                ProjectName=project_name, ModelVersion=model_version)
            status = model_description["ModelDescription"]["Status"]
            if status == "STARTING_HOSTING":
                logger.info("Host starting in progress...")
                time.sleep(10)
                continue
            if status == "HOSTED":
                logger.info("Model is hosted and ready for use.")
                finished = True
                continue
            logger.info("Model hosting failed and the model can't be used.")
            finished = True
        if status != "HOSTED":
            logger.error("Error hosting model: %s", status)
            raise Exception(f"Error hosting model: {status}")
    except ClientError:
        logger.exception("Couldn't host model.")
        raise
- Java V2
-
This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.
/**
 * Starts hosting an Amazon Lookout for Vision model. Returns when the model has
 * started or if hosting fails. You are charged for the amount of time that a
 * model is hosted. To stop hosting a model, use the StopModel operation.
 *
 * @param lfvClient         An Amazon Lookout for Vision client.
 * @param projectName       The name of the project that contains the model that
 *                          you want to host.
 * @param modelVersion      The version of the model that you want to host.
 * @param minInferenceUnits The number of inference units to use for hosting.
 * @param maxInferenceUnits The maximum number of inference units that Lookout for
 *                          Vision can use for automatically scaling the model. If
 *                          the value is null, automatic scaling doesn't happen.
 * @return ModelDescription The description of the model, which includes the
 *         model hosting status.
 */
public static ModelDescription startModel(LookoutVisionClient lfvClient, String projectName, String modelVersion,
        Integer minInferenceUnits, Integer maxInferenceUnits) throws LookoutVisionException, InterruptedException {

    logger.log(Level.INFO, "Starting Model version {0} for project {1}.",
            new Object[] { modelVersion, projectName });

    StartModelRequest startModelRequest = null;

    if (maxInferenceUnits == null) {
        startModelRequest = StartModelRequest.builder().projectName(projectName).modelVersion(modelVersion)
                .minInferenceUnits(minInferenceUnits).build();
    } else {
        startModelRequest = StartModelRequest.builder().projectName(projectName).modelVersion(modelVersion)
                .minInferenceUnits(minInferenceUnits).maxInferenceUnits(maxInferenceUnits).build();
    }

    // Start hosting the model.
    lfvClient.startModel(startModelRequest);

    DescribeModelRequest describeModelRequest = DescribeModelRequest.builder().projectName(projectName)
            .modelVersion(modelVersion).build();

    ModelDescription modelDescription = null;
    boolean finished = false;
    // Wait until the model is hosted or a failure occurs.
    do {
        modelDescription = lfvClient.describeModel(describeModelRequest).modelDescription();

        switch (modelDescription.status()) {
        case HOSTED:
            logger.log(Level.INFO, "Model version {0} for project {1} is running.",
                    new Object[] { modelVersion, projectName });
            finished = true;
            break;

        case STARTING_HOSTING:
            logger.log(Level.INFO, "Model version {0} for project {1} is starting.",
                    new Object[] { modelVersion, projectName });
            TimeUnit.SECONDS.sleep(60);
            break;

        case HOSTING_FAILED:
            logger.log(Level.SEVERE, "Hosting failed for model version {0} for project {1}.",
                    new Object[] { modelVersion, projectName });
            finished = true;
            break;

        default:
            logger.log(Level.SEVERE, "Unexpected error when hosting model version {0} for project {1}: {2}.",
                    new Object[] { modelVersion, projectName, modelDescription.status() });
            finished = true;
            break;
        }
    } while (!finished);

    logger.log(Level.INFO, "Finished starting model version {0} for project {1} status: {2}",
            new Object[] { modelVersion, projectName, modelDescription.statusMessage() });

    return modelDescription;
}
If the output of the code is Model is hosted and ready for use, you can use the model to detect anomalies in images. For more information, see Detecting anomalies in an image.
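As a preview of that next step, once the model is hosted, a DetectAnomalies call passes the image bytes together with a ContentType of image/jpeg or image/png. A minimal Boto3 sketch follows; the project name, model version, and file path are placeholders, and the standard-library mimetypes module is used here to derive the content type from the file name:

```python
import mimetypes


def image_content_type(image_path):
    """Derive the ContentType that DetectAnomalies expects (image/jpeg
    or image/png) from the image file name."""
    content_type, _ = mimetypes.guess_type(image_path)
    if content_type not in ("image/jpeg", "image/png"):
        raise ValueError(f"Unsupported image type for {image_path}")
    return content_type


# With a hosted model, the anomaly check itself would look like this
# (names and path are placeholders):
#
#   import boto3
#   client = boto3.client("lookoutvision")
#   with open("image.jpg", "rb") as image:
#       response = client.detect_anomalies(
#           ProjectName="my-project",
#           ModelVersion="1",
#           Body=image.read(),
#           ContentType=image_content_type("image.jpg"),
#       )
#   print(response["DetectAnomalyResult"]["IsAnomalous"])
print(image_content_type("image.jpg"))  # → image/jpeg
```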