AWS::SageMaker::ModelPackage InferenceSpecification
Defines how to perform inference generation after a training job is run.
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
JSON
{ "Containers" :
[ ModelPackageContainerDefinition, ... ]
, "SupportedContentTypes" :[ String, ... ]
, "SupportedRealtimeInferenceInstanceTypes" :[ String, ... ]
, "SupportedResponseMIMETypes" :[ String, ... ]
, "SupportedTransformInstanceTypes" :[ String, ... ]
}
YAML
Containers:
  - ModelPackageContainerDefinition
SupportedContentTypes:
  - String
SupportedRealtimeInferenceInstanceTypes:
  - String
SupportedResponseMIMETypes:
  - String
SupportedTransformInstanceTypes:
  - String
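For orientation, the following is a minimal sketch of the YAML syntax above with the placeholders filled in. The container image URI, MIME types, and instance types are illustrative values, not defaults; the image path is hypothetical, and the container entry assumes the Image property of ModelPackageContainerDefinition.
YAML
Containers:
  - Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:1.0
SupportedContentTypes:
  - text/csv
  - application/json
SupportedRealtimeInferenceInstanceTypes:
  - ml.m5.large
SupportedResponseMIMETypes:
  - application/json
SupportedTransformInstanceTypes:
  - ml.m5.xlarge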
Properties
Containers
The Amazon ECR registry path of the Docker image that contains the inference code.
Required: Yes
Type: Array of ModelPackageContainerDefinition
Minimum: 1
Maximum: 15
Update requires: Replacement
SupportedContentTypes
The supported MIME types for the input data.
Required: Yes
Type: Array of String
Update requires: Replacement
SupportedRealtimeInferenceInstanceTypes
A list of the instance types that are used to generate inferences in real time.
This parameter is required for unversioned models, and optional for versioned models.
Required: No
Type: Array of String
Update requires: Replacement
SupportedResponseMIMETypes
The supported MIME types for the output data.
Required: Yes
Type: Array of String
Update requires: Replacement
SupportedTransformInstanceTypes
A list of the instance types on which a transformation job can be run or on which an endpoint can be deployed.
This parameter is required for unversioned models, and optional for versioned models.
Required: No
Type: Array of String
Minimum: 1
Update requires: Replacement
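Example
The following sketch shows where InferenceSpecification sits within an AWS::SageMaker::ModelPackage resource declaration; only the properties marked Required: Yes above are included. The logical resource name, model package group name, and image URI are illustrative values that assume a versioned model package in an existing model package group.
YAML
Resources:
  MyModelPackage:
    Type: AWS::SageMaker::ModelPackage
    Properties:
      ModelPackageGroupName: my-model-group
      InferenceSpecification:
        Containers:
          - Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:1.0
        SupportedContentTypes:
          - text/csv
        SupportedResponseMIMETypes:
          - application/json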