interface CfnModelProps
Language | Type name |
---|---|
.NET | Amazon.CDK.AWS.Sagemaker.CfnModelProps |
Java | software.amazon.awscdk.services.sagemaker.CfnModelProps |
Python | aws_cdk.aws_sagemaker.CfnModelProps |
TypeScript | @aws-cdk/aws-sagemaker » CfnModelProps |
Properties for defining a CfnModel.
Example
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import * as sagemaker from '@aws-cdk/aws-sagemaker';

declare const environment: any;

const cfnModelProps: sagemaker.CfnModelProps = {
  executionRoleArn: 'executionRoleArn',

  // the properties below are optional
  containers: [{
    containerHostname: 'containerHostname',
    environment: environment,
    image: 'image',
    imageConfig: {
      repositoryAccessMode: 'repositoryAccessMode',

      // the properties below are optional
      repositoryAuthConfig: {
        repositoryCredentialsProviderArn: 'repositoryCredentialsProviderArn',
      },
    },
    inferenceSpecificationName: 'inferenceSpecificationName',
    mode: 'mode',
    modelDataUrl: 'modelDataUrl',
    modelPackageName: 'modelPackageName',
    multiModelConfig: {
      modelCacheSetting: 'modelCacheSetting',
    },
  }],
  enableNetworkIsolation: false,
  inferenceExecutionConfig: {
    mode: 'mode',
  },
  modelName: 'modelName',
  primaryContainer: {
    containerHostname: 'containerHostname',
    environment: environment,
    image: 'image',
    imageConfig: {
      repositoryAccessMode: 'repositoryAccessMode',

      // the properties below are optional
      repositoryAuthConfig: {
        repositoryCredentialsProviderArn: 'repositoryCredentialsProviderArn',
      },
    },
    inferenceSpecificationName: 'inferenceSpecificationName',
    mode: 'mode',
    modelDataUrl: 'modelDataUrl',
    modelPackageName: 'modelPackageName',
    multiModelConfig: {
      modelCacheSetting: 'modelCacheSetting',
    },
  },
  tags: [{
    key: 'key',
    value: 'value',
  }],
  vpcConfig: {
    securityGroupIds: ['securityGroupIds'],
    subnets: ['subnets'],
  },
};
Properties
Name | Type | Description |
---|---|---|
executionRoleArn | string | The Amazon Resource Name (ARN) of the IAM role that SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. |
containers? | IResolvable \| (IResolvable \| ContainerDefinitionProperty)[] | Specifies the containers in the inference pipeline. |
enableNetworkIsolation? | boolean \| IResolvable | Isolates the model container. |
inferenceExecutionConfig? | IResolvable \| InferenceExecutionConfigProperty | Specifies details of how containers in a multi-container endpoint are called. |
modelName? | string | The name of the new model. |
primaryContainer? | IResolvable \| ContainerDefinitionProperty | The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions. |
tags? | CfnTag[] | A list of key-value pairs to apply to this resource. |
vpcConfig? | IResolvable \| VpcConfigProperty | A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud. |
executionRoleArn
Type: string
The Amazon Resource Name (ARN) of the IAM role that SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs.
Deploying on ML compute instances is part of model hosting. For more information, see SageMaker Roles.
To be able to pass this role to SageMaker, the caller of this API must have the iam:PassRole permission.
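As a sketch of the two policy documents involved (the account ID and role name below are hypothetical placeholders): the execution role must trust the SageMaker service principal, and the caller of the API must be allowed to pass that role.

```typescript
// 1. Trust policy the execution role needs, so SageMaker can assume it.
const executionRoleTrustPolicy = {
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Principal: { Service: 'sagemaker.amazonaws.com' },
    Action: 'sts:AssumeRole',
  }],
};

// 2. Statement the *caller* of the API needs, granting iam:PassRole on
//    the execution role (role ARN is a placeholder).
const callerPassRoleStatement = {
  Effect: 'Allow',
  Action: 'iam:PassRole',
  Resource: 'arn:aws:iam::123456789012:role/MySageMakerExecutionRole',
};
```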
containers?
Type: IResolvable | (IResolvable | ContainerDefinitionProperty)[] (optional)
Specifies the containers in the inference pipeline.
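In an inference pipeline, the containers run in the order listed, with each container's output passed to the next. A minimal sketch of a two-container value (the hostnames, image URIs, and S3 path are placeholders):

```typescript
// Two-stage pipeline: a preprocessing container feeding a model container.
const pipelineContainers = [
  {
    containerHostname: 'preprocess',
    image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest',
  },
  {
    containerHostname: 'inference',
    image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/model:latest',
    modelDataUrl: 's3://my-bucket/model.tar.gz',
  },
];
```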
enableNetworkIsolation?
Type: boolean | IResolvable (optional)
Isolates the model container.
No inbound or outbound network calls can be made to or from the model container.
inferenceExecutionConfig?
Type: IResolvable | InferenceExecutionConfigProperty (optional)
Specifies details of how containers in a multi-container endpoint are called.
modelName?
Type: string (optional)
The name of the new model.
primaryContainer?
Type: IResolvable | ContainerDefinitionProperty (optional)
The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
tags?
Type: CfnTag[] (optional)
A list of key-value pairs to apply to this resource.
For more information, see Resource Tag and Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
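Each entry is a plain key/value pair in the CfnTag shape. A small sketch (the keys and values are examples only):

```typescript
// Tags applied to the model resource; CloudFormation propagates these
// as resource tags, which can also drive cost allocation reports.
const modelTags = [
  { key: 'project', value: 'churn-prediction' },
  { key: 'cost-center', value: 'ml-research' },
];
```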
vpcConfig?
Type: IResolvable | VpcConfigProperty (optional)
A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
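A sketch of a VpcConfigProperty value (the subnet and security group IDs are placeholders); both lists must be non-empty, and using two subnets in different Availability Zones is a common choice for resilience:

```typescript
// VPC placement for the model container: which security groups govern
// its traffic and which subnets it can use.
const modelVpcConfig = {
  securityGroupIds: ['sg-0123456789abcdef0'],
  subnets: ['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210'],
};
```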