Interface CfnModel.ContainerDefinitionProperty
- All Superinterfaces:
software.amazon.jsii.JsiiSerializable
- All Known Implementing Classes:
CfnModel.ContainerDefinitionProperty.Jsii$Proxy
- Enclosing class:
CfnModel
Example:
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import software.amazon.awscdk.services.sagemaker.*;

Object environment;

ContainerDefinitionProperty containerDefinitionProperty = ContainerDefinitionProperty.builder()
        .containerHostname("containerHostname")
        .environment(environment)
        .image("image")
        .imageConfig(ImageConfigProperty.builder()
                .repositoryAccessMode("repositoryAccessMode")
                // the properties below are optional
                .repositoryAuthConfig(RepositoryAuthConfigProperty.builder()
                        .repositoryCredentialsProviderArn("repositoryCredentialsProviderArn")
                        .build())
                .build())
        .inferenceSpecificationName("inferenceSpecificationName")
        .mode("mode")
        .modelDataSource(ModelDataSourceProperty.builder()
                .s3DataSource(S3DataSourceProperty.builder()
                        .compressionType("compressionType")
                        .s3DataType("s3DataType")
                        .s3Uri("s3Uri")
                        // the properties below are optional
                        .hubAccessConfig(HubAccessConfigProperty.builder()
                                .hubContentArn("hubContentArn")
                                .build())
                        .modelAccessConfig(ModelAccessConfigProperty.builder()
                                .acceptEula(false)
                                .build())
                        .build())
                .build())
        .modelDataUrl("modelDataUrl")
        .modelPackageName("modelPackageName")
        .multiModelConfig(MultiModelConfigProperty.builder()
                .modelCacheSetting("modelCacheSetting")
                .build())
        .build();
Nested Class Summary

static final class CfnModel.ContainerDefinitionProperty.Builder
    A builder for CfnModel.ContainerDefinitionProperty
static final class CfnModel.ContainerDefinitionProperty.Jsii$Proxy
    An implementation for CfnModel.ContainerDefinitionProperty
Method Summary

static CfnModel.ContainerDefinitionProperty.Builder builder()

default String getContainerHostname()
    This parameter is ignored for models that contain only a PrimaryContainer.
default Object getEnvironment()
    The environment variables to set in the Docker container.
default String getImage()
    The path where inference code is stored.
default Object getImageConfig()
    Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
default String getInferenceSpecificationName()
    The inference specification name in the model package version.
default String getMode()
    Whether the container hosts a single model or multiple models.
default Object getModelDataSource()
    Specifies the location of ML model data to deploy.
default String getModelDataUrl()
    The S3 path where the model artifacts, which result from model training, are stored.
default String getModelPackageName()
    The name or Amazon Resource Name (ARN) of the model package to use to create the model.
default Object getMultiModelConfig()
    Specifies additional configuration for multi-model endpoints.

Methods inherited from interface software.amazon.jsii.JsiiSerializable:
$jsii$toJson
Method Details
getContainerHostname

This parameter is ignored for models that contain only a PrimaryContainer.

When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
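Example (an illustrative sketch; the hostnames and image URIs are placeholders, not values from this reference):

import software.amazon.awscdk.services.sagemaker.*;

ContainerDefinitionProperty preprocess = ContainerDefinitionProperty.builder()
        .containerHostname("preprocess")
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest")
        .build();

// Because one container in the pipeline sets ContainerHostName, every container must.
ContainerDefinitionProperty inference = ContainerDefinitionProperty.builder()
        .containerHostname("inference")
        .image("123456789012.dkr.ecr.us-east-1.amazonaws.com/inference:latest")
        .build();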
getEnvironment

The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables.

The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.
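Example (an illustrative sketch; the keys and values are placeholders, and the environment parameter, typed as Object, is passed a String-to-String map):

import software.amazon.awscdk.services.sagemaker.*;
import java.util.Map;

Map<String, String> environment = Map.of(
        "MODEL_SERVER_TIMEOUT", "120",   // keep each key and value under 1024 bytes
        "LOG_LEVEL", "INFO");            // and the whole map under 32 KB

ContainerDefinitionProperty container = ContainerDefinitionProperty.builder()
        .image("image")
        .environment(environment)
        .build();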
getImage

The path where inference code is stored.

This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.

The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
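Example (an illustrative sketch of the two accepted path formats; the account, region, repository, tag, and digest are placeholders):

// Tag form: registry/repository[:tag]
String imageByTag = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algorithm:1.0";

// Digest form: registry/repository[@digest] (dummy digest for illustration)
String imageByDigest = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algorithm@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef";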
getImageConfig

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).

For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
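Example (an illustrative sketch for a container pulled from a private, VPC-accessible registry; the credentials-provider ARN is a placeholder, and the access-mode strings assume the CloudFormation-documented allowed values "Platform" and "Vpc"):

import software.amazon.awscdk.services.sagemaker.*;

ImageConfigProperty imageConfig = ImageConfigProperty.builder()
        .repositoryAccessMode("Vpc")   // assumed values: "Platform" (Amazon ECR) | "Vpc" (private registry)
        .repositoryAuthConfig(RepositoryAuthConfigProperty.builder()
                .repositoryCredentialsProviderArn("arn:aws:lambda:us-east-1:123456789012:function:ecr-creds-provider")
                .build())
        .build();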
getInferenceSpecificationName

The inference specification name in the model package version.
getMode

Whether the container hosts a single model or multiple models.
getModelDataSource

Specifies the location of ML model data to deploy.

Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
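Example (an illustrative sketch; the bucket and prefix are placeholders, and the s3DataType and compressionType strings assume the CloudFormation-documented allowed values):

import software.amazon.awscdk.services.sagemaker.*;

ModelDataSourceProperty modelDataSource = ModelDataSourceProperty.builder()
        .s3DataSource(S3DataSourceProperty.builder()
                .s3Uri("s3://amzn-s3-demo-bucket/models/my-model/")
                .s3DataType("S3Prefix")      // assumed values: "S3Prefix" | "S3Object"
                .compressionType("None")     // assumed values: "None" | "Gzip"
                .build())
        .build();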
getModelDataUrl

The S3 path where the model artifacts, which result from model training, are stored.

This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.

If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
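Example (an illustrative sketch; the bucket and key are placeholders):

import software.amazon.awscdk.services.sagemaker.*;

ContainerDefinitionProperty container = ContainerDefinitionProperty.builder()
        .image("image")
        .modelDataUrl("s3://amzn-s3-demo-bucket/output/model.tar.gz")   // must be a single .tar.gz archive
        .build();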
getModelPackageName

The name or Amazon Resource Name (ARN) of the model package to use to create the model.
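Example (an illustrative sketch; the ARN is a placeholder assuming the model-package/name/version ARN shape):

import software.amazon.awscdk.services.sagemaker.*;

ContainerDefinitionProperty fromPackage = ContainerDefinitionProperty.builder()
        .modelPackageName("arn:aws:sagemaker:us-east-1:123456789012:model-package/my-model-package/1")
        .build();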
getMultiModelConfig

Specifies additional configuration for multi-model endpoints.
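Example (an illustrative sketch of a multi-model container; the image URI and S3 prefix are placeholders, and the mode and modelCacheSetting strings assume the CloudFormation-documented allowed values):

import software.amazon.awscdk.services.sagemaker.*;

ContainerDefinitionProperty multiModelContainer = ContainerDefinitionProperty.builder()
        .image("image")
        .mode("MultiModel")                      // assumed values: "SingleModel" | "MultiModel"
        .modelDataUrl("s3://amzn-s3-demo-bucket/multi-model-prefix/")
        .multiModelConfig(MultiModelConfigProperty.builder()
                .modelCacheSetting("Disabled")   // assumed values: "Enabled" | "Disabled"
                .build())
        .build();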
builder

Returns a builder for CfnModel.ContainerDefinitionProperty.