CfnModel
- class aws_cdk.aws_sagemaker.CfnModel(scope, id, *, execution_role_arn, containers=None, enable_network_isolation=None, inference_execution_config=None, model_name=None, primary_container=None, tags=None, vpc_config=None)
Bases:
CfnResource
A CloudFormation AWS::SageMaker::Model.
The AWS::SageMaker::Model resource creates a model to host at an Amazon SageMaker endpoint. For more information, see Deploying a Model on Amazon SageMaker Hosting Services in the Amazon SageMaker Developer Guide.
- CloudformationResource:
AWS::SageMaker::Model
- Link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-model.html
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_sagemaker as sagemaker

# environment: Any

cfn_model = sagemaker.CfnModel(self, "MyCfnModel",
    execution_role_arn="executionRoleArn",

    # the properties below are optional
    containers=[sagemaker.CfnModel.ContainerDefinitionProperty(
        container_hostname="containerHostname",
        environment=environment,
        image="image",
        image_config=sagemaker.CfnModel.ImageConfigProperty(
            repository_access_mode="repositoryAccessMode",

            # the properties below are optional
            repository_auth_config=sagemaker.CfnModel.RepositoryAuthConfigProperty(
                repository_credentials_provider_arn="repositoryCredentialsProviderArn"
            )
        ),
        inference_specification_name="inferenceSpecificationName",
        mode="mode",
        model_data_url="modelDataUrl",
        model_package_name="modelPackageName",
        multi_model_config=sagemaker.CfnModel.MultiModelConfigProperty(
            model_cache_setting="modelCacheSetting"
        )
    )],
    enable_network_isolation=False,
    inference_execution_config=sagemaker.CfnModel.InferenceExecutionConfigProperty(
        mode="mode"
    ),
    model_name="modelName",
    primary_container=sagemaker.CfnModel.ContainerDefinitionProperty(
        container_hostname="containerHostname",
        environment=environment,
        image="image",
        image_config=sagemaker.CfnModel.ImageConfigProperty(
            repository_access_mode="repositoryAccessMode",

            # the properties below are optional
            repository_auth_config=sagemaker.CfnModel.RepositoryAuthConfigProperty(
                repository_credentials_provider_arn="repositoryCredentialsProviderArn"
            )
        ),
        inference_specification_name="inferenceSpecificationName",
        mode="mode",
        model_data_url="modelDataUrl",
        model_package_name="modelPackageName",
        multi_model_config=sagemaker.CfnModel.MultiModelConfigProperty(
            model_cache_setting="modelCacheSetting"
        )
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )],
    vpc_config=sagemaker.CfnModel.VpcConfigProperty(
        security_group_ids=["securityGroupIds"],
        subnets=["subnets"]
    )
)
Create a new AWS::SageMaker::Model.
- Parameters:
scope (Construct) – scope in which this resource is defined.
id (str) – scoped id of the resource.
execution_role_arn (str) – The Amazon Resource Name (ARN) of the IAM role that SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see SageMaker Roles. Note: To be able to pass this role to SageMaker, the caller of this API must have the iam:PassRole permission.
containers (Union[IResolvable, Sequence[Union[IResolvable, ContainerDefinitionProperty, Dict[str, Any]]], None]) – Specifies the containers in the inference pipeline.
enable_network_isolation (Union[bool, IResolvable, None]) – Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
inference_execution_config (Union[IResolvable, InferenceExecutionConfigProperty, Dict[str, Any], None]) – Specifies details of how containers in a multi-container endpoint are called.
model_name (Optional[str]) – The name of the new model.
primary_container (Union[IResolvable, ContainerDefinitionProperty, Dict[str, Any], None]) – The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – A list of key-value pairs to apply to this resource. For more information, see Resource Tag and Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
vpc_config (Union[IResolvable, VpcConfigProperty, Dict[str, Any], None]) – A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_depends_on(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stack (or nested stack) boundaries, and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource)
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str)
value (Any)
- See:
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides. Example:
"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    },
  ]
  ...
}
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
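The path semantics above (dot-separated levels, numeric array indices, and \.-escaped literal dots) can be illustrated with a small stand-alone sketch. This is illustrative only, not the actual CDK implementation:

```python
# Illustrative sketch only -- not the actual CDK implementation.
# Resolves a dot-separated override path against a nested template
# fragment, honoring numeric array indices and "\."-escaped dots,
# mirroring the addOverride semantics described above.
import re

def split_path(path):
    # Split on "." but treat "\." as a literal dot inside a key name.
    parts = re.split(r"(?<!\\)\.", path)
    return [p.replace("\\.", ".") for p in parts]

def apply_override(template, path, value):
    keys = split_path(path)
    node = template
    for key in keys[:-1]:
        if isinstance(node, list):
            node = node[int(key)]          # array index in the path
        else:
            node = node.setdefault(key, {})  # create intermediate keys
    last = keys[-1]
    if isinstance(node, list):
        node[int(last)] = value
    else:
        node[last] = value

resource = {"Properties": {"GlobalSecondaryIndexes": [{}, {}]}}
apply_override(resource,
               "Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes",
               ["myattribute"])
apply_override(resource,
               "Properties.GlobalSecondaryIndexes.1.ProjectionType",
               "INCLUDE")
```

This reproduces the worked example above: the first call creates the intermediate Projection key, and the second writes into the element at index 1.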
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN).
- Parameters:
policy (Optional[RemovalPolicy])
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- Return type:
None
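As a rough sketch of what this setting produces, a removal policy maps onto the CloudFormation DeletionPolicy and (optionally) UpdateReplacePolicy attributes of the synthesized resource. The helper below is illustrative only, not the CDK implementation:

```python
# Illustrative sketch only -- not the actual CDK implementation.
# Maps a removal policy onto the CloudFormation DeletionPolicy /
# UpdateReplacePolicy attributes of a resource definition.
def render_removal_policy(resource, policy, apply_to_update_replace_policy=True):
    mapping = {"destroy": "Delete", "retain": "Retain", "snapshot": "Snapshot"}
    cfn_value = mapping[policy]
    resource["DeletionPolicy"] = cfn_value
    if apply_to_update_replace_policy:
        # Mirrors the apply_to_update_replace_policy parameter (default: true).
        resource["UpdateReplacePolicy"] = cfn_value
    return resource

model = {"Type": "AWS::SageMaker::Model", "Properties": {}}
render_removal_policy(model, "retain")
```

With RETAIN, both attributes become "Retain", so the model survives stack deletion and replacement.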
- get_att(attribute_name)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
attribute_name (str) – The name of the attribute.
- Return type:
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (str)
- See:
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::SageMaker::Model'
- attr_model_name
The name of the model, such as MyModel.
- CloudformationAttribute:
ModelName
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- containers
Specifies the containers in the inference pipeline.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- enable_network_isolation
Isolates the model container.
No inbound or outbound network calls can be made to or from the model container.
- execution_role_arn
The Amazon Resource Name (ARN) of the IAM role that SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs.
Deploying on ML compute instances is part of model hosting. For more information, see SageMaker Roles . .. epigraph:
To be able to pass this role to SageMaker, the caller of this API must have the ``iam:PassRole`` permission.
- inference_execution_config
Specifies details of how containers in a multi-container endpoint are called.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- model_name
The name of the new model.
- node
The construct tree node associated with this construct.
- primary_container
The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
A list of key-value pairs to apply to this resource.
For more information, see Resource Tag and Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide .
- vpc_config
A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC.
VpcConfig
is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud .
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
x (Any)
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(construct)
Check whether the given construct is a CfnResource.
- Parameters:
construct (IConstruct)
- Return type:
bool
- classmethod is_construct(x)
Return whether the given object is a Construct.
- Parameters:
x (Any)
- Return type:
bool
ContainerDefinitionProperty
- class CfnModel.ContainerDefinitionProperty(*, container_hostname=None, environment=None, image=None, image_config=None, inference_specification_name=None, mode=None, model_data_url=None, model_package_name=None, multi_model_config=None)
Bases:
object
Describes the container, as part of the model definition.
- Parameters:
container_hostname (Optional[str]) – This parameter is ignored for models that contain only a PrimaryContainer. When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don’t specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
environment (Optional[Any]) – The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024. We support up to 16 entries in the map.
image (Optional[str]) – The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker. Note: The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
image_config (Union[IResolvable, ImageConfigProperty, Dict[str, Any], None]) – Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers. Note: The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
inference_specification_name (Optional[str]) – The inference specification name in the model package version.
mode (Optional[str]) – Whether the container hosts a single model or multiple models.
model_data_url (Optional[str]) – The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters. Note: The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating. If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide. Note: If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
model_package_name (Optional[str]) – The name or Amazon Resource Name (ARN) of the model package to use to create the model.
multi_model_config (Union[IResolvable, MultiModelConfigProperty, Dict[str, Any], None]) – Specifies additional configuration for multi-model endpoints.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_sagemaker as sagemaker

# environment: Any

container_definition_property = sagemaker.CfnModel.ContainerDefinitionProperty(
    container_hostname="containerHostname",
    environment=environment,
    image="image",
    image_config=sagemaker.CfnModel.ImageConfigProperty(
        repository_access_mode="repositoryAccessMode",

        # the properties below are optional
        repository_auth_config=sagemaker.CfnModel.RepositoryAuthConfigProperty(
            repository_credentials_provider_arn="repositoryCredentialsProviderArn"
        )
    ),
    inference_specification_name="inferenceSpecificationName",
    mode="mode",
    model_data_url="modelDataUrl",
    model_package_name="modelPackageName",
    multi_model_config=sagemaker.CfnModel.MultiModelConfigProperty(
        model_cache_setting="modelCacheSetting"
    )
)
Attributes
- container_hostname
This parameter is ignored for models that contain only a PrimaryContainer.
When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don’t specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
- environment
The environment variables to set in the Docker container.
Each key and value in the Environment string-to-string map can have a length of up to 1024. We support up to 16 entries in the map.
- image
The path where inference code is stored.
This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
Note: The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
- image_config
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.
Note: The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
- inference_specification_name
The inference specification name in the model package version.
- mode
Whether the container hosts a single model or multiple models.
- model_data_url
The S3 path where the model artifacts, which result from model training, are stored.
This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
Note: The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.
If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.
Note: If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
- model_package_name
The name or Amazon Resource Name (ARN) of the model package to use to create the model.
- multi_model_config
Specifies additional configuration for multi-model endpoints.
ImageConfigProperty
- class CfnModel.ImageConfigProperty(*, repository_access_mode, repository_auth_config=None)
Bases:
object
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
- Parameters:
repository_access_mode (str) – Set this to one of the following values: - Platform – The model image is hosted in Amazon ECR. - Vpc – The model image is hosted in a private Docker registry in your VPC.
repository_auth_config (Union[IResolvable, RepositoryAuthConfigProperty, Dict[str, Any], None]) – (Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_sagemaker as sagemaker

image_config_property = sagemaker.CfnModel.ImageConfigProperty(
    repository_access_mode="repositoryAccessMode",

    # the properties below are optional
    repository_auth_config=sagemaker.CfnModel.RepositoryAuthConfigProperty(
        repository_credentials_provider_arn="repositoryCredentialsProviderArn"
    )
)
Attributes
- repository_access_mode
Set this to one of the following values:
- Platform – The model image is hosted in Amazon ECR.
- Vpc – The model image is hosted in a private Docker registry in your VPC.
- Link:
- repository_auth_config
(Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted.
Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
InferenceExecutionConfigProperty
- class CfnModel.InferenceExecutionConfigProperty(*, mode)
Bases:
object
Specifies details about how containers in a multi-container endpoint are run.
- Parameters:
mode (str) – How containers in a multi-container endpoint are run. The following values are valid. - Serial – Containers run as a serial pipeline. - Direct – Only the individual container that you specify is run.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_sagemaker as sagemaker

inference_execution_config_property = sagemaker.CfnModel.InferenceExecutionConfigProperty(
    mode="mode"
)
Attributes
- mode
How containers in a multi-container endpoint are run. The following values are valid.
- Serial – Containers run as a serial pipeline.
- Direct – Only the individual container that you specify is run.
MultiModelConfigProperty
- class CfnModel.MultiModelConfigProperty(*, model_cache_setting=None)
Bases:
object
Specifies additional configuration for hosting multi-model endpoints.
- Parameters:
model_cache_setting (Optional[str]) – Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_sagemaker as sagemaker

multi_model_config_property = sagemaker.CfnModel.MultiModelConfigProperty(
    model_cache_setting="modelCacheSetting"
)
Attributes
- model_cache_setting
Whether to cache models for a multi-model endpoint.
By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
RepositoryAuthConfigProperty
- class CfnModel.RepositoryAuthConfigProperty(*, repository_credentials_provider_arn)
Bases:
object
Specifies an authentication configuration for the private docker registry where your model image is hosted.
Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field of the ImageConfig object that you passed to a call to CreateModel and the private Docker registry where the model image is hosted requires authentication.
- Parameters:
repository_credentials_provider_arn (str) – The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_sagemaker as sagemaker

repository_auth_config_property = sagemaker.CfnModel.RepositoryAuthConfigProperty(
    repository_credentials_provider_arn="repositoryCredentialsProviderArn"
)
Attributes
- repository_credentials_provider_arn
The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted.
For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide .
VpcConfigProperty
- class CfnModel.VpcConfigProperty(*, security_group_ids, subnets)
Bases:
object
Specifies a VPC that your training jobs and hosted models have access to.
Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud .
- Parameters:
security_group_ids (Sequence[str]) – The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
subnets (Sequence[str]) – The ID of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_sagemaker as sagemaker

vpc_config_property = sagemaker.CfnModel.VpcConfigProperty(
    security_group_ids=["securityGroupIds"],
    subnets=["subnets"]
)
Attributes
- security_group_ids
The VPC security group IDs, in the form sg-xxxxxxxx.
Specify the security groups for the VPC that is specified in the Subnets field.
- subnets
The ID of the subnets in the VPC to which you want to connect your training job or model.
For information about the availability of specific instance types, see Supported Instance Types and Availability Zones .