CreateInferenceComponent
Creates an inference component, which is a SageMaker AI hosting object that you can use to deploy a model to an endpoint. In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.
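For example, once an inference component is in service, a client can route a request to its model by naming the component in the InvokeEndpoint call. The following is a minimal sketch using the boto3 SDK for Python; the endpoint name, component name, and payload are placeholders, not values from this reference.

import boto3

# Sketch: invoke the model hosted by one inference component on a shared endpoint.
# All names and the payload below are placeholders.
runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-endpoint",              # the endpoint that hosts the component
    InferenceComponentName="my-component",   # routes the request to this component's model
    ContentType="application/json",
    Body=b'{"inputs": "example payload"}',
)
print(response["Body"].read().decode("utf-8"))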
Request Syntax
{
   "EndpointName": "string",
   "InferenceComponentName": "string",
   "RuntimeConfig": {
      "CopyCount": number
   },
   "Specification": {
      "BaseInferenceComponentName": "string",
      "ComputeResourceRequirements": {
         "MaxMemoryRequiredInMb": number,
         "MinMemoryRequiredInMb": number,
         "NumberOfAcceleratorDevicesRequired": number,
         "NumberOfCpuCoresRequired": number
      },
      "Container": {
         "ArtifactUrl": "string",
         "Environment": {
            "string" : "string"
         },
         "Image": "string"
      },
      "ModelName": "string",
      "StartupParameters": {
         "ContainerStartupHealthCheckTimeoutInSeconds": number,
         "ModelDataDownloadTimeoutInSeconds": number
      }
   },
   "Tags": [
      {
         "Key": "string",
         "Value": "string"
      }
   ],
   "VariantName": "string"
}
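To make the field shapes concrete, the following sketch fills in the request as a Python dictionary (the later boto3 examples pass it to the create call). Every name, size, and timeout is an illustrative placeholder; this sketch points at an existing SageMaker AI model by name rather than supplying a Container or BaseInferenceComponentName.

# Illustrative request body; every value is a placeholder.
create_request = {
    "EndpointName": "my-endpoint",
    "InferenceComponentName": "my-component",
    "VariantName": "AllTraffic",
    "RuntimeConfig": {"CopyCount": 1},
    "Specification": {
        "ModelName": "my-model",  # an existing SageMaker AI model
        "ComputeResourceRequirements": {
            "NumberOfCpuCoresRequired": 2,
            "NumberOfAcceleratorDevicesRequired": 1,
            "MinMemoryRequiredInMb": 1024,
            "MaxMemoryRequiredInMb": 4096,
        },
        "StartupParameters": {
            "ModelDataDownloadTimeoutInSeconds": 600,
            "ContainerStartupHealthCheckTimeoutInSeconds": 600,
        },
    },
    "Tags": [{"Key": "project", "Value": "demo"}],
}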
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- EndpointName
  The name of an existing endpoint where you host the inference component.
  Type: String
  Length Constraints: Maximum length of 63.
  Pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}
  Required: Yes
- InferenceComponentName
  A unique name to assign to the inference component.
  Type: String
  Length Constraints: Maximum length of 63.
  Pattern: ^[a-zA-Z0-9]([\-a-zA-Z0-9]*[a-zA-Z0-9])?$
  Required: Yes
- RuntimeConfig
  Runtime settings for a model that is deployed with an inference component.
  Type: InferenceComponentRuntimeConfig object
  Required: No
- Specification
  Details about the resources to deploy with this inference component, including the model, container, and compute resources.
  Type: InferenceComponentSpecification object
  Required: Yes
- Tags
  A list of key-value pairs associated with the model. For more information, see Tagging AWS resources in the AWS General Reference.
  Type: Array of Tag objects
  Array Members: Minimum number of 0 items. Maximum number of 50 items.
  Required: No
- VariantName
  The name of an existing production variant where you host the inference component.
  Type: String
  Length Constraints: Maximum length of 63.
  Pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}
  Required: No
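Putting the parameters together, a create call with the boto3 SDK might look like the sketch below. It assumes the create_request dictionary from the sketch after the Request Syntax section, and the polling loop via DescribeInferenceComponent is a typical follow-up step rather than part of this action.

import time
import boto3

sagemaker = boto3.client("sagemaker")

# Create the inference component; create_request is the placeholder dictionary sketched earlier.
response = sagemaker.create_inference_component(**create_request)
print("InferenceComponentArn:", response["InferenceComponentArn"])

# Optional: wait until the component finishes deploying before invoking it.
while True:
    desc = sagemaker.describe_inference_component(
        InferenceComponentName=create_request["InferenceComponentName"]
    )
    status = desc["InferenceComponentStatus"]
    if status in ("InService", "Failed"):
        break
    time.sleep(30)
print("Final status:", status)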
Response Syntax
{
"InferenceComponentArn": "string"
}
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- InferenceComponentArn
  The Amazon Resource Name (ARN) of the inference component.
  Type: String
  Length Constraints: Minimum length of 20. Maximum length of 2048.
Errors
For information about the errors that are common to all actions, see Common Errors.
- ResourceLimitExceeded
  You have exceeded a SageMaker resource limit. For example, you might have created too many training jobs.
  HTTP Status Code: 400
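In the boto3 SDK, this error surfaces as a ClientError carrying the error code ResourceLimitExceeded, so a caller can catch it explicitly. The sketch below reuses the placeholder create_request dictionary from the earlier examples.

import boto3
from botocore.exceptions import ClientError

sagemaker = boto3.client("sagemaker")

try:
    sagemaker.create_inference_component(**create_request)  # placeholder request from earlier sketches
except ClientError as err:
    if err.response["Error"]["Code"] == "ResourceLimitExceeded":
        # Too many inference components (or other limited resources) in the account.
        print("Resource limit reached; delete unused resources or request a quota increase.")
    else:
        raise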
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the AWS SDK documentation for this action.