CreateOptimizationJob
Creates a job that optimizes a model for inference performance. To create the job, you provide the location of a source model and the settings for the optimization techniques that you want the job to apply. When the job completes successfully, SageMaker uploads the new optimized model to the output destination that you specify.
For more information about how to use this action, and about the supported optimization techniques, see Optimize model inference with Amazon SageMaker.
Request Syntax
{
   "DeploymentInstanceType": "string",
   "ModelSource": {
      "S3": {
         "ModelAccessConfig": {
            "AcceptEula": boolean
         },
         "S3Uri": "string"
      }
   },
   "OptimizationConfigs": [
      { ... }
   ],
   "OptimizationEnvironment": {
      "string" : "string"
   },
   "OptimizationJobName": "string",
   "OutputConfig": {
      "KmsKeyId": "string",
      "S3OutputLocation": "string"
   },
   "RoleArn": "string",
   "StoppingCondition": {
      "MaxPendingTimeInSeconds": number,
      "MaxRuntimeInSeconds": number,
      "MaxWaitTimeInSeconds": number
   },
   "Tags": [
      {
         "Key": "string",
         "Value": "string"
      }
   ],
   "VpcConfig": {
      "SecurityGroupIds": [ "string" ],
      "Subnets": [ "string" ]
   }
}
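The following is a minimal sketch of issuing this request with the AWS SDK for Python (Boto3), assuming the action is exposed as create_optimization_job on the sagemaker client. The job name, bucket, role ARN, and the shape of the quantization entry in OptimizationConfigs are illustrative placeholders; see the OptimizationConfig data type for the supported technique-specific settings.

import boto3

# Minimal sketch of calling CreateOptimizationJob with Boto3. The job name,
# bucket, role ARN, and quantization settings below are placeholders.
sagemaker = boto3.client("sagemaker", region_name="us-west-2")

response = sagemaker.create_optimization_job(
    OptimizationJobName="llm-quantization-demo",
    DeploymentInstanceType="ml.g5.12xlarge",
    RoleArn="arn:aws:iam::111122223333:role/SageMakerOptimizationRole",
    ModelSource={
        "S3": {
            "S3Uri": "s3://amzn-s3-demo-bucket/source-model/",
            "ModelAccessConfig": {"AcceptEula": True},
        }
    },
    OptimizationConfigs=[
        {
            # Assumed shape for one optimization technique; see the
            # OptimizationConfig data type for the supported members.
            "ModelQuantizationConfig": {
                "OverrideEnvironment": {"OPTION_QUANTIZE": "awq"}  # assumed setting
            }
        }
    ],
    OutputConfig={"S3OutputLocation": "s3://amzn-s3-demo-bucket/optimized-model/"},
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)

print(response["OptimizationJobArn"])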
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- DeploymentInstanceType
-
The type of instance that hosts the optimized model that you create with the optimization job.
Type: String
Valid Values:
ml.p4d.24xlarge | ml.p4de.24xlarge | ml.p5.48xlarge | ml.g5.xlarge | ml.g5.2xlarge | ml.g5.4xlarge | ml.g5.8xlarge | ml.g5.12xlarge | ml.g5.16xlarge | ml.g5.24xlarge | ml.g5.48xlarge | ml.g6.xlarge | ml.g6.2xlarge | ml.g6.4xlarge | ml.g6.8xlarge | ml.g6.12xlarge | ml.g6.16xlarge | ml.g6.24xlarge | ml.g6.48xlarge | ml.inf2.xlarge | ml.inf2.8xlarge | ml.inf2.24xlarge | ml.inf2.48xlarge | ml.trn1.2xlarge | ml.trn1.32xlarge | ml.trn1n.32xlarge
Required: Yes
- ModelSource
-
The location of the source model to optimize with an optimization job.
Type: OptimizationJobModelSource object
Required: Yes
- OptimizationConfigs
-
Settings for each of the optimization techniques that the job applies.
Type: Array of OptimizationConfig objects
Array Members: Maximum number of 10 items.
Required: Yes
- OptimizationEnvironment
-
The environment variables to set in the model container.
Type: String to string map
Map Entries: Maximum number of 25 items.
Key Length Constraints: Maximum length of 256.
Key Pattern:
^(?!\s*$).+
Value Length Constraints: Maximum length of 256.
Required: No
- OptimizationJobName
-
A custom name for the new optimization job.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 63.
Pattern:
^[a-zA-Z0-9](-*[a-zA-Z0-9]){0,62}$
Required: Yes
- OutputConfig
-
Details for where to store the optimized model that you create with the optimization job.
Type: OptimizationJobOutputConfig object
Required: Yes
- RoleArn
-
The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.
During model optimization, Amazon SageMaker needs your permission to:
-
Read input data from an S3 bucket
-
Write model artifacts to an S3 bucket
-
Write logs to Amazon CloudWatch Logs
-
Publish metrics to Amazon CloudWatch
You grant permissions for all of these tasks to an IAM role. To pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission. For more information, see Amazon SageMaker Roles. A minimal sketch of granting this permission appears after this parameter list.
Type: String
Length Constraints: Minimum length of 20. Maximum length of 2048.
Pattern:
^arn:aws[a-z\-]*:iam::\d{12}:role/?[a-zA-Z_0-9+=,.@\-_/]+$
Required: Yes
- StoppingCondition
-
Specifies a limit to how long a job can run. When the job reaches the time limit, SageMaker ends the job. Use this API to cap costs.
To stop a training job, SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms can use this 120-second window to save the model artifacts, so the results of training are not lost.
The training algorithms provided by SageMaker automatically save the intermediate results of a model training job when possible. This attempt to save artifacts is only a best effort; the model might not be in a state from which it can be saved. For example, if training has just started, the model might not be ready to save. When saved, this intermediate data is a valid model artifact. You can use it to create a model with CreateModel.
Note
The Neural Topic Model (NTM) currently does not support saving intermediate model artifacts. When training NTMs, make sure that the maximum runtime is sufficient for the training job to complete.
Type: StoppingCondition object
Required: Yes
- Tags
-
A list of key-value pairs associated with the optimization job. For more information, see Tagging AWS resources in the AWS General Reference Guide.
Type: Array of Tag objects
Array Members: Minimum number of 0 items. Maximum number of 50 items.
Required: No
- VpcConfig
-
A VPC in Amazon VPC that your optimized model has access to.
Type: OptimizationVpcConfig object
Required: No
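As noted for RoleArn, the identity that calls this action must be allowed to pass the execution role to SageMaker. The following is a minimal sketch of attaching such a policy to a calling IAM user with Boto3; the user name, policy name, and role ARN are placeholders, and the iam:PassedToService condition is an optional restriction.

import json
import boto3

# Minimal sketch of granting the caller the iam:PassRole permission for the
# execution role. All names and ARNs are placeholders.
pass_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/SageMakerOptimizationRole",
            "Condition": {
                "StringEquals": {"iam:PassedToService": "sagemaker.amazonaws.com"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="optimization-job-caller",
    PolicyName="AllowPassSageMakerOptimizationRole",
    PolicyDocument=json.dumps(pass_role_policy),
)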
Response Syntax
{
"OptimizationJobArn": "string"
}
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- OptimizationJobArn
-
The Amazon Resource Name (ARN) of the optimization job.
Type: String
Length Constraints: Maximum length of 256.
Pattern:
arn:aws[a-z\-]*:sagemaker:[a-z0-9\-]*:[0-9]{12}:optimization-job/.*
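After creating the job, you can track it by name until it reaches a terminal status. The following is a minimal polling sketch, assuming the companion DescribeOptimizationJob action is exposed as describe_optimization_job and returns an OptimizationJobStatus field; the job name and the terminal status values are assumptions.

import time

import boto3

# Minimal polling sketch; the job name is a placeholder, and the status field
# and terminal values are assumed from the DescribeOptimizationJob response.
sagemaker = boto3.client("sagemaker")

while True:
    job = sagemaker.describe_optimization_job(
        OptimizationJobName="llm-quantization-demo"
    )
    status = job["OptimizationJobStatus"]
    if status in ("COMPLETED", "FAILED", "STOPPED"):
        print(status, job["OptimizationJobArn"])
        break
    time.sleep(60)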
Errors
For information about the errors that are common to all actions, see Common Errors.
- ResourceInUse
-
Resource being accessed is in use.
HTTP Status Code: 400
- ResourceLimitExceeded
-
You have exceeded a SageMaker resource limit. For example, you might have created too many training jobs.
HTTP Status Code: 400
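Both errors surface through the SDKs as client errors carrying the corresponding error code. The following is a minimal sketch of distinguishing them with Boto3; all parameter values are placeholders.

import boto3
from botocore.exceptions import ClientError

sagemaker = boto3.client("sagemaker")

# Required parameters as shown in the request syntax; values are placeholders.
request = {
    "OptimizationJobName": "llm-quantization-demo",
    "DeploymentInstanceType": "ml.g5.12xlarge",
    "RoleArn": "arn:aws:iam::111122223333:role/SageMakerOptimizationRole",
    "ModelSource": {"S3": {"S3Uri": "s3://amzn-s3-demo-bucket/source-model/"}},
    "OptimizationConfigs": [
        {"ModelQuantizationConfig": {"OverrideEnvironment": {"OPTION_QUANTIZE": "awq"}}}
    ],
    "OutputConfig": {"S3OutputLocation": "s3://amzn-s3-demo-bucket/optimized-model/"},
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
}

try:
    sagemaker.create_optimization_job(**request)
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "ResourceInUse":
        print("A resource with this name is already in use; choose another job name.")
    elif code == "ResourceLimitExceeded":
        print("Account limit reached; delete unused resources or request a limit increase.")
    else:
        raise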
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: