SageMakerClient

Provides APIs for creating and managing SageMaker resources.

Installation

NPM
npm install @aws-sdk/client-sagemaker
Yarn
yarn add @aws-sdk/client-sagemaker
pnpm
pnpm add @aws-sdk/client-sagemaker
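
Usage

A minimal usage sketch for the client, shown here with the ListEndpointsCommand. It assumes a Node.js ES module context (for top-level await), credentials resolved through the default provider chain, and us-east-1 as an illustrative Region; adjust these for your environment.

import { SageMakerClient, ListEndpointsCommand } from "@aws-sdk/client-sagemaker";

// The Region is an assumption; point the client at the Region that hosts your resources.
const client = new SageMakerClient({ region: "us-east-1" });

// List up to 10 endpoints and print their names and statuses.
const { Endpoints } = await client.send(new ListEndpointsCommand({ MaxResults: 10 }));
for (const endpoint of Endpoints ?? []) {
  console.log(endpoint.EndpointName, endpoint.EndpointStatus);
}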

SageMakerClient Operations

Command
Summary
AddAssociationCommand

Creates an association between the source and the destination. A source can be associated with multiple destinations, and a destination can be associated with multiple sources. An association is a lineage tracking entity. For more information, see Amazon SageMaker ML Lineage Tracking .

AddTagsCommand

Adds or overwrites one or more tags for the specified SageMaker resource. You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints.

Each tag consists of a key and an optional value. Tag keys must be unique per resource. For more information about tags, see Amazon Web Services Tagging Strategies .

Tags that you add to a hyperparameter tuning job by calling this API are also added to any training jobs that the hyperparameter tuning job launches after you call this API, but not to training jobs that the hyperparameter tuning job launched before you called this API. To make sure that the tags associated with a hyperparameter tuning job are also added to all training jobs that the hyperparameter tuning job launches, add the tags when you first create the tuning job by specifying them in the Tags parameter of CreateHyperParameterTuningJob .

Tags that you add to a SageMaker Domain or User Profile by calling this API are also added to any Apps that the Domain or User Profile launches after you call this API, but not to Apps that the Domain or User Profile launched before you called this API. To make sure that the tags associated with a Domain or User Profile are also added to all Apps that the Domain or User Profile launches, add the tags when you first create the Domain or User Profile by specifying them in the Tags parameter of CreateDomain  or CreateUserProfile .
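
A minimal sketch of adding tags with AddTagsCommand, under the same runtime assumptions as the usage sketch above. The ARN and the tag keys and values are placeholders for a hyperparameter tuning job in your account.

import { SageMakerClient, AddTagsCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// The ARN below is a placeholder for an existing hyperparameter tuning job.
await client.send(new AddTagsCommand({
  ResourceArn: "arn:aws:sagemaker:us-east-1:111122223333:hyper-parameter-tuning-job/my-tuning-job",
  Tags: [
    { Key: "team", Value: "ml-research" },
    { Key: "cost-center", Value: "1234" },
  ],
}));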

AssociateTrialComponentCommand

Associates a trial component with a trial. A trial component can be associated with multiple trials. To disassociate a trial component from a trial, call the DisassociateTrialComponent  API.
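
A sketch of associating an existing trial component with an existing trial; both names are placeholders, and the runtime assumptions match the usage sketch above.

import { SageMakerClient, AssociateTrialComponentCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// Both names are placeholders for entities that already exist in your account.
await client.send(new AssociateTrialComponentCommand({
  TrialComponentName: "my-training-job-component",
  TrialName: "my-trial",
}));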

BatchDeleteClusterNodesCommand

Deletes specific nodes within a SageMaker HyperPod cluster. BatchDeleteClusterNodes accepts a cluster name and a list of node IDs.
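
A sketch of deleting nodes from a HyperPod cluster with BatchDeleteClusterNodesCommand; the cluster name and node IDs are placeholders.

import { SageMakerClient, BatchDeleteClusterNodesCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// Cluster name and node IDs are placeholders for resources in your account.
const response = await client.send(new BatchDeleteClusterNodesCommand({
  ClusterName: "my-hyperpod-cluster",
  NodeIds: ["i-0123456789abcdef0", "i-0fedcba9876543210"],
}));

// Inspect the response to see which node deletions succeeded and which failed.
console.log(response);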

BatchDescribeModelPackageCommand

This action describes a batch of versioned model packages.

CreateActionCommand

Creates an action. An action is a lineage tracking entity that represents an action or activity. For example, a model deployment or an HPO job. Generally, an action involves at least one input or output artifact. For more information, see Amazon SageMaker ML Lineage Tracking .

CreateAlgorithmCommand

Create a machine learning algorithm that you can use in SageMaker and list in the Amazon Web Services Marketplace.

CreateAppCommand

Creates a running app for the specified UserProfile. This operation is automatically invoked by Amazon SageMaker AI upon access to the associated Domain, and when new kernel configurations are selected by the user. A user may have multiple Apps active simultaneously.

CreateAppImageConfigCommand

Creates a configuration for running a SageMaker AI image as a KernelGateway app. The configuration specifies the Amazon Elastic File System storage volume on the image, and a list of the kernels in the image.

CreateArtifactCommand

Creates an artifact. An artifact is a lineage tracking entity that represents a URI addressable object or data. Some examples are the S3 URI of a dataset and the ECR registry path of an image. For more information, see Amazon SageMaker ML Lineage Tracking .

CreateAutoMLJobCommand

Creates an Autopilot job, also referred to as an Autopilot experiment or AutoML job.

An AutoML job in SageMaker AI is a fully automated process that allows you to build machine learning models with minimal effort and machine learning expertise. When initiating an AutoML job, you provide your data and optionally specify parameters tailored to your use case. SageMaker AI then automates the entire model development lifecycle, including data preprocessing, model training, tuning, and evaluation. AutoML jobs are designed to simplify and accelerate the model building process by automating various tasks and exploring different combinations of machine learning algorithms, data preprocessing techniques, and hyperparameter values. The output of an AutoML job comprises one or more trained models ready for deployment and inference. Additionally, SageMaker AI AutoML jobs generate a candidate model leaderboard, allowing you to select the best-performing model for deployment.

For more information about AutoML jobs, see https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development.html in the SageMaker AI developer guide.

We recommend using the new versions CreateAutoMLJobV2  and DescribeAutoMLJobV2 , which offer backward compatibility.

CreateAutoMLJobV2 can manage tabular problem types identical to those of its previous version CreateAutoMLJob, as well as time-series forecasting, non-tabular problem types such as image or text classification, and text generation (LLMs fine-tuning).

Find guidelines about how to migrate a CreateAutoMLJob to CreateAutoMLJobV2 in Migrate a CreateAutoMLJob to CreateAutoMLJobV2 .

You can find the best-performing model after you run an AutoML job by calling DescribeAutoMLJobV2  (recommended) or DescribeAutoMLJob .

CreateAutoMLJobV2Command

Creates an Autopilot job, also referred to as an Autopilot experiment or AutoML job V2.

An AutoML job in SageMaker AI is a fully automated process that allows you to build machine learning models with minimal effort and machine learning expertise. When initiating an AutoML job, you provide your data and optionally specify parameters tailored to your use case. SageMaker AI then automates the entire model development lifecycle, including data preprocessing, model training, tuning, and evaluation. AutoML jobs are designed to simplify and accelerate the model building process by automating various tasks and exploring different combinations of machine learning algorithms, data preprocessing techniques, and hyperparameter values. The output of an AutoML job comprises one or more trained models ready for deployment and inference. Additionally, SageMaker AI AutoML jobs generate a candidate model leaderboard, allowing you to select the best-performing model for deployment.

For more information about AutoML jobs, see https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development.html in the SageMaker AI developer guide.

AutoML jobs V2 support various problem types such as regression, binary, and multiclass classification with tabular data, text and image classification, time-series forecasting, and fine-tuning of large language models (LLMs) for text generation.

CreateAutoMLJobV2  and DescribeAutoMLJobV2  are new versions of CreateAutoMLJob  and DescribeAutoMLJob  which offer backward compatibility.

CreateAutoMLJobV2 can manage tabular problem types identical to those of its previous version CreateAutoMLJob, as well as time-series forecasting, non-tabular problem types such as image or text classification, and text generation (LLMs fine-tuning).

Find guidelines about how to migrate a CreateAutoMLJob to CreateAutoMLJobV2 in Migrate a CreateAutoMLJob to CreateAutoMLJobV2 .

For the list of available problem types supported by CreateAutoMLJobV2, see AutoMLProblemTypeConfig .

You can find the best-performing model after you run an AutoML job V2 by calling DescribeAutoMLJobV2 .

CreateClusterCommand

Creates a SageMaker HyperPod cluster. SageMaker HyperPod is a capability of SageMaker for creating and managing persistent clusters for developing large machine learning models, such as large language models (LLMs) and diffusion models. To learn more, see Amazon SageMaker HyperPod  in the Amazon SageMaker Developer Guide.

CreateClusterSchedulerConfigCommand

Creates a cluster policy configuration. This policy is used for task prioritization and fair-share allocation of idle compute. This helps prioritize critical workloads and distributes idle compute across entities.

CreateCodeRepositoryCommand

Creates a Git repository as a resource in your SageMaker AI account. You can associate the repository with notebook instances so that you can use Git source control for the notebooks you create. The Git repository is a resource in your SageMaker AI account, so it can be associated with more than one notebook instance, and it persists independently from the lifecycle of any notebook instances it is associated with.

The repository can be hosted either in Amazon Web Services CodeCommit  or in any other Git repository.

CreateCompilationJobCommand

Starts a model compilation job. After the model has been compiled, Amazon SageMaker AI saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify.

If you choose to host your model using Amazon SageMaker AI hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with Amazon Web Services IoT Greengrass. In that case, deploy them as an ML resource.

In the request body, you provide the following:

  • A name for the compilation job

  • Information about the input model artifacts

  • The output location for the compiled model and the device (target) that the model runs on

  • The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker AI assumes to perform the model compilation job.

You can also provide a Tag to track the model compilation job's resource use and costs. The response body contains the CompilationJobArn for the compiled job.

To stop a model compilation job, use StopCompilationJob . To get information about a particular model compilation job, use DescribeCompilationJob . To get information about multiple model compilation jobs, use ListCompilationJobs .

CreateComputeQuotaCommand

Creates a compute allocation definition. This defines how compute is allocated, shared, and borrowed for the specified entities. Specifically, it defines how to lend and borrow idle compute and how to assign a fair-share weight to the specified entities.

CreateContextCommand

Creates a context. A context is a lineage tracking entity that represents a logical grouping of other tracking or experiment entities. Some examples are an endpoint and a model package. For more information, see Amazon SageMaker ML Lineage Tracking .

CreateDataQualityJobDefinitionCommand

Creates a definition for a job that monitors data quality and drift. For information about model monitor, see Amazon SageMaker AI Model Monitor .

CreateDeviceFleetCommand

Creates a device fleet.

CreateDomainCommand

Creates a Domain. A domain consists of an associated Amazon Elastic File System volume, a list of authorized users, and a variety of security, application, policy, and Amazon Virtual Private Cloud (VPC) configurations. Users within a domain can share notebook files and other artifacts with each other.

EFS storage

When a domain is created, an EFS volume is created for use by all of the users within the domain. Each user receives a private home directory within the EFS volume for notebooks, Git repositories, and data files.

SageMaker AI uses the Amazon Web Services Key Management Service (Amazon Web Services KMS) to encrypt the EFS volume attached to the domain with an Amazon Web Services managed key by default. For more control, you can specify a customer managed key. For more information, see Protect Data at Rest Using Encryption .

VPC configuration

All traffic between the domain and the Amazon EFS volume is through the specified VPC and subnets. For other traffic, you can specify the AppNetworkAccessType parameter. AppNetworkAccessType corresponds to the network access type that you choose when you onboard to the domain. The following options are available:

  • PublicInternetOnly - Non-EFS traffic goes through a VPC managed by Amazon SageMaker AI, which allows internet access. This is the default value.

  • VpcOnly - All traffic is through the specified VPC and subnets. Internet access is disabled by default. To allow internet access, you must specify a NAT gateway.

    When internet access is disabled, you won't be able to run an Amazon SageMaker AI Studio notebook or to train or host models unless your VPC has an interface endpoint to the SageMaker AI API and runtime or a NAT gateway and your security groups allow outbound connections.

NFS traffic over TCP on port 2049 needs to be allowed in both inbound and outbound rules in order to launch an Amazon SageMaker AI Studio app successfully.

CreateEdgeDeploymentPlanCommand

Creates an edge deployment plan, consisting of multiple stages. Each stage may have a different deployment configuration and devices.

CreateEdgeDeploymentStageCommand

Creates a new stage in an existing edge deployment plan.

CreateEdgePackagingJobCommand

Starts a SageMaker Edge Manager model packaging job. Edge Manager will use the model artifacts from the Amazon Simple Storage Service bucket that you specify. After the model has been packaged, Amazon SageMaker saves the resulting artifacts to an S3 bucket that you specify.

CreateEndpointCommand

Creates an endpoint using the endpoint configuration specified in the request. SageMaker uses the endpoint to provision resources and deploy models. You create the endpoint configuration with the CreateEndpointConfig  API.

Use this API to deploy models using SageMaker hosting services.

You must not delete an EndpointConfig that is in use by an endpoint that is live or while the UpdateEndpoint or CreateEndpoint operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig.

The endpoint name must be unique within an Amazon Web Services Region in your Amazon Web Services account.

When it receives the request, SageMaker creates the endpoint, launches the resources (ML compute instances), and deploys the model(s) on them.

When you call CreateEndpoint , a load call is made to DynamoDB to verify that your endpoint configuration exists. When you read data from a DynamoDB table supporting Eventually Consistent Reads  , the response might not reflect the results of a recently completed write operation. The response might include some stale data. If the dependent entities are not yet in DynamoDB, this causes a validation error. If you repeat your read request after a short time, the response should return the latest data. So retry logic is recommended to handle these possible issues. We also recommend that customers call DescribeEndpointConfig  before calling CreateEndpoint  to minimize the potential impact of a DynamoDB eventually consistent read.

When SageMaker receives the request, it sets the endpoint status to Creating. After it creates the endpoint, it sets the status to InService. SageMaker can then process incoming requests for inferences. To check the status of an endpoint, use the DescribeEndpoint  API.

If any of the models hosted at this endpoint get model data from an Amazon S3 location, SageMaker uses Amazon Web Services Security Token Service to download model artifacts from the S3 path you provided. Amazon Web Services STS is activated in your Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region  in the Amazon Web Services Identity and Access Management User Guide.

To add the IAM role policies for using this API operation, go to the IAM console and choose Roles in the left navigation pane. Search for the IAM role that you want to grant access to the CreateEndpoint  and CreateEndpointConfig  API operations, and then add the following policies to the role.

  • Option 1: For full SageMaker access, search for and attach the AmazonSageMakerFullAccess policy.

  • Option 2: To grant limited access to an IAM role, paste the following Action elements manually into the JSON file of the IAM role:

    "Action": ["sagemaker:CreateEndpoint", "sagemaker:CreateEndpointConfig"]

    "Resource": [

    "arn:aws:sagemaker:region:account-id:endpoint/endpointName"

    "arn:aws:sagemaker:region:account-id:endpoint-config/endpointConfigName"

    ]

    For more information, see SageMaker API Permissions: Actions, Permissions, and Resources Reference .
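
A minimal sketch of creating an endpoint from an existing endpoint configuration and then checking its status with DescribeEndpoint, as the notes above suggest. The endpoint and configuration names are placeholders, and the runtime assumptions match the usage sketch above.

import {
  SageMakerClient,
  CreateEndpointCommand,
  DescribeEndpointCommand,
} from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// Names are placeholders; the endpoint configuration must already exist.
await client.send(new CreateEndpointCommand({
  EndpointName: "my-endpoint",
  EndpointConfigName: "my-endpoint-config",
}));

// The endpoint starts in the Creating state; poll DescribeEndpoint until it is InService.
const { EndpointStatus } = await client.send(
  new DescribeEndpointCommand({ EndpointName: "my-endpoint" })
);
console.log(EndpointStatus);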

CreateEndpointConfigCommand

Creates an endpoint configuration that SageMaker hosting services uses to deploy models. In the configuration, you identify one or more models, created using the CreateModel API, to deploy and the resources that you want SageMaker to provision. Then you call the CreateEndpoint  API.

Use this API if you want to use SageMaker hosting services to deploy models into production.

In the request, you define a ProductionVariant for each model that you want to deploy. Each ProductionVariant parameter also describes the resources that you want SageMaker to provision. This includes the number and type of ML compute instances to deploy.

If you are hosting multiple models, you also assign a VariantWeight to specify how much traffic you want to allocate to each model. For example, suppose that you want to host two models, A and B, and you assign traffic weight 2 for model A and 1 for model B. SageMaker distributes two-thirds of the traffic to Model A, and one-third to model B.

When you call CreateEndpoint , a load call is made to DynamoDB to verify that your endpoint configuration exists. When you read data from a DynamoDB table supporting Eventually Consistent Reads  , the response might not reflect the results of a recently completed write operation. The response might include some stale data. If the dependent entities are not yet in DynamoDB, this causes a validation error. If you repeat your read request after a short time, the response should return the latest data. So retry logic is recommended to handle these possible issues. We also recommend that customers call DescribeEndpointConfig  before calling CreateEndpoint  to minimize the potential impact of a DynamoDB eventually consistent read.
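
A sketch of an endpoint configuration that hosts two existing models with a 2:1 traffic split, mirroring the weighting example above. The configuration name, model names, and instance type are placeholders.

import { SageMakerClient, CreateEndpointConfigCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// Two production variants: SageMaker routes two-thirds of traffic to model A and one-third to model B.
await client.send(new CreateEndpointConfigCommand({
  EndpointConfigName: "my-endpoint-config",
  ProductionVariants: [
    {
      VariantName: "variant-a",
      ModelName: "model-a",          // placeholder for an existing model
      InstanceType: "ml.m5.xlarge",  // instance type is an assumption
      InitialInstanceCount: 1,
      InitialVariantWeight: 2,
    },
    {
      VariantName: "variant-b",
      ModelName: "model-b",
      InstanceType: "ml.m5.xlarge",
      InitialInstanceCount: 1,
      InitialVariantWeight: 1,
    },
  ],
}));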

CreateExperimentCommand

Creates a SageMaker experiment. An experiment is a collection of trials that are observed, compared and evaluated as a group. A trial is a set of steps, called trial components, that produce a machine learning model.

In the Studio UI, trials are referred to as run groups and trial components are referred to as runs.

The goal of an experiment is to determine the components that produce the best model. Multiple trials are performed, each one isolating and measuring the impact of a change to one or more inputs, while keeping the remaining inputs constant.

When you use SageMaker Studio or the SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the Amazon Web Services SDK for Python (Boto), you must use the logging APIs provided by the SDK.

You can add tags to experiments, trials, trial components and then use the Search  API to search for the tags.

To add a description to an experiment, specify the optional Description parameter. To add a description later, or to change the description, call the UpdateExperiment  API.

To get a list of all your experiments, call the ListExperiments  API. To view an experiment's properties, call the DescribeExperiment  API. To get a list of all the trials associated with an experiment, call the ListTrials  API. To create a trial call the CreateTrial  API.
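
A sketch of creating an experiment with an optional description and tags; the experiment name, description, and tag values are placeholders.

import { SageMakerClient, CreateExperimentCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// Experiment name, description, and tags are placeholders.
const { ExperimentArn } = await client.send(new CreateExperimentCommand({
  ExperimentName: "churn-prediction",
  Description: "Compare feature sets for the churn model",
  Tags: [{ Key: "team", Value: "ml-research" }],
}));
console.log(ExperimentArn);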

CreateFeatureGroupCommand

Create a new FeatureGroup. A FeatureGroup is a group of Features defined in the FeatureStore to describe a Record.

The FeatureGroup defines the schema and features contained in the FeatureGroup. A FeatureGroup definition is composed of a list of Features, a RecordIdentifierFeatureName, an EventTimeFeatureName and configurations for its OnlineStore and OfflineStore. Check Amazon Web Services service quotas  to see the FeatureGroups quota for your Amazon Web Services account.

Note that it can take approximately 10-15 minutes to provision an OnlineStore FeatureGroup with the InMemory StorageType.

You must include at least one of OnlineStoreConfig and OfflineStoreConfig to create a FeatureGroup.
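
A sketch of an online-only feature group, assuming a simple placeholder schema; the feature group name, feature names, and types are illustrative only.

import { SageMakerClient, CreateFeatureGroupCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// Names and the feature schema are placeholders.
await client.send(new CreateFeatureGroupCommand({
  FeatureGroupName: "customer-features",
  RecordIdentifierFeatureName: "customer_id",
  EventTimeFeatureName: "event_time",
  FeatureDefinitions: [
    { FeatureName: "customer_id", FeatureType: "String" },
    { FeatureName: "event_time", FeatureType: "String" },
    { FeatureName: "lifetime_value", FeatureType: "Fractional" },
  ],
  // Including OnlineStoreConfig satisfies the OnlineStoreConfig/OfflineStoreConfig requirement.
  OnlineStoreConfig: { EnableOnlineStore: true },
}));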

CreateFlowDefinitionCommand

Creates a flow definition.

CreateHubCommand

Create a hub.

CreateHubContentReferenceCommand

Create a hub content reference in order to add a model in the JumpStart public hub to a private hub.

CreateHumanTaskUiCommand

Defines the settings you will use for the human review workflow user interface. Reviewers will see a three-panel interface with an instruction area, the item to review, and an input area.

CreateHyperParameterTuningJobCommand

Starts a hyperparameter tuning job. A hyperparameter tuning job finds the best version of a model by running many training jobs on your dataset using the algorithm you choose and values for hyperparameters within ranges that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by an objective metric that you choose.

A hyperparameter tuning job automatically creates Amazon SageMaker experiments, trials, and trial components for each training job that it runs. You can view these entities in Amazon SageMaker Studio. For more information, see View Experiments, Trials, and Trial Components .

Do not include any security-sensitive information, including account access IDs, secrets, or tokens, in any hyperparameter fields. If the use of security-sensitive credentials is detected, SageMaker will reject your training job request and return an exception error.

CreateImageCommand

Creates a custom SageMaker AI image. A SageMaker AI image is a set of image versions. Each image version represents a container image stored in Amazon ECR. For more information, see Bring your own SageMaker AI image .

CreateImageVersionCommand

Creates a version of the SageMaker AI image specified by ImageName. The version represents the Amazon ECR container image specified by BaseImage.

CreateInferenceComponentCommand

Creates an inference component, which is a SageMaker AI hosting object that you can use to deploy a model to an endpoint. In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.

CreateInferenceExperimentCommand

Creates an inference experiment using the configurations specified in the request.

Use this API to set up and schedule an experiment to compare model variants on an Amazon SageMaker inference endpoint. For more information about inference experiments, see Shadow tests .

Amazon SageMaker begins your experiment at the scheduled time and routes traffic to your endpoint's model variants based on your specified configuration.

While the experiment is in progress or after it has concluded, you can view metrics that compare your model variants. For more information, see View, monitor, and edit shadow tests .

CreateInferenceRecommendationsJobCommand

Starts a recommendation job. You can create either an instance recommendation or load test job.

CreateLabelingJobCommand

Creates a job that uses workers to label the data objects in your input dataset. You can use the labeled data to train machine learning models.

You can select your workforce from one of three providers:

  • A private workforce that you create. It can include employees, contractors, and outside experts. Use a private workforce when you want the data to stay within your organization or when a specific set of skills is required.

  • One or more vendors that you select from the Amazon Web Services Marketplace. Vendors provide expertise in specific areas.

  • The Amazon Mechanical Turk workforce. This is the largest workforce, but it should only be used for public data or data that has been stripped of any personally identifiable information.

You can also use automated data labeling to reduce the number of data objects that need to be labeled by a human. Automated data labeling uses active learning to determine if a data object can be labeled by machine or if it needs to be sent to a human worker. For more information, see Using Automated Data Labeling .

The data objects to be labeled are contained in an Amazon S3 bucket. You create a manifest file that describes the location of each object. For more information, see Using Input and Output Data .

The output can be used as the manifest file for another labeling job or as training data for your machine learning models.

You can use this operation to create a static labeling job or a streaming labeling job. A static labeling job stops if all data objects in the input manifest file identified in ManifestS3Uri have been labeled. A streaming labeling job runs perpetually until it is manually stopped, or remains idle for 10 days. You can send new data objects to an active (InProgress) streaming labeling job in real time. To learn how to create a static labeling job, see Create a Labeling Job (API)   in the Amazon SageMaker Developer Guide. To learn how to create a streaming labeling job, see Create a Streaming Labeling Job .

CreateMlflowTrackingServerCommand

Creates an MLflow Tracking Server using a general purpose Amazon S3 bucket as the artifact store. For more information, see Create an MLflow Tracking Server .

CreateModelBiasJobDefinitionCommand

Creates the definition for a model bias job.

CreateModelCardCommand

Creates an Amazon SageMaker Model Card.

For information about how to use model cards, see Amazon SageMaker Model Card .

CreateModelCardExportJobCommand

Creates an Amazon SageMaker Model Card export job.

CreateModelCommand

Creates a model in SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions.

Use this API to create a model if you want to use SageMaker hosting services or run a batch transform job.

To host your model, you create an endpoint configuration with the CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint API. SageMaker then deploys all of the containers that you defined for the model in the hosting environment.

To run a batch transform using your model, you start a job with the CreateTransformJob API. SageMaker uses your model and your dataset to get inferences which are then saved to a specified S3 location.

In the request, you also provide an IAM role that SageMaker can assume to access model artifacts and the Docker image for deployment on ML compute hosting instances or for batch transform jobs. In addition, you also use the IAM role to manage permissions the inference code needs. For example, if the inference code accesses any other Amazon Web Services resources, you grant the necessary permissions via this role.
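
A minimal sketch of creating a model from a primary container. The ECR image URI, Amazon S3 artifact location, and execution role ARN are placeholders for resources in your account.

import { SageMakerClient, CreateModelCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// The image URI, artifact location, and role ARN are placeholders.
await client.send(new CreateModelCommand({
  ModelName: "my-model",
  PrimaryContainer: {
    Image: "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
    ModelDataUrl: "s3://amzn-s3-demo-bucket/model/model.tar.gz",
    Environment: { LOG_LEVEL: "info" }, // optional environment map read by the inference code
  },
  ExecutionRoleArn: "arn:aws:iam::111122223333:role/SageMakerExecutionRole",
}));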

CreateModelExplainabilityJobDefinitionCommand

Creates the definition for a model explainability job.

CreateModelPackageCommand

Creates a model package that you can use to create SageMaker models or list on Amazon Web Services Marketplace, or a versioned model that is part of a model group. Buyers can subscribe to model packages listed on Amazon Web Services Marketplace to create models in SageMaker.

To create a model package by specifying a Docker container that contains your inference code and the Amazon S3 location of your model artifacts, provide values for InferenceSpecification. To create a model from an algorithm resource that you created or subscribed to in Amazon Web Services Marketplace, provide a value for SourceAlgorithmSpecification.

There are two types of model packages:

  • Versioned - a model that is part of a model group in the model registry.

  • Unversioned - a model package that is not part of a model group.

CreateModelPackageGroupCommand

Creates a model group. A model group contains a group of model versions.

CreateModelQualityJobDefinitionCommand

Creates a definition for a job that monitors model quality and drift. For information about model monitor, see Amazon SageMaker AI Model Monitor .

CreateMonitoringScheduleCommand

Creates a schedule that regularly starts Amazon SageMaker AI Processing Jobs to monitor the data captured for an Amazon SageMaker AI Endpoint.

CreateNotebookInstanceCommand

Creates a SageMaker AI notebook instance. A notebook instance is a machine learning (ML) compute instance that runs the Jupyter Notebook App.

In a CreateNotebookInstance request, specify the type of ML compute instance that you want to run. SageMaker AI launches the instance, installs common libraries that you can use to explore datasets for model training, and attaches an ML storage volume to the notebook instance.

SageMaker AI also provides a set of example notebooks. Each notebook demonstrates how to use SageMaker AI with a specific algorithm or with a machine learning framework.

After receiving the request, SageMaker AI does the following:

  1. Creates a network interface in the SageMaker AI VPC.

  2. (Optional) If you specified SubnetId, SageMaker AI creates a network interface in your own VPC, which is inferred from the subnet ID that you provide in the input. When creating this network interface, SageMaker AI attaches the security group that you specified in the request to the network interface that it creates in your VPC.

  3. Launches an EC2 instance of the type specified in the request in the SageMaker AI VPC. If you specified SubnetId of your VPC, SageMaker AI specifies both network interfaces when launching this instance. This enables inbound traffic from your own VPC to the notebook instance, assuming that the security groups allow it.

After creating the notebook instance, SageMaker AI returns its Amazon Resource Name (ARN). You can't change the name of a notebook instance after you create it.

After SageMaker AI creates the notebook instance, you can connect to the Jupyter server and work in Jupyter notebooks. For example, you can write code to explore a dataset that you can use for model training, train a model, host models by creating SageMaker AI endpoints, and validate hosted models.

For more information, see How It Works .

CreateNotebookInstanceLifecycleConfigCommand

Creates a lifecycle configuration that you can associate with a notebook instance. A lifecycle configuration is a collection of shell scripts that run when you create or start a notebook instance.

Each lifecycle configuration script has a limit of 16384 characters.

The value of the $PATH environment variable that is available to both scripts is /sbin:/bin:/usr/sbin:/usr/bin.

View Amazon CloudWatch Logs for notebook instance lifecycle configurations in log group /aws/sagemaker/NotebookInstances in log stream [notebook-instance-name]/[LifecycleConfigHook].

Lifecycle configuration scripts cannot run for longer than 5 minutes. If a script runs for longer than 5 minutes, it fails and the notebook instance is not created or started.

For information about notebook instance lifecycle configurations, see Step 2.1: (Optional) Customize a Notebook Instance .

CreateOptimizationJobCommand

Creates a job that optimizes a model for inference performance. To create the job, you provide the location of a source model, and you provide the settings for the optimization techniques that you want the job to apply. When the job completes successfully, SageMaker uploads the new optimized model to the output destination that you specify.

For more information about how to use this action, and about the supported optimization techniques, see Optimize model inference with Amazon SageMaker .

CreatePartnerAppCommand

Creates an Amazon SageMaker Partner AI App.

CreatePartnerAppPresignedUrlCommand

Creates a presigned URL to access an Amazon SageMaker Partner AI App.

CreatePipelineCommand

Creates a pipeline using a JSON pipeline definition.

CreatePresignedDomainUrlCommand

Creates a URL for a specified UserProfile in a Domain. When accessed in a web browser, the user will be automatically signed in to the domain, and granted access to all of the Apps and files associated with the Domain's Amazon Elastic File System volume. This operation can only be called when the authentication mode equals IAM.

The IAM role or user passed to this API defines the permissions to access the app. Once the presigned URL is created, no additional permission is required to access this URL. IAM authorization policies for this API are also enforced for every HTTP request and WebSocket frame that attempts to connect to the app.

You can restrict access to this API and to the URL that it returns to a list of IP addresses, Amazon VPCs or Amazon VPC Endpoints that you specify. For more information, see Connect to Amazon SageMaker AI Studio Through an Interface VPC Endpoint  .

  • The URL that you get from a call to CreatePresignedDomainUrl has a default timeout of 5 minutes. You can configure this value using ExpiresInSeconds. If you try to use the URL after the timeout limit expires, you are directed to the Amazon Web Services console sign-in page.

  • The JupyterLab session default expiration time is 12 hours. You can configure this value using SessionExpirationDurationInSeconds.
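
A sketch of generating a presigned domain URL, with the timeout and session expiration values from the notes above; the domain ID and user profile name are placeholders.

import { SageMakerClient, CreatePresignedDomainUrlCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// Domain ID and user profile name are placeholders.
const { AuthorizedUrl } = await client.send(new CreatePresignedDomainUrlCommand({
  DomainId: "d-xxxxxxxxxxxx",
  UserProfileName: "my-user-profile",
  ExpiresInSeconds: 300,                      // URL timeout (default is 5 minutes)
  SessionExpirationDurationInSeconds: 43200,  // session expiration (default is 12 hours)
}));
console.log(AuthorizedUrl);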

CreatePresignedMlflowTrackingServerUrlCommand

Returns a presigned URL that you can use to connect to the MLflow UI attached to your tracking server. For more information, see Launch the MLflow UI using a presigned URL .

CreatePresignedNotebookInstanceUrlCommand

Returns a URL that you can use to connect to the Jupyter server from a notebook instance. In the SageMaker AI console, when you choose Open next to a notebook instance, SageMaker AI opens a new tab showing the Jupyter server home page from the notebook instance. The console uses this API to get the URL and show the page.

The IAM role or user used to call this API defines the permissions to access the notebook instance. Once the presigned URL is created, no additional permission is required to access this URL. IAM authorization policies for this API are also enforced for every HTTP request and WebSocket frame that attempts to connect to the notebook instance.

You can restrict access to this API and to the URL that it returns to a list of IP addresses that you specify. Use the NotIpAddress condition operator and the aws:SourceIP condition context key to specify the list of IP addresses that you want to have access to the notebook instance. For more information, see Limit Access to a Notebook Instance by IP Address .

The URL that you get from a call to CreatePresignedNotebookInstanceUrl  is valid only for 5 minutes. If you try to use the URL after the 5-minute limit expires, you are directed to the Amazon Web Services console sign-in page.

CreateProcessingJobCommand

Creates a processing job.

CreateProjectCommand

Creates a machine learning (ML) project that can contain one or more templates that set up an ML pipeline from training to deploying an approved model.

CreateSpaceCommand

Creates a private space or a space used for real time collaboration in a domain.

CreateStudioLifecycleConfigCommand

Creates a new Amazon SageMaker AI Studio Lifecycle Configuration.

CreateTrainingJobCommand

Starts a model training job. After training completes, SageMaker saves the resulting model artifacts to an Amazon S3 location that you specify.

If you choose to host your model using SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts in a machine learning service other than SageMaker, provided that you know how to use them for inference.

In the request body, you provide the following:

  • AlgorithmSpecification - Identifies the training algorithm to use.

  • HyperParameters - Specify these algorithm-specific parameters to enable the estimation of model parameters during training. Hyperparameters can be tuned to optimize this learning process. For a list of hyperparameters for each training algorithm provided by SageMaker, see Algorithms .

    Do not include any security-sensitive information, including account access IDs, secrets, or tokens, in any hyperparameter fields. If the use of security-sensitive credentials is detected, SageMaker will reject your training job request and return an exception error.

  • InputDataConfig - Describes the input required by the training job and the Amazon S3, EFS, or FSx location where it is stored.

  • OutputDataConfig - Identifies the Amazon S3 bucket where you want SageMaker to save the results of model training.

  • ResourceConfig - Identifies the resources, ML compute instances, and ML storage volumes to deploy for model training. In distributed training, you specify more than one instance.

  • EnableManagedSpotTraining - Optimize the cost of training machine learning models by up to 80% by using Amazon EC2 Spot instances. For more information, see Managed Spot Training .

  • RoleArn - The Amazon Resource Name (ARN) that SageMaker assumes to perform tasks on your behalf during model training. You must grant this role the necessary permissions so that SageMaker can successfully complete model training.

  • StoppingCondition - To help cap training costs, use MaxRuntimeInSeconds to set a time limit for training. Use MaxWaitTimeInSeconds to specify how long a managed spot training job has to complete.

  • Environment - The environment variables to set in the Docker container.

  • RetryStrategy - The number of times to retry the job when the job fails due to an InternalServerError.

For more information about SageMaker, see How It Works .
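
A minimal sketch of a training job request that covers the fields listed above. The training image URI, S3 locations, role ARN, and hyperparameter values are placeholders for resources and settings in your account.

import { SageMakerClient, CreateTrainingJobCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// Training image, S3 locations, and role ARN are placeholders.
await client.send(new CreateTrainingJobCommand({
  TrainingJobName: "my-training-job",
  AlgorithmSpecification: {
    TrainingImage: "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    TrainingInputMode: "File",
  },
  RoleArn: "arn:aws:iam::111122223333:role/SageMakerExecutionRole",
  InputDataConfig: [
    {
      ChannelName: "train",
      DataSource: {
        S3DataSource: {
          S3DataType: "S3Prefix",
          S3Uri: "s3://amzn-s3-demo-bucket/train/",
        },
      },
    },
  ],
  OutputDataConfig: { S3OutputPath: "s3://amzn-s3-demo-bucket/output/" },
  ResourceConfig: { InstanceType: "ml.m5.xlarge", InstanceCount: 1, VolumeSizeInGB: 50 },
  StoppingCondition: { MaxRuntimeInSeconds: 3600 },
  HyperParameters: { epochs: "10" }, // algorithm-specific; never put secrets in hyperparameters
}));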

CreateTrainingPlanCommand

Creates a new training plan in SageMaker to reserve compute capacity.

Amazon SageMaker Training Plan is a capability within SageMaker that allows customers to reserve and manage GPU capacity for large-scale AI model training. It provides a way to secure predictable access to computational resources within specific timelines and budgets, without the need to manage underlying infrastructure.

How it works

Plans can be created for specific resources such as SageMaker Training Jobs or SageMaker HyperPod clusters, automatically provisioning resources, setting up infrastructure, executing workloads, and handling infrastructure failures.

Plan creation workflow

  • Users search for available plan offerings based on their requirements (e.g., instance type, count, start time, duration) using the SearchTrainingPlanOfferings  API operation.

  • They create a plan that best matches their needs using the ID of the plan offering they want to use.

  • After successful upfront payment, the plan's status becomes Scheduled.

  • The plan can be used to:

    • Queue training jobs.

    • Allocate to an instance group of a SageMaker HyperPod cluster.

  • When the plan start date arrives, it becomes Active. Based on available reserved capacity:

    • Training jobs are launched.

    • Instance groups are provisioned.

Plan composition

A plan can consist of one or more Reserved Capacities, each defined by a specific instance type, quantity, Availability Zone, duration, and start and end times. For more information about Reserved Capacity, see ReservedCapacitySummary  .

CreateTransformJobCommand

Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify.

To perform batch transformations, you create a transform job and use the data that you have readily available.

In the request body, you provide the following:

  • TransformJobName - Identifies the transform job. The name must be unique within an Amazon Web Services Region in an Amazon Web Services account.

  • ModelName - Identifies the model to use. ModelName must be the name of an existing Amazon SageMaker model in the same Amazon Web Services Region and Amazon Web Services account. For information on creating a model, see CreateModel .

  • TransformInput - Describes the dataset to be transformed and the Amazon S3 location where it is stored.

  • TransformOutput - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.

  • TransformResources - Identifies the ML compute instances and AMI image versions for the transform job.

For more information about how batch transformation works, see Batch Transform .
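
A minimal sketch of a transform job request that covers the fields listed above. The model name and the S3 input and output locations are placeholders; the model must already exist in the same account and Region.

import { SageMakerClient, CreateTransformJobCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// Model name and S3 locations are placeholders.
await client.send(new CreateTransformJobCommand({
  TransformJobName: "my-transform-job",
  ModelName: "my-model",
  TransformInput: {
    DataSource: {
      S3DataSource: { S3DataType: "S3Prefix", S3Uri: "s3://amzn-s3-demo-bucket/batch-input/" },
    },
    ContentType: "text/csv",
  },
  TransformOutput: { S3OutputPath: "s3://amzn-s3-demo-bucket/batch-output/" },
  TransformResources: { InstanceType: "ml.m5.xlarge", InstanceCount: 1 },
}));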

CreateTrialCommand

Creates a SageMaker trial. A trial is a set of steps called trial components that produce a machine learning model. A trial is part of a single SageMaker experiment.

When you use SageMaker Studio or the SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the Amazon Web Services SDK for Python (Boto), you must use the logging APIs provided by the SDK.

You can add tags to a trial and then use the Search  API to search for the tags.

To get a list of all your trials, call the ListTrials  API. To view a trial's properties, call the DescribeTrial  API. To create a trial component, call the CreateTrialComponent  API.

CreateTrialComponentCommand

Creates a trial component, which is a stage of a machine learning trial. A trial is composed of one or more trial components. A trial component can be used in multiple trials.

Trial components include pre-processing jobs, training jobs, and batch transform jobs.

When you use SageMaker Studio or the SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the Amazon Web Services SDK for Python (Boto), you must use the logging APIs provided by the SDK.

You can add tags to a trial component and then use the Search  API to search for the tags.

CreateUserProfileCommand

Creates a user profile. A user profile represents a single user within a domain, and is the main way to reference a "person" for the purposes of sharing, reporting, and other user-oriented features. This entity is created when a user onboards to a domain. If an administrator invites a person by email or imports them from IAM Identity Center, a user profile is automatically created. A user profile is the primary holder of settings for an individual user and has a reference to the user's private Amazon Elastic File System home directory.

CreateWorkforceCommand

Use this operation to create a workforce. This operation will return an error if a workforce already exists in the Amazon Web Services Region that you specify. You can only create one workforce in each Amazon Web Services Region per Amazon Web Services account.

If you want to create a new workforce in an Amazon Web Services Region where a workforce already exists, use the DeleteWorkforce  API operation to delete the existing workforce and then use CreateWorkforce to create a new workforce.

To create a private workforce using Amazon Cognito, you must specify a Cognito user pool in CognitoConfig. You can also create an Amazon Cognito workforce using the Amazon SageMaker console. For more information, see Create a Private Workforce (Amazon Cognito) .

To create a private workforce using your own OIDC Identity Provider (IdP), specify your IdP configuration in OidcConfig. Your OIDC IdP must support groups because groups are used by Ground Truth and Amazon A2I to create work teams. For more information, see Create a Private Workforce (OIDC IdP) .

CreateWorkteamCommand

Creates a new work team for labeling your data. A work team is defined by one or more Amazon Cognito user pools. You must first create the user pools before you can create a work team.

You cannot create more than 25 work teams in an account and region.

DeleteActionCommand

Deletes an action.

DeleteAlgorithmCommand

Removes the specified algorithm from your account.

DeleteAppCommand

Used to stop and delete an app.

DeleteAppImageConfigCommand

Deletes an AppImageConfig.

DeleteArtifactCommand

Deletes an artifact. Either ArtifactArn or Source must be specified.

DeleteAssociationCommand

Deletes an association.

DeleteClusterCommand

Delete a SageMaker HyperPod cluster.

DeleteClusterSchedulerConfigCommand

Deletes the cluster policy of the cluster.

DeleteCodeRepositoryCommand

Deletes the specified Git repository from your account.

DeleteCompilationJobCommand

Deletes the specified compilation job. This action deletes only the compilation job resource in Amazon SageMaker AI. It doesn't delete other resources that are related to that job, such as the model artifacts that the job creates, the compilation logs in CloudWatch, the compiled model, or the IAM role.

You can delete a compilation job only if its current status is COMPLETED, FAILED, or STOPPED. If the job status is STARTING or INPROGRESS, stop the job, and then delete it after its status becomes STOPPED.

DeleteComputeQuotaCommand

Deletes the compute allocation from the cluster.

DeleteContextCommand

Deletes a context.

DeleteDataQualityJobDefinitionCommand

Deletes a data quality monitoring job definition.

DeleteDeviceFleetCommand

Deletes a fleet.

DeleteDomainCommand

Used to delete a domain. If you onboarded with IAM mode, you will need to delete your domain to onboard again using IAM Identity Center. Use with caution. All of the members of the domain will lose access to their EFS volume, including data, notebooks, and other artifacts.

DeleteEdgeDeploymentPlanCommand

Deletes an edge deployment plan if (and only if) all the stages in the plan are inactive or there are no stages in the plan.

DeleteEdgeDeploymentStageCommand

Delete a stage in an edge deployment plan if (and only if) the stage is inactive.

DeleteEndpointCommand

Deletes an endpoint. SageMaker frees up all of the resources that were deployed when the endpoint was created.

SageMaker retires any custom KMS key grants associated with the endpoint, meaning you don't need to use the RevokeGrant  API call.

When you delete your endpoint, SageMaker asynchronously deletes associated endpoint resources such as KMS key grants. You might still see these resources in your account for a few minutes after deleting your endpoint. Do not delete or revoke the permissions for your ExecutionRoleArn  , otherwise SageMaker cannot delete these resources.

DeleteEndpointConfigCommand

Deletes an endpoint configuration. The DeleteEndpointConfig API deletes only the specified configuration. It does not delete endpoints created using the configuration.

You must not delete an EndpointConfig in use by an endpoint that is live or while the UpdateEndpoint or CreateEndpoint operations are being performed on the endpoint. If you delete the EndpointConfig of an endpoint that is active or being created or updated you may lose visibility into the instance type the endpoint is using. The endpoint must be deleted in order to stop incurring charges.

DeleteExperimentCommand

Deletes a SageMaker experiment. All trials associated with the experiment must be deleted first. Use the ListTrials  API to get a list of the trials associated with the experiment.

DeleteFeatureGroupCommand

Delete the FeatureGroup and any data that was written to the OnlineStore of the FeatureGroup. Data cannot be accessed from the OnlineStore immediately after DeleteFeatureGroup is called.

Data written into the OfflineStore will not be deleted. The Amazon Web Services Glue database and tables that are automatically created for your OfflineStore are not deleted.

Note that it can take approximately 10-15 minutes to delete an OnlineStore FeatureGroup with the InMemory StorageType.

DeleteFlowDefinitionCommand

Deletes the specified flow definition.

DeleteHubCommand

Delete a hub.

DeleteHubContentCommand

Delete the contents of a hub.

DeleteHubContentReferenceCommand

Delete a hub content reference in order to remove a model from a private hub.

DeleteHumanTaskUiCommand

Use this operation to delete a human task user interface (worker task template).

To see a list of human task user interfaces (work task templates) in your account, use ListHumanTaskUis . When you delete a worker task template, it no longer appears when you call ListHumanTaskUis.

DeleteHyperParameterTuningJobCommand

Deletes a hyperparameter tuning job. The DeleteHyperParameterTuningJob API deletes only the tuning job entry that was created in SageMaker when you called the CreateHyperParameterTuningJob API. It does not delete training jobs, artifacts, or the IAM role that you specified when creating the tuning job.

DeleteImageCommand

Deletes a SageMaker AI image and all versions of the image. The container images aren't deleted.

DeleteImageVersionCommand

Deletes a version of a SageMaker AI image. The container image the version represents isn't deleted.

DeleteInferenceComponentCommand

Deletes an inference component.

DeleteInferenceExperimentCommand

Deletes an inference experiment.

This operation does not delete your endpoint, variants, or any underlying resources. This operation only deletes the metadata of your experiment.

DeleteMlflowTrackingServerCommand

Deletes an MLflow Tracking Server. For more information, see Clean up MLflow resources .

DeleteModelBiasJobDefinitionCommand

Deletes an Amazon SageMaker AI model bias job definition.

DeleteModelCardCommand

Deletes an Amazon SageMaker Model Card.

DeleteModelCommand

Deletes a model. The DeleteModel API deletes only the model entry that was created in SageMaker when you called the CreateModel API. It does not delete model artifacts, inference code, or the IAM role that you specified when creating the model.

DeleteModelExplainabilityJobDefinitionCommand

Deletes an Amazon SageMaker AI model explainability job definition.

DeleteModelPackageCommand

Deletes a model package.

A model package is used to create SageMaker models or list on Amazon Web Services Marketplace. Buyers can subscribe to model packages listed on Amazon Web Services Marketplace to create models in SageMaker.

DeleteModelPackageGroupCommand

Deletes the specified model group.

DeleteModelPackageGroupPolicyCommand

Deletes a model group resource policy.

DeleteModelQualityJobDefinitionCommand

Deletes the specified model quality monitoring job definition.

DeleteMonitoringScheduleCommand

Deletes a monitoring schedule. This also stops the schedule if it hasn't already been stopped. This does not delete the job execution history of the monitoring schedule.

DeleteNotebookInstanceCommand

Deletes a SageMaker AI notebook instance. Before you can delete a notebook instance, you must call the StopNotebookInstance API.

When you delete a notebook instance, you lose all of your data. SageMaker AI removes the ML compute instance, and deletes the ML storage volume and the network interface associated with the notebook instance.

DeleteNotebookInstanceLifecycleConfigCommand

Deletes a notebook instance lifecycle configuration.

DeleteOptimizationJobCommand

Deletes an optimization job.

DeletePartnerAppCommand

Deletes a SageMaker Partner AI App.

DeletePipelineCommand

Deletes a pipeline if there are no running instances of the pipeline. To delete a pipeline, you must stop all running instances of the pipeline using the StopPipelineExecution API. When you delete a pipeline, all instances of the pipeline are deleted.

DeleteProjectCommand

Delete the specified project.

DeleteSpaceCommand

Used to delete a space.

DeleteStudioLifecycleConfigCommand

Deletes the Amazon SageMaker AI Studio Lifecycle Configuration. In order to delete the Lifecycle Configuration, there must be no running apps using the Lifecycle Configuration. You must also remove the Lifecycle Configuration from UserSettings in all Domains and UserProfiles.

DeleteTagsCommand

Deletes the specified tags from a SageMaker resource.

To list a resource's tags, use the ListTags API.

When you call this API to delete tags from a hyperparameter tuning job, the deleted tags are not removed from training jobs that the hyperparameter tuning job launched before you called this API.

When you call this API to delete tags from a SageMaker Domain or User Profile, the deleted tags are not removed from Apps that the SageMaker Domain or User Profile launched before you called this API.
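
A sketch of removing tags by key with DeleteTagsCommand; the resource ARN and tag keys are placeholders.

import { SageMakerClient, DeleteTagsCommand } from "@aws-sdk/client-sagemaker";

const client = new SageMakerClient({ region: "us-east-1" }); // Region is an assumption

// The ARN and tag keys are placeholders.
await client.send(new DeleteTagsCommand({
  ResourceArn: "arn:aws:sagemaker:us-east-1:111122223333:endpoint/my-endpoint",
  TagKeys: ["team", "cost-center"],
}));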

DeleteTrialCommand

Deletes the specified trial. All trial components that make up the trial must be deleted first. Use the DescribeTrialComponent  API to get the list of trial components.

DeleteTrialComponentCommand

Deletes the specified trial component. A trial component must be disassociated from all trials before the trial component can be deleted. To disassociate a trial component from a trial, call the DisassociateTrialComponent  API.

DeleteUserProfileCommand

Deletes a user profile. When a user profile is deleted, the user loses access to their EFS volume, including data, notebooks, and other artifacts.

DeleteWorkforceCommand

Use this operation to delete a workforce.

If you want to create a new workforce in an Amazon Web Services Region where a workforce already exists, use this operation to delete the existing workforce and then use CreateWorkforce  to create a new workforce.

If a private workforce contains one or more work teams, you must use the DeleteWorkteam  operation to delete all work teams before you delete the workforce. If you try to delete a workforce that contains one or more work teams, you will receive a ResourceInUse error.

DeleteWorkteamCommand

Deletes an existing work team. This operation can't be undone.

DeregisterDevicesCommand

Deregisters the specified devices. After you deregister a device, you will need to re-register it.

DescribeActionCommand

Describes an action.

DescribeAlgorithmCommand

Returns a description of the specified algorithm that is in your account.

DescribeAppCommand

Describes the app.

DescribeAppImageConfigCommand

Describes an AppImageConfig.

DescribeArtifactCommand

Describes an artifact.

DescribeAutoMLJobCommand

Returns information about an AutoML job created by calling CreateAutoMLJob .

AutoML jobs created by calling CreateAutoMLJobV2  cannot be described by DescribeAutoMLJob.

DescribeAutoMLJobV2Command

Returns information about an AutoML job created by calling CreateAutoMLJobV2  or CreateAutoMLJob .

DescribeClusterCommand

Retrieves information about a SageMaker HyperPod cluster.

DescribeClusterNodeCommand

Retrieves information about a node (also referred to as an instance) of a SageMaker HyperPod cluster.

DescribeClusterSchedulerConfigCommand

Description of the cluster policy. This policy is used for task prioritization and fair-share allocation. This helps prioritize critical workloads and distributes idle compute across entities.

DescribeCodeRepositoryCommand

Gets details about the specified Git repository.

DescribeCompilationJobCommand

Returns information about a model compilation job.

To create a model compilation job, use CreateCompilationJob . To get information about multiple model compilation jobs, use ListCompilationJobs .

DescribeComputeQuotaCommand

Description of the compute allocation definition.

DescribeContextCommand

Describes a context.

DescribeDataQualityJobDefinitionCommand

Gets the details of a data quality monitoring job definition.

DescribeDeviceCommand

Describes the device.

DescribeDeviceFleetCommand

A description of the fleet the device belongs to.

DescribeDomainCommand

The description of the domain.

DescribeEdgeDeploymentPlanCommand

Describes an edge deployment plan with deployment status per stage.

DescribeEdgePackagingJobCommand

A description of edge packaging jobs.

DescribeEndpointCommand

Returns the description of an endpoint.

DescribeEndpointConfigCommand

Returns the description of an endpoint configuration created using the CreateEndpointConfig API.

DescribeExperimentCommand

Provides a list of an experiment's properties.

DescribeFeatureGroupCommand

Use this operation to describe a FeatureGroup. The response includes information on the creation time, FeatureGroup name, the unique identifier for each FeatureGroup, and more.

DescribeFeatureMetadataCommand

Shows the metadata for a feature within a feature group.

DescribeFlowDefinitionCommand

Returns information about the specified flow definition.

DescribeHubCommand

Describes a hub.

DescribeHubContentCommand

Describes the content of a hub.

DescribeHumanTaskUiCommand

Returns information about the requested human task user interface (worker task template).

DescribeHyperParameterTuningJobCommand

Returns a description of a hyperparameter tuning job, depending on the fields selected. These fields can include the name, Amazon Resource Name (ARN), job status of your tuning job and more.

DescribeImageCommand

Describes a SageMaker AI image.

DescribeImageVersionCommand

Describes a version of a SageMaker AI image.

DescribeInferenceComponentCommand

Returns information about an inference component.

DescribeInferenceExperimentCommand

Returns details about an inference experiment.

DescribeInferenceRecommendationsJobCommand

Provides the results of the Inference Recommender job. One or more recommendation jobs are returned.

DescribeLabelingJobCommand

Gets information about a labeling job.

DescribeLineageGroupCommand

Provides a list of properties for the requested lineage group. For more information, see Cross-Account Lineage Tracking   in the Amazon SageMaker Developer Guide.

DescribeMlflowTrackingServerCommand

Returns information about an MLflow Tracking Server.

DescribeModelBiasJobDefinitionCommand

Returns a description of a model bias job definition.

DescribeModelCardCommand

Describes the content, creation time, and security configuration of an Amazon SageMaker Model Card.

DescribeModelCardExportJobCommand

Describes an Amazon SageMaker Model Card export job.

DescribeModelCommand

Describes a model that you created using the CreateModel API.

DescribeModelExplainabilityJobDefinitionCommand

Returns a description of a model explainability job definition.

DescribeModelPackageCommand

Returns a description of the specified model package, which is used to create SageMaker models or list them on Amazon Web Services Marketplace.

If you provided a KMS Key ID when you created your model package, you will see the KMS Decrypt  API call in your CloudTrail logs when you use this API.

To create models in SageMaker, buyers can subscribe to model packages listed on Amazon Web Services Marketplace.

DescribeModelPackageGroupCommand

Gets a description for the specified model group.

DescribeModelQualityJobDefinitionCommand

Returns a description of a model quality job definition.

DescribeMonitoringScheduleCommand

Describes the schedule for a monitoring job.

DescribeNotebookInstanceCommand

Returns information about a notebook instance.

DescribeNotebookInstanceLifecycleConfigCommand

Returns a description of a notebook instance lifecycle configuration.

For information about notebook instance lifecycle configurations, see Step 2.1: (Optional) Customize a Notebook Instance.

DescribeOptimizationJobCommand

Provides the properties of the specified optimization job.

DescribePartnerAppCommand

Gets information about a SageMaker Partner AI App.

DescribePipelineCommand

Describes the details of a pipeline.

DescribePipelineDefinitionForExecutionCommand

Describes the details of an execution's pipeline definition.

DescribePipelineExecutionCommand

Describes the details of a pipeline execution.

DescribeProcessingJobCommand

Returns a description of a processing job.

DescribeProjectCommand

Describes the details of a project.

DescribeSpaceCommand

Describes the space.

DescribeStudioLifecycleConfigCommand

Describes the Amazon SageMaker AI Studio Lifecycle Configuration.

DescribeSubscribedWorkteamCommand

Gets information about a work team provided by a vendor. It returns details about the subscription with a vendor in the Amazon Web Services Marketplace.

DescribeTrainingJobCommand

Returns information about a training job.

Some of the attributes below only appear if the training job successfully starts. If the training job fails, TrainingJobStatus is Failed and, depending on the FailureReason, attributes like TrainingStartTime, TrainingTimeInSeconds, TrainingEndTime, and BillableTimeInSeconds may not be present in the response.
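
A minimal sketch of inspecting a training job with the JavaScript SDK v3; the job name is a placeholder.

import { SageMakerClient, DescribeTrainingJobCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const job = await client.send(new DescribeTrainingJobCommand({ TrainingJobName: "example-training-job" }));
if (job.TrainingJobStatus === "Failed") {
  // Timing fields such as TrainingStartTime or TrainingEndTime may be absent for failed jobs.
  console.error("Training failed:", job.FailureReason);
} else {
  console.log(job.TrainingJobStatus, job.TrainingStartTime, job.TrainingEndTime);
}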

DescribeTrainingPlanCommand

Retrieves detailed information about a specific training plan.

DescribeTransformJobCommand

Returns information about a transform job.

DescribeTrialCommand

Provides a list of a trial's properties.

DescribeTrialComponentCommand

Provides a list of a trial component's properties.

DescribeUserProfileCommand

Describes a user profile. For more information, see CreateUserProfile.

DescribeWorkforceCommand

Lists private workforce information, including workforce name, Amazon Resource Name (ARN), and, if applicable, allowed IP address ranges (CIDRs ). Allowable IP address ranges are the IP addresses that workers can use to access tasks.

This operation applies only to private workforces.

DescribeWorkteamCommand

Gets information about a specific work team. You can see information such as the creation date, the last updated date, membership information, and the work team's Amazon Resource Name (ARN).

DisableSagemakerServicecatalogPortfolioCommand

Disables using Service Catalog in SageMaker. Service Catalog is used to create SageMaker projects.

DisassociateTrialComponentCommand

Disassociates a trial component from a trial. This doesn't affect other trials that the component is associated with. Before you can delete a component, you must disassociate the component from all trials it is associated with. To associate a trial component with a trial, call the AssociateTrialComponent API.

To get a list of the trials a component is associated with, use the Search  API. Specify ExperimentTrialComponent for the Resource parameter. The list appears in the response under Results.TrialComponent.Parents.
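
For example, the following sketch (JavaScript SDK v3) looks up the parent trials of a single component; the component name and the equality filter are illustrative assumptions.

import { SageMakerClient, SearchCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const result = await client.send(new SearchCommand({
  Resource: "ExperimentTrialComponent",
  SearchExpression: {
    Filters: [{ Name: "TrialComponentName", Operator: "Equals", Value: "example-trial-component" }],
  },
}));
// The parent trials appear under Results[].TrialComponent.Parents.
const parents = result.Results?.[0]?.TrialComponent?.Parents ?? [];
console.log(parents.map((p) => p.TrialName));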

EnableSagemakerServicecatalogPortfolioCommand

Enables using Service Catalog in SageMaker. Service Catalog is used to create SageMaker projects.

GetDeviceFleetReportCommand

Describes a fleet.

GetLineageGroupPolicyCommand

Returns the resource policy for the lineage group.

GetModelPackageGroupPolicyCommand

Gets a resource policy that manages access for a model group. For information about resource policies, see Identity-based policies and resource-based policies in the Amazon Web Services Identity and Access Management User Guide.

GetSagemakerServicecatalogPortfolioStatusCommand

Gets the status of Service Catalog in SageMaker. Service Catalog is used to create SageMaker projects.

GetScalingConfigurationRecommendationCommand

Starts an Amazon SageMaker Inference Recommender autoscaling recommendation job. Returns recommendations for autoscaling policies that you can apply to your SageMaker endpoint.

GetSearchSuggestionsCommand

An auto-complete API for the search functionality in the SageMaker console. It returns suggestions of possible matches for the property name to use in Search queries. Provides suggestions for HyperParameters, Tags, and Metrics.
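
A small sketch of requesting suggestions for property names that begin with "Train" on the TrainingJob resource; the hint string is arbitrary.

import { SageMakerClient, GetSearchSuggestionsCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const res = await client.send(new GetSearchSuggestionsCommand({
  Resource: "TrainingJob",
  SuggestionQuery: { PropertyNameQuery: { PropertyNameHint: "Train" } },
}));
console.log(res.PropertyNameSuggestions?.map((s) => s.PropertyName));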

ImportHubContentCommand

Imports hub content.

ListActionsCommand

Lists the actions in your account and their properties.

ListAlgorithmsCommand

Lists the machine learning algorithms that have been created.

ListAliasesCommand

Lists the aliases of a specified image or image version.

ListAppImageConfigsCommand

Lists the AppImageConfigs in your account and their properties. The list can be filtered by creation time or modified time, and whether the AppImageConfig name contains a specified string.

ListAppsCommand

Lists apps.

ListArtifactsCommand

Lists the artifacts in your account and their properties.

ListAssociationsCommand

Lists the associations in your account and their properties.

ListAutoMLJobsCommand

Requests a list of AutoML jobs.

ListCandidatesForAutoMLJobCommand

Lists the candidates created for the job.

ListClusterNodesCommand

Retrieves the list of instances (also referred to as nodes) in a SageMaker HyperPod cluster.

ListClusterSchedulerConfigsCommand

Lists the cluster policy configurations.

ListClustersCommand

Retrieves the list of SageMaker HyperPod clusters.

ListCodeRepositoriesCommand

Gets a list of the Git repositories in your account.

ListCompilationJobsCommand

Lists model compilation jobs that satisfy various filters.

To create a model compilation job, use CreateCompilationJob . To get information about a particular model compilation job you have created, use DescribeCompilationJob .

ListComputeQuotasCommand

Lists the resource allocation definitions.

ListContextsCommand

Lists the contexts in your account and their properties.

ListDataQualityJobDefinitionsCommand

Lists the data quality job definitions in your account.

ListDeviceFleetsCommand

Returns a list of device fleets.

ListDevicesCommand

Returns a list of devices.

ListDomainsCommand

Lists the domains.

ListEdgeDeploymentPlansCommand

Lists all edge deployment plans.

ListEdgePackagingJobsCommand

Returns a list of edge packaging jobs.

ListEndpointConfigsCommand

Lists endpoint configurations.

ListEndpointsCommand

Lists endpoints.

ListExperimentsCommand

Lists all the experiments in your account. The list can be filtered to show only experiments that were created in a specific time range. The list can be sorted by experiment name or creation time.
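
For example, a sketch that lists experiments created in the last seven days, newest first; the time window is arbitrary.

import { SageMakerClient, ListExperimentsCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const res = await client.send(new ListExperimentsCommand({
  CreatedAfter: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000),
  SortBy: "CreationTime",
  SortOrder: "Descending",
}));
console.log(res.ExperimentSummaries?.map((e) => e.ExperimentName));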

ListFeatureGroupsCommand

Lists FeatureGroups based on the given filter and order.

ListFlowDefinitionsCommand

Returns information about the flow definitions in your account.

ListHubContentVersionsCommand

Lists hub content versions.

ListHubContentsCommand

Lists the contents of a hub.

ListHubsCommand

Lists all existing hubs.

ListHumanTaskUisCommand

Returns information about the human task user interfaces in your account.

ListHyperParameterTuningJobsCommand

Gets a list of HyperParameterTuningJobSummary  objects that describe the hyperparameter tuning jobs launched in your account.

ListImageVersionsCommand

Lists the versions of a specified image and their properties. The list can be filtered by creation time or modified time.

ListImagesCommand

Lists the images in your account and their properties. The list can be filtered by creation time or modified time, and whether the image name contains a specified string.

ListInferenceComponentsCommand

Lists the inference components in your account and their properties.

ListInferenceExperimentsCommand

Returns the list of all inference experiments.

ListInferenceRecommendationsJobStepsCommand

Returns a list of the subtasks for an Inference Recommender job.

The supported subtasks are benchmarks, which evaluate the performance of your model on different instance types.

ListInferenceRecommendationsJobsCommand

Lists recommendation jobs that satisfy various filters.

ListLabelingJobsCommand

Gets a list of labeling jobs.

ListLabelingJobsForWorkteamCommand

Gets a list of labeling jobs assigned to a specified work team.

ListLineageGroupsCommand

Lists the lineage groups shared with your Amazon Web Services account. For more information, see Cross-Account Lineage Tracking in the Amazon SageMaker Developer Guide.

ListMlflowTrackingServersCommand

Lists all MLflow Tracking Servers.

ListModelBiasJobDefinitionsCommand

Lists model bias job definitions that satisfy various filters.

ListModelCardExportJobsCommand

Lists the export jobs for the Amazon SageMaker Model Card.

ListModelCardVersionsCommand

Lists existing versions of an Amazon SageMaker Model Card.

ListModelCardsCommand

Lists existing model cards.

ListModelExplainabilityJobDefinitionsCommand

Lists model explainability job definitions that satisfy various filters.

ListModelMetadataCommand

Lists the domain, framework, task, and model name of standard machine learning models found in common model zoos.

ListModelPackageGroupsCommand

Gets a list of the model groups in your Amazon Web Services account.

ListModelPackagesCommand

Lists the model packages that have been created.

ListModelQualityJobDefinitionsCommand

Gets a list of model quality monitoring job definitions in your account.

ListModelsCommand

Lists models created with the CreateModel API.

ListMonitoringAlertHistoryCommand

Gets a list of past alerts in a model monitoring schedule.

ListMonitoringAlertsCommand

Gets the alerts for a single monitoring schedule.

ListMonitoringExecutionsCommand

Returns a list of all monitoring job executions.

ListMonitoringSchedulesCommand

Returns a list of all monitoring schedules.

ListNotebookInstanceLifecycleConfigsCommand

Lists notebook instance lifecycle configurations created with the CreateNotebookInstanceLifecycleConfig API.

ListNotebookInstancesCommand

Returns a list of the SageMaker AI notebook instances in the requester's account in an Amazon Web Services Region.

ListOptimizationJobsCommand

Lists the optimization jobs in your account and their properties.

ListPartnerAppsCommand

Lists all of the SageMaker Partner AI Apps in an account.

ListPipelineExecutionStepsCommand

Gets a list of PipelineExecutionStep objects.

ListPipelineExecutionsCommand

Gets a list of the pipeline executions.

ListPipelineParametersForExecutionCommand

Gets a list of parameters for a pipeline execution.

ListPipelinesCommand

Gets a list of pipelines.

ListProcessingJobsCommand

Lists processing jobs that satisfy various filters.

ListProjectsCommand

Gets a list of the projects in an Amazon Web Services account.

ListResourceCatalogsCommand

Lists Amazon SageMaker Catalogs based on given filters and orders. The maximum number of ResourceCatalogs viewable is 1000.

ListSpacesCommand

Lists spaces.

ListStageDevicesCommand

Lists devices allocated to the stage, containing detailed device information and deployment status.

ListStudioLifecycleConfigsCommand

Lists the Amazon SageMaker AI Studio Lifecycle Configurations in your Amazon Web Services Account.

ListSubscribedWorkteamsCommand

Gets a list of the work teams that you are subscribed to in the Amazon Web Services Marketplace. The list may be empty if no work team satisfies the filter specified in the NameContains parameter.

ListTagsCommand

Returns the tags for the specified SageMaker resource.

ListTrainingJobsCommand

Lists training jobs.

When StatusEquals and MaxResults are set at the same time, up to MaxResults training jobs are first retrieved ignoring the StatusEquals parameter, and then those jobs are filtered by the StatusEquals parameter and returned in the response.

For example, if ListTrainingJobs is invoked with the following parameters:

{ ... MaxResults: 100, StatusEquals: InProgress ... }

First, 100 training jobs with any status, including those other than InProgress, are selected (sorted according to the creation time, from the most current to the oldest). Next, those with a status of InProgress are returned.

You can quickly test the API using the following Amazon Web Services CLI code.

aws sagemaker list-training-jobs --max-results 100 --status-equals InProgress
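
The equivalent call with the JavaScript SDK v3 looks roughly like this; the same MaxResults/StatusEquals interaction applies, so fewer than 100 jobs may be returned.

import { SageMakerClient, ListTrainingJobsCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const res = await client.send(new ListTrainingJobsCommand({ MaxResults: 100, StatusEquals: "InProgress" }));
console.log(res.TrainingJobSummaries?.map((j) => j.TrainingJobName));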

ListTrainingJobsForHyperParameterTuningJobCommand

Gets a list of TrainingJobSummary  objects that describe the training jobs that a hyperparameter tuning job launched.

ListTrainingPlansCommand

Retrieves a list of training plans for the current account.

ListTransformJobsCommand

Lists transform jobs.

ListTrialComponentsCommand

Lists the trial components in your account. You can sort the list by trial component name or creation time. You can filter the list to show only components that were created in a specific time range. You can also filter on one of the following (a sketch follows this list):

  • ExperimentName

  • SourceArn

  • TrialName
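
A minimal sketch that filters by experiment name (a placeholder); ExperimentName, SourceArn, and TrialName are mutually exclusive filters.

import { SageMakerClient, ListTrialComponentsCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const res = await client.send(new ListTrialComponentsCommand({
  ExperimentName: "example-experiment",
  SortBy: "CreationTime",
  SortOrder: "Descending",
}));
console.log(res.TrialComponentSummaries?.map((c) => c.TrialComponentName));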

ListTrialsCommand

Lists the trials in your account. Specify an experiment name to limit the list to the trials that are part of that experiment. Specify a trial component name to limit the list to the trials that are associated with that trial component. The list can be filtered to show only trials that were created in a specific time range. The list can be sorted by trial name or creation time.

ListUserProfilesCommand

Lists user profiles.

ListWorkforcesCommand

Use this operation to list all private and vendor workforces in an Amazon Web Services Region. Note that you can only have one private workforce per Amazon Web Services Region.

ListWorkteamsCommand

Gets a list of private work teams that you have defined in a region. The list may be empty if no work team satisfies the filter specified in the NameContains parameter.

PutModelPackageGroupPolicyCommand

Adds a resource policy to control access to a model group. For information about resource policies, see Identity-based policies and resource-based policies in the Amazon Web Services Identity and Access Management User Guide.

QueryLineageCommand

Use this action to inspect your lineage and discover relationships between entities. For more information, see Querying Lineage Entities  in the Amazon SageMaker Developer Guide.

RegisterDevicesCommand

Registers devices.

RenderUiTemplateCommand

Renders the UI template so that you can preview the worker's experience.

RetryPipelineExecutionCommand

Retries the execution of a pipeline.

SearchCommand

Finds SageMaker resources that match a search query. Matching resources are returned as a list of SearchRecord objects in the response. You can sort the search results by any resource property in ascending or descending order.

You can query against the following value types: numeric, text, Boolean, and timestamp.

The Search API may provide access to otherwise restricted data. See Amazon SageMaker API Permissions: Actions, Permissions, and Resources Reference  for more information.
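
For example, a sketch that searches for recently failed training jobs; the filter name and value assume the TrainingJob property names.

import { SageMakerClient, SearchCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const res = await client.send(new SearchCommand({
  Resource: "TrainingJob",
  SearchExpression: { Filters: [{ Name: "TrainingJobStatus", Operator: "Equals", Value: "Failed" }] },
  SortBy: "CreationTime",
  SortOrder: "Descending",
  MaxResults: 10,
}));
for (const record of res.Results ?? []) {
  console.log(record.TrainingJob?.TrainingJobName, record.TrainingJob?.FailureReason);
}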

SearchTrainingPlanOfferingsCommand

Searches for available training plan offerings based on specified criteria.

  • Users search for available plan offerings based on their requirements (e.g., instance type, count, start time, duration).

  • And then, they create a plan that best matches their needs using the ID of the plan offering they want to use.

For more information about how to reserve GPU capacity for your SageMaker training jobs or SageMaker HyperPod clusters using Amazon SageMaker Training Plan, see CreateTrainingPlan.

SendPipelineExecutionStepFailureCommand

Notifies the pipeline that the execution of a callback step failed, along with a message describing why. When a callback step is run, the pipeline generates a callback token and includes the token in a message sent to Amazon Simple Queue Service (Amazon SQS).

SendPipelineExecutionStepSuccessCommand

Notifies the pipeline that the execution of a callback step succeeded and provides a list of the step's output parameters. When a callback step is run, the pipeline generates a callback token and includes the token in a message sent to Amazon Simple Queue Service (Amazon SQS).
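
A minimal sketch of acknowledging a callback step; the token would normally come from the SQS message, and the output parameter name is illustrative.

import { SageMakerClient, SendPipelineExecutionStepSuccessCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const callbackToken = "example-callback-token"; // normally read from the SQS message body
await client.send(new SendPipelineExecutionStepSuccessCommand({
  CallbackToken: callbackToken,
  OutputParameters: [{ Name: "result_uri", Value: "s3://example-bucket/output/" }],
}));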

StartEdgeDeploymentStageCommand

Starts a stage in an edge deployment plan.

StartInferenceExperimentCommand

Starts an inference experiment.

StartMlflowTrackingServerCommand

Programmatically start an MLflow Tracking Server.

StartMonitoringScheduleCommand

Starts a previously stopped monitoring schedule.

By default, when you successfully create a new schedule, the status of the monitoring schedule is Scheduled.

StartNotebookInstanceCommand

Launches an ML compute instance with the latest version of the libraries and attaches your ML storage volume. After configuring the notebook instance, SageMaker AI sets the notebook instance status to InService. A notebook instance's status must be InService before you can connect to your Jupyter notebook.
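
A sketch that starts an instance and polls until it is InService; the instance name and polling interval are assumptions.

import { SageMakerClient, StartNotebookInstanceCommand, DescribeNotebookInstanceCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const NotebookInstanceName = "example-notebook"; // placeholder
await client.send(new StartNotebookInstanceCommand({ NotebookInstanceName }));
let status = "Pending";
while (status !== "InService" && status !== "Failed") {
  await new Promise((resolve) => setTimeout(resolve, 30_000)); // poll every 30 seconds
  const out = await client.send(new DescribeNotebookInstanceCommand({ NotebookInstanceName }));
  status = out.NotebookInstanceStatus ?? "Pending";
}
console.log("Notebook instance status:", status);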

StartPipelineExecutionCommand

Starts a pipeline execution.

StopAutoMLJobCommand

Forces a running AutoML job to stop.

StopCompilationJobCommand

Stops a model compilation job.

To stop a job, Amazon SageMaker AI sends the algorithm the SIGTERM signal. This gracefully shuts the job down. If the job hasn't stopped, it sends the SIGKILL signal.

When it receives a StopCompilationJob request, Amazon SageMaker AI changes the CompilationJobStatus of the job to Stopping. After Amazon SageMaker stops the job, it sets the CompilationJobStatus to Stopped.

StopEdgeDeploymentStageCommand

Stops a stage in an edge deployment plan.

StopEdgePackagingJobCommand

Request to stop an edge packaging job.

StopHyperParameterTuningJobCommand

Stops a running hyperparameter tuning job and all running training jobs that the tuning job launched.

All model artifacts output from the training jobs are stored in Amazon Simple Storage Service (Amazon S3). All data that the training jobs write to Amazon CloudWatch Logs are still available in CloudWatch. After the tuning job moves to the Stopped state, it releases all reserved resources for the tuning job.

StopInferenceExperimentCommand

Stops an inference experiment.

StopInferenceRecommendationsJobCommand

Stops an Inference Recommender job.

StopLabelingJobCommand

Stops a running labeling job. A job that is stopped cannot be restarted. Any results obtained before the job is stopped are placed in the Amazon S3 output bucket.

StopMlflowTrackingServerCommand

Programmatically stop an MLflow Tracking Server.

StopMonitoringScheduleCommand

Stops a previously started monitoring schedule.

StopNotebookInstanceCommand

Terminates the ML compute instance. Before terminating the instance, SageMaker AI disconnects the ML storage volume from it. SageMaker AI preserves the ML storage volume. SageMaker AI stops charging you for the ML compute instance when you call StopNotebookInstance.

To access data on the ML storage volume for a notebook instance that has been terminated, call the StartNotebookInstance API. StartNotebookInstance launches another ML compute instance, configures it, and attaches the preserved ML storage volume so you can continue your work.

StopOptimizationJobCommand

Ends a running inference optimization job.

StopPipelineExecutionCommand

Stops a pipeline execution.

Callback Step

A pipeline execution won't stop while a callback step is running. When you call StopPipelineExecution on a pipeline execution with a running callback step, SageMaker Pipelines sends an additional Amazon SQS message to the specified SQS queue. The body of the SQS message contains a "Status" field which is set to "Stopping".

You should add logic to your Amazon SQS message consumer to take any needed action (for example, resource cleanup) upon receipt of the message followed by a call to SendPipelineExecutionStepSuccess or SendPipelineExecutionStepFailure.

Only when SageMaker Pipelines receives one of these calls will it stop the pipeline execution.
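
A rough sketch of such a consumer using the SQS and SageMaker clients; the queue URL and the field names in the message body (Status, Token) are assumptions for illustration.

import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs";
import { SageMakerClient, SendPipelineExecutionStepFailureCommand } from "@aws-sdk/client-sagemaker";
const sqs = new SQSClient({});
const sagemaker = new SageMakerClient({});
const QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/example-callback-queue"; // placeholder
const { Messages } = await sqs.send(new ReceiveMessageCommand({ QueueUrl, MaxNumberOfMessages: 1, WaitTimeSeconds: 20 }));
for (const message of Messages ?? []) {
  const body = JSON.parse(message.Body ?? "{}");
  if (body.Status === "Stopping") {
    // Clean up resources owned by the callback step here, then acknowledge the stop.
    await sagemaker.send(new SendPipelineExecutionStepFailureCommand({
      CallbackToken: body.Token, // field name assumed for illustration
      FailureReason: "Stop requested for the pipeline execution",
    }));
  }
  await sqs.send(new DeleteMessageCommand({ QueueUrl, ReceiptHandle: message.ReceiptHandle }));
}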

Lambda Step

A pipeline execution can't be stopped while a lambda step is running because the Lambda function invoked by the lambda step can't be stopped. If you attempt to stop the execution while the Lambda function is running, the pipeline waits for the Lambda function to finish or until the timeout is hit, whichever occurs first, and then stops. If the Lambda function finishes, the pipeline execution status is Stopped. If the timeout is hit, the pipeline execution status is Failed.

StopProcessingJobCommand

Stops a processing job.

StopTrainingJobCommand

Stops a training job. To stop a job, SageMaker sends the algorithm the SIGTERM signal, which delays job termination for 120 seconds. Algorithms might use this 120-second window to save the model artifacts, so the results of the training are not lost.

When it receives a StopTrainingJob request, SageMaker changes the status of the job to Stopping. After SageMaker stops the job, it sets the status to Stopped.

StopTransformJobCommand

Stops a batch transform job.

When Amazon SageMaker receives a StopTransformJob request, the status of the job changes to Stopping. After Amazon SageMaker stops the job, the status is set to Stopped. When you stop a batch transform job before it is completed, Amazon SageMaker doesn't store the job's output in Amazon S3.

UpdateActionCommand

Updates an action.

UpdateAppImageConfigCommand

Updates the properties of an AppImageConfig.

UpdateArtifactCommand

Updates an artifact.

UpdateClusterCommand

Updates a SageMaker HyperPod cluster.

UpdateClusterSchedulerConfigCommand

Updates the cluster policy configuration.

UpdateClusterSoftwareCommand

Updates the platform software of a SageMaker HyperPod cluster for security patching. To learn how to use this API, see Update the SageMaker HyperPod platform software of a cluster .

The UpdateClusterSoftware API call may impact your SageMaker HyperPod cluster uptime and availability. Plan accordingly to mitigate potential disruptions to your workloads.

UpdateCodeRepositoryCommand

Updates the specified Git repository with the specified values.

UpdateComputeQuotaCommand

Updates the compute allocation definition.

UpdateContextCommand

Updates a context.

UpdateDeviceFleetCommand

Updates a fleet of devices.

UpdateDevicesCommand

Updates one or more devices in a fleet.

UpdateDomainCommand

Updates the default settings for new user profiles in the domain.

UpdateEndpointCommand

Deploys the EndpointConfig specified in the request to a new fleet of instances. SageMaker shifts endpoint traffic to the new instances with the updated endpoint configuration and then deletes the old instances using the previous EndpointConfig (there is no availability loss). For more information about how to control the update and traffic shifting process, see Update models in production .

When SageMaker receives the request, it sets the endpoint status to Updating. After updating the endpoint, it sets the status to InService. To check the status of an endpoint, use the DescribeEndpoint  API.

You must not delete an EndpointConfig in use by an endpoint that is live or while the UpdateEndpoint or CreateEndpoint operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig.

If you delete the EndpointConfig of an endpoint that is active or being created or updated, you may lose visibility into the instance type the endpoint is using. The endpoint must be deleted in order to stop incurring charges.
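
A minimal sketch of rolling an endpoint onto a new configuration and waiting for it to return to InService; the names and the polling interval are placeholders.

import { SageMakerClient, UpdateEndpointCommand, DescribeEndpointCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
const EndpointName = "example-endpoint"; // placeholder
await client.send(new UpdateEndpointCommand({ EndpointName, EndpointConfigName: "example-endpoint-config-v2" }));
let status = "Updating";
while (status === "Updating") {
  await new Promise((resolve) => setTimeout(resolve, 30_000));
  const out = await client.send(new DescribeEndpointCommand({ EndpointName }));
  status = out.EndpointStatus ?? "Updating";
}
console.log("Endpoint status:", status); // InService on success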

UpdateEndpointWeightsAndCapacitiesCommand

Updates variant weight of one or more variants associated with an existing endpoint, or capacity of one variant associated with an existing endpoint. When it receives the request, SageMaker sets the endpoint status to Updating. After updating the endpoint, it sets the status to InService. To check the status of an endpoint, use the DescribeEndpoint  API.

UpdateExperimentCommand

Adds, updates, or removes the description of an experiment. Updates the display name of an experiment.

UpdateFeatureGroupCommand

Updates the feature group by either adding features or updating the online store configuration. Use one of the following request parameters at a time while using the UpdateFeatureGroup API.

You can add features for your feature group using the FeatureAdditions request parameter. Features cannot be removed from a feature group.

You can update the online store configuration by using the OnlineStoreConfig request parameter. If a TtlDuration is specified, the default TtlDuration applies for all records added to the feature group after the feature group is updated. If a record level TtlDuration exists from using the PutRecord API, the record level TtlDuration applies to that record instead of the default TtlDuration. To remove the default TtlDuration from an existing feature group, use the UpdateFeatureGroup API and set the TtlDuration Unit and Value to null.
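
A small sketch that adds a single feature; the feature group and feature names are placeholders, and FeatureAdditions should not be combined with OnlineStoreConfig in the same request.

import { SageMakerClient, UpdateFeatureGroupCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
await client.send(new UpdateFeatureGroupCommand({
  FeatureGroupName: "example-feature-group",
  FeatureAdditions: [{ FeatureName: "customer_segment", FeatureType: "String" }],
}));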

UpdateFeatureMetadataCommand

Updates the description and parameters of the feature group.

UpdateHubCommand

Updates a hub.

UpdateHubContentCommand

Updates SageMaker hub content (either a Model or Notebook resource).

You can update the metadata that describes the resource. In addition to the required request fields, specify at least one of the following fields to update:

  • HubContentDescription

  • HubContentDisplayName

  • HubContentMarkdown

  • HubContentSearchKeywords

  • SupportStatus

If you want to update a ModelReference resource in your hub, use the UpdateHubContentReference API instead.

UpdateHubContentReferenceCommand

Updates the contents of a SageMaker hub for a ModelReference resource. A ModelReference allows you to access public SageMaker JumpStart models from within your private hub.

When using this API, you can update the MinVersion field for additional flexibility in the model version. You shouldn't update any additional fields when using this API, because the metadata in your private hub should match the public JumpStart model's metadata.

If you want to update a Model or Notebook resource in your hub, use the UpdateHubContent API instead.

For more information about adding model references to your hub, see Add models to a private hub .

UpdateImageCommand

Updates the properties of a SageMaker AI image. To change the image's tags, use the AddTags  and DeleteTags  APIs.

UpdateImageVersionCommand

Updates the properties of a SageMaker AI image version.

UpdateInferenceComponentCommand

Updates an inference component.

UpdateInferenceComponentRuntimeConfigCommand

Updates the runtime settings of a model that is deployed with an inference component.

UpdateInferenceExperimentCommand

Updates an inference experiment that you created. The status of the inference experiment has to be either Created or Running. For more information on the status of an inference experiment, see DescribeInferenceExperiment.

UpdateMlflowTrackingServerCommand

Updates properties of an existing MLflow Tracking Server.

UpdateModelCardCommand

Updates an Amazon SageMaker Model Card.

You cannot update both model card content and model card status in a single call.

UpdateModelPackageCommand

Updates a versioned model.

UpdateMonitoringAlertCommand

Updates the parameters of a model monitor alert.

UpdateMonitoringScheduleCommand

Updates a previously created schedule.

UpdateNotebookInstanceCommand

Updates a notebook instance. NotebookInstance updates include upgrading or downgrading the ML compute instance used for your notebook instance to accommodate changes in your workload requirements.

UpdateNotebookInstanceLifecycleConfigCommand

Updates a notebook instance lifecycle configuration created with the CreateNotebookInstanceLifecycleConfig  API.

UpdatePartnerAppCommand

Updates all of the SageMaker Partner AI Apps in an account.

UpdatePipelineCommand

Updates a pipeline.

UpdatePipelineExecutionCommand

Updates a pipeline execution.

UpdateProjectCommand

Updates a machine learning (ML) project that is created from a template that sets up an ML pipeline from training to deploying an approved model.

You must not update a project that is in use. If you update the ServiceCatalogProvisioningUpdateDetails of a project that is active or being created or updated, you may lose resources already created by the project.

UpdateSpaceCommand

Updates the settings of a space.

You can't edit the app type of a space in the SpaceSettings.

UpdateTrainingJobCommand

Updates a model training job to request a new Debugger profiling configuration or to change warm pool retention length.

UpdateTrialCommand

Updates the display name of a trial.

UpdateTrialComponentCommand

Updates one or more properties of a trial component.

UpdateUserProfileCommand

Updates a user profile.

UpdateWorkforceCommand

Use this operation to update your workforce. You can use this operation to require that workers use specific IP addresses to work on tasks and to update your OpenID Connect (OIDC) Identity Provider (IdP) workforce configuration.

The worker portal is supported in both VPC and the public internet.

Use SourceIpConfig to restrict worker access to tasks to a specific range of IP addresses. You specify allowed IP addresses by creating a list of up to ten CIDRs . By default, a workforce isn't restricted to specific IP addresses. If you specify a range of IP addresses, workers who attempt to access tasks using any IP address outside the specified range are denied and get a Not Found error message on the worker portal.

To restrict access for all workers on the public internet, add the SourceIpConfig CIDR value as "10.0.0.0/16".

Amazon SageMaker does not support source IP restriction for worker portals in VPC.

Use OidcConfig to update the configuration of a workforce created using your own OIDC IdP.

You can only update your OIDC IdP configuration when there are no work teams associated with your workforce. You can delete work teams using the DeleteWorkteam  operation.

After restricting access to a range of IP addresses or updating your OIDC IdP configuration with this operation, you can view details about your updated workforce using the DescribeWorkforce operation.

This operation only applies to private workforces.
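
For example, a sketch that restricts the worker portal to a single CIDR range; the workforce name and the range itself are placeholders.

import { SageMakerClient, UpdateWorkforceCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({});
await client.send(new UpdateWorkforceCommand({
  WorkforceName: "example-workforce",
  SourceIpConfig: { Cidrs: ["203.0.113.0/24"] },
}));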

UpdateWorkteamCommand

Updates an existing work team with new member definitions or description.

SageMakerClient Configuration

defaultsMode (Optional)
Type: DefaultsMode | Provider<DefaultsMode>
The @smithy/smithy-client#DefaultsMode that will be used to determine how certain default configuration options are resolved in the SDK.

disableHostPrefix (Optional)
Type: boolean
Disable dynamically changing the endpoint of the client based on the hostPrefix trait of an operation.

extensions (Optional)
Type: RuntimeExtension[]
Optional extensions.

logger (Optional)
Type: Logger
Optional logger for logging debug/info/warn/error.

maxAttempts (Optional)
Type: number | Provider<number>
Value for how many times a request will be made at most in case of retry.

profile (Optional)
Type: string
Setting a client profile is similar to setting a value for the AWS_PROFILE environment variable. Setting a profile on a client in code only affects the single client instance, unlike AWS_PROFILE. When set, and only for environments where an AWS configuration file exists, fields configurable by this file will be retrieved from the specified profile within that file. Conflicting code configuration and environment variables will still have higher priority. For client credential resolution that involves checking the AWS configuration file, the client's profile (this value) will be used unless a different profile is set in the credential provider options.

region (Optional)
Type: string | Provider<string>
The AWS region to which this client will send requests.

requestHandler (Optional)
Type: __HttpHandlerUserInput
The HTTP handler to use or its constructor options. Fetch in browser and Https in Nodejs.

retryMode (Optional)
Type: string | Provider<string>
Specifies which retry algorithm to use.

useDualstackEndpoint (Optional)
Type: boolean | Provider<boolean>
Enables IPv6/IPv4 dualstack endpoint.

useFipsEndpoint (Optional)
Type: boolean | Provider<boolean>
Enables FIPS compatible endpoints.
Additional config fields are described in the full configuration type: SageMakerClientConfig
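
For reference, a sketch of constructing a client with several of these options; every field shown is optional and the values are arbitrary.

import { SageMakerClient, ListDomainsCommand } from "@aws-sdk/client-sagemaker";
const client = new SageMakerClient({
  region: "us-east-1",
  maxAttempts: 5,
  retryMode: "adaptive",
  useFipsEndpoint: false,
  logger: console, // console satisfies the Logger interface
});
const { Domains } = await client.send(new ListDomainsCommand({}));
console.log(Domains?.map((d) => d.DomainName));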