Package software.amazon.awscdk.services.stepfunctions.tasks
Tasks for AWS Step Functions
AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly.
A Task state represents a single unit of work performed by a state machine. All work in your state machine is performed by tasks. This module contains a collection of classes that allow you to call various AWS services from your Step Functions state machine.
Be sure to familiarize yourself with the aws-stepfunctions module documentation first.
This module is part of the AWS Cloud Development Kit project.
Paths
Learn more about input and output processing in Step Functions in the AWS Step Functions documentation.
Evaluate Expression
Use EvaluateExpression to perform simple operations that reference state paths. The
expression referenced in the task is evaluated in a Lambda function
(eval()), which saves you from writing Lambda code for simple operations.
Example: convert a wait time from milliseconds to seconds, concatenate it into a message, and wait:
EvaluateExpression convertToSeconds = EvaluateExpression.Builder.create(this, "Convert to seconds")
.expression("$.waitMilliseconds / 1000")
.resultPath("$.waitSeconds")
.build();
EvaluateExpression createMessage = EvaluateExpression.Builder.create(this, "Create message")
// Note: this is a string inside a string.
.expression("`Now waiting ${$.waitSeconds} seconds...`")
.runtime(Runtime.NODEJS_LATEST)
.resultPath("$.message")
.build();
SnsPublish publishMessage = SnsPublish.Builder.create(this, "Publish message")
.topic(new Topic(this, "cool-topic"))
.message(TaskInput.fromJsonPathAt("$.message"))
.resultPath("$.sns")
.build();
Wait wait = Wait.Builder.create(this, "Wait")
.time(WaitTime.secondsPath("$.waitSeconds"))
.build();
StateMachine.Builder.create(this, "StateMachine")
.definition(convertToSeconds.next(createMessage).next(publishMessage).next(wait))
.build();
EvaluateExpression supports a runtime prop to specify the Lambda
runtime used to evaluate the expression. Currently, only runtimes
of the Node.js family are supported.
API Gateway
Step Functions supports API Gateway through the service integration pattern.
HTTP APIs are designed for low-latency, cost-effective integrations with AWS services, including AWS Lambda, and HTTP endpoints. HTTP APIs support OIDC and OAuth 2.0 authorization, and come with built-in support for CORS and automatic deployments. Previous-generation REST APIs currently offer more features. More details can be found in the Amazon API Gateway documentation.
Call REST API Endpoint
The CallApiGatewayRestApiEndpoint calls the REST API endpoint.
import software.amazon.awscdk.services.apigateway.*;
RestApi restApi = new RestApi(this, "MyRestApi");
CallApiGatewayRestApiEndpoint invokeTask = CallApiGatewayRestApiEndpoint.Builder.create(this, "Call REST API")
.api(restApi)
.stageName("prod")
.method(HttpMethod.GET)
.build();
By default, the API endpoint URI will be constructed using the AWS region of
the stack in which the provided api is created.
To construct the endpoint with a different region, use the region parameter:
import software.amazon.awscdk.services.apigateway.*;
RestApi restApi = new RestApi(this, "MyRestApi");
CallApiGatewayRestApiEndpoint invokeTask = CallApiGatewayRestApiEndpoint.Builder.create(this, "Call REST API")
.api(restApi)
.stageName("prod")
.method(HttpMethod.GET)
.region("us-west-2")
.build();
Be aware that header values must be arrays. When passing the Task Token
in the headers field for the WAIT_FOR_TASK_TOKEN integration pattern, use
JsonPath.array() to wrap the token in an array:
import software.amazon.awscdk.services.apigateway.*;
RestApi api;
CallApiGatewayRestApiEndpoint.Builder.create(this, "Endpoint")
.api(api)
.stageName("Stage")
.method(HttpMethod.PUT)
.integrationPattern(IntegrationPattern.WAIT_FOR_TASK_TOKEN)
.headers(TaskInput.fromObject(Map.of(
"TaskToken", JsonPath.array(JsonPath.getTaskToken()))))
.build();
Call HTTP API Endpoint
The CallApiGatewayHttpApiEndpoint calls the HTTP API endpoint.
import software.amazon.awscdk.services.apigatewayv2.*;
HttpApi httpApi = new HttpApi(this, "MyHttpApi");
CallApiGatewayHttpApiEndpoint invokeTask = CallApiGatewayHttpApiEndpoint.Builder.create(this, "Call HTTP API")
.apiId(httpApi.getApiId())
.apiStack(Stack.of(httpApi))
.method(HttpMethod.GET)
.build();
AWS SDK
Step Functions supports calling AWS services' API actions through the service integration pattern.
You can use Step Functions' AWS SDK integrations to call any of the over two hundred AWS services directly from your state machine, giving you access to over nine thousand API actions.
Bucket myBucket;
CallAwsService getObject = CallAwsService.Builder.create(this, "GetObject")
.service("s3")
.action("getObject")
.parameters(Map.of(
"Bucket", myBucket.getBucketName(),
"Key", JsonPath.stringAt("$.key")))
.iamResources(List.of(myBucket.arnForObjects("*")))
.build();
Use camelCase for actions and PascalCase for parameter names.
The task automatically adds an IAM statement to the state machine role's policy based on the
service and action called. The resources for this statement must be specified in iamResources.
Use the iamAction prop to manually specify the IAM action name when it
does not match the API service/action name:
CallAwsService listBuckets = CallAwsService.Builder.create(this, "ListBuckets")
.service("s3")
.action("listBuckets")
.iamResources(List.of("*"))
.iamAction("s3:ListAllMyBuckets")
.build();
Use the additionalIamStatements prop to pass additional IAM statements that will be added to the
state machine role's policy. Use it when the call requires more than a single statement:
CallAwsService detectLabels = CallAwsService.Builder.create(this, "DetectLabels")
.service("rekognition")
.action("detectLabels")
.iamResources(List.of("*"))
.additionalIamStatements(List.of(
PolicyStatement.Builder.create()
.actions(List.of("s3:getObject"))
.resources(List.of("arn:aws:s3:::amzn-s3-demo-bucket/*"))
.build()))
.build();
Cross-region AWS API call
You can call an AWS API in a different region from your state machine by using the CallAwsServiceCrossRegion construct. In addition to the properties of the CallAwsService construct, you must set the region property to specify where the API call is made.
Bucket myBucket;
CallAwsServiceCrossRegion getObject = CallAwsServiceCrossRegion.Builder.create(this, "GetObject")
.region("ap-northeast-1")
.service("s3")
.action("getObject")
.parameters(Map.of(
"Bucket", myBucket.getBucketName(),
"Key", JsonPath.stringAt("$.key")))
.iamResources(List.of(myBucket.arnForObjects("*")))
.build();
Other properties such as additionalIamStatements can be used in the same way as the CallAwsService task.
Note that when you use IntegrationPattern.WAIT_FOR_TASK_TOKEN, the task output is nested under the Payload property.
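For instance, a cross-region call that waits for a task token would read the service response from under $.Payload in its result selector. The following is a sketch only; the region, queue URL, and selected response field are placeholder assumptions, not values from this module's documentation:

```java
import software.amazon.awscdk.services.stepfunctions.*;
import software.amazon.awscdk.services.stepfunctions.tasks.*;

CallAwsServiceCrossRegion sendMessage = CallAwsServiceCrossRegion.Builder.create(this, "SendMessage")
.region("eu-west-1")
.service("sqs")
.action("sendMessage")
.integrationPattern(IntegrationPattern.WAIT_FOR_TASK_TOKEN)
.parameters(Map.of(
// The consumer of this message is expected to return the token
// via SendTaskSuccess / SendTaskFailure.
"QueueUrl", "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue",
"MessageBody", JsonPath.getTaskToken()))
.iamResources(List.of("*"))
// With WAIT_FOR_TASK_TOKEN, the response is nested under Payload,
// so reference "$.Payload.…" rather than "$.…".
.resultSelector(Map.of(
"messageId", JsonPath.stringAt("$.Payload.MessageId")))
.build();
```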
Athena
Step Functions supports Athena through the service integration pattern.
StartQueryExecution
The StartQueryExecution API runs the SQL query statement.
AthenaStartQueryExecution startQueryExecutionJob = AthenaStartQueryExecution.Builder.create(this, "Start Athena Query")
.queryString(JsonPath.stringAt("$.queryString"))
.queryExecutionContext(QueryExecutionContext.builder()
.databaseName("mydatabase")
.build())
.resultConfiguration(ResultConfiguration.builder()
.encryptionConfiguration(EncryptionConfiguration.builder()
.encryptionOption(EncryptionOption.S3_MANAGED)
.build())
.outputLocation(Location.builder()
.bucketName("amzn-s3-demo-bucket")
.objectKey("folder")
.build())
.build())
.executionParameters(List.of("param1", "param2"))
.build();
You can reuse the query results by setting the resultReuseConfigurationMaxAge property.
AthenaStartQueryExecution startQueryExecutionJob = AthenaStartQueryExecution.Builder.create(this, "Start Athena Query")
.queryString(JsonPath.stringAt("$.queryString"))
.queryExecutionContext(QueryExecutionContext.builder()
.databaseName("mydatabase")
.build())
.resultConfiguration(ResultConfiguration.builder()
.encryptionConfiguration(EncryptionConfiguration.builder()
.encryptionOption(EncryptionOption.S3_MANAGED)
.build())
.outputLocation(Location.builder()
.bucketName("query-results-bucket")
.objectKey("folder")
.build())
.build())
.executionParameters(List.of("param1", "param2"))
.resultReuseConfigurationMaxAge(Duration.minutes(100))
.build();
GetQueryExecution
The GetQueryExecution API gets information about a single execution of a query.
AthenaGetQueryExecution getQueryExecutionJob = AthenaGetQueryExecution.Builder.create(this, "Get Query Execution")
.queryExecutionId(JsonPath.stringAt("$.QueryExecutionId"))
.build();
GetQueryResults
The GetQueryResults API streams the results of a single query execution specified by QueryExecutionId from Amazon S3.
AthenaGetQueryResults getQueryResultsJob = AthenaGetQueryResults.Builder.create(this, "Get Query Results")
.queryExecutionId(JsonPath.stringAt("$.QueryExecutionId"))
.build();
StopQueryExecution
The StopQueryExecution API stops a query execution.
AthenaStopQueryExecution stopQueryExecutionJob = AthenaStopQueryExecution.Builder.create(this, "Stop Query Execution")
.queryExecutionId(JsonPath.stringAt("$.QueryExecutionId"))
.build();
Batch
Step Functions supports Batch through the service integration pattern.
SubmitJob
The SubmitJob API submits an AWS Batch job from a job definition.
import software.amazon.awscdk.services.batch.*;
EcsJobDefinition batchJobDefinition;
JobQueue batchQueue;
BatchSubmitJob task = BatchSubmitJob.Builder.create(this, "Submit Job")
.jobDefinitionArn(batchJobDefinition.getJobDefinitionArn())
.jobName("MyJob")
.jobQueueArn(batchQueue.getJobQueueArn())
.build();
Bedrock
Step Functions supports Bedrock through the service integration pattern.
InvokeModel
The InvokeModel API invokes the specified Bedrock model to run inference using the input provided. The format of the input body and the response body depend on the model selected.
import software.amazon.awscdk.services.bedrock.*;
FoundationModel model = FoundationModel.fromFoundationModelId(this, "Model", FoundationModelIdentifier.AMAZON_TITAN_TEXT_G1_EXPRESS_V1);
BedrockInvokeModel task = BedrockInvokeModel.Builder.create(this, "Prompt Model")
.model(model)
.body(TaskInput.fromObject(Map.of(
"inputText", "Generate a list of five first names.",
"textGenerationConfig", Map.of(
"maxTokenCount", 100,
"temperature", 1))))
.resultSelector(Map.of(
"names", JsonPath.stringAt("$.Body.results[0].outputText")))
.build();
Using Input Path for S3 URI
You can provide an S3 URI as an input or output path to invoke a model.
To specify the S3 URI as a JSON path to your input or output fields, use the s3InputUri and s3OutputUri props under BedrockInvokeModelProps and set the
feature flag @aws-cdk/aws-stepfunctions-tasks:useNewS3UriParametersForBedrockInvokeModelTask to true.
If this flag is not enabled, the S3Uri is populated from the InputPath and OutputPath fields, which is not recommended.
import software.amazon.awscdk.services.bedrock.*;
FoundationModel model = FoundationModel.fromFoundationModelId(this, "Model", FoundationModelIdentifier.AMAZON_TITAN_TEXT_G1_EXPRESS_V1);
BedrockInvokeModel task = BedrockInvokeModel.Builder.create(this, "Prompt Model")
.model(model)
.input(BedrockInvokeModelInputProps.builder().s3InputUri(JsonPath.stringAt("$.prompt")).build())
.output(BedrockInvokeModelOutputProps.builder().s3OutputUri(JsonPath.stringAt("$.prompt")).build())
.build();
Using Input Path
Currently, the inputPath and outputPath provided in BedrockInvokeModelProps are rendered as the S3Uri field in the state machine's task definition.
To change this behavior, set the feature flag @aws-cdk/aws-stepfunctions-tasks:useNewS3UriParametersForBedrockInvokeModelTask to true.
With the flag enabled, the S3Uri fields are generated from the s3InputUri and s3OutputUri props, and the given inputPath and outputPath are rendered as-is
in the JSON task definition.
With the flag disabled, the S3Uri is populated from the InputPath and OutputPath fields, which is not recommended.
import software.amazon.awscdk.services.bedrock.*;
FoundationModel model = FoundationModel.fromFoundationModelId(this, "Model", FoundationModelIdentifier.AMAZON_TITAN_TEXT_G1_EXPRESS_V1);
BedrockInvokeModel task = BedrockInvokeModel.Builder.create(this, "Prompt Model")
.model(model)
.inputPath(JsonPath.stringAt("$.prompt"))
.outputPath(JsonPath.stringAt("$.prompt"))
.build();
You can apply a guardrail to the invocation by setting guardrail.
import software.amazon.awscdk.services.bedrock.*;
FoundationModel model = FoundationModel.fromFoundationModelId(this, "Model", FoundationModelIdentifier.AMAZON_TITAN_TEXT_G1_EXPRESS_V1);
BedrockInvokeModel task = BedrockInvokeModel.Builder.create(this, "Prompt Model with guardrail")
.model(model)
.body(TaskInput.fromObject(Map.of(
"inputText", "Generate a list of five first names.",
"textGenerationConfig", Map.of(
"maxTokenCount", 100,
"temperature", 1))))
.guardrail(Guardrail.enable("guardrailId", 1))
.resultSelector(Map.of(
"names", JsonPath.stringAt("$.Body.results[0].outputText")))
.build();
CreateModelCustomizationJob
The CreateModelCustomizationJob API creates a fine-tuning job to customize a base model.
import software.amazon.awscdk.services.bedrock.*;
import software.amazon.awscdk.services.kms.*;
IBucket outputBucket;
IBucket trainingBucket;
IBucket validationBucket;
IKey kmsKey;
IVpc vpc;
FoundationModel model = FoundationModel.fromFoundationModelId(this, "Model", FoundationModelIdentifier.AMAZON_TITAN_TEXT_G1_EXPRESS_V1);
BedrockCreateModelCustomizationJob task = BedrockCreateModelCustomizationJob.Builder.create(this, "CreateModelCustomizationJob")
.baseModel(model)
.clientRequestToken("MyToken")
.customizationType(CustomizationType.FINE_TUNING)
.customModelKmsKey(kmsKey)
.customModelName("MyCustomModel") // required
.customModelTags(List.of(CustomModelTag.builder().key("key1").value("value1").build()))
.hyperParameters(Map.of(
"batchSize", "10"))
.jobName("MyCustomizationJob") // required
.jobTags(List.of(CustomModelTag.builder().key("key2").value("value2").build()))
.outputData(OutputBucketConfiguration.builder()
.bucket(outputBucket) // required
.path("output-data/")
.build())
.trainingData(TrainingBucketConfiguration.builder()
.bucket(trainingBucket)
.path("training-data/data.json")
.build()) // required
// If you don't provide validation data, you have to specify `Evaluation percentage` hyperparameter.
.validationData(List.of(ValidationBucketConfiguration.builder()
.bucket(validationBucket)
.path("validation-data/data.json")
.build()))
.vpcConfig(Map.of(
"securityGroups", List.of(SecurityGroup.Builder.create(this, "SecurityGroup").vpc(vpc).build()),
"subnets", vpc.getPrivateSubnets()))
.build();
CodeBuild
Step Functions supports CodeBuild through the service integration pattern.
StartBuild
StartBuild starts a CodeBuild project by project name.
import software.amazon.awscdk.services.codebuild.*;
Project codebuildProject = Project.Builder.create(this, "Project")
.projectName("MyTestProject")
.buildSpec(BuildSpec.fromObject(Map.of(
"version", "0.2",
"phases", Map.of(
"build", Map.of(
"commands", List.of("echo \"Hello, CodeBuild!\""))))))
.build();
CodeBuildStartBuild task = CodeBuildStartBuild.Builder.create(this, "Task")
.project(codebuildProject)
.integrationPattern(IntegrationPattern.RUN_JOB)
.environmentVariablesOverride(Map.of(
"ZONE", BuildEnvironmentVariable.builder()
.type(BuildEnvironmentVariableType.PLAINTEXT)
.value(JsonPath.stringAt("$.envVariables.zone"))
.build()))
.build();
StartBuildBatch
StartBuildBatch starts a batch build for a CodeBuild project by project name. The batch build feature must be enabled on the CodeBuild project.
import software.amazon.awscdk.services.codebuild.*;
Project project = Project.Builder.create(this, "Project")
.projectName("MyTestProject")
.buildSpec(BuildSpec.fromObjectToYaml(Map.of(
"version", 0.2,
"batch", Map.of(
"build-list", List.of(Map.of(
"identifier", "id",
"buildspec", "version: 0.2\nphases:\n build:\n commands:\n - echo \"Hello, from small!\""))))))
.build();
project.enableBatchBuilds();
CodeBuildStartBuildBatch task = CodeBuildStartBuildBatch.Builder.create(this, "buildBatchTask")
.project(project)
.integrationPattern(IntegrationPattern.REQUEST_RESPONSE)
.environmentVariablesOverride(Map.of(
"test", BuildEnvironmentVariable.builder()
.type(BuildEnvironmentVariableType.PLAINTEXT)
.value("testValue")
.build()))
.build();
Note: enableBatchBuilds() will do nothing for imported projects.
If you are using an imported project, you must ensure that the project is already configured for batch builds.
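As a sketch of that case (the imported project name is a placeholder), an imported project can be passed to the task directly, provided batch builds were already enabled on it outside the CDK app:

```java
import software.amazon.awscdk.services.codebuild.*;

// Import an existing project by name. enableBatchBuilds() cannot be called
// on an imported project, so batch builds must already be enabled on it.
IProject importedProject = Project.fromProjectName(this, "ImportedProject", "MyExistingBatchProject");

CodeBuildStartBuildBatch task = CodeBuildStartBuildBatch.Builder.create(this, "ImportedBuildBatchTask")
.project(importedProject)
.integrationPattern(IntegrationPattern.REQUEST_RESPONSE)
.build();
```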
DynamoDB
You can call DynamoDB APIs from a Task state.
Read more about calling DynamoDB APIs in the AWS Step Functions documentation.
GetItem
The GetItem operation returns a set of attributes for the item with the given primary key.
Table myTable;
DynamoGetItem.Builder.create(this, "Get Item")
.key(Map.of("messageId", DynamoAttributeValue.fromString("message-007")))
.table(myTable)
.build();
PutItem
The PutItem operation creates a new item, or replaces an old item with a new item.
Table myTable;
DynamoPutItem.Builder.create(this, "PutItem")
.item(Map.of(
"MessageId", DynamoAttributeValue.fromString("message-007"),
"Text", DynamoAttributeValue.fromString(JsonPath.stringAt("$.bar")),
"TotalCount", DynamoAttributeValue.fromNumber(10)))
.table(myTable)
.build();
DeleteItem
The DeleteItem operation deletes a single item in a table by primary key.
Table myTable;
DynamoDeleteItem.Builder.create(this, "DeleteItem")
.key(Map.of("MessageId", DynamoAttributeValue.fromString("message-007")))
.table(myTable)
.resultPath(JsonPath.DISCARD)
.build();
UpdateItem
The UpdateItem operation edits an existing item's attributes, or adds a new item to the table if it does not already exist.
Table myTable;
DynamoUpdateItem.Builder.create(this, "UpdateItem")
.key(Map.of(
"MessageId", DynamoAttributeValue.fromString("message-007")))
.table(myTable)
.expressionAttributeValues(Map.of(
":val", DynamoAttributeValue.numberFromString(JsonPath.stringAt("$.Item.TotalCount.N")),
":rand", DynamoAttributeValue.fromNumber(20)))
.updateExpression("SET TotalCount = :val + :rand")
.build();
ECS
Step Functions supports ECS/Fargate through the service integration pattern.
RunTask
RunTask starts a new task using the specified task definition.
EC2
The EC2 launch type allows you to run your containerized applications on a cluster of Amazon EC2 instances that you manage.
When a task that uses the EC2 launch type is launched, Amazon ECS must determine where to place the task based on the requirements specified in the task definition, such as CPU and memory. Similarly, when you scale down the task count, Amazon ECS must determine which tasks to terminate. You can apply task placement strategies and constraints to customize how Amazon ECS places and terminates tasks. Learn more about task placement
The latest ACTIVE revision of the passed task definition is used for running the task.
The following example runs a job from a task definition on EC2:
IVpc vpc = Vpc.fromLookup(this, "Vpc", VpcLookupOptions.builder()
.isDefault(true)
.build());
Cluster cluster = Cluster.Builder.create(this, "Ec2Cluster").vpc(vpc).build();
cluster.addCapacity("DefaultAutoScalingGroup", AddCapacityOptions.builder()
.instanceType(new InstanceType("t2.micro"))
.vpcSubnets(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
.build());
TaskDefinition taskDefinition = TaskDefinition.Builder.create(this, "TD")
.compatibility(Compatibility.EC2)
.build();
taskDefinition.addContainer("TheContainer", ContainerDefinitionOptions.builder()
.image(ContainerImage.fromRegistry("foo/bar"))
.memoryLimitMiB(256)
.build());
EcsRunTask runTask = EcsRunTask.Builder.create(this, "Run")
.integrationPattern(IntegrationPattern.RUN_JOB)
.cluster(cluster)
.taskDefinition(taskDefinition)
.launchTarget(EcsEc2LaunchTarget.Builder.create()
.placementStrategies(List.of(PlacementStrategy.spreadAcrossInstances(), PlacementStrategy.packedByCpu(), PlacementStrategy.randomly()))
.placementConstraints(List.of(PlacementConstraint.memberOf("blieptuut")))
.build())
.propagatedTagSource(PropagatedTagSource.TASK_DEFINITION)
.build();
Fargate
AWS Fargate is a serverless compute engine for containers that works with Amazon Elastic Container Service (ECS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Learn more about Fargate
The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you. The latest ACTIVE revision of the passed task definition is used for running the task. Learn more about Fargate Versioning
The following example runs a job from a task definition on Fargate:
IVpc vpc = Vpc.fromLookup(this, "Vpc", VpcLookupOptions.builder()
.isDefault(true)
.build());
Cluster cluster = Cluster.Builder.create(this, "FargateCluster").vpc(vpc).build();
TaskDefinition taskDefinition = TaskDefinition.Builder.create(this, "TD")
.memoryMiB("512")
.cpu("256")
.compatibility(Compatibility.FARGATE)
.build();
ContainerDefinition containerDefinition = taskDefinition.addContainer("TheContainer", ContainerDefinitionOptions.builder()
.image(ContainerImage.fromRegistry("foo/bar"))
.memoryLimitMiB(256)
.build());
EcsRunTask runTask = EcsRunTask.Builder.create(this, "RunFargate")
.integrationPattern(IntegrationPattern.RUN_JOB)
.cluster(cluster)
.taskDefinition(taskDefinition)
.assignPublicIp(true)
.containerOverrides(List.of(ContainerOverride.builder()
.containerDefinition(containerDefinition)
.environment(List.of(TaskEnvironmentVariable.builder().name("SOME_KEY").value(JsonPath.stringAt("$.SomeKey")).build()))
.build()))
.launchTarget(new EcsFargateLaunchTarget())
.propagatedTagSource(PropagatedTagSource.TASK_DEFINITION)
.build();
Override CPU and Memory Parameter
By setting the cpu or memoryMiB property, you can override the Fargate or EC2 task size at runtime.
See https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_TaskOverride.html
IVpc vpc = Vpc.fromLookup(this, "Vpc", VpcLookupOptions.builder()
.isDefault(true)
.build());
Cluster cluster = Cluster.Builder.create(this, "ECSCluster").vpc(vpc).build();
TaskDefinition taskDefinition = TaskDefinition.Builder.create(this, "TD")
.compatibility(Compatibility.FARGATE)
.cpu("256")
.memoryMiB("512")
.build();
taskDefinition.addContainer("TheContainer", ContainerDefinitionOptions.builder()
.image(ContainerImage.fromRegistry("foo/bar"))
.build());
EcsRunTask runTask = EcsRunTask.Builder.create(this, "Run")
.integrationPattern(IntegrationPattern.RUN_JOB)
.cluster(cluster)
.taskDefinition(taskDefinition)
.launchTarget(new EcsFargateLaunchTarget())
.cpu("1024")
.memoryMiB("2048")
.build();
ECS enable Exec
By setting the property enableExecuteCommand to true, you can enable the ECS Exec feature for the task for either Fargate or EC2 launch types.
IVpc vpc = Vpc.fromLookup(this, "Vpc", VpcLookupOptions.builder()
.isDefault(true)
.build());
Cluster cluster = Cluster.Builder.create(this, "ECSCluster").vpc(vpc).build();
TaskDefinition taskDefinition = TaskDefinition.Builder.create(this, "TD")
.compatibility(Compatibility.EC2)
.build();
taskDefinition.addContainer("TheContainer", ContainerDefinitionOptions.builder()
.image(ContainerImage.fromRegistry("foo/bar"))
.memoryLimitMiB(256)
.build());
EcsRunTask runTask = EcsRunTask.Builder.create(this, "Run")
.integrationPattern(IntegrationPattern.RUN_JOB)
.cluster(cluster)
.taskDefinition(taskDefinition)
.launchTarget(new EcsEc2LaunchTarget())
.enableExecuteCommand(true)
.build();
EMR
Step Functions supports Amazon EMR through the service integration pattern. The service integration APIs correspond to Amazon EMR APIs but differ in the parameters that are used.
Read more about the differences when using these service integrations.
Create Cluster
Creates and starts running a cluster (job flow).
Corresponds to the runJobFlow API in EMR.
Role clusterRole = Role.Builder.create(this, "ClusterRole")
.assumedBy(new ServicePrincipal("ec2.amazonaws.com"))
.build();
Role serviceRole = Role.Builder.create(this, "ServiceRole")
.assumedBy(new ServicePrincipal("elasticmapreduce.amazonaws.com"))
.build();
Role autoScalingRole = Role.Builder.create(this, "AutoScalingRole")
.assumedBy(new ServicePrincipal("elasticmapreduce.amazonaws.com"))
.build();
autoScalingRole.getAssumeRolePolicy().addStatements(
PolicyStatement.Builder.create()
.effect(Effect.ALLOW)
.principals(List.of(
new ServicePrincipal("application-autoscaling.amazonaws.com")))
.actions(List.of("sts:AssumeRole"))
.build());
EmrCreateCluster.Builder.create(this, "Create Cluster")
.instances(InstancesConfigProperty.builder().build())
.clusterRole(clusterRole)
.name(TaskInput.fromJsonPathAt("$.ClusterName").getValue())
.serviceRole(serviceRole)
.autoScalingRole(autoScalingRole)
.build();
You can use the launch specification for On-Demand and Spot instances in the fleet.
EmrCreateCluster.Builder.create(this, "OnDemandSpecification")
.instances(InstancesConfigProperty.builder()
.instanceFleets(List.of(InstanceFleetConfigProperty.builder()
.instanceFleetType(EmrCreateCluster.InstanceRoleType.MASTER)
.launchSpecifications(InstanceFleetProvisioningSpecificationsProperty.builder()
.onDemandSpecification(OnDemandProvisioningSpecificationProperty.builder()
.allocationStrategy(EmrCreateCluster.OnDemandAllocationStrategy.LOWEST_PRICE)
.build())
.build())
.build()))
.build())
.name("OnDemandCluster")
.integrationPattern(IntegrationPattern.RUN_JOB)
.build();
EmrCreateCluster.Builder.create(this, "SpotSpecification")
.instances(InstancesConfigProperty.builder()
.instanceFleets(List.of(InstanceFleetConfigProperty.builder()
.instanceFleetType(EmrCreateCluster.InstanceRoleType.MASTER)
.launchSpecifications(InstanceFleetProvisioningSpecificationsProperty.builder()
.spotSpecification(SpotProvisioningSpecificationProperty.builder()
.allocationStrategy(EmrCreateCluster.SpotAllocationStrategy.CAPACITY_OPTIMIZED)
.timeoutAction(EmrCreateCluster.SpotTimeoutAction.TERMINATE_CLUSTER)
.timeout(Duration.minutes(5))
.build())
.build())
.build()))
.build())
.name("SpotCluster")
.integrationPattern(IntegrationPattern.RUN_JOB)
.build();
You can customize the EBS root device volume.
EmrCreateCluster.Builder.create(this, "Create Cluster")
.instances(InstancesConfigProperty.builder().build())
.name("ClusterName")
.ebsRootVolumeIops(4000)
.ebsRootVolumeSize(Size.gibibytes(20))
.ebsRootVolumeThroughput(200)
.build();
If you want to run multiple steps in parallel,
you can specify the stepConcurrencyLevel property. The concurrency range is between 1
and 256 inclusive, where the default concurrency of 1 means no step concurrency is allowed.
stepConcurrencyLevel requires the EMR release label to be 5.28.0 or above.
EmrCreateCluster.Builder.create(this, "Create Cluster")
.instances(InstancesConfigProperty.builder().build())
.name(TaskInput.fromJsonPathAt("$.ClusterName").getValue())
.stepConcurrencyLevel(10)
.build();
If you want to use an auto-termination policy,
you can specify the autoTerminationPolicyIdleTimeout property.
It specifies the amount of idle time after which the cluster automatically terminates. You can specify a minimum of 60 seconds and a maximum of 604,800 seconds (seven days).
EmrCreateCluster.Builder.create(this, "Create Cluster")
.instances(InstancesConfigProperty.builder().build())
.name("ClusterName")
.autoTerminationPolicyIdleTimeout(Duration.seconds(100))
.build();
If you want to use managed scaling,
you can specify the managedScalingPolicy property.
EmrCreateCluster.Builder.create(this, "CreateCluster")
.instances(InstancesConfigProperty.builder()
.instanceFleets(List.of(InstanceFleetConfigProperty.builder()
.instanceFleetType(EmrCreateCluster.InstanceRoleType.CORE)
.instanceTypeConfigs(List.of(InstanceTypeConfigProperty.builder()
.instanceType("m5.xlarge")
.build()))
.targetOnDemandCapacity(1)
.build(), InstanceFleetConfigProperty.builder()
.instanceFleetType(EmrCreateCluster.InstanceRoleType.MASTER)
.instanceTypeConfigs(List.of(InstanceTypeConfigProperty.builder()
.instanceType("m5.xlarge")
.build()))
.targetOnDemandCapacity(1)
.build()))
.build())
.name("ClusterName")
.releaseLabel("emr-7.9.0")
.managedScalingPolicy(ManagedScalingPolicyProperty.builder()
.computeLimits(ManagedScalingComputeLimitsProperty.builder()
.unitType(EmrCreateCluster.ComputeLimitsUnitType.INSTANCE_FLEET_UNITS)
.maximumCapacityUnits(4)
.minimumCapacityUnits(1)
.maximumOnDemandCapacityUnits(4)
.maximumCoreCapacityUnits(2)
.build())
.build())
.build();
Termination Protection
Locks a cluster (job flow) so the EC2 instances in the cluster cannot be terminated by user intervention, an API call, or a job-flow error.
Corresponds to the setTerminationProtection API in EMR.
EmrSetClusterTerminationProtection.Builder.create(this, "Task")
.clusterId("ClusterId")
.terminationProtected(false)
.build();
Terminate Cluster
Shuts down a cluster (job flow).
Corresponds to the terminateJobFlows API in EMR.
EmrTerminateCluster.Builder.create(this, "Task")
.clusterId("ClusterId")
.build();
Add Step
Adds a new step to a running cluster.
Corresponds to the addJobFlowSteps API in EMR.
EmrAddStep.Builder.create(this, "Task")
.clusterId("ClusterId")
.name("StepName")
.jar("Jar")
.actionOnFailure(ActionOnFailure.CONTINUE)
.build();
To specify a custom runtime role, use the executionRoleArn property.
Note: The EMR cluster must be created with a security configuration and the runtime role must have a specific trust policy. See this blog post for more details.
import software.amazon.awscdk.services.emr.*;
CfnSecurityConfiguration cfnSecurityConfiguration = CfnSecurityConfiguration.Builder.create(this, "EmrSecurityConfiguration")
.name("AddStepRuntimeRoleSecConfig")
.securityConfiguration(Map.of(
"AuthorizationConfiguration", Map.of(
"IAMConfiguration", Map.of(
"EnableApplicationScopedIAMRole", true,
"ApplicationScopedIAMRoleConfiguration", Map.of(
"PropagateSourceIdentity", true)),
"LakeFormationConfiguration", Map.of(
"AuthorizedSessionTagValue", "Amazon EMR"))))
.build();
EmrCreateCluster task = EmrCreateCluster.Builder.create(this, "Create Cluster")
.instances(InstancesConfigProperty.builder().build())
.name(TaskInput.fromJsonPathAt("$.ClusterName").getValue())
.securityConfiguration(cfnSecurityConfiguration.getName())
.build();
Role executionRole = Role.Builder.create(this, "Role")
.assumedBy(new ArnPrincipal(task.getClusterRole().getRoleArn()))
.build();
executionRole.getAssumeRolePolicy().addStatements(
PolicyStatement.Builder.create()
.effect(Effect.ALLOW)
.principals(List.of(task.getClusterRole()))
.actions(List.of("sts:SetSourceIdentity"))
.build(),
PolicyStatement.Builder.create()
.effect(Effect.ALLOW)
.principals(List.of(task.getClusterRole()))
.actions(List.of("sts:TagSession"))
.conditions(Map.of(
"StringEquals", Map.of(
"aws:RequestTag/LakeFormationAuthorizedCaller", "Amazon EMR")))
.build());
EmrAddStep.Builder.create(this, "Task")
.clusterId("ClusterId")
.executionRoleArn(executionRole.getRoleArn())
.name("StepName")
.jar("Jar")
.actionOnFailure(ActionOnFailure.CONTINUE)
.build();
Cancel Step
Cancels a pending step in a running cluster.
Corresponds to the cancelSteps API in EMR.
EmrCancelStep.Builder.create(this, "Task")
.clusterId("ClusterId")
.stepId("StepId")
.build();
Modify Instance Fleet
Modifies the target On-Demand and target Spot capacities for the instance fleet with the specified InstanceFleetName.
Corresponds to the modifyInstanceFleet API in EMR.
EmrModifyInstanceFleetByName.Builder.create(this, "Task")
.clusterId("ClusterId")
.instanceFleetName("InstanceFleetName")
.targetOnDemandCapacity(2)
.targetSpotCapacity(0)
.build();
Modify Instance Group
Modifies the number of nodes and configuration settings of an instance group.
Corresponds to the modifyInstanceGroups API in EMR.
EmrModifyInstanceGroupByName.Builder.create(this, "Task")
.clusterId("ClusterId")
.instanceGroupName(JsonPath.stringAt("$.InstanceGroupName"))
.instanceGroup(InstanceGroupModifyConfigProperty.builder()
.instanceCount(1)
.build())
.build();
EMR on EKS
Step Functions supports Amazon EMR on EKS through the service integration pattern. The service integration APIs correspond to Amazon EMR on EKS APIs, but differ in the parameters that are used.
Read more about the differences when using these service integrations.
Setting up the EKS cluster is required.
Create Virtual Cluster
The CreateVirtualCluster API creates a single virtual cluster that's mapped to a single Kubernetes namespace.
The EKS cluster containing the Kubernetes namespace where the virtual cluster will be mapped can be passed in from the task input.
EmrContainersCreateVirtualCluster.Builder.create(this, "Create a Virtual Cluster")
.eksCluster(EksClusterInput.fromTaskInput(TaskInput.fromText("clusterId")))
.build();
The EKS cluster can also be passed in directly.
import software.amazon.awscdk.services.eks.*;
Cluster eksCluster;
EmrContainersCreateVirtualCluster.Builder.create(this, "Create a Virtual Cluster")
.eksCluster(EksClusterInput.fromCluster(eksCluster))
.build();
By default, the Kubernetes namespace that a virtual cluster maps to is "default", but a specific namespace within an EKS cluster can be selected.
EmrContainersCreateVirtualCluster.Builder.create(this, "Create a Virtual Cluster")
.eksCluster(EksClusterInput.fromTaskInput(TaskInput.fromText("clusterId")))
.eksNamespace("specified-namespace")
.build();
Delete Virtual Cluster
The DeleteVirtualCluster API deletes a virtual cluster.
EmrContainersDeleteVirtualCluster.Builder.create(this, "Delete a Virtual Cluster")
.virtualClusterId(TaskInput.fromJsonPathAt("$.virtualCluster"))
.build();
Start Job Run
The StartJobRun API starts a job run. A job is a unit of work that you submit to Amazon EMR on EKS for execution. The work performed by the job can be defined by a Spark jar, PySpark script, or SparkSQL query. A job run is an execution of the job on the virtual cluster.
Required setup:
- If not done already, follow the steps to setup EMR on EKS and create an EKS Cluster.
- Enable Cluster access
- Enable IAM Role access
The following actions must be performed if the virtual cluster ID is supplied from the task input. Otherwise, if it is supplied statically in the state machine definition, these actions will be done automatically.
- Create an IAM role
- Update the Role Trust Policy of the Job Execution Role.
The job can be configured with spark submit parameters:
EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run")
.virtualCluster(VirtualClusterInput.fromVirtualClusterId("de92jdei2910fwedz"))
.releaseLabel(ReleaseLabel.EMR_6_2_0)
.jobDriver(JobDriver.builder()
.sparkSubmitJobDriver(SparkSubmitJobDriver.builder()
.entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py"))
.sparkSubmitParameters("--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1")
.build())
.build())
.build();
Configuring the job can also be done via application configuration:
EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run")
.virtualCluster(VirtualClusterInput.fromVirtualClusterId("de92jdei2910fwedz"))
.releaseLabel(ReleaseLabel.EMR_6_2_0)
.jobName("EMR-Containers-Job")
.jobDriver(JobDriver.builder()
.sparkSubmitJobDriver(SparkSubmitJobDriver.builder()
.entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py"))
.build())
.build())
.applicationConfig(List.of(ApplicationConfiguration.builder()
.classification(Classification.SPARK_DEFAULTS)
.properties(Map.of(
"spark.executor.instances", "1",
"spark.executor.memory", "512M"))
.build()))
.build();
Job monitoring can be enabled by setting monitoring.logging to true. This automatically generates an S3 bucket and CloudWatch logs.
EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run")
.virtualCluster(VirtualClusterInput.fromVirtualClusterId("de92jdei2910fwedz"))
.releaseLabel(ReleaseLabel.EMR_6_2_0)
.jobDriver(JobDriver.builder()
.sparkSubmitJobDriver(SparkSubmitJobDriver.builder()
.entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py"))
.sparkSubmitParameters("--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1")
.build())
.build())
.monitoring(Monitoring.builder()
.logging(true)
.build())
.build();
Otherwise, providing monitoring for jobs with existing log groups and log buckets is also available.
import software.amazon.awscdk.services.logs.*;
LogGroup logGroup = new LogGroup(this, "Log Group");
Bucket logBucket = new Bucket(this, "S3 Bucket");
EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run")
.virtualCluster(VirtualClusterInput.fromVirtualClusterId("de92jdei2910fwedz"))
.releaseLabel(ReleaseLabel.EMR_6_2_0)
.jobDriver(JobDriver.builder()
.sparkSubmitJobDriver(SparkSubmitJobDriver.builder()
.entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py"))
.sparkSubmitParameters("--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1")
.build())
.build())
.monitoring(Monitoring.builder()
.logGroup(logGroup)
.logBucket(logBucket)
.build())
.build();
Users can provide their own existing Job Execution Role.
EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run")
.virtualCluster(VirtualClusterInput.fromTaskInput(TaskInput.fromJsonPathAt("$.VirtualClusterId")))
.releaseLabel(ReleaseLabel.EMR_6_2_0)
.jobName("EMR-Containers-Job")
.executionRole(Role.fromRoleArn(this, "Job-Execution-Role", "arn:aws:iam::xxxxxxxxxxxx:role/JobExecutionRole"))
.jobDriver(JobDriver.builder()
.sparkSubmitJobDriver(SparkSubmitJobDriver.builder()
.entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py"))
.sparkSubmitParameters("--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1")
.build())
.build())
.build();
EKS
Step Functions supports Amazon EKS through the service integration pattern. The service integration APIs correspond to Amazon EKS APIs.
Read more about the differences when using these service integrations.
Call
Read and write Kubernetes resource objects via a Kubernetes API endpoint.
Corresponds to the call API in Step Functions Connector.
The following code snippet includes a Task state that uses eks:call to list the pods.
import software.amazon.awscdk.services.eks.*;
import software.amazon.awscdk.cdk.lambdalayer.kubectl.v33.KubectlV33Layer;
Cluster myEksCluster = Cluster.Builder.create(this, "my sample cluster")
.version(KubernetesVersion.V1_32)
.clusterName("myEksCluster")
.kubectlLayer(new KubectlV33Layer(this, "kubectl"))
.build();
EksCall.Builder.create(this, "Call a EKS Endpoint")
.cluster(myEksCluster)
.httpMethod(HttpMethods.GET)
.httpPath("/api/v1/namespaces/default/pods")
.build();
EventBridge
Step Functions supports Amazon EventBridge through the service integration pattern. The service integration APIs correspond to Amazon EventBridge APIs.
Read more about the differences when using these service integrations.
Put Events
Send events to an EventBridge bus.
Corresponds to the put-events API in Step Functions Connector.
The following code snippet includes a Task state that uses events:putevents to send an event to the default bus.
import software.amazon.awscdk.services.events.*;
EventBus myEventBus = EventBus.Builder.create(this, "EventBus")
.eventBusName("MyEventBus1")
.build();
EventBridgePutEvents.Builder.create(this, "Send an event to EventBridge")
.entries(List.of(EventBridgePutEventsEntry.builder()
.detail(TaskInput.fromObject(Map.of(
"Message", "Hello from Step Functions!")))
.eventBus(myEventBus)
.detailType("MessageFromStepFunctions")
.source("step.functions")
.build()))
.build();
EventBridge Scheduler
You can call EventBridge Scheduler APIs from a Task state.
Read more about calling Scheduler APIs here
Create Scheduler
The CreateSchedule API creates a new schedule.
Here is an example of how to create a schedule that sends a message to an SQS queue every 5 minutes:
import software.amazon.awscdk.services.scheduler.*;
import software.amazon.awscdk.services.kms.*;
Key key;
CfnScheduleGroup scheduleGroup;
Queue targetQueue;
Queue deadLetterQueue;
Role schedulerRole = Role.Builder.create(this, "SchedulerRole")
.assumedBy(new ServicePrincipal("scheduler.amazonaws.com"))
.build();
// To send the message to the queue
// This policy changes depending on the type of target.
schedulerRole.addToPrincipalPolicy(PolicyStatement.Builder.create()
.actions(List.of("sqs:SendMessage"))
.resources(List.of(targetQueue.getQueueArn()))
.build());
EventBridgeSchedulerCreateScheduleTask createScheduleTask1 = EventBridgeSchedulerCreateScheduleTask.Builder.create(this, "createSchedule")
.scheduleName("TestSchedule")
.actionAfterCompletion(ActionAfterCompletion.NONE)
.clientToken("testToken")
.description("TestDescription")
.startDate(new Date())
.endDate(new Date(new Date().getTime() + 1000 * 60 * 60))
.flexibleTimeWindow(Duration.minutes(5))
.groupName(scheduleGroup.getRef())
.kmsKey(key)
.schedule(Schedule.rate(Duration.minutes(5)))
.timezone("UTC")
.enabled(true)
.target(EventBridgeSchedulerTarget.Builder.create()
.arn(targetQueue.getQueueArn())
.role(schedulerRole)
.retryPolicy(RetryPolicy.builder()
.maximumRetryAttempts(2)
.maximumEventAge(Duration.minutes(5))
.build())
.deadLetterQueue(deadLetterQueue)
.build())
.build();
Glue
Step Functions supports AWS Glue through the service integration pattern.
StartJobRun
You can call the StartJobRun API from a Task state.
GlueStartJobRun.Builder.create(this, "Task")
.glueJobName("my-glue-job")
.arguments(TaskInput.fromObject(Map.of(
"key", "value")))
.taskTimeout(Timeout.duration(Duration.minutes(30)))
.notifyDelayAfter(Duration.minutes(5))
.build();
You can configure workers by setting the workerTypeV2 and numberOfWorkers properties.
workerType is deprecated and no longer recommended. Use workerTypeV2 instead: an
enum-like class that supports more powerful worker configuration, accepting both
pre-defined and dynamic values.
GlueStartJobRun.Builder.create(this, "Task")
.glueJobName("my-glue-job")
.workerConfiguration(WorkerConfigurationProperty.builder()
.workerTypeV2(WorkerTypeV2.G_1X) // Worker type
.numberOfWorkers(2)
.build())
.build();
To configure the worker type or number of workers dynamically from the state machine's input,
pass JSON Path values to workerTypeV2 and numberOfWorkers like this:
GlueStartJobRun.Builder.create(this, "Glue Job Task")
.glueJobName("my-glue-job")
.workerConfiguration(WorkerConfigurationProperty.builder()
.workerTypeV2(WorkerTypeV2.of(JsonPath.stringAt("$.glue_jobs_configs.executor_type")))
.numberOfWorkers(JsonPath.numberAt("$.glue_jobs_configs.max_number_workers"))
.build())
.build();
You can choose the execution class by setting the executionClass property.
GlueStartJobRun.Builder.create(this, "Task")
.glueJobName("my-glue-job")
.executionClass(ExecutionClass.FLEX)
.build();
StartCrawlerRun
You can call the StartCrawler API from a Task state through AWS SDK service integrations.
import software.amazon.awscdk.services.glue.*;
CfnCrawler myCrawler;
// You can get the crawler name from `crawler.ref`
GlueStartCrawlerRun.Builder.create(this, "Task1")
.crawlerName(myCrawler.getRef())
.build();
// Of course, you can also specify the crawler name directly.
GlueStartCrawlerRun.Builder.create(this, "Task2")
.crawlerName("my-crawler-job")
.build();
Glue DataBrew
Step Functions supports AWS Glue DataBrew through the service integration pattern.
Start Job Run
You can call the StartJobRun API from a Task state.
GlueDataBrewStartJobRun.Builder.create(this, "Task")
.name("databrew-job")
.build();
Invoke HTTP API
Step Functions supports calling third-party APIs with credentials managed by Amazon EventBridge Connections.
The following snippet creates a new API destination connection, and uses it to make a POST request to the specified URL. The endpoint response is available at the $.ResponseBody path.
import software.amazon.awscdk.services.events.*;
Connection connection = Connection.Builder.create(this, "Connection")
.authorization(Authorization.basic("username", SecretValue.unsafePlainText("password")))
.build();
HttpInvoke.Builder.create(this, "Invoke HTTP API")
.apiRoot("https://api.example.com")
.apiEndpoint(TaskInput.fromText(JsonPath.format("resource/{}/details", JsonPath.stringAt("$.resourceId"))))
.body(TaskInput.fromObject(Map.of("foo", "bar")))
.connection(connection)
.headers(TaskInput.fromObject(Map.of("Content-Type", "application/json")))
.method(TaskInput.fromText("POST"))
.queryStringParameters(TaskInput.fromObject(Map.of("id", "123")))
.urlEncodingFormat(URLEncodingFormat.BRACKETS)
.build();
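To illustrate what urlEncodingFormat controls: when the request body contains array values, BRACKETS encodes each element under a key[]-style field. Below is a minimal stdlib sketch of that idea; the actual encoding is performed by the service, and the helper class and method here are hypothetical, for illustration only.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical helper sketching bracket-style encoding of an array-valued
// form field: each element is repeated under "key[]".
public class BracketEncodingDemo {
    static String encode(String key, List<String> values) {
        return values.stream()
                .map(v -> URLEncoder.encode(key + "[]", StandardCharsets.UTF_8)
                        + "=" + URLEncoder.encode(v, StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        // "[" and "]" are percent-encoded, so this prints id%5B%5D=1&id%5B%5D=2
        System.out.println(encode("id", List.of("1", "2")));
    }
}
```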
Lambda
Step Functions supports AWS Lambda through the service integration pattern.
Invoke
Invoke a Lambda function.
You can specify the input to your Lambda function through the payload attribute.
By default, Step Functions invokes the Lambda function with the state input (JSON path '$')
as the input.
The following snippet invokes a Lambda Function with the state input as the payload
by referencing the $ path.
Function fn;
LambdaInvoke.Builder.create(this, "Invoke with state input")
.lambdaFunction(fn)
.build();
When a function is invoked, the Lambda service sends several response elements back.
⚠️ The response from the Lambda function is in an attribute called Payload
The following snippet invokes one Lambda function, then references the $.Payload path
to use its output as the input of a second invocation.
Function fn;
LambdaInvoke.Builder.create(this, "Invoke with empty object as payload")
.lambdaFunction(fn)
.payload(TaskInput.fromObject(Map.of()))
.build();
// use the output of fn as input
LambdaInvoke.Builder.create(this, "Invoke with payload field in the state input")
.lambdaFunction(fn)
.payload(TaskInput.fromJsonPathAt("$.Payload"))
.build();
The following snippet invokes a Lambda and sets the task output to only include the Lambda function response.
Function fn;
LambdaInvoke.Builder.create(this, "Invoke and set function response as task output")
.lambdaFunction(fn)
.outputPath("$.Payload")
.build();
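To see why $.Payload matters, here is a stdlib-only sketch (no CDK involved) of the shape of a LambdaInvoke result: the function's own return value sits under Payload next to invocation metadata, and outputPath("$.Payload") keeps only that subtree.

```java
import java.util.Map;

// Stdlib sketch of a LambdaInvoke task result: the function's return value
// is nested under "Payload" alongside invocation metadata such as StatusCode.
public class PayloadDemo {
    // Mimics what outputPath("$.Payload") selects as the state output.
    static Object selectPayload(Map<String, Object> taskResult) {
        return taskResult.get("Payload");
    }

    public static void main(String[] args) {
        Map<String, Object> taskResult = Map.of(
                "StatusCode", 200,
                "Payload", Map.of("greeting", "hello"));
        System.out.println(selectPayload(taskResult));
    }
}
```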
If you want to combine the input and the Lambda function response, you can use
the payloadResponseOnly property and specify the resultPath. This puts the
Lambda function ARN directly in the "Resource" string, but it cannot be combined
with the integrationPattern, invocationType, clientContext, or qualifier properties.
Function fn;
LambdaInvoke.Builder.create(this, "Invoke and combine function response with task input")
.lambdaFunction(fn)
.payloadResponseOnly(true)
.resultPath("$.fn")
.build();
You can have Step Functions pause a task, and wait for an external process to return a task token. Read more about the callback pattern
To use the callback pattern, set the token property on the task. Call the Step
Functions SendTaskSuccess or SendTaskFailure APIs with the token to
indicate that the task has completed and the state machine should resume execution.
The following snippet invokes a Lambda with the task token as part of the input to the Lambda.
Function fn;
LambdaInvoke.Builder.create(this, "Invoke with callback")
.lambdaFunction(fn)
.integrationPattern(IntegrationPattern.WAIT_FOR_TASK_TOKEN)
.payload(TaskInput.fromObject(Map.of(
"token", JsonPath.getTaskToken(),
"input", JsonPath.stringAt("$.someField"))))
.build();
⚠️ The task will pause until it receives that task token back with a SendTaskSuccess or SendTaskFailure
call. Learn more about Callback with the Task
Token.
AWS Lambda can occasionally experience transient service errors. In this case, invoking Lambda
results in a 500 error, such as ClientExecutionTimeoutException, ServiceException, AWSLambdaException, or SdkClientException.
Following best practice, the LambdaInvoke task retries on those errors with an interval of 2 seconds,
a back-off rate of 2, and a maximum of 6 attempts. Set the retryOnServiceExceptions prop to false to
disable this behavior.
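With those defaults (2-second interval, back-off rate 2, 6 attempts), the waits between retries grow geometrically. A small stdlib sketch of that schedule:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

// Computes the wait before each retry attempt for an exponential back-off
// policy like the LambdaInvoke default (2s interval, rate 2, 6 attempts).
public class LambdaRetryDelays {
    static List<Duration> delays(Duration interval, double backoffRate, int maxAttempts) {
        List<Duration> result = new ArrayList<>();
        double seconds = interval.getSeconds();
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            result.add(Duration.ofSeconds((long) seconds));
            seconds *= backoffRate;
        }
        return result;
    }

    public static void main(String[] args) {
        // Retries wait 2, 4, 8, 16, 32 and 64 seconds with the defaults.
        System.out.println(delays(Duration.ofSeconds(2), 2, 6));
    }
}
```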
MediaConvert
Step Functions supports AWS MediaConvert through the Optimized integration pattern.
CreateJob
The CreateJob API creates a new transcoding job. For information about jobs and job settings, see the User Guide at http://docs.aws.amazon.com/mediaconvert/latest/ug/what-is.html
You can call the CreateJob API from a Task state. Optionally, you can specify the integrationPattern.
Make sure to provide the required fields, Role and Settings, and refer to CreateJobRequest for all other optional parameters.
MediaConvertCreateJob.Builder.create(this, "CreateJob")
.createJobRequest(Map.of(
"Role", "arn:aws:iam::123456789012:role/MediaConvertRole",
"Settings", Map.of(
"OutputGroups", List.of(Map.of(
"Outputs", List.of(Map.of(
"ContainerSettings", Map.of(
"Container", "MP4"),
"VideoDescription", Map.of(
"CodecSettings", Map.of(
"Codec", "H_264",
"H264Settings", Map.of(
"MaxBitrate", 1000,
"RateControlMode", "QVBR",
"SceneChangeDetect", "TRANSITION_DETECTION"))),
"AudioDescriptions", List.of(Map.of(
"CodecSettings", Map.of(
"Codec", "AAC",
"AacSettings", Map.of(
"Bitrate", 96000,
"CodingMode", "CODING_MODE_2_0",
"SampleRate", 48000)))))),
"OutputGroupSettings", Map.of(
"Type", "FILE_GROUP_SETTINGS",
"FileGroupSettings", Map.of(
"Destination", "s3://EXAMPLE-DESTINATION-BUCKET/")))),
"Inputs", List.of(Map.of(
"AudioSelectors", Map.of(
"Audio Selector 1", Map.of(
"DefaultSelection", "DEFAULT")),
"FileInput", "s3://EXAMPLE-SOURCE-BUCKET/EXAMPLE-SOURCE_FILE")))))
.integrationPattern(IntegrationPattern.RUN_JOB)
.build();
SageMaker
Step Functions supports AWS SageMaker through the service integration pattern.
If your training job or model uses resources from AWS Marketplace,
network isolation is required.
To do so, set the enableNetworkIsolation property to true for SageMakerCreateModel or SageMakerCreateTrainingJob.
To set environment variables for the Docker container use the environment property.
Create Training Job
You can call the CreateTrainingJob API from a Task state.
SageMakerCreateTrainingJob.Builder.create(this, "TrainSagemaker")
.trainingJobName(JsonPath.stringAt("$.JobName"))
.algorithmSpecification(AlgorithmSpecification.builder()
.algorithmName("BlazingText")
.trainingInputMode(InputMode.FILE)
.build())
.inputDataConfig(List.of(Channel.builder()
.channelName("train")
.dataSource(DataSource.builder()
.s3DataSource(S3DataSource.builder()
.s3DataType(S3DataType.S3_PREFIX)
.s3Location(S3Location.fromJsonExpression("$.S3Bucket"))
.build())
.build())
.build()))
.outputDataConfig(OutputDataConfig.builder()
.s3OutputLocation(S3Location.fromBucket(Bucket.fromBucketName(this, "Bucket", "amzn-s3-demo-bucket"), "myoutputpath"))
.build())
.resourceConfig(ResourceConfig.builder()
.instanceCount(1)
.instanceType(new InstanceType(JsonPath.stringAt("$.InstanceType")))
.volumeSize(Size.gibibytes(50))
.build()) // optional: default is 1 instance of EC2 `M4.XLarge` with `10GB` volume
.stoppingCondition(StoppingCondition.builder()
.maxRuntime(Duration.hours(2))
.build())
.build();
You can specify TrainingInputMode via the trainingInputMode property.
- To download the data from Amazon Simple Storage Service (Amazon S3) to the provisioned ML storage volume, and mount the directory to a Docker volume, choose InputMode.FILE if an algorithm supports it.
- To stream data directly from Amazon S3 to the container, choose InputMode.PIPE if an algorithm supports it.
- To stream data directly from Amazon S3 to the container with no code changes and to provide file system access to the data, choose InputMode.FAST_FILE if an algorithm supports it.
Create Transform Job
You can call the CreateTransformJob API from a Task state.
SageMakerCreateTransformJob.Builder.create(this, "Batch Inference")
.transformJobName("MyTransformJob")
.modelName("MyModelName")
.modelClientOptions(ModelClientOptions.builder()
.invocationsMaxRetries(3) // default is 0
.invocationsTimeout(Duration.minutes(5))
.build())
.transformInput(TransformInput.builder()
.transformDataSource(TransformDataSource.builder()
.s3DataSource(TransformS3DataSource.builder()
.s3Uri("s3://inputbucket/train")
.s3DataType(S3DataType.S3_PREFIX)
.build())
.build())
.build())
.transformOutput(TransformOutput.builder()
.s3OutputPath("s3://outputbucket/TransformJobOutputPath")
.build())
.transformResources(TransformResources.builder()
.instanceCount(1)
.instanceType(InstanceType.of(InstanceClass.M4, InstanceSize.XLARGE))
.build())
.build();
Create Endpoint
You can call the CreateEndpoint API from a Task state.
SageMakerCreateEndpoint.Builder.create(this, "SagemakerEndpoint")
.endpointName(JsonPath.stringAt("$.EndpointName"))
.endpointConfigName(JsonPath.stringAt("$.EndpointConfigName"))
.build();
Create Endpoint Config
You can call the CreateEndpointConfig API from a Task state.
SageMakerCreateEndpointConfig.Builder.create(this, "SagemakerEndpointConfig")
.endpointConfigName("MyEndpointConfig")
.productionVariants(List.of(ProductionVariant.builder()
.initialInstanceCount(2)
.instanceType(InstanceType.of(InstanceClass.M5, InstanceSize.XLARGE))
.modelName("MyModel")
.variantName("awesome-variant")
.build()))
.build();
Create Model
You can call the CreateModel API from a Task state.
SageMakerCreateModel.Builder.create(this, "Sagemaker")
.modelName("MyModel")
.primaryContainer(ContainerDefinition.Builder.create()
.image(DockerImage.fromJsonExpression(JsonPath.stringAt("$.Model.imageName")))
.mode(Mode.SINGLE_MODEL)
.modelS3Location(S3Location.fromJsonExpression("$.TrainingJob.ModelArtifacts.S3ModelArtifacts"))
.build())
.build();
Update Endpoint
You can call the UpdateEndpoint API from a Task state.
SageMakerUpdateEndpoint.Builder.create(this, "SagemakerEndpoint")
.endpointName(JsonPath.stringAt("$.Endpoint.Name"))
.endpointConfigName(JsonPath.stringAt("$.Endpoint.EndpointConfig"))
.build();
SNS
Step Functions supports Amazon SNS through the service integration pattern.
Publish
You can call the Publish API from a Task state to publish to an SNS topic.
Topic topic = new Topic(this, "Topic");
// Use a field from the execution data as message.
SnsPublish task1 = SnsPublish.Builder.create(this, "Publish1")
.topic(topic)
.integrationPattern(IntegrationPattern.REQUEST_RESPONSE)
.message(TaskInput.fromDataAt("$.state.message"))
.messageAttributes(Map.of(
"place", MessageAttribute.builder()
.value(JsonPath.stringAt("$.place"))
.build(),
"pic", MessageAttribute.builder()
// BINARY must be explicitly set
.dataType(MessageAttributeDataType.BINARY)
.value(JsonPath.stringAt("$.pic"))
.build(),
"people", MessageAttribute.builder()
.value(4)
.build(),
"handles", MessageAttribute.builder()
.value(List.of("@kslater", "@jjf", null, "@mfanning"))
.build()))
.build();
// Combine a field from the execution data with
// a literal object.
SnsPublish task2 = SnsPublish.Builder.create(this, "Publish2")
.topic(topic)
.message(TaskInput.fromObject(Map.of(
"field1", "somedata",
"field2", JsonPath.stringAt("$.field2"))))
.build();
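Under the hood, a TaskInput.fromObject value renders to Amazon States Language Parameters, where any field referencing a JSON path is emitted under a key with a .$ suffix. A simplified stdlib sketch of that rendering rule (flat string fields only; the class below is illustrative, not part of the CDK):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified sketch: literal fields pass through unchanged, while fields
// holding a JSON path are renamed with a ".$" suffix, as in ASL Parameters.
public class AslParametersDemo {
    static Map<String, String> render(Map<String, String> fields) {
        Map<String, String> out = new LinkedHashMap<>();
        fields.forEach((key, value) -> {
            if (value.startsWith("$.")) {
                out.put(key + ".$", value); // path reference
            } else {
                out.put(key, value); // literal value
            }
        });
        return out;
    }
}
```

For a mixed message like the one above, this rule yields {"field1": "somedata", "field2.$": "$.field2"}.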
Step Functions
Step Functions supports AWS Step Functions through the service integration pattern.
Start Execution
You can manage AWS Step Functions executions.
AWS Step Functions supports its own StartExecution API as a service integration.
// Define a state machine with one Pass state
StateMachine child = StateMachine.Builder.create(this, "ChildStateMachine")
.definition(Chain.start(new Pass(this, "PassState")))
.build();
// Include the state machine in a Task state with callback pattern
StepFunctionsStartExecution task = StepFunctionsStartExecution.Builder.create(this, "ChildTask")
.stateMachine(child)
.integrationPattern(IntegrationPattern.WAIT_FOR_TASK_TOKEN)
.input(TaskInput.fromObject(Map.of(
"token", JsonPath.getTaskToken(),
"foo", "bar")))
.name("MyExecutionName")
.build();
// Define a second state machine with the Task state above
StateMachine.Builder.create(this, "ParentStateMachine")
.definition(task)
.build();
You can utilize Associate Workflow Executions
via the associateWithParent property. This allows the Step Functions UI to link child
executions from parent executions, making it easier to trace execution flow across state machines.
StateMachine child;
StepFunctionsStartExecution task = StepFunctionsStartExecution.Builder.create(this, "ChildTask")
.stateMachine(child)
.associateWithParent(true)
.build();
This will add the payload AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID.$: $$.Execution.Id to the
input property for you, which will pass the execution ID from the context object to the
execution input. It requires input to be an object, or not be set at all.
Invoke Activity
You can invoke a Step Functions Activity, which enables you to have a task in your state machine where the work is performed by a worker that can be hosted on Amazon EC2, Amazon ECS, AWS Lambda, or virtually anywhere. Activities are a way to associate code running somewhere (known as an activity worker) with a specific task in a state machine.
When Step Functions reaches an activity task state, the workflow waits for an activity worker to poll for a task. An activity worker polls Step Functions by using GetActivityTask, and sending the ARN for the related activity.
After the activity worker completes its work, it can provide a report of its
success or failure by using SendTaskSuccess or SendTaskFailure. These two
calls use the taskToken provided by GetActivityTask to associate the result
with that task.
The following example creates an activity and creates a task that invokes the activity.
Activity submitJobActivity = new Activity(this, "SubmitJob");
StepFunctionsInvokeActivity.Builder.create(this, "Submit Job")
.activity(submitJobActivity)
.build();
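The token handshake described above can be sketched without any AWS calls. The class below is purely illustrative (not part of the CDK or the AWS SDK): it mimics how GetActivityTask hands out a token per task and how SendTaskSuccess uses that token to match the result back to the waiting task.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative stand-in for the activity token flow: a token is issued per
// pending task, and a success report must carry that token so the result
// can be correlated with the right task.
public class ActivityTokenDemo {
    private final Map<String, String> pending = new HashMap<>(); // token -> task input

    // Like GetActivityTask: hand a worker the task input plus a unique token.
    String getActivityTask(String input) {
        String token = UUID.randomUUID().toString();
        pending.put(token, input);
        return token;
    }

    // Like SendTaskSuccess: the token identifies which task the output belongs to.
    String sendTaskSuccess(String token, String output) {
        String input = pending.remove(token);
        if (input == null) {
            throw new IllegalArgumentException("unknown or already-completed task token");
        }
        return input + " -> " + output;
    }
}
```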
Use the Parameters field to create a collection of key-value pairs that are passed as input. The values of each can either be static values that you include in your state machine definition, or selected from either the input or the context object with a path.
Activity submitJobActivity = new Activity(this, "SubmitJob");
StepFunctionsInvokeActivity.Builder.create(this, "Submit Job")
.activity(submitJobActivity)
.parameters(Map.of(
"comment", "Selecting what I care about.",
"MyDetails", Map.of(
"size", JsonPath.stringAt("$.product.details.size"),
"exists", JsonPath.stringAt("$.product.availability"),
"StaticValue", "foo")))
.build();
SQS
Step Functions supports Amazon SQS through the service integration pattern.
Send Message
You can call the SendMessage API from a Task state
to send a message to an SQS queue.
Queue queue = new Queue(this, "Queue");
// Use a field from the execution data as message.
SqsSendMessage task1 = SqsSendMessage.Builder.create(this, "Send1")
.queue(queue)
.messageBody(TaskInput.fromJsonPathAt("$.message"))
.build();
// Combine a field from the execution data with
// a literal object.
SqsSendMessage task2 = SqsSendMessage.Builder.create(this, "Send2")
.queue(queue)
.messageBody(TaskInput.fromObject(Map.of(
"field1", "somedata",
"field2", JsonPath.stringAt("$.field2"))))
.build();
EMR Containers application configuration.Start a CodeBuild Build as a task.A fluent builder forCodeBuildStartBuild.Start a CodeBuild BatchBuild as a task.A fluent builder forCodeBuildStartBuildBatch.Properties for CodeBuildStartBuildBatch using JSONata.A builder forCodeBuildStartBuildBatchJsonataPropsAn implementation forCodeBuildStartBuildBatchJsonataPropsProperties for CodeBuildStartBuildBatch using JSONPath.A builder forCodeBuildStartBuildBatchJsonPathPropsAn implementation forCodeBuildStartBuildBatchJsonPathPropsProperties for CodeBuildStartBuildBatch.A builder forCodeBuildStartBuildBatchPropsAn implementation forCodeBuildStartBuildBatchPropsProperties for CodeBuildStartBuild using JSONata.A builder forCodeBuildStartBuildJsonataPropsAn implementation forCodeBuildStartBuildJsonataPropsProperties for CodeBuildStartBuild using JSONPath.A builder forCodeBuildStartBuildJsonPathPropsAn implementation forCodeBuildStartBuildJsonPathPropsProperties for CodeBuildStartBuild.A builder forCodeBuildStartBuildPropsAn implementation forCodeBuildStartBuildPropsBasic properties for ECS Tasks.A builder forCommonEcsRunTaskPropsAn implementation forCommonEcsRunTaskPropsCompression type of the data.Describes the container, as part of model definition.A fluent builder forContainerDefinition.Configuration options for the ContainerDefinition.A builder forContainerDefinitionConfigAn implementation forContainerDefinitionConfigProperties to define a ContainerDefinition.A builder forContainerDefinitionOptionsAn implementation forContainerDefinitionOptionsA list of container overrides that specify the name of a container and the overrides it should receive.A builder forContainerOverrideAn implementation forContainerOverrideThe overrides that should be sent to a container.A builder forContainerOverridesAn implementation forContainerOverridesOptions to configure a cron expression.A builder forCronOptionsAn implementation forCronOptionsThe customization type.The key/value pair for a tag.A 
builder forCustomModelTagAn implementation forCustomModelTagS3 bucket configuration for data storage destination.A builder forDataBucketConfigurationAn implementation forDataBucketConfigurationLocation of the channel data.A builder forDataSourceAn implementation forDataSourceCreatesIDockerImageinstances.Configuration for a using Docker image.A builder forDockerImageConfigAn implementation forDockerImageConfigRepresents the data for an attribute.Determines the level of detail about provisioned throughput consumption that is returned.A StepFunctions task to call DynamoDeleteItem.A fluent builder forDynamoDeleteItem.Properties for DynamoDeleteItem Task using JSONata.A builder forDynamoDeleteItemJsonataPropsAn implementation forDynamoDeleteItemJsonataPropsProperties for DynamoDeleteItem Task using JSONPath.A builder forDynamoDeleteItemJsonPathPropsAn implementation forDynamoDeleteItemJsonPathPropsProperties for DynamoDeleteItem Task.A builder forDynamoDeleteItemPropsAn implementation forDynamoDeleteItemPropsA StepFunctions task to call DynamoGetItem.A fluent builder forDynamoGetItem.Properties for DynamoGetItem Task using JSONata.A builder forDynamoGetItemJsonataPropsAn implementation forDynamoGetItemJsonataPropsProperties for DynamoGetItem Task using JSONPath.A builder forDynamoGetItemJsonPathPropsAn implementation forDynamoGetItemJsonPathPropsProperties for DynamoGetItem Task.A builder forDynamoGetItemPropsAn implementation forDynamoGetItemPropsDetermines whether item collection metrics are returned.Class to generate projection expression.A StepFunctions task to call DynamoPutItem.A fluent builder forDynamoPutItem.Properties for DynamoPutItem Task using JSONata.A builder forDynamoPutItemJsonataPropsAn implementation forDynamoPutItemJsonataPropsProperties for DynamoPutItem Task using JSONPath.A builder forDynamoPutItemJsonPathPropsAn implementation forDynamoPutItemJsonPathPropsProperties for DynamoPutItem Task.A builder forDynamoPutItemPropsAn implementation 
forDynamoPutItemPropsUse ReturnValues if you want to get the item attributes as they appear before or after they are changed.A StepFunctions task to call DynamoUpdateItem.A fluent builder forDynamoUpdateItem.Properties for DynamoUpdateItem Task using JSONata.A builder forDynamoUpdateItemJsonataPropsAn implementation forDynamoUpdateItemJsonataPropsProperties for DynamoUpdateItem Task using JSONPath.A builder forDynamoUpdateItemJsonPathPropsAn implementation forDynamoUpdateItemJsonPathPropsProperties for DynamoUpdateItem Task.A builder forDynamoUpdateItemPropsAn implementation forDynamoUpdateItemPropsConfiguration for running an ECS task on EC2.A fluent builder forEcsEc2LaunchTarget.Options to run an ECS task on EC2 in StepFunctions and ECS.A builder forEcsEc2LaunchTargetOptionsAn implementation forEcsEc2LaunchTargetOptionsConfiguration for running an ECS task on Fargate.A fluent builder forEcsFargateLaunchTarget.Properties to define an ECS service.A builder forEcsFargateLaunchTargetOptionsAn implementation forEcsFargateLaunchTargetOptionsConfiguration options for the ECS launch type.A builder forEcsLaunchTargetConfigAn implementation forEcsLaunchTargetConfigRun a Task on ECS or Fargate.A fluent builder forEcsRunTask.Properties for ECS Tasks using JSONata.A builder forEcsRunTaskJsonataPropsAn implementation forEcsRunTaskJsonataPropsProperties for ECS Tasks using JSONPath.A builder forEcsRunTaskJsonPathPropsAn implementation forEcsRunTaskJsonPathPropsProperties for ECS Tasks.A builder forEcsRunTaskPropsAn implementation forEcsRunTaskPropsCall a EKS endpoint as a Task.A fluent builder forEksCall.Properties for calling a EKS endpoint with EksCall using JSONata.A builder forEksCallJsonataPropsAn implementation forEksCallJsonataPropsProperties for calling a EKS endpoint with EksCall using JSONPath.A builder forEksCallJsonPathPropsAn implementation forEksCallJsonPathPropsProperties for calling a EKS endpoint with EksCall.A builder forEksCallPropsAn implementation 
forEksCallPropsClass that supports methods which return the EKS cluster name depending on input type.A Step Functions Task to add a Step to an EMR Cluster.A fluent builder forEmrAddStep.Properties for EmrAddStep using JSONata.A builder forEmrAddStepJsonataPropsAn implementation forEmrAddStepJsonataPropsProperties for EmrAddStep using JSONPath.A builder forEmrAddStepJsonPathPropsAn implementation forEmrAddStepJsonPathPropsProperties for EmrAddStep.A builder forEmrAddStepPropsAn implementation forEmrAddStepPropsA Step Functions task to cancel a Step on an EMR Cluster.A fluent builder forEmrCancelStep.Properties for calling an EMR CancelStep using JSONata from your state machine.A builder forEmrCancelStepJsonataPropsAn implementation forEmrCancelStepJsonataPropsProperties for calling an EMR CancelStep using JSONPath from your state machine.A builder forEmrCancelStepJsonPathPropsAn implementation forEmrCancelStepJsonPathPropsProperties for calling an EMR CancelStep from your state machine.A builder forEmrCancelStepPropsAn implementation forEmrCancelStepPropsTask that creates an EMR Containers virtual cluster from an EKS cluster.A fluent builder forEmrContainersCreateVirtualCluster.Properties to define a EMR Containers CreateVirtualCluster Task using JSONata on an EKS cluster.A builder forEmrContainersCreateVirtualClusterJsonataPropsAn implementation forEmrContainersCreateVirtualClusterJsonataPropsProperties to define a EMR Containers CreateVirtualCluster Task using JSONPath on an EKS cluster.A builder forEmrContainersCreateVirtualClusterJsonPathPropsAn implementation forEmrContainersCreateVirtualClusterJsonPathPropsProperties to define a EMR Containers CreateVirtualCluster Task on an EKS cluster.A builder forEmrContainersCreateVirtualClusterPropsAn implementation forEmrContainersCreateVirtualClusterPropsDeletes an EMR Containers virtual cluster as a Task.A fluent builder forEmrContainersDeleteVirtualCluster.Properties to define a EMR Containers DeleteVirtualCluster 
Task using JSONata.A builder forEmrContainersDeleteVirtualClusterJsonataPropsAn implementation forEmrContainersDeleteVirtualClusterJsonataPropsProperties to define a EMR Containers DeleteVirtualCluster Task using JSONPath.A builder forEmrContainersDeleteVirtualClusterJsonPathPropsAn implementation forEmrContainersDeleteVirtualClusterJsonPathPropsProperties to define a EMR Containers DeleteVirtualCluster Task.A builder forEmrContainersDeleteVirtualClusterPropsAn implementation forEmrContainersDeleteVirtualClusterPropsStarts a job run.A fluent builder forEmrContainersStartJobRun.Properties for calling EMR Containers StartJobRun using JSONata.A builder forEmrContainersStartJobRunJsonataPropsAn implementation forEmrContainersStartJobRunJsonataPropsProperties for calling EMR Containers StartJobRun using JSONPath.A builder forEmrContainersStartJobRunJsonPathPropsAn implementation forEmrContainersStartJobRunJsonPathPropsThe props for a EMR Containers StartJobRun Task.A builder forEmrContainersStartJobRunPropsAn implementation forEmrContainersStartJobRunPropsA Step Functions Task to create an EMR Cluster.Properties for the EMR Cluster Applications.A builder forEmrCreateCluster.ApplicationConfigPropertyAn implementation forEmrCreateCluster.ApplicationConfigPropertyAn automatic scaling policy for a core instance group or task instance group in an Amazon EMR cluster.A builder forEmrCreateCluster.AutoScalingPolicyPropertyAn implementation forEmrCreateCluster.AutoScalingPolicyPropertyConfiguration of a bootstrap action.A builder forEmrCreateCluster.BootstrapActionConfigPropertyAn implementation forEmrCreateCluster.BootstrapActionConfigPropertyA fluent builder forEmrCreateCluster.CloudWatch Alarm Comparison Operators.The definition of a CloudWatch metric alarm, which determines when an automatic scaling activity is triggered.A builder forEmrCreateCluster.CloudWatchAlarmDefinitionPropertyAn implementation forEmrCreateCluster.CloudWatchAlarmDefinitionPropertyCloudWatch Alarm 
Statistics.CloudWatch Alarm Units.The unit type for managed scaling policy compute limits.An optional configuration specification to be used when provisioning cluster instances, which can include configurations for applications and software bundled with Amazon EMR.A builder forEmrCreateCluster.ConfigurationPropertyAn implementation forEmrCreateCluster.ConfigurationPropertyConfiguration of requested EBS block device associated with the instance group with count of volumes that will be associated to every instance.A builder forEmrCreateCluster.EbsBlockDeviceConfigPropertyAn implementation forEmrCreateCluster.EbsBlockDeviceConfigPropertyEBS Volume Types.The Amazon EBS configuration of a cluster instance.A builder forEmrCreateCluster.EbsConfigurationPropertyAn implementation forEmrCreateCluster.EbsConfigurationPropertyThe Cluster ScaleDownBehavior specifies the way that individual Amazon EC2 instances terminate when an automatic scale-in activity occurs or an instance group is resized.The configuration that defines an instance fleet.A builder forEmrCreateCluster.InstanceFleetConfigPropertyAn implementation forEmrCreateCluster.InstanceFleetConfigPropertyThe launch specification for On-Demand and Spot instances in the fleet, which determines the defined duration and provisioning timeout behavior, and allocation strategy.An implementation forEmrCreateCluster.InstanceFleetProvisioningSpecificationsPropertyConfiguration defining a new instance group.A builder forEmrCreateCluster.InstanceGroupConfigPropertyAn implementation forEmrCreateCluster.InstanceGroupConfigPropertyEC2 Instance Market.Instance Role Types.A specification of the number and type of Amazon EC2 instances.A builder forEmrCreateCluster.InstancesConfigPropertyAn implementation forEmrCreateCluster.InstancesConfigPropertyAn instance type configuration for each instance type in an instance fleet, which determines the EC2 instances Amazon EMR attempts to provision to fulfill On-Demand and Spot target capacities.A 
builder forEmrCreateCluster.InstanceTypeConfigPropertyAn implementation forEmrCreateCluster.InstanceTypeConfigPropertyAttributes for Kerberos configuration when Kerberos authentication is enabled using a security configuration.A builder forEmrCreateCluster.KerberosAttributesPropertyAn implementation forEmrCreateCluster.KerberosAttributesPropertyThe EC2 unit limits for a managed scaling policy.A builder forEmrCreateCluster.ManagedScalingComputeLimitsPropertyAn implementation forEmrCreateCluster.ManagedScalingComputeLimitsPropertyThe managed scaling policy for an Amazon EMR cluster.A builder forEmrCreateCluster.ManagedScalingPolicyPropertyAn implementation forEmrCreateCluster.ManagedScalingPolicyPropertyA CloudWatch dimension, which is specified using a Key (known as a Name in CloudWatch), Value pair.A builder forEmrCreateCluster.MetricDimensionPropertyAn implementation forEmrCreateCluster.MetricDimensionPropertyOn-Demand Allocation Strategies.The launch specification for On-Demand Instances in the instance fleet, which determines the allocation strategy.An implementation forEmrCreateCluster.OnDemandProvisioningSpecificationPropertyThe Amazon EC2 Availability Zone configuration of the cluster (job flow).A builder forEmrCreateCluster.PlacementTypePropertyAn implementation forEmrCreateCluster.PlacementTypePropertyThe type of adjustment the automatic scaling activity makes when triggered, and the periodicity of the adjustment.A builder forEmrCreateCluster.ScalingActionPropertyAn implementation forEmrCreateCluster.ScalingActionPropertyAutoScaling Adjustment Type.The upper and lower EC2 instance limits for an automatic scaling policy.A builder forEmrCreateCluster.ScalingConstraintsPropertyAn implementation forEmrCreateCluster.ScalingConstraintsPropertyA scale-in or scale-out rule that defines scaling activity, including the CloudWatch metric alarm that triggers activity, how EC2 instances are added or removed, and the periodicity of adjustments.A builder 
forEmrCreateCluster.ScalingRulePropertyAn implementation forEmrCreateCluster.ScalingRulePropertyThe conditions that trigger an automatic scaling activity and the definition of a CloudWatch metric alarm.A builder forEmrCreateCluster.ScalingTriggerPropertyAn implementation forEmrCreateCluster.ScalingTriggerPropertyConfiguration of the script to run during a bootstrap action.A builder forEmrCreateCluster.ScriptBootstrapActionConfigPropertyAn implementation forEmrCreateCluster.ScriptBootstrapActionConfigPropertyAn automatic scaling configuration, which describes how the policy adds or removes instances, the cooldown period, and the number of EC2 instances that will be added each time the CloudWatch metric alarm condition is satisfied.An implementation forEmrCreateCluster.SimpleScalingPolicyConfigurationPropertySpot Allocation Strategies.The launch specification for Spot instances in the instance fleet, which determines the defined duration and provisioning timeout behavior.A builder forEmrCreateCluster.SpotProvisioningSpecificationPropertyAn implementation forEmrCreateCluster.SpotProvisioningSpecificationPropertySpot Timeout Actions.EBS volume specifications such as volume type, IOPS, and size (GiB) that will be requested for the EBS volume attached to an EC2 instance in the cluster.A builder forEmrCreateCluster.VolumeSpecificationPropertyAn implementation forEmrCreateCluster.VolumeSpecificationPropertyProperties for calling an AWS service's API action using JSONata from your state machine across regions.A builder forEmrCreateClusterJsonataPropsAn implementation forEmrCreateClusterJsonataPropsProperties for calling an AWS service's API action using JSONPath from your state machine across regions.A builder forEmrCreateClusterJsonPathPropsAn implementation forEmrCreateClusterJsonPathPropsProperties for calling an AWS service's API action from your state machine across regions.A builder forEmrCreateClusterPropsAn implementation forEmrCreateClusterPropsA Step Functions 
Task to to modify an InstanceFleet on an EMR Cluster.A fluent builder forEmrModifyInstanceFleetByName.Properties for EmrModifyInstanceFleetByName using JSONata.A builder forEmrModifyInstanceFleetByNameJsonataPropsAn implementation forEmrModifyInstanceFleetByNameJsonataPropsProperties for EmrModifyInstanceFleetByName using JSONPath.A builder forEmrModifyInstanceFleetByNameJsonPathPropsAn implementation forEmrModifyInstanceFleetByNameJsonPathPropsProperties for EmrModifyInstanceFleetByName.A builder forEmrModifyInstanceFleetByNamePropsAn implementation forEmrModifyInstanceFleetByNamePropsA Step Functions Task to to modify an InstanceGroup on an EMR Cluster.A fluent builder forEmrModifyInstanceGroupByName.Modify the size or configurations of an instance group.An implementation forEmrModifyInstanceGroupByName.InstanceGroupModifyConfigPropertyCustom policy for requesting termination protection or termination of specific instances when shrinking an instance group.An implementation forEmrModifyInstanceGroupByName.InstanceResizePolicyPropertyPolicy for customizing shrink operations.A builder forEmrModifyInstanceGroupByName.ShrinkPolicyPropertyAn implementation forEmrModifyInstanceGroupByName.ShrinkPolicyPropertyProperties for EmrModifyInstanceGroupByName using JSONata.A builder forEmrModifyInstanceGroupByNameJsonataPropsAn implementation forEmrModifyInstanceGroupByNameJsonataPropsProperties for EmrModifyInstanceGroupByName using JSONPath.A builder forEmrModifyInstanceGroupByNameJsonPathPropsAn implementation forEmrModifyInstanceGroupByNameJsonPathPropsProperties for EmrModifyInstanceGroupByName.A builder forEmrModifyInstanceGroupByNamePropsAn implementation forEmrModifyInstanceGroupByNamePropsA Step Functions Task to to set Termination Protection on an EMR Cluster.A fluent builder forEmrSetClusterTerminationProtection.Properties for EmrSetClusterTerminationProtection using JSONata.A builder forEmrSetClusterTerminationProtectionJsonataPropsAn implementation 
forEmrSetClusterTerminationProtectionJsonataPropsProperties for EmrSetClusterTerminationProtection using JSONPath.A builder forEmrSetClusterTerminationProtectionJsonPathPropsAn implementation forEmrSetClusterTerminationProtectionJsonPathPropsProperties for EmrSetClusterTerminationProtection.A builder forEmrSetClusterTerminationProtectionPropsAn implementation forEmrSetClusterTerminationProtectionPropsA Step Functions Task to terminate an EMR Cluster.A fluent builder forEmrTerminateCluster.Properties for EmrTerminateCluster using JSONata.A builder forEmrTerminateClusterJsonataPropsAn implementation forEmrTerminateClusterJsonataPropsProperties for EmrTerminateCluster using JSONPath.A builder forEmrTerminateClusterJsonPathPropsAn implementation forEmrTerminateClusterJsonPathPropsProperties for EmrTerminateCluster.A builder forEmrTerminateClusterPropsAn implementation forEmrTerminateClusterPropsEncryption Configuration of the S3 bucket.A builder forEncryptionConfigurationAn implementation forEncryptionConfigurationEncryption Options of the S3 bucket.A Step Functions Task to evaluate an expression.A fluent builder forEvaluateExpression.Properties for EvaluateExpression.A builder forEvaluateExpressionPropsAn implementation forEvaluateExpressionPropsA StepFunctions Task to send events to an EventBridge event bus.A fluent builder forEventBridgePutEvents.An entry to be sent to EventBridge.A builder forEventBridgePutEventsEntryAn implementation forEventBridgePutEventsEntryProperties for sending events with PutEvents using JSONata.A builder forEventBridgePutEventsJsonataPropsAn implementation forEventBridgePutEventsJsonataPropsProperties for sending events with PutEvents using JSONPath.A builder forEventBridgePutEventsJsonPathPropsAn implementation forEventBridgePutEventsJsonPathPropsProperties for sending events with PutEvents.A builder forEventBridgePutEventsPropsAn implementation forEventBridgePutEventsPropsCreate a new AWS EventBridge Scheduler schedule.A fluent builder 
forEventBridgeSchedulerCreateScheduleTask.Properties for creating an AWS EventBridge Scheduler schedule using JSONata.A builder forEventBridgeSchedulerCreateScheduleTaskJsonataPropsAn implementation forEventBridgeSchedulerCreateScheduleTaskJsonataPropsProperties for creating an AWS EventBridge Scheduler schedule using JSONPath.A builder forEventBridgeSchedulerCreateScheduleTaskJsonPathPropsAn implementation forEventBridgeSchedulerCreateScheduleTaskJsonPathPropsProperties for creating an AWS EventBridge Scheduler schedule.A builder forEventBridgeSchedulerCreateScheduleTaskPropsAn implementation forEventBridgeSchedulerCreateScheduleTaskPropsThe target that EventBridge Scheduler will invoke.A fluent builder forEventBridgeSchedulerTarget.Properties forEventBridgeSchedulerTarget.A builder forEventBridgeSchedulerTargetPropsAn implementation forEventBridgeSchedulerTargetPropsThe excecution class of the job.Start a Job run as a Task.A fluent builder forGlueDataBrewStartJobRun.Properties for starting a job run with StartJobRun using JSONata.A builder forGlueDataBrewStartJobRunJsonataPropsAn implementation forGlueDataBrewStartJobRunJsonataPropsProperties for starting a job run with StartJobRun using JSONPath.A builder forGlueDataBrewStartJobRunJsonPathPropsAn implementation forGlueDataBrewStartJobRunJsonPathPropsProperties for starting a job run with StartJobRun.A builder forGlueDataBrewStartJobRunPropsAn implementation forGlueDataBrewStartJobRunPropsStarts an AWS Glue Crawler in a Task state.A fluent builder forGlueStartCrawlerRun.Properties for starting an AWS Glue Crawler as a task that using JSONata.A builder forGlueStartCrawlerRunJsonataPropsAn implementation forGlueStartCrawlerRunJsonataPropsProperties for starting an AWS Glue Crawler as a task that using JSONPath.A builder forGlueStartCrawlerRunJsonPathPropsAn implementation forGlueStartCrawlerRunJsonPathPropsProperties for starting an AWS Glue Crawler as a task.A builder forGlueStartCrawlerRunPropsAn implementation 
forGlueStartCrawlerRunPropsStarts an AWS Glue job in a Task state.A fluent builder forGlueStartJobRun.Properties for starting an AWS Glue job as a task.A builder forGlueStartJobRunJsonataPropsAn implementation forGlueStartJobRunJsonataPropsProperties for starting an AWS Glue job as a task.A builder forGlueStartJobRunJsonPathPropsAn implementation forGlueStartJobRunJsonPathPropsProperties for starting an AWS Glue job as a task.A builder forGlueStartJobRunPropsAn implementation forGlueStartJobRunPropsGuradrail settings for BedrockInvokeModel.A Step Functions Task to call a public third-party API.A fluent builder forHttpInvoke.Properties for calling an external HTTP endpoint with HttpInvoke using JSONata.A builder forHttpInvokeJsonataPropsAn implementation forHttpInvokeJsonataPropsProperties for calling an external HTTP endpoint with HttpInvoke using JSONPath.A builder forHttpInvokeJsonPathPropsAn implementation forHttpInvokeJsonPathPropsProperties for calling an external HTTP endpoint with HttpInvoke.A builder forHttpInvokePropsAn implementation forHttpInvokePropsHttp Methods that API Gateway supports.Method type of a EKS call.VPC configuration.Internal default implementation forIBedrockCreateModelCustomizationJobVpcConfig.A proxy class which represents a concrete javascript instance of this type.Configuration of the container used to host the model.Internal default implementation forIContainerDefinition.A proxy class which represents a concrete javascript instance of this type.An Amazon ECS launch type determines the type of infrastructure on which your tasks and services are hosted.Internal default implementation forIEcsLaunchTarget.A proxy class which represents a concrete javascript instance of this type.Input mode that the algorithm supports.Task to train a machine learning model using Amazon SageMaker.Internal default implementation forISageMakerTask.A proxy class which represents a concrete javascript instance of this type.An object representing an AWS Batch 
job dependency.A builder forJobDependencyAn implementation forJobDependencySpecify the driver that the EMR Containers job runs on.A builder forJobDriverAn implementation forJobDriverInvocation type of a Lambda.Invoke a Lambda function as a Task.A fluent builder forLambdaInvoke.Properties for invoking a Lambda function with LambdaInvoke using Jsonata.A builder forLambdaInvokeJsonataPropsAn implementation forLambdaInvokeJsonataPropsProperties for invoking a Lambda function with LambdaInvoke using JsonPath.A builder forLambdaInvokeJsonPathPropsAn implementation forLambdaInvokeJsonPathPropsProperties for invoking a Lambda function with LambdaInvoke.A builder forLambdaInvokePropsAn implementation forLambdaInvokePropsOptions for binding a launch target to an ECS run job task.A builder forLaunchTargetBindOptionsAn implementation forLaunchTargetBindOptionsA Step Functions Task to create a job in MediaConvert.A fluent builder forMediaConvertCreateJob.Properties for creating a MediaConvert Job using JSONata.A builder forMediaConvertCreateJobJsonataPropsAn implementation forMediaConvertCreateJobJsonataPropsProperties for creating a MediaConvert Job using JSONPath.A builder forMediaConvertCreateJobJsonPathPropsAn implementation forMediaConvertCreateJobJsonPathPropsProperties for creating a MediaConvert Job.A builder forMediaConvertCreateJobPropsAn implementation forMediaConvertCreateJobPropsA message attribute to add to the SNS message.A builder forMessageAttributeAn implementation forMessageAttributeThe data type set for the SNS message attributes.Specifies the metric name and regular expressions used to parse algorithm logs.A builder forMetricDefinitionAn implementation forMetricDefinitionSpecifies how many models the container hosts.Configures the timeout and maximum number of retries for processing a transform job invocation.A builder forModelClientOptionsAn implementation forModelClientOptionsConfiguration setting for monitoring.A builder forMonitoringAn implementation 
forMonitoring
- OutputBucketConfiguration — S3 bucket configuration for the output data (includes a builder and an implementation).
- OutputDataConfig — Configures the S3 bucket where SageMaker will save the result of model training (includes a builder and an implementation).
- ProductionVariant — Identifies a model that you want to host and the resources to deploy for hosting it (includes a builder and an implementation).
- QueryExecutionContext — Database and data catalog context in which the query execution occurs (includes a builder and an implementation).
- Define the format of the input data.
- The Amazon EMR release version to use for the job run.
- ResourceConfig — Specifies the resources, ML compute instances, and ML storage volumes to deploy for model training (includes a builder and an implementation).
- ResultConfiguration — Location of the query result along with S3 bucket configuration (includes a builder and an implementation).
- RetryPolicy — The information about the retry policy settings (includes a builder and an implementation).
- S3 Data Distribution Type.
- S3DataSource — S3 location of the channel data (includes a builder and an implementation).
- S3 Data Type.
- Constructs IS3Location objects.
- S3LocationBindOptions — Options for binding an S3 Location (includes a builder and an implementation).
- S3LocationConfig — Stores information about the location of an object in Amazon S3 (includes a builder and an implementation).
- SageMakerCreateEndpoint — A Step Functions Task to create a SageMaker endpoint (includes a fluent builder).
- SageMakerCreateEndpointConfig — A Step Functions Task to create a SageMaker endpoint configuration (includes a fluent builder).
- SageMakerCreateEndpointConfigJsonataProps — Properties for creating an Amazon SageMaker endpoint configuration using JSONata (includes a builder and an implementation).
- SageMakerCreateEndpointConfigJsonPathProps — Properties for creating an Amazon SageMaker endpoint configuration using JSONPath (includes a builder and an implementation).
- SageMakerCreateEndpointConfigProps — Properties for creating an Amazon SageMaker endpoint configuration (includes a builder and an implementation).
- SageMakerCreateEndpointJsonataProps — Properties for creating an Amazon SageMaker endpoint using JSONata (includes a builder and an implementation).
- SageMakerCreateEndpointJsonPathProps — Properties for creating an Amazon SageMaker endpoint using JSONPath (includes a builder and an implementation).
- SageMakerCreateEndpointProps — Properties for creating an Amazon SageMaker endpoint (includes a builder and an implementation).
- SageMakerCreateModel — A Step Functions Task to create a SageMaker model (includes a fluent builder).
- SageMakerCreateModelJsonataProps — Properties for creating an Amazon SageMaker model using JSONata (includes a builder and an implementation).
- SageMakerCreateModelJsonPathProps — Properties for creating an Amazon SageMaker model using JSONPath (includes a builder and an implementation).
- SageMakerCreateModelProps — Properties for creating an Amazon SageMaker model (includes a builder and an implementation).
- SageMakerCreateTrainingJob — Class representing the SageMaker Create Training Job task (includes a fluent builder).
- SageMakerCreateTrainingJobJsonataProps — Properties for creating an Amazon SageMaker training job using JSONata (includes a builder and an implementation).
- SageMakerCreateTrainingJobJsonPathProps — Properties for creating an Amazon SageMaker training job using JSONPath (includes a builder and an implementation).
- SageMakerCreateTrainingJobProps — Properties for creating an Amazon SageMaker training job (includes a builder and an implementation).
- SageMakerCreateTransformJob — Class representing the SageMaker Create Transform Job task (includes a fluent builder).
- SageMakerCreateTransformJobJsonataProps — Properties for creating an Amazon SageMaker transform job task using JSONata (includes a builder and an implementation).
- SageMakerCreateTransformJobJsonPathProps — Properties for creating an Amazon SageMaker transform job task using JSONPath (includes a builder and an implementation).
- SageMakerCreateTransformJobProps — Properties for creating an Amazon SageMaker transform job task (includes a builder and an implementation).
- SageMakerUpdateEndpoint — A Step Functions Task to update a SageMaker endpoint (includes a fluent builder).
- SageMakerUpdateEndpointJsonataProps — Properties for updating an Amazon SageMaker endpoint using JSONata (includes a builder and an implementation).
- SageMakerUpdateEndpointJsonPathProps — Properties for updating an Amazon SageMaker endpoint using JSONPath (includes a builder and an implementation).
- SageMakerUpdateEndpointProps — Properties for updating an Amazon SageMaker endpoint (includes a builder and an implementation).
- Schedule for EventBridge Scheduler.
- ShuffleConfig — Configuration for a shuffle option for input data in a channel (includes a builder and an implementation).
- SnsPublish — A Step Functions Task to publish messages to an SNS topic (includes a fluent builder).
- SnsPublishJsonataProps — Properties for publishing a message to an SNS topic using JSONata (includes a builder and an implementation).
- SnsPublishJsonPathProps — Properties for publishing a message to an SNS topic using JSONPath (includes a builder and an implementation).
- SnsPublishProps — Properties for publishing a message to an SNS topic (includes a builder and an implementation).
- SparkSubmitJobDriver — The information about the job driver for Spark submit (includes a builder and an implementation).
- Method to use to split the transform job's data files into smaller batches.
- SqsSendMessage — A Step Functions Task to send messages to an SQS queue (includes a fluent builder).
- SqsSendMessageJsonataProps — Properties for sending a message to an SQS queue using JSONata (includes a builder and an implementation).
- SqsSendMessageJsonPathProps — Properties for sending a message to an SQS queue using JSONPath (includes a builder and an implementation).
- SqsSendMessageProps — Properties for sending a message to an SQS queue (includes a builder and an implementation).
- StepFunctionsInvokeActivity — A Step Functions Task to invoke an Activity worker (includes a fluent builder).
- StepFunctionsInvokeActivityJsonataProps — Properties for invoking an Activity worker using JSONata (includes a builder and an implementation).
- StepFunctionsInvokeActivityJsonPathProps — Properties for invoking an Activity worker using JSONPath (includes a builder and an implementation).
- StepFunctionsInvokeActivityProps — Properties for invoking an Activity worker (includes a builder and an implementation).
- StepFunctionsStartExecution — A Step Functions Task to call StartExecution on another state machine (includes a fluent builder).
- StepFunctionsStartExecutionJsonataProps — Properties for StartExecution using JSONata (includes a builder and an implementation).
- StepFunctionsStartExecutionJsonPathProps — Properties for StartExecution using JSONPath (includes a builder and an implementation).
- StepFunctionsStartExecutionProps — Properties for StartExecution (includes a builder and an implementation).
- StoppingCondition — Specifies a limit to how long a model training job can run (includes a builder and an implementation).
- TaskEnvironmentVariable — An environment variable to be set in the container run as a task (includes a builder and an implementation).
- TrainingBucketConfiguration — S3 bucket configuration for the training data (includes a builder and an implementation).
- TransformDataSource — S3 location of the input data that the model can consume (includes a builder and an implementation).
- TransformInput — Dataset to be transformed and the Amazon S3 location where it is stored (includes a builder and an implementation).
- TransformOutput — S3 location where you want Amazon SageMaker to save the results from the transform job (includes a builder and an implementation).
- TransformResources — ML compute instances for the transform job (includes a builder and an implementation).
- TransformS3DataSource — Location of the channel data (includes a builder and an implementation).
- The style used when applying URL encoding to array values.
- ValidationBucketConfiguration — S3 bucket configuration for the validation data (includes a builder and an implementation).
- Class that returns a virtual cluster's ID depending on input type.
- VpcConfig — Specifies the VPC that you want your Amazon SageMaker training job to connect to (includes a builder and an implementation).
- WorkerConfigurationProperty — Properties for the worker configuration (includes a builder and an implementation).
- Deprecated.
- The type of predefined worker that is allocated when a job runs.
- workerTypeV2 — property for WorkerConfigurationProperty.
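The task classes in this index all share the fluent-builder pattern shown in the examples above. As an illustrative sketch (not the only way to configure these tasks), a SqsSendMessage state that forwards part of the state input to a queue might look like this; the construct IDs and the `$.message` path are assumptions for the example:

```java
import software.amazon.awscdk.services.sqs.Queue;
import software.amazon.awscdk.services.stepfunctions.TaskInput;
import software.amazon.awscdk.services.stepfunctions.tasks.SqsSendMessage;

// A queue defined elsewhere in the same stack (hypothetical construct ID).
Queue queue = new Queue(this, "Queue");

// Send the value found at $.message in the state input to the queue,
// writing the service response under $.sendResult.
SqsSendMessage sendToQueue = SqsSendMessage.Builder.create(this, "Send to queue")
        .queue(queue)
        .messageBody(TaskInput.fromJsonPathAt("$.message"))
        .resultPath("$.sendResult")
        .build();
```

Like the SnsPublish example above, the task can then be chained into a state machine definition with `.next(...)`.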