Package software.amazon.awscdk.services.stepfunctions.tasks
Tasks for AWS Step Functions
---
AWS CDK v1 has reached End-of-Support on 2023-06-01. This package is no longer being updated, and users should migrate to AWS CDK v2.
For more information on how to migrate, see the Migrating to AWS CDK v2 guide.
AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly.
A Task state represents a single unit of work performed by a state machine. All work in your state machine is performed by tasks.
This module is part of the AWS Cloud Development Kit project.
Table Of Contents

- Task
- Paths
  - InputPath
  - OutputPath
  - ResultSelector
  - ResultPath
- Task parameters from the state JSON
- Evaluate Expression
- API Gateway
- AWS SDK
- Athena
- Batch
- CodeBuild
- DynamoDB
- ECS
- EMR
- EMR on EKS
- EKS
- EventBridge
- Glue
- Glue DataBrew
- Lambda
- SageMaker
- SNS
- Step Functions
- SQS
Task
A Task state represents a single unit of work performed by a state machine. In the CDK, the exact work to be done is determined by a class that implements IStepFunctionsTask.
AWS Step Functions integrates with some AWS services so that you can call API actions, and coordinate executions directly from the Amazon States Language in Step Functions. You can directly call and pass parameters to the APIs of those services.
Paths
In the Amazon States Language, a path is a string beginning with $ that you can use to identify components within JSON text.
Learn more about input and output processing in Step Functions here.
InputPath
Both InputPath and Parameters fields provide a way to manipulate JSON as it moves through your workflow. AWS Step Functions applies the InputPath field first, and then the Parameters field. You can first filter your raw input to a selection you want using InputPath, and then apply Parameters to manipulate that input further, or add new values. If you don't specify an InputPath, a default value of $ will be used.
The following example provides the field named input as the input to the Task state that runs a Lambda function.
Function fn; LambdaInvoke submitJob = LambdaInvoke.Builder.create(this, "Invoke Handler") .lambdaFunction(fn) .inputPath("$.input") .build();
OutputPath
Tasks also allow you to select a portion of the state output to pass to the next state. This enables you to filter out unwanted information, and pass only the portion of the JSON that you care about. If you don't specify an OutputPath, a default value of $ will be used. This passes the entire JSON node to the next state.
The response from a Lambda function includes the response from the function as well as other metadata.
The following example assigns the output from the Task to a field named result.
Function fn; LambdaInvoke submitJob = LambdaInvoke.Builder.create(this, "Invoke Handler") .lambdaFunction(fn) .outputPath("$.Payload.result") .build();
ResultSelector
You can use ResultSelector to manipulate the raw result of a Task, Map or Parallel state before it is passed to ResultPath. For service integrations, the raw result contains metadata in addition to the response payload. You can use ResultSelector to construct a JSON payload that becomes the effective result using static values or references to the raw result or context object.
The following example extracts the output payload of a Lambda function Task and combines it with some static values and the state name from the context object.
Function fn; LambdaInvoke.Builder.create(this, "Invoke Handler") .lambdaFunction(fn) .resultSelector(Map.of( "lambdaOutput", JsonPath.stringAt("$.Payload"), "invokeRequestId", JsonPath.stringAt("$.SdkResponseMetadata.RequestId"), "staticValue", Map.of( "foo", "bar"), "stateName", JsonPath.stringAt("$.State.Name"))) .build();
ResultPath
The output of a state can be a copy of its input, the result it produces (for example, output from a Task state's Lambda function), or a combination of its input and result. Use ResultPath to control which combination of these is passed to the state output. If you don't specify a ResultPath, a default value of $ will be used.
The following example adds the item from calling DynamoDB's putItem API to the state input and passes it to the next state.
Table myTable; DynamoPutItem.Builder.create(this, "PutItem") .item(Map.of( "MessageId", DynamoAttributeValue.fromString("message-id"))) .table(myTable) .resultPath("$.Item") .build();
⚠️ The OutputPath is computed after applying ResultPath. All service integrations return metadata as part of their response. When using ResultPath, it's not possible to merge a subset of the task output to the input.
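To make the interplay concrete, here is a minimal sketch (the function fn and the field name status are assumptions): writing the result to $.status via ResultPath keeps the full state input alongside the result, whereas additionally selecting $.status via OutputPath would discard the original input rather than merge into it.

Function fn;

LambdaInvoke.Builder.create(this, "Invoke and keep input")
        .lambdaFunction(fn)
        // The full invocation result (payload plus metadata) is written
        // under $.status; the rest of the state input passes through.
        .resultPath("$.status")
        // OutputPath is applied after ResultPath: selecting "$.status" here
        // would drop the original input instead of merging a subset into it.
        .build();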
Task parameters from the state JSON
Most tasks take parameters. Parameter values can either be static, supplied directly in the workflow definition (by specifying their values), or a value available at runtime in the state machine's execution (either as its input or an output of a prior state). Parameter values available at runtime can be specified via the JsonPath class, using methods such as JsonPath.stringAt().
The following example provides the field named input as the input to the Lambda function and invokes it asynchronously.
Function fn; LambdaInvoke submitJob = LambdaInvoke.Builder.create(this, "Invoke Handler") .lambdaFunction(fn) .payload(TaskInput.fromJsonPathAt("$.input")) .invocationType(LambdaInvocationType.EVENT) .build();
You can also use intrinsic functions available on JsonPath, for example JsonPath.format().
Here is an example of starting an Athena query that is dynamically created using the task input:
AthenaStartQueryExecution startQueryExecutionJob = AthenaStartQueryExecution.Builder.create(this, "Athena Start Query") .queryString(JsonPath.format("select contacts where year={};", JsonPath.stringAt("$.year"))) .queryExecutionContext(QueryExecutionContext.builder() .databaseName("interactions") .build()) .resultConfiguration(ResultConfiguration.builder() .encryptionConfiguration(EncryptionConfiguration.builder() .encryptionOption(EncryptionOption.S3_MANAGED) .build()) .outputLocation(Location.builder() .bucketName("mybucket") .objectKey("myprefix") .build()) .build()) .integrationPattern(IntegrationPattern.RUN_JOB) .build();
Each service integration has its own set of parameters that can be supplied.
Evaluate Expression
Use the EvaluateExpression to perform simple operations referencing state paths. The expression referenced in the task will be evaluated in a Lambda function (eval()). This allows you to avoid writing Lambda code for simple operations.
Example: convert a wait time from milliseconds to seconds, concat this in a message and wait:
EvaluateExpression convertToSeconds = EvaluateExpression.Builder.create(this, "Convert to seconds")
        .expression("$.waitMilliseconds / 1000")
        .resultPath("$.waitSeconds")
        .build();

EvaluateExpression createMessage = EvaluateExpression.Builder.create(this, "Create message")
        // Note: this is a string inside a string.
        .expression("`Now waiting ${$.waitSeconds} seconds...`")
        .runtime(Runtime.NODEJS_14_X)
        .resultPath("$.message")
        .build();

SnsPublish publishMessage = SnsPublish.Builder.create(this, "Publish message")
        .topic(new Topic(this, "cool-topic"))
        .message(TaskInput.fromJsonPathAt("$.message"))
        .resultPath("$.sns")
        .build();

Wait wait = Wait.Builder.create(this, "Wait")
        .time(WaitTime.secondsPath("$.waitSeconds"))
        .build();

StateMachine.Builder.create(this, "StateMachine")
        .definition(convertToSeconds.next(createMessage).next(publishMessage).next(wait))
        .build();
The EvaluateExpression supports a runtime prop to specify the Lambda runtime to use to evaluate the expression. Currently, only runtimes of the Node.js family are supported.
API Gateway
Step Functions supports API Gateway through the service integration pattern.
HTTP APIs are designed for low-latency, cost-effective integrations with AWS services, including AWS Lambda, and HTTP endpoints. HTTP APIs support OIDC and OAuth 2.0 authorization, and come with built-in support for CORS and automatic deployments. Previous-generation REST APIs currently offer more features. More details can be found here.
Call REST API Endpoint
The CallApiGatewayRestApiEndpoint calls the REST API endpoint.
import software.amazon.awscdk.services.apigateway.*; RestApi restApi = new RestApi(this, "MyRestApi"); CallApiGatewayRestApiEndpoint invokeTask = CallApiGatewayRestApiEndpoint.Builder.create(this, "Call REST API") .api(restApi) .stageName("prod") .method(HttpMethod.GET) .build();
Be aware that the header values must be arrays. When passing the Task Token in the headers field of a WAIT_FOR_TASK_TOKEN integration, use JsonPath.array() to wrap the token in an array:
import software.amazon.awscdk.services.apigateway.*; RestApi api; CallApiGatewayRestApiEndpoint.Builder.create(this, "Endpoint") .api(api) .stageName("Stage") .method(HttpMethod.PUT) .integrationPattern(IntegrationPattern.WAIT_FOR_TASK_TOKEN) .headers(TaskInput.fromObject(Map.of( "TaskToken", JsonPath.array(JsonPath.getTaskToken())))) .build();
Call HTTP API Endpoint
The CallApiGatewayHttpApiEndpoint calls the HTTP API endpoint.
import software.amazon.awscdk.services.apigatewayv2.*; HttpApi httpApi = new HttpApi(this, "MyHttpApi"); CallApiGatewayHttpApiEndpoint invokeTask = CallApiGatewayHttpApiEndpoint.Builder.create(this, "Call HTTP API") .apiId(httpApi.getApiId()) .apiStack(Stack.of(httpApi)) .method(HttpMethod.GET) .build();
AWS SDK
Step Functions supports calling AWS services' API actions through the service integration pattern.
You can use Step Functions' AWS SDK integrations to call any of the over two hundred AWS services directly from your state machine, giving you access to over nine thousand API actions.
Bucket myBucket; CallAwsService getObject = CallAwsService.Builder.create(this, "GetObject") .service("s3") .action("getObject") .parameters(Map.of( "Bucket", myBucket.getBucketName(), "Key", JsonPath.stringAt("$.key"))) .iamResources(List.of(myBucket.arnForObjects("*"))) .build();
Use camelCase for actions and PascalCase for parameter names.
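To illustrate the convention, here is a hedged sketch (the table and key shape are assumptions) that calls DynamoDB's getItem action: the action name is camelCase, while the TableName and Key parameters are PascalCase.

Table myTable;

CallAwsService getItem = CallAwsService.Builder.create(this, "GetItem via SDK")
        .service("dynamodb")
        .action("getItem") // camelCase action name
        .parameters(Map.of(
                "TableName", myTable.getTableName(), // PascalCase parameter names
                "Key", Map.of(
                        "MessageId", Map.of("S", JsonPath.stringAt("$.messageId")))))
        .iamResources(List.of(myTable.getTableArn()))
        .build();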
The task automatically adds an IAM statement to the state machine role's policy based on the service and action called. The resources for this statement must be specified in iamResources.
Use the iamAction prop to manually specify the IAM action name in the case where the IAM action name does not match with the API service/action name:
CallAwsService listBuckets = CallAwsService.Builder.create(this, "ListBuckets") .service("s3") .action("listBuckets") .iamResources(List.of("*")) .iamAction("s3:ListAllMyBuckets") .build();
Athena
Step Functions supports Athena through the service integration pattern.
StartQueryExecution
The StartQueryExecution API runs the SQL query statement.
AthenaStartQueryExecution startQueryExecutionJob = AthenaStartQueryExecution.Builder.create(this, "Start Athena Query") .queryString(JsonPath.stringAt("$.queryString")) .queryExecutionContext(QueryExecutionContext.builder() .databaseName("mydatabase") .build()) .resultConfiguration(ResultConfiguration.builder() .encryptionConfiguration(EncryptionConfiguration.builder() .encryptionOption(EncryptionOption.S3_MANAGED) .build()) .outputLocation(Location.builder() .bucketName("query-results-bucket") .objectKey("folder") .build()) .build()) .build();
GetQueryExecution
The GetQueryExecution API gets information about a single execution of a query.
AthenaGetQueryExecution getQueryExecutionJob = AthenaGetQueryExecution.Builder.create(this, "Get Query Execution") .queryExecutionId(JsonPath.stringAt("$.QueryExecutionId")) .build();
GetQueryResults
The GetQueryResults API streams the results of a single query execution specified by QueryExecutionId from S3.
AthenaGetQueryResults getQueryResultsJob = AthenaGetQueryResults.Builder.create(this, "Get Query Results") .queryExecutionId(JsonPath.stringAt("$.QueryExecutionId")) .build();
StopQueryExecution
The StopQueryExecution API stops a query execution.
AthenaStopQueryExecution stopQueryExecutionJob = AthenaStopQueryExecution.Builder.create(this, "Stop Query Execution") .queryExecutionId(JsonPath.stringAt("$.QueryExecutionId")) .build();
Batch
Step Functions supports Batch through the service integration pattern.
SubmitJob
The SubmitJob API submits an AWS Batch job from a job definition.
import software.amazon.awscdk.services.batch.*; JobDefinition batchJobDefinition; JobQueue batchQueue; BatchSubmitJob task = BatchSubmitJob.Builder.create(this, "Submit Job") .jobDefinitionArn(batchJobDefinition.getJobDefinitionArn()) .jobName("MyJob") .jobQueueArn(batchQueue.getJobQueueArn()) .build();
CodeBuild
Step Functions supports CodeBuild through the service integration pattern.
StartBuild
StartBuild starts a CodeBuild Project by Project Name.
import software.amazon.awscdk.services.codebuild.*; Project codebuildProject = Project.Builder.create(this, "Project") .projectName("MyTestProject") .buildSpec(BuildSpec.fromObject(Map.of( "version", "0.2", "phases", Map.of( "build", Map.of( "commands", List.of("echo \"Hello, CodeBuild!\"")))))) .build(); CodeBuildStartBuild task = CodeBuildStartBuild.Builder.create(this, "Task") .project(codebuildProject) .integrationPattern(IntegrationPattern.RUN_JOB) .environmentVariablesOverride(Map.of( "ZONE", BuildEnvironmentVariable.builder() .type(BuildEnvironmentVariableType.PLAINTEXT) .value(JsonPath.stringAt("$.envVariables.zone")) .build())) .build();
DynamoDB
You can call DynamoDB APIs from a Task state.
Read more about calling DynamoDB APIs here.
GetItem
The GetItem operation returns a set of attributes for the item with the given primary key.
Table myTable; DynamoGetItem.Builder.create(this, "Get Item") .key(Map.of("messageId", DynamoAttributeValue.fromString("message-007"))) .table(myTable) .build();
PutItem
The PutItem operation creates a new item, or replaces an old item with a new item.
Table myTable; DynamoPutItem.Builder.create(this, "PutItem") .item(Map.of( "MessageId", DynamoAttributeValue.fromString("message-007"), "Text", DynamoAttributeValue.fromString(JsonPath.stringAt("$.bar")), "TotalCount", DynamoAttributeValue.fromNumber(10))) .table(myTable) .build();
DeleteItem
The DeleteItem operation deletes a single item in a table by primary key.
Table myTable; DynamoDeleteItem.Builder.create(this, "DeleteItem") .key(Map.of("MessageId", DynamoAttributeValue.fromString("message-007"))) .table(myTable) .resultPath(JsonPath.DISCARD) .build();
UpdateItem
The UpdateItem operation edits an existing item's attributes, or adds a new item to the table if it does not already exist.
Table myTable; DynamoUpdateItem.Builder.create(this, "UpdateItem") .key(Map.of( "MessageId", DynamoAttributeValue.fromString("message-007"))) .table(myTable) .expressionAttributeValues(Map.of( ":val", DynamoAttributeValue.numberFromString(JsonPath.stringAt("$.Item.TotalCount.N")), ":rand", DynamoAttributeValue.fromNumber(20))) .updateExpression("SET TotalCount = :val + :rand") .build();
ECS
Step Functions supports ECS/Fargate through the service integration pattern.
RunTask
RunTask starts a new task using the specified task definition.
EC2
The EC2 launch type allows you to run your containerized applications on a cluster of Amazon EC2 instances that you manage.
When a task that uses the EC2 launch type is launched, Amazon ECS must determine where to place the task based on the requirements specified in the task definition, such as CPU and memory. Similarly, when you scale down the task count, Amazon ECS must determine which tasks to terminate. You can apply task placement strategies and constraints to customize how Amazon ECS places and terminates tasks. Learn more about task placement
The latest ACTIVE revision of the passed task definition is used for running the task.
The following example runs a job from a task definition on EC2.
IVpc vpc = Vpc.fromLookup(this, "Vpc", VpcLookupOptions.builder() .isDefault(true) .build()); Cluster cluster = Cluster.Builder.create(this, "Ec2Cluster").vpc(vpc).build(); cluster.addCapacity("DefaultAutoScalingGroup", AddCapacityOptions.builder() .instanceType(new InstanceType("t2.micro")) .vpcSubnets(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build()) .build()); TaskDefinition taskDefinition = TaskDefinition.Builder.create(this, "TD") .compatibility(Compatibility.EC2) .build(); taskDefinition.addContainer("TheContainer", ContainerDefinitionOptions.builder() .image(ContainerImage.fromRegistry("foo/bar")) .memoryLimitMiB(256) .build()); EcsRunTask runTask = EcsRunTask.Builder.create(this, "Run") .integrationPattern(IntegrationPattern.RUN_JOB) .cluster(cluster) .taskDefinition(taskDefinition) .launchTarget(EcsEc2LaunchTarget.Builder.create() .placementStrategies(List.of(PlacementStrategy.spreadAcrossInstances(), PlacementStrategy.packedByCpu(), PlacementStrategy.randomly())) .placementConstraints(List.of(PlacementConstraint.memberOf("blieptuut"))) .build()) .build();
Fargate
AWS Fargate is a serverless compute engine for containers that works with Amazon Elastic Container Service (ECS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Learn more about Fargate
The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you. The latest ACTIVE revision of the passed task definition is used for running the task. Learn more about Fargate Versioning
The following example runs a job from a task definition on Fargate.
IVpc vpc = Vpc.fromLookup(this, "Vpc", VpcLookupOptions.builder() .isDefault(true) .build()); Cluster cluster = Cluster.Builder.create(this, "FargateCluster").vpc(vpc).build(); TaskDefinition taskDefinition = TaskDefinition.Builder.create(this, "TD") .memoryMiB("512") .cpu("256") .compatibility(Compatibility.FARGATE) .build(); ContainerDefinition containerDefinition = taskDefinition.addContainer("TheContainer", ContainerDefinitionOptions.builder() .image(ContainerImage.fromRegistry("foo/bar")) .memoryLimitMiB(256) .build()); EcsRunTask runTask = EcsRunTask.Builder.create(this, "RunFargate") .integrationPattern(IntegrationPattern.RUN_JOB) .cluster(cluster) .taskDefinition(taskDefinition) .assignPublicIp(true) .containerOverrides(List.of(ContainerOverride.builder() .containerDefinition(containerDefinition) .environment(List.of(TaskEnvironmentVariable.builder().name("SOME_KEY").value(JsonPath.stringAt("$.SomeKey")).build())) .build())) .launchTarget(new EcsFargateLaunchTarget()) .build();
EMR
Step Functions supports Amazon EMR through the service integration pattern. The service integration APIs correspond to Amazon EMR APIs but differ in the parameters that are used.
Read more about the differences when using these service integrations.
Create Cluster
Creates and starts running a cluster (job flow).
Corresponds to the runJobFlow API in EMR.
Role clusterRole = Role.Builder.create(this, "ClusterRole")
        .assumedBy(new ServicePrincipal("ec2.amazonaws.com"))
        .build();

Role serviceRole = Role.Builder.create(this, "ServiceRole")
        .assumedBy(new ServicePrincipal("elasticmapreduce.amazonaws.com"))
        .build();

Role autoScalingRole = Role.Builder.create(this, "AutoScalingRole")
        .assumedBy(new ServicePrincipal("elasticmapreduce.amazonaws.com"))
        .build();

autoScalingRole.getAssumeRolePolicy().addStatements(
        PolicyStatement.Builder.create()
                .effect(Effect.ALLOW)
                .principals(List.of(
                        new ServicePrincipal("application-autoscaling.amazonaws.com")))
                .actions(List.of("sts:AssumeRole"))
                .build());

EmrCreateCluster.Builder.create(this, "Create Cluster")
        .instances(InstancesConfigProperty.builder().build())
        .clusterRole(clusterRole)
        .name(TaskInput.fromJsonPathAt("$.ClusterName").getValue())
        .serviceRole(serviceRole)
        .autoScalingRole(autoScalingRole)
        .build();
If you want to run multiple steps in parallel, you can specify the stepConcurrencyLevel property. The concurrency range is between 1 and 256 inclusive, where the default concurrency of 1 means no step concurrency is allowed. stepConcurrencyLevel requires the EMR release label to be 5.28.0 or above.
EmrCreateCluster.Builder.create(this, "Create Cluster") .instances(InstancesConfigProperty.builder().build()) .name(TaskInput.fromJsonPathAt("$.ClusterName").getValue()) .stepConcurrencyLevel(10) .build();
Termination Protection
Locks a cluster (job flow) so the EC2 instances in the cluster cannot be terminated by user intervention, an API call, or a job-flow error.
Corresponds to the setTerminationProtection API in EMR.
EmrSetClusterTerminationProtection.Builder.create(this, "Task") .clusterId("ClusterId") .terminationProtected(false) .build();
Terminate Cluster
Shuts down a cluster (job flow).
Corresponds to the terminateJobFlows API in EMR.
EmrTerminateCluster.Builder.create(this, "Task") .clusterId("ClusterId") .build();
Add Step
Adds a new step to a running cluster.
Corresponds to the addJobFlowSteps API in EMR.
EmrAddStep.Builder.create(this, "Task") .clusterId("ClusterId") .name("StepName") .jar("Jar") .actionOnFailure(ActionOnFailure.CONTINUE) .build();
Cancel Step
Cancels a pending step in a running cluster.
Corresponds to the cancelSteps API in EMR.
EmrCancelStep.Builder.create(this, "Task") .clusterId("ClusterId") .stepId("StepId") .build();
Modify Instance Fleet
Modifies the target On-Demand and target Spot capacities for the instance fleet with the specified InstanceFleetName.
Corresponds to the modifyInstanceFleet API in EMR.
EmrModifyInstanceFleetByName.Builder.create(this, "Task") .clusterId("ClusterId") .instanceFleetName("InstanceFleetName") .targetOnDemandCapacity(2) .targetSpotCapacity(0) .build();
Modify Instance Group
Modifies the number of nodes and configuration settings of an instance group.
Corresponds to the modifyInstanceGroups API in EMR.
EmrModifyInstanceGroupByName.Builder.create(this, "Task") .clusterId("ClusterId") .instanceGroupName(JsonPath.stringAt("$.InstanceGroupName")) .instanceGroup(InstanceGroupModifyConfigProperty.builder() .instanceCount(1) .build()) .build();
EMR on EKS
Step Functions supports Amazon EMR on EKS through the service integration pattern. The service integration APIs correspond to Amazon EMR on EKS APIs, but differ in the parameters that are used.
Read more about the differences when using these service integrations.
Setting up the EKS cluster is required.
Create Virtual Cluster
The CreateVirtualCluster API creates a single virtual cluster that's mapped to a single Kubernetes namespace.
The EKS cluster containing the Kubernetes namespace where the virtual cluster will be mapped can be passed in from the task input.
EmrContainersCreateVirtualCluster.Builder.create(this, "Create a Virtual Cluster") .eksCluster(EksClusterInput.fromTaskInput(TaskInput.fromText("clusterId"))) .build();
The EKS cluster can also be passed in directly.
import software.amazon.awscdk.services.eks.*; Cluster eksCluster; EmrContainersCreateVirtualCluster.Builder.create(this, "Create a Virtual Cluster") .eksCluster(EksClusterInput.fromCluster(eksCluster)) .build();
By default, the Kubernetes namespace that a virtual cluster maps to is "default", but a specific namespace within an EKS cluster can be selected.
EmrContainersCreateVirtualCluster.Builder.create(this, "Create a Virtual Cluster") .eksCluster(EksClusterInput.fromTaskInput(TaskInput.fromText("clusterId"))) .eksNamespace("specified-namespace") .build();
Delete Virtual Cluster
The DeleteVirtualCluster API deletes a virtual cluster.
EmrContainersDeleteVirtualCluster.Builder.create(this, "Delete a Virtual Cluster") .virtualClusterId(TaskInput.fromJsonPathAt("$.virtualCluster")) .build();
Start Job Run
The StartJobRun API starts a job run. A job is a unit of work that you submit to Amazon EMR on EKS for execution. The work performed by the job can be defined by a Spark jar, PySpark script, or SparkSQL query. A job run is an execution of the job on the virtual cluster.
Required setup:
- If not done already, follow the steps to setup EMR on EKS and create an EKS Cluster.
- Enable Cluster access
- Enable IAM Role access
The following actions must be performed if the virtual cluster ID is supplied from the task input. Otherwise, if it is supplied statically in the state machine definition, these actions will be done automatically.
- Create an IAM role
- Update the Role Trust Policy of the Job Execution Role.
The job can be configured with spark submit parameters:
EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run") .virtualCluster(VirtualClusterInput.fromVirtualClusterId("de92jdei2910fwedz")) .releaseLabel(ReleaseLabel.EMR_6_2_0) .jobDriver(JobDriver.builder() .sparkSubmitJobDriver(SparkSubmitJobDriver.builder() .entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py")) .sparkSubmitParameters("--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1") .build()) .build()) .build();
Configuring the job can also be done via application configuration:
EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run") .virtualCluster(VirtualClusterInput.fromVirtualClusterId("de92jdei2910fwedz")) .releaseLabel(ReleaseLabel.EMR_6_2_0) .jobName("EMR-Containers-Job") .jobDriver(JobDriver.builder() .sparkSubmitJobDriver(SparkSubmitJobDriver.builder() .entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py")) .build()) .build()) .applicationConfig(List.of(ApplicationConfiguration.builder() .classification(Classification.SPARK_DEFAULTS) .properties(Map.of( "spark.executor.instances", "1", "spark.executor.memory", "512M")) .build())) .build();
Job monitoring can be enabled if monitoring.logging is set to true. This automatically generates an S3 bucket and CloudWatch logs.
EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run") .virtualCluster(VirtualClusterInput.fromVirtualClusterId("de92jdei2910fwedz")) .releaseLabel(ReleaseLabel.EMR_6_2_0) .jobDriver(JobDriver.builder() .sparkSubmitJobDriver(SparkSubmitJobDriver.builder() .entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py")) .sparkSubmitParameters("--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1") .build()) .build()) .monitoring(Monitoring.builder() .logging(true) .build()) .build();
Alternatively, you can provide monitoring for jobs with existing log groups and log buckets.
import software.amazon.awscdk.services.logs.*; LogGroup logGroup = new LogGroup(this, "Log Group"); Bucket logBucket = new Bucket(this, "S3 Bucket"); EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run") .virtualCluster(VirtualClusterInput.fromVirtualClusterId("de92jdei2910fwedz")) .releaseLabel(ReleaseLabel.EMR_6_2_0) .jobDriver(JobDriver.builder() .sparkSubmitJobDriver(SparkSubmitJobDriver.builder() .entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py")) .sparkSubmitParameters("--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1") .build()) .build()) .monitoring(Monitoring.builder() .logGroup(logGroup) .logBucket(logBucket) .build()) .build();
Users can provide their own existing Job Execution Role.
EmrContainersStartJobRun.Builder.create(this, "EMR Containers Start Job Run") .virtualCluster(VirtualClusterInput.fromTaskInput(TaskInput.fromJsonPathAt("$.VirtualClusterId"))) .releaseLabel(ReleaseLabel.EMR_6_2_0) .jobName("EMR-Containers-Job") .executionRole(Role.fromRoleArn(this, "Job-Execution-Role", "arn:aws:iam::xxxxxxxxxxxx:role/JobExecutionRole")) .jobDriver(JobDriver.builder() .sparkSubmitJobDriver(SparkSubmitJobDriver.builder() .entryPoint(TaskInput.fromText("local:///usr/lib/spark/examples/src/main/python/pi.py")) .sparkSubmitParameters("--conf spark.executor.instances=2 --conf spark.executor.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1") .build()) .build()) .build();
EKS
Step Functions supports Amazon EKS through the service integration pattern. The service integration APIs correspond to Amazon EKS APIs.
Read more about the differences when using these service integrations.
Call
Read and write Kubernetes resource objects via a Kubernetes API endpoint.
Corresponds to the call API in Step Functions Connector.
The following code snippet includes a Task state that uses eks:call to list the pods.
import software.amazon.awscdk.services.eks.*; Cluster myEksCluster = Cluster.Builder.create(this, "my sample cluster") .version(KubernetesVersion.V1_18) .clusterName("myEksCluster") .build(); EksCall.Builder.create(this, "Call a EKS Endpoint") .cluster(myEksCluster) .httpMethod(HttpMethods.GET) .httpPath("/api/v1/namespaces/default/pods") .build();
EventBridge
Step Functions supports Amazon EventBridge through the service integration pattern. The service integration APIs correspond to Amazon EventBridge APIs.
Read more about the differences when using these service integrations.
Put Events
Send events to an EventBridge bus.
Corresponds to the put-events API in Step Functions Connector.
The following code snippet includes a Task state that uses events:putEvents to send an event to a custom event bus.
import software.amazon.awscdk.services.events.*; EventBus myEventBus = EventBus.Builder.create(this, "EventBus") .eventBusName("MyEventBus1") .build(); EventBridgePutEvents.Builder.create(this, "Send an event to EventBridge") .entries(List.of(EventBridgePutEventsEntry.builder() .detail(TaskInput.fromObject(Map.of( "Message", "Hello from Step Functions!"))) .eventBus(myEventBus) .detailType("MessageFromStepFunctions") .source("step.functions") .build())) .build();
Glue
Step Functions supports AWS Glue through the service integration pattern.
You can call the StartJobRun API from a Task state.
GlueStartJobRun.Builder.create(this, "Task") .glueJobName("my-glue-job") .arguments(TaskInput.fromObject(Map.of( "key", "value"))) .timeout(Duration.minutes(30)) .notifyDelayAfter(Duration.minutes(5)) .build();
Glue DataBrew
Step Functions supports AWS Glue DataBrew through the service integration pattern.
You can call the StartJobRun API from a Task state.
GlueDataBrewStartJobRun.Builder.create(this, "Task") .name("databrew-job") .build();
Lambda
Invoke a Lambda function.
You can specify the input to your Lambda function through the payload attribute. By default, Step Functions invokes the Lambda function with the state input (JSON path '$') as the input.
The following snippet invokes a Lambda Function with the state input as the payload by referencing the $ path.
Function fn; LambdaInvoke.Builder.create(this, "Invoke with state input") .lambdaFunction(fn) .build();
When a function is invoked, the Lambda service sends these response elements back.
⚠️ The response from the Lambda function is in an attribute called Payload.
The following snippet invokes a Lambda Function by referencing the $.Payload path to use the output of a Lambda function executed before it.
Function fn;

LambdaInvoke.Builder.create(this, "Invoke with empty object as payload")
        .lambdaFunction(fn)
        .payload(TaskInput.fromObject(Map.of()))
        .build();

// use the output of fn as input
LambdaInvoke.Builder.create(this, "Invoke with payload field in the state input")
        .lambdaFunction(fn)
        .payload(TaskInput.fromJsonPathAt("$.Payload"))
        .build();
The following snippet invokes a Lambda and sets the task output to only include the Lambda function response.
Function fn; LambdaInvoke.Builder.create(this, "Invoke and set function response as task output") .lambdaFunction(fn) .outputPath("$.Payload") .build();
If you want to combine the input and the Lambda function response you can use the payloadResponseOnly property and specify the resultPath. This will put the Lambda function ARN directly in the "Resource" string, but it conflicts with the integrationPattern, invocationType, clientContext, and qualifier properties.
Function fn; LambdaInvoke.Builder.create(this, "Invoke and combine function response with task input") .lambdaFunction(fn) .payloadResponseOnly(true) .resultPath("$.fn") .build();
You can have Step Functions pause a task, and wait for an external process to return a task token. Read more about the callback pattern.
To use the callback pattern, set the token property on the task. Call the Step Functions SendTaskSuccess or SendTaskFailure APIs with the token to indicate that the task has completed and the state machine should resume execution.
The following snippet invokes a Lambda with the task token as part of the input to the Lambda.
Function fn; LambdaInvoke.Builder.create(this, "Invoke with callback") .lambdaFunction(fn) .integrationPattern(IntegrationPattern.WAIT_FOR_TASK_TOKEN) .payload(TaskInput.fromObject(Map.of( "token", JsonPath.getTaskToken(), "input", JsonPath.stringAt("$.someField")))) .build();
⚠️ The task will pause until it receives that task token back with a SendTaskSuccess or SendTaskFailure call. Learn more about Callback with the Task Token.
AWS Lambda can occasionally experience transient service errors. In this case, invoking Lambda results in a 500 error, such as ServiceException, AWSLambdaException, or SdkClientException.
As a best practice, the LambdaInvoke task will retry on those errors with an interval of 2 seconds, a back-off rate of 2 and 6 maximum attempts. Set the retryOnServiceExceptions prop to false to disable this behavior.
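For example, here is a minimal sketch that disables the built-in retry and registers a custom policy instead (the error names follow the Step Functions convention for Lambda service exceptions; the interval and attempt counts are assumptions, not recommendations):

Function fn;

LambdaInvoke invoke = LambdaInvoke.Builder.create(this, "Invoke without default retry")
        .lambdaFunction(fn)
        // Disable the default retry on transient Lambda service errors.
        .retryOnServiceExceptions(false)
        .build();

// Optionally attach a custom retry policy for those errors instead.
invoke.addRetry(RetryProps.builder()
        .errors(List.of("Lambda.ServiceException", "Lambda.AWSLambdaException", "Lambda.SdkClientException"))
        .interval(Duration.seconds(5))
        .maxAttempts(3)
        .backoffRate(2)
        .build());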
SageMaker
Step Functions supports AWS SageMaker through the service integration pattern.
If your training job or model uses resources from AWS Marketplace, network isolation is required. To do so, set the enableNetworkIsolation property to true for SageMakerCreateModel or SageMakerCreateTrainingJob.
To set environment variables for the Docker container use the environment property.
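A minimal sketch of both settings on SageMakerCreateModel (the model name, image path, and environment variable are assumptions; on the container definition the property is exposed as environmentVariables):

SageMakerCreateModel.Builder.create(this, "IsolatedModel")
        .modelName("MyMarketplaceModel")
        // Required when the model uses AWS Marketplace resources.
        .enableNetworkIsolation(true)
        .primaryContainer(ContainerDefinition.Builder.create()
                .image(DockerImage.fromJsonExpression(JsonPath.stringAt("$.Model.imageName")))
                .mode(Mode.SINGLE_MODEL)
                .modelS3Location(S3Location.fromJsonExpression("$.Model.S3ModelArtifacts"))
                // Environment variables for the Docker container.
                .environmentVariables(TaskInput.fromObject(Map.of("SOME_KEY", "some-value")))
                .build())
        .build();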
Create Training Job
You can call the CreateTrainingJob API from a Task state.
SageMakerCreateTrainingJob.Builder.create(this, "TrainSagemaker")
        .trainingJobName(JsonPath.stringAt("$.JobName"))
        .algorithmSpecification(AlgorithmSpecification.builder()
                .algorithmName("BlazingText")
                .trainingInputMode(InputMode.FILE)
                .build())
        .inputDataConfig(List.of(Channel.builder()
                .channelName("train")
                .dataSource(DataSource.builder()
                        .s3DataSource(S3DataSource.builder()
                                .s3DataType(S3DataType.S3_PREFIX)
                                .s3Location(S3Location.fromJsonExpression("$.S3Bucket"))
                                .build())
                        .build())
                .build()))
        .outputDataConfig(OutputDataConfig.builder()
                .s3OutputLocation(S3Location.fromBucket(Bucket.fromBucketName(this, "Bucket", "mybucket"), "myoutputpath"))
                .build())
        // optional: default is 1 instance of EC2 `M4.XLarge` with `10GB` volume
        .resourceConfig(ResourceConfig.builder()
                .instanceCount(1)
                .instanceType(new InstanceType(JsonPath.stringAt("$.InstanceType")))
                .volumeSize(Size.gibibytes(50))
                .build())
        .stoppingCondition(StoppingCondition.builder()
                .maxRuntime(Duration.hours(2))
                .build())
        .build();
Create Transform Job
You can call the CreateTransformJob API from a Task state.
SageMakerCreateTransformJob.Builder.create(this, "Batch Inference")
        .transformJobName("MyTransformJob")
        .modelName("MyModelName")
        .modelClientOptions(ModelClientOptions.builder()
                .invocationsMaxRetries(3) // default is 0
                .invocationsTimeout(Duration.minutes(5))
                .build())
        .transformInput(TransformInput.builder()
                .transformDataSource(TransformDataSource.builder()
                        .s3DataSource(TransformS3DataSource.builder()
                                .s3Uri("s3://inputbucket/train")
                                .s3DataType(S3DataType.S3_PREFIX)
                                .build())
                        .build())
                .build())
        .transformOutput(TransformOutput.builder()
                .s3OutputPath("s3://outputbucket/TransformJobOutputPath")
                .build())
        .transformResources(TransformResources.builder()
                .instanceCount(1)
                .instanceType(InstanceType.of(InstanceClass.M4, InstanceSize.XLARGE))
                .build())
        .build();
Create Endpoint
You can call the CreateEndpoint API from a Task state.
SageMakerCreateEndpoint.Builder.create(this, "SagemakerEndpoint") .endpointName(JsonPath.stringAt("$.EndpointName")) .endpointConfigName(JsonPath.stringAt("$.EndpointConfigName")) .build();
Create Endpoint Config
You can call the CreateEndpointConfig API from a Task state.
SageMakerCreateEndpointConfig.Builder.create(this, "SagemakerEndpointConfig") .endpointConfigName("MyEndpointConfig") .productionVariants(List.of(ProductionVariant.builder() .initialInstanceCount(2) .instanceType(InstanceType.of(InstanceClass.M5, InstanceSize.XLARGE)) .modelName("MyModel") .variantName("awesome-variant") .build())) .build();
Create Model
You can call the CreateModel API from a Task state.
SageMakerCreateModel.Builder.create(this, "Sagemaker") .modelName("MyModel") .primaryContainer(ContainerDefinition.Builder.create() .image(DockerImage.fromJsonExpression(JsonPath.stringAt("$.Model.imageName"))) .mode(Mode.SINGLE_MODEL) .modelS3Location(S3Location.fromJsonExpression("$.TrainingJob.ModelArtifacts.S3ModelArtifacts")) .build()) .build();
Update Endpoint
You can call the UpdateEndpoint API from a Task state.
SageMakerUpdateEndpoint.Builder.create(this, "SagemakerEndpoint") .endpointName(JsonPath.stringAt("$.Endpoint.Name")) .endpointConfigName(JsonPath.stringAt("$.Endpoint.EndpointConfig")) .build();
SNS
Step Functions supports Amazon SNS through the service integration pattern.
You can call the Publish API from a Task state to publish to an SNS topic.
Topic topic = new Topic(this, "Topic");

// Use a field from the execution data as message.
SnsPublish task1 = SnsPublish.Builder.create(this, "Publish1")
        .topic(topic)
        .integrationPattern(IntegrationPattern.REQUEST_RESPONSE)
        .message(TaskInput.fromDataAt("$.state.message"))
        .messageAttributes(Map.of(
                "place", MessageAttribute.builder()
                        .value(JsonPath.stringAt("$.place"))
                        .build(),
                "pic", MessageAttribute.builder()
                        // BINARY must be explicitly set
                        .dataType(MessageAttributeDataType.BINARY)
                        .value(JsonPath.stringAt("$.pic"))
                        .build(),
                "people", MessageAttribute.builder()
                        .value(4)
                        .build(),
                "handles", MessageAttribute.builder()
                        .value(List.of("@kslater", "@jjf", null, "@mfanning"))
                        .build()))
        .build();

// Combine a field from the execution data with
// a literal object.
SnsPublish task2 = SnsPublish.Builder.create(this, "Publish2")
        .topic(topic)
        .message(TaskInput.fromObject(Map.of(
                "field1", "somedata",
                "field2", JsonPath.stringAt("$.field2"))))
        .build();
Step Functions
Start Execution
You can manage AWS Step Functions executions.
AWS Step Functions supports its own StartExecution API as a service integration.
// Define a state machine with one Pass state
StateMachine child = StateMachine.Builder.create(this, "ChildStateMachine")
        .definition(Chain.start(new Pass(this, "PassState")))
        .build();

// Include the state machine in a Task state with callback pattern
StepFunctionsStartExecution task = StepFunctionsStartExecution.Builder.create(this, "ChildTask")
        .stateMachine(child)
        .integrationPattern(IntegrationPattern.WAIT_FOR_TASK_TOKEN)
        .input(TaskInput.fromObject(Map.of(
                "token", JsonPath.getTaskToken(),
                "foo", "bar")))
        .name("MyExecutionName")
        .build();

// Define a second state machine with the Task state above
StateMachine.Builder.create(this, "ParentStateMachine")
        .definition(task)
        .build();
You can utilize Associate Workflow Executions via the associateWithParent property. This allows the Step Functions UI to link child executions from parent executions, making it easier to trace execution flow across state machines.
StateMachine child; StepFunctionsStartExecution task = StepFunctionsStartExecution.Builder.create(this, "ChildTask") .stateMachine(child) .associateWithParent(true) .build();
This will add the payload AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID.$: $$.Execution.Id to the input property for you, which will pass the execution ID from the context object to the execution input. It requires input to be an object or not be set at all.
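If you do pass an explicit input, it must therefore be an object; a minimal sketch (the foo field is an assumption):

StateMachine child;

StepFunctionsStartExecution.Builder.create(this, "ChildTaskWithInput")
        .stateMachine(child)
        .associateWithParent(true)
        // input must be an object (or omitted entirely) so the
        // AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID field can be merged in.
        .input(TaskInput.fromObject(Map.of("foo", "bar")))
        .build();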
Invoke Activity
You can invoke a Step Functions Activity which enables you to have a task in your state machine where the work is performed by a worker that can be hosted on Amazon EC2, Amazon ECS, AWS Lambda, or basically anywhere. Activities are a way to associate code running somewhere (known as an activity worker) with a specific task in a state machine.
When Step Functions reaches an activity task state, the workflow waits for an activity worker to poll for a task. An activity worker polls Step Functions by using GetActivityTask, and sending the ARN for the related activity.
After the activity worker completes its work, it can provide a report of its success or failure by using SendTaskSuccess or SendTaskFailure. These two calls use the taskToken provided by GetActivityTask to associate the result with that task; see the worker-side sketch below.
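For reference, here is a hedged worker-side sketch using the AWS SDK for Java v1 (the activity ARN, client setup, and output JSON are assumptions; this code runs in the activity worker, outside the CDK app):

import com.amazonaws.services.stepfunctions.AWSStepFunctions;
import com.amazonaws.services.stepfunctions.AWSStepFunctionsClientBuilder;
import com.amazonaws.services.stepfunctions.model.GetActivityTaskRequest;
import com.amazonaws.services.stepfunctions.model.GetActivityTaskResult;
import com.amazonaws.services.stepfunctions.model.SendTaskSuccessRequest;

AWSStepFunctions client = AWSStepFunctionsClientBuilder.defaultClient();

// Poll for work; this long-polls until a task is available or the request times out.
GetActivityTaskResult task = client.getActivityTask(new GetActivityTaskRequest()
        .withActivityArn("arn:aws:states:us-east-1:123456789012:activity:SubmitJob"));

if (task.getTaskToken() != null) {
    // ... perform the work described by task.getInput() ...
    client.sendTaskSuccess(new SendTaskSuccessRequest()
            .withTaskToken(task.getTaskToken())
            .withOutput("{\"status\":\"done\"}"));
}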
The following example creates an activity and creates a task that invokes the activity.
Activity submitJobActivity = new Activity(this, "SubmitJob"); StepFunctionsInvokeActivity.Builder.create(this, "Submit Job") .activity(submitJobActivity) .build();
SQS
Step Functions supports Amazon SQS.
You can call the SendMessage API from a Task state to send a message to an SQS queue.
Queue queue = new Queue(this, "Queue");

// Use a field from the execution data as message.
SqsSendMessage task1 = SqsSendMessage.Builder.create(this, "Send1")
        .queue(queue)
        .messageBody(TaskInput.fromJsonPathAt("$.message"))
        .build();

// Combine a field from the execution data with
// a literal object.
SqsSendMessage task2 = SqsSendMessage.Builder.create(this, "Send2")
        .queue(queue)
        .messageBody(TaskInput.fromObject(Map.of(
                "field1", "somedata",
                "field2", JsonPath.stringAt("$.field2"))))
        .build();
-
ClassDescriptionThe generation of Elastic Inference (EI) instance.The size of the Elastic Inference (EI) instance to use for the production variant.The action to take when the cluster step fails.Specify the training algorithm and algorithm-specific metadata.A builder for
AlgorithmSpecification
An implementation forAlgorithmSpecification
A configuration specification to be used when provisioning virtual clusters, which can include configurations for applications and software bundled with Amazon EMR on EKS.A builder forApplicationConfiguration
An implementation forApplicationConfiguration
How to assemble the results of the transform job as a single S3 object.Get an Athena Query Execution as a Task.A fluent builder forAthenaGetQueryExecution
.Properties for getting a Query Execution.A builder forAthenaGetQueryExecutionProps
An implementation forAthenaGetQueryExecutionProps
Get an Athena Query Results as a Task.A fluent builder forAthenaGetQueryResults
.Properties for getting a Query Results.A builder forAthenaGetQueryResultsProps
An implementation forAthenaGetQueryResultsProps
Start an Athena Query as a Task.A fluent builder forAthenaStartQueryExecution
.Properties for starting a Query Execution.A builder forAthenaStartQueryExecutionProps
An implementation forAthenaStartQueryExecutionProps
Stop an Athena Query Execution as a Task.A fluent builder forAthenaStopQueryExecution
.Properties for stoping a Query Execution.A builder forAthenaStopQueryExecutionProps
An implementation forAthenaStopQueryExecutionProps
The authentication method used to call the endpoint.The overrides that should be sent to a container.A builder forBatchContainerOverrides
An implementation forBatchContainerOverrides
An object representing an AWS Batch job dependency.A builder forBatchJobDependency
An implementation forBatchJobDependency
Specifies the number of records to include in a mini-batch for an HTTP inference request.Task to submits an AWS Batch job from a job definition.A fluent builder forBatchSubmitJob
.Properties for RunBatchJob.A builder forBatchSubmitJobProps
An implementation forBatchSubmitJobProps
Base CallApiGatewayEdnpoint Task Props.A builder forCallApiGatewayEndpointBaseProps
An implementation forCallApiGatewayEndpointBaseProps
Call HTTP API endpoint as a Task.A fluent builder forCallApiGatewayHttpApiEndpoint
.Properties for calling an HTTP API Endpoint.A builder forCallApiGatewayHttpApiEndpointProps
An implementation forCallApiGatewayHttpApiEndpointProps
Call REST API endpoint as a Task.A fluent builder forCallApiGatewayRestApiEndpoint
.Properties for calling an REST API Endpoint.A builder forCallApiGatewayRestApiEndpointProps
An implementation forCallApiGatewayRestApiEndpointProps
A StepFunctions task to call an AWS service API.A fluent builder forCallAwsService
.Properties for calling an AWS service's API action from your state machine.A builder forCallAwsServiceProps
An implementation forCallAwsServiceProps
Describes the training, validation or test dataset and the Amazon S3 location where it is stored.A builder forChannel
An implementation forChannel
The classification within a EMR Containers application configuration.Start a CodeBuild Build as a task.A fluent builder forCodeBuildStartBuild
.Properties for CodeBuildStartBuild.A builder forCodeBuildStartBuildProps
An implementation forCodeBuildStartBuildProps
Basic properties for ECS Tasks.A builder forCommonEcsRunTaskProps
An implementation forCommonEcsRunTaskProps
Compression type of the data.Describes the container, as part of model definition.A fluent builder forContainerDefinition
.Configuration options for the ContainerDefinition.A builder forContainerDefinitionConfig
An implementation forContainerDefinitionConfig
Properties to define a ContainerDefinition.A builder forContainerDefinitionOptions
An implementation forContainerDefinitionOptions
A list of container overrides that specify the name of a container and the overrides it should receive.A builder forContainerOverride
An implementation forContainerOverride
The overrides that should be sent to a container.A builder forContainerOverrides
An implementation forContainerOverrides
Location of the channel data.A builder forDataSource
An implementation forDataSource
CreatesIDockerImage
instances.Configuration for a using Docker image.A builder forDockerImageConfig
An implementation forDockerImageConfig
Represents the data for an attribute.Determines the level of detail about provisioned throughput consumption that is returned.A StepFunctions task to call DynamoDeleteItem.A fluent builder forDynamoDeleteItem
.Properties for DynamoDeleteItem Task.A builder forDynamoDeleteItemProps
An implementation forDynamoDeleteItemProps
A StepFunctions task to call DynamoGetItem.A fluent builder forDynamoGetItem
.Properties for DynamoGetItem Task.A builder forDynamoGetItemProps
An implementation forDynamoGetItemProps
Determines whether item collection metrics are returned.Class to generate projection expression.A StepFunctions task to call DynamoPutItem.A fluent builder forDynamoPutItem
.Properties for DynamoPutItem Task.A builder forDynamoPutItemProps
An implementation forDynamoPutItemProps
Use ReturnValues if you want to get the item attributes as they appear before or after they are changed.A StepFunctions task to call DynamoUpdateItem.A fluent builder forDynamoUpdateItem
.Properties for DynamoUpdateItem Task.A builder forDynamoUpdateItemProps
An implementation forDynamoUpdateItemProps
Configuration for running an ECS task on EC2.A fluent builder forEcsEc2LaunchTarget
.Options to run an ECS task on EC2 in StepFunctions and ECS.A builder forEcsEc2LaunchTargetOptions
An implementation forEcsEc2LaunchTargetOptions
Configuration for running an ECS task on Fargate.A fluent builder forEcsFargateLaunchTarget
.Properties to define an ECS service.A builder forEcsFargateLaunchTargetOptions
An implementation forEcsFargateLaunchTargetOptions
Configuration options for the ECS launch type.A builder forEcsLaunchTargetConfig
An implementation forEcsLaunchTargetConfig
Run a Task on ECS or Fargate.A fluent builder forEcsRunTask
.Deprecated.No replacementDeprecated.Deprecated.No replacementDeprecated.Deprecated.Properties for ECS Tasks.A builder forEcsRunTaskProps
An implementation forEcsRunTaskProps
Call a EKS endpoint as a Task.A fluent builder forEksCall
.Properties for calling a EKS endpoint with EksCall.A builder forEksCallProps
An implementation forEksCallProps
Class that supports methods which return the EKS cluster name depending on input type.A Step Functions Task to add a Step to an EMR Cluster.A fluent builder forEmrAddStep
.Properties for EmrAddStep.A builder forEmrAddStepProps
An implementation forEmrAddStepProps
A Step Functions Task to to cancel a Step on an EMR Cluster.A fluent builder forEmrCancelStep
.Properties for EmrCancelStep.A builder forEmrCancelStepProps
An implementation forEmrCancelStepProps
Task that creates an EMR Containers virtual cluster from an EKS cluster.A fluent builder forEmrContainersCreateVirtualCluster
.Properties to define a EMR Containers CreateVirtualCluster Task on an EKS cluster.A builder forEmrContainersCreateVirtualClusterProps
An implementation forEmrContainersCreateVirtualClusterProps
Deletes an EMR Containers virtual cluster as a Task.A fluent builder forEmrContainersDeleteVirtualCluster
.Properties to define a EMR Containers DeleteVirtualCluster Task.A builder forEmrContainersDeleteVirtualClusterProps
An implementation forEmrContainersDeleteVirtualClusterProps
Starts a job run.A fluent builder forEmrContainersStartJobRun
.The props for a EMR Containers StartJobRun Task.A builder forEmrContainersStartJobRunProps
An implementation forEmrContainersStartJobRunProps
A Step Functions Task to create an EMR Cluster.Properties for the EMR Cluster Applications.A builder forEmrCreateCluster.ApplicationConfigProperty
An implementation forEmrCreateCluster.ApplicationConfigProperty
An automatic scaling policy for a core instance group or task instance group in an Amazon EMR cluster.A builder forEmrCreateCluster.AutoScalingPolicyProperty
An implementation forEmrCreateCluster.AutoScalingPolicyProperty
Configuration of a bootstrap action.A builder forEmrCreateCluster.BootstrapActionConfigProperty
An implementation forEmrCreateCluster.BootstrapActionConfigProperty
A fluent builder forEmrCreateCluster
.CloudWatch Alarm Comparison Operators.The definition of a CloudWatch metric alarm, which determines when an automatic scaling activity is triggered.A builder forEmrCreateCluster.CloudWatchAlarmDefinitionProperty
An implementation forEmrCreateCluster.CloudWatchAlarmDefinitionProperty
CloudWatch Alarm Statistics.CloudWatch Alarm Units.An optional configuration specification to be used when provisioning cluster instances, which can include configurations for applications and software bundled with Amazon EMR.A builder forEmrCreateCluster.ConfigurationProperty
An implementation forEmrCreateCluster.ConfigurationProperty
Configuration of requested EBS block device associated with the instance group with count of volumes that will be associated to every instance.A builder forEmrCreateCluster.EbsBlockDeviceConfigProperty
An implementation forEmrCreateCluster.EbsBlockDeviceConfigProperty
EBS Volume Types.The Amazon EBS configuration of a cluster instance.A builder forEmrCreateCluster.EbsConfigurationProperty
An implementation forEmrCreateCluster.EbsConfigurationProperty
The Cluster ScaleDownBehavior specifies the way that individual Amazon EC2 instances terminate when an automatic scale-in activity occurs or an instance group is resized.The configuration that defines an instance fleet.A builder forEmrCreateCluster.InstanceFleetConfigProperty
An implementation forEmrCreateCluster.InstanceFleetConfigProperty
The launch specification for Spot instances in the fleet, which determines the defined duration and provisioning timeout behavior.An implementation forEmrCreateCluster.InstanceFleetProvisioningSpecificationsProperty
Configuration defining a new instance group.A builder forEmrCreateCluster.InstanceGroupConfigProperty
An implementation forEmrCreateCluster.InstanceGroupConfigProperty
EC2 Instance Market.Instance Role Types.A specification of the number and type of Amazon EC2 instances.A builder forEmrCreateCluster.InstancesConfigProperty
An implementation forEmrCreateCluster.InstancesConfigProperty
An instance type configuration for each instance type in an instance fleet, which determines the EC2 instances Amazon EMR attempts to provision to fulfill On-Demand and Spot target capacities.A builder forEmrCreateCluster.InstanceTypeConfigProperty
An implementation forEmrCreateCluster.InstanceTypeConfigProperty
Attributes for Kerberos configuration when Kerberos authentication is enabled using a security configuration.A builder forEmrCreateCluster.KerberosAttributesProperty
An implementation forEmrCreateCluster.KerberosAttributesProperty
A CloudWatch dimension, which is specified using a Key (known as a Name in CloudWatch), Value pair.A builder forEmrCreateCluster.MetricDimensionProperty
An implementation forEmrCreateCluster.MetricDimensionProperty
The Amazon EC2 Availability Zone configuration of the cluster (job flow).A builder forEmrCreateCluster.PlacementTypeProperty
An implementation forEmrCreateCluster.PlacementTypeProperty
The type of adjustment the automatic scaling activity makes when triggered, and the periodicity of the adjustment.A builder forEmrCreateCluster.ScalingActionProperty
An implementation forEmrCreateCluster.ScalingActionProperty
AutoScaling Adjustment Type.The upper and lower EC2 instance limits for an automatic scaling policy.A builder forEmrCreateCluster.ScalingConstraintsProperty
An implementation forEmrCreateCluster.ScalingConstraintsProperty
A scale-in or scale-out rule that defines scaling activity, including the CloudWatch metric alarm that triggers activity, how EC2 instances are added or removed, and the periodicity of adjustments.A builder forEmrCreateCluster.ScalingRuleProperty
An implementation forEmrCreateCluster.ScalingRuleProperty
The conditions that trigger an automatic scaling activity and the definition of a CloudWatch metric alarm.A builder forEmrCreateCluster.ScalingTriggerProperty
An implementation forEmrCreateCluster.ScalingTriggerProperty
Configuration of the script to run during a bootstrap action.A builder forEmrCreateCluster.ScriptBootstrapActionConfigProperty
An implementation forEmrCreateCluster.ScriptBootstrapActionConfigProperty
An automatic scaling configuration, which describes how the policy adds or removes instances, the cooldown period, and the number of EC2 instances that will be added each time the CloudWatch metric alarm condition is satisfied.An implementation forEmrCreateCluster.SimpleScalingPolicyConfigurationProperty
Spot Allocation Strategies.The launch specification for Spot instances in the instance fleet, which determines the defined duration and provisioning timeout behavior.A builder forEmrCreateCluster.SpotProvisioningSpecificationProperty
An implementation forEmrCreateCluster.SpotProvisioningSpecificationProperty
Spot Timeout Actions.EBS volume specifications such as volume type, IOPS, and size (GiB) that will be requested for the EBS volume attached to an EC2 instance in the cluster.A builder forEmrCreateCluster.VolumeSpecificationProperty
An implementation forEmrCreateCluster.VolumeSpecificationProperty
Properties for EmrCreateCluster.A builder forEmrCreateClusterProps
An implementation forEmrCreateClusterProps
- `EmrModifyInstanceFleetByName`: A Step Functions Task to modify an InstanceFleet on an EMR Cluster.
- A fluent builder for `EmrModifyInstanceFleetByName`.
- `EmrModifyInstanceFleetByNameProps`: Properties for `EmrModifyInstanceFleetByName`.
- A builder for `EmrModifyInstanceFleetByNameProps`.
- An implementation for `EmrModifyInstanceFleetByNameProps`.
- `EmrModifyInstanceGroupByName`: A Step Functions Task to modify an InstanceGroup on an EMR Cluster.
- A fluent builder for `EmrModifyInstanceGroupByName`.
- `EmrModifyInstanceGroupByName.InstanceGroupModifyConfigProperty`: Modify the size or configurations of an instance group.
- An implementation for `EmrModifyInstanceGroupByName.InstanceGroupModifyConfigProperty`.
- `EmrModifyInstanceGroupByName.InstanceResizePolicyProperty`: Custom policy for requesting termination protection or termination of specific instances when shrinking an instance group.
- An implementation for `EmrModifyInstanceGroupByName.InstanceResizePolicyProperty`.
- `EmrModifyInstanceGroupByName.ShrinkPolicyProperty`: Policy for customizing shrink operations.
- A builder for `EmrModifyInstanceGroupByName.ShrinkPolicyProperty`.
- An implementation for `EmrModifyInstanceGroupByName.ShrinkPolicyProperty`.
- `EmrModifyInstanceGroupByNameProps`: Properties for `EmrModifyInstanceGroupByName`.
- A builder for `EmrModifyInstanceGroupByNameProps`.
- An implementation for `EmrModifyInstanceGroupByNameProps`.
- `EmrSetClusterTerminationProtection`: A Step Functions Task to set Termination Protection on an EMR Cluster.
- A fluent builder for `EmrSetClusterTerminationProtection`.
- `EmrSetClusterTerminationProtectionProps`: Properties for `EmrSetClusterTerminationProtection`.
- A builder for `EmrSetClusterTerminationProtectionProps`.
- An implementation for `EmrSetClusterTerminationProtectionProps`.
- `EmrTerminateCluster`: A Step Functions Task to terminate an EMR Cluster.
- A fluent builder for `EmrTerminateCluster`.
- `EmrTerminateClusterProps`: Properties for `EmrTerminateCluster`.
- A builder for `EmrTerminateClusterProps`.
- An implementation for `EmrTerminateClusterProps`.
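For example, a termination task can read the cluster ID from the state input; the `$.ClusterId` path below is an assumption about the shape of your input, not a fixed convention:

// Terminate the cluster whose ID arrives in the state input
EmrTerminateCluster.Builder.create(this, "Terminate Cluster")
        .clusterId(JsonPath.stringAt("$.ClusterId"))
        .build();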
- `EncryptionConfiguration`: Encryption Configuration of the S3 bucket.
- A builder for `EncryptionConfiguration`.
- An implementation for `EncryptionConfiguration`.
- Encryption Options of the S3 bucket.
- `EvaluateExpression`: A Step Functions Task to evaluate an expression.
- A fluent builder for `EvaluateExpression`.
- `EvaluateExpressionProps`: Properties for `EvaluateExpression`.
- A builder for `EvaluateExpressionProps`.
- An implementation for `EvaluateExpressionProps`.
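A minimal sketch of `EvaluateExpression`, converting a millisecond value from the state into seconds; the `waitMilliseconds` and `waitSeconds` field names are illustrative:

// Evaluate a JSON-path-based arithmetic expression against the state
EvaluateExpression convertToSeconds = EvaluateExpression.Builder.create(this, "Convert to seconds")
        .expression("$.waitMilliseconds / 1000")
        .resultPath("$.waitSeconds")
        .build();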
- `EventBridgePutEvents`: A Step Functions Task to send events to an EventBridge event bus.
- A fluent builder for `EventBridgePutEvents`.
- `EventBridgePutEventsEntry`: An entry to be sent to EventBridge.
- A builder for `EventBridgePutEventsEntry`.
- An implementation for `EventBridgePutEventsEntry`.
- `EventBridgePutEventsProps`: Properties for sending events with PutEvents.
- A builder for `EventBridgePutEventsProps`.
- An implementation for `EventBridgePutEventsProps`.
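A sketch of sending a single entry with `EventBridgePutEvents`; the event bus, source, detail type, and detail payload are placeholder values:

EventBus bus;

EventBridgePutEvents.Builder.create(this, "Send event")
        .entries(List.of(EventBridgePutEventsEntry.builder()
                // free-form event payload
                .detail(TaskInput.fromObject(Map.of(
                        "Message", "Hello from Step Functions!")))
                .eventBus(bus)
                .detailType("MessageFromStepFunctions")
                .source("step.functions")
                .build()))
        .build();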
- `GlueDataBrewStartJobRun`: Starts an AWS Glue DataBrew job run as a Task.
- A fluent builder for `GlueDataBrewStartJobRun`.
- `GlueDataBrewStartJobRunProps`: Properties for starting a DataBrew job run with StartJobRun.
- A builder for `GlueDataBrewStartJobRunProps`.
- An implementation for `GlueDataBrewStartJobRunProps`.
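A minimal sketch, assuming a DataBrew job named `databrew-job` already exists:

GlueDataBrewStartJobRun.Builder.create(this, "DataBrew Job")
        .name("databrew-job")
        .build();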
- `GlueStartJobRun`: Starts an AWS Glue job in a Task state.
- A fluent builder for `GlueStartJobRun`.
- `GlueStartJobRunProps`: Properties for starting an AWS Glue job as a task.
- A builder for `GlueStartJobRunProps`.
- An implementation for `GlueStartJobRunProps`.
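A sketch of `GlueStartJobRun`; the job name and argument map are placeholders:

GlueStartJobRun.Builder.create(this, "Run Glue Job")
        .glueJobName("my-glue-job")
        // arguments are passed to the Glue job as key/value pairs
        .arguments(TaskInput.fromObject(Map.of(
                "key", "value")))
        .build();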
- Http Methods that API Gateway supports.
- Method type of an EKS call.
- `IContainerDefinition`: Configuration of the container used to host the model.
- Internal default implementation for `IContainerDefinition`.
- A proxy class which represents a concrete javascript instance of this type.
- `IEcsLaunchTarget`: An Amazon ECS launch type determines the type of infrastructure on which your tasks and services are hosted.
- Internal default implementation for `IEcsLaunchTarget`.
- A proxy class which represents a concrete javascript instance of this type.
- Input mode that the algorithm supports.
- Deprecated; use `LambdaInvocationType`.
- Deprecated; use `StepFunctionsInvokeActivity`.
- Deprecated; use `StepFunctionsInvokeActivity` and `StepFunctionsInvokeActivityProps`.
- Deprecated; use `LambdaInvoke`.
- Deprecated; use `LambdaInvoke`.
- `ISageMakerTask`: Task to train a machine learning model using Amazon SageMaker.
- Internal default implementation for `ISageMakerTask`.
- A proxy class which represents a concrete javascript instance of this type.
- `JobDependency`: An object representing an AWS Batch job dependency.
- A builder for `JobDependency`.
- An implementation for `JobDependency`.
- `JobDriver`: Specify the driver that the EMR Containers job runs on.
- A builder for `JobDriver`.
- An implementation for `JobDriver`.
- Invocation type of a Lambda.
- `LambdaInvoke`: Invoke a Lambda function as a Task.
- A fluent builder for `LambdaInvoke`.
- `LambdaInvokeProps`: Properties for invoking a Lambda function with `LambdaInvoke`.
- A builder for `LambdaInvokeProps`.
- An implementation for `LambdaInvokeProps`.
- `LaunchTargetBindOptions`: Options for binding a launch target to an ECS run job task.
- A builder for `LaunchTargetBindOptions`.
- An implementation for `LaunchTargetBindOptions`.
- `MessageAttribute`: A message attribute to add to the SNS message.
- A builder for `MessageAttribute`.
- An implementation for `MessageAttribute`.
- The data type set for the SNS message attributes.
- `MetricDefinition`: Specifies the metric name and regular expressions used to parse algorithm logs.
- A builder for `MetricDefinition`.
- An implementation for `MetricDefinition`.
- Specifies how many models the container hosts.
- `ModelClientOptions`: Configures the timeout and maximum number of retries for processing a transform job invocation.
- A builder for `ModelClientOptions`.
- An implementation for `ModelClientOptions`.
- `Monitoring`: Configuration setting for monitoring.
- A builder for `Monitoring`.
- An implementation for `Monitoring`.
- `OutputDataConfig`: Configures the S3 bucket where SageMaker will save the result of model training.
- A builder for `OutputDataConfig`.
- An implementation for `OutputDataConfig`.
- `ProductionVariant`: Identifies a model that you want to host and the resources to deploy for hosting it.
- A builder for `ProductionVariant`.
- An implementation for `ProductionVariant`.
- Deprecated; use `SnsPublish`.
- Deprecated; use `SnsPublish`.
- `QueryExecutionContext`: Database and data catalog context in which the query execution occurs.
- A builder for `QueryExecutionContext`.
- An implementation for `QueryExecutionContext`.
- Define the format of the input data.
- The Amazon EMR release version to use for the job run.
- `ResourceConfig`: Specifies the resources, ML compute instances, and ML storage volumes to deploy for model training.
- A builder for `ResourceConfig`.
- An implementation for `ResourceConfig`.
- `ResultConfiguration`: Location of query result along with S3 bucket configuration.
- A builder for `ResultConfiguration`.
- An implementation for `ResultConfiguration`.
- Deprecated; use `BatchSubmitJob`.
- Deprecated; use `BatchSubmitJob`.
- Deprecated; replaced by `EcsRunTask`.
- Deprecated; use `EcsRunTask` and `EcsRunTaskProps`.
- Deprecated; replaced by `EcsRunTask`.
- Deprecated; replaced by `EcsRunTask` and `EcsRunTaskProps`.
- Deprecated; use `GlueStartJobRun`.
- Deprecated; use `GlueStartJobRun`.
- Deprecated; use `LambdaInvoke`.
- Deprecated; use `LambdaInvoke`.
- S3 Data Distribution Type.
- `S3DataSource`: S3 location of the channel data.
- A builder for `S3DataSource`.
- An implementation for `S3DataSource`.
- S3 Data Type.
- Constructs `IS3Location` objects.
- `S3LocationBindOptions`: Options for binding an S3 Location.
- A builder for `S3LocationBindOptions`.
- An implementation for `S3LocationBindOptions`.
- `S3LocationConfig`: Stores information about the location of an object in Amazon S3.
- A builder for `S3LocationConfig`.
- An implementation for `S3LocationConfig`.
- `SageMakerCreateEndpoint`: A Step Functions Task to create a SageMaker endpoint.
- A fluent builder for `SageMakerCreateEndpoint`.
- `SageMakerCreateEndpointConfig`: A Step Functions Task to create a SageMaker endpoint configuration.
- A fluent builder for `SageMakerCreateEndpointConfig`.
- `SageMakerCreateEndpointConfigProps`: Properties for creating an Amazon SageMaker endpoint configuration.
- A builder for `SageMakerCreateEndpointConfigProps`.
- An implementation for `SageMakerCreateEndpointConfigProps`.
- `SageMakerCreateEndpointProps`: Properties for creating an Amazon SageMaker endpoint.
- A builder for `SageMakerCreateEndpointProps`.
- An implementation for `SageMakerCreateEndpointProps`.
- `SageMakerCreateModel`: A Step Functions Task to create a SageMaker model.
- A fluent builder for `SageMakerCreateModel`.
- `SageMakerCreateModelProps`: Properties for creating an Amazon SageMaker model.
- A builder for `SageMakerCreateModelProps`.
- An implementation for `SageMakerCreateModelProps`.
- `SageMakerCreateTrainingJob`: Class representing the SageMaker Create Training Job task.
- A fluent builder for `SageMakerCreateTrainingJob`.
- `SageMakerCreateTrainingJobProps`: Properties for creating an Amazon SageMaker training job.
- A builder for `SageMakerCreateTrainingJobProps`.
- An implementation for `SageMakerCreateTrainingJobProps`.
- `SageMakerCreateTransformJob`: Class representing the SageMaker Create Transform Job task.
- A fluent builder for `SageMakerCreateTransformJob`.
- `SageMakerCreateTransformJobProps`: Properties for creating an Amazon SageMaker transform job task.
- A builder for `SageMakerCreateTransformJobProps`.
- An implementation for `SageMakerCreateTransformJobProps`.
- `SageMakerUpdateEndpoint`: A Step Functions Task to update a SageMaker endpoint.
- A fluent builder for `SageMakerUpdateEndpoint`.
- `SageMakerUpdateEndpointProps`: Properties for updating an Amazon SageMaker endpoint.
- A builder for `SageMakerUpdateEndpointProps`.
- An implementation for `SageMakerUpdateEndpointProps`.
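For example, `SageMakerUpdateEndpoint` can take both the endpoint name and the endpoint configuration name from the state input; the `$.Endpoint.*` paths are assumptions about your input shape:

SageMakerUpdateEndpoint.Builder.create(this, "Update SageMaker Endpoint")
        .endpointName(JsonPath.stringAt("$.Endpoint.Name"))
        .endpointConfigName(JsonPath.stringAt("$.Endpoint.EndpointConfig"))
        .build();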
- Deprecated; use `SqsSendMessage`.
- Deprecated; use `SqsSendMessage`.
- `ShuffleConfig`: Configuration for a shuffle option for input data in a channel.
- A builder for `ShuffleConfig`.
- An implementation for `ShuffleConfig`.
- `SnsPublish`: A Step Functions Task to publish messages to an SNS topic.
- A fluent builder for `SnsPublish`.
- `SnsPublishProps`: Properties for publishing a message to an SNS topic.
- A builder for `SnsPublishProps`.
- An implementation for `SnsPublishProps`.
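A minimal `SnsPublish` sketch that publishes a value read from the state input; the topic and the `$.state.message` path are placeholders:

Topic topic;

SnsPublish.Builder.create(this, "Publish message")
        .topic(topic)
        // read the message body from the state input
        .message(TaskInput.fromDataAt("$.state.message"))
        .build();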
- `SparkSubmitJobDriver`: Information about the job driver for Spark submit.
- A builder for `SparkSubmitJobDriver`.
- An implementation for `SparkSubmitJobDriver`.
- Method to use to split the transform job's data files into smaller batches.
- `SqsSendMessage`: A Step Functions Task to send messages to an SQS queue.
- A fluent builder for `SqsSendMessage`.
- `SqsSendMessageProps`: Properties for sending a message to an SQS queue.
- A builder for `SqsSendMessageProps`.
- An implementation for `SqsSendMessageProps`.
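A minimal `SqsSendMessage` sketch; the queue and the message text are placeholders:

Queue queue;

SqsSendMessage.Builder.create(this, "Send message")
        .queue(queue)
        .messageBody(TaskInput.fromText("Sent from my state machine"))
        .build();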
- Deprecated; use `StepFunctionsStartExecution`.
- Deprecated; use `StepFunctionsStartExecution`.
- `StepFunctionsInvokeActivity`: A Step Functions Task to invoke an Activity worker.
- A fluent builder for `StepFunctionsInvokeActivity`.
- `StepFunctionsInvokeActivityProps`: Properties for invoking an Activity worker.
- A builder for `StepFunctionsInvokeActivityProps`.
- An implementation for `StepFunctionsInvokeActivityProps`.
- `StepFunctionsStartExecution`: A Step Functions Task to call StartExecution on another state machine.
- A fluent builder for `StepFunctionsStartExecution`.
- `StepFunctionsStartExecutionProps`: Properties for StartExecution.
- A builder for `StepFunctionsStartExecutionProps`.
- An implementation for `StepFunctionsStartExecutionProps`.
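A sketch of starting a child state machine and waiting for it to finish via the RUN_JOB integration pattern; `child` is assumed to be an existing `StateMachine`:

StateMachine child;

StepFunctionsStartExecution.Builder.create(this, "Start child execution")
        .stateMachine(child)
        // wait for the child execution to complete before moving on
        .integrationPattern(IntegrationPattern.RUN_JOB)
        .build();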
- `StoppingCondition`: Specifies a limit to how long a model training job can run.
- A builder for `StoppingCondition`.
- An implementation for `StoppingCondition`.
- `TaskEnvironmentVariable`: An environment variable to be set in the container run as a task.
- A builder for `TaskEnvironmentVariable`.
- An implementation for `TaskEnvironmentVariable`.
- `TransformDataSource`: S3 location of the input data that the model can consume.
- A builder for `TransformDataSource`.
- An implementation for `TransformDataSource`.
- `TransformInput`: Dataset to be transformed and the Amazon S3 location where it is stored.
- A builder for `TransformInput`.
- An implementation for `TransformInput`.
- `TransformOutput`: S3 location where you want Amazon SageMaker to save the results from the transform job.
- A builder for `TransformOutput`.
- An implementation for `TransformOutput`.
- `TransformResources`: ML compute instances for the transform job.
- A builder for `TransformResources`.
- An implementation for `TransformResources`.
- `TransformS3DataSource`: Location of the channel data.
- A builder for `TransformS3DataSource`.
- An implementation for `TransformS3DataSource`.
- Class that returns a virtual cluster's id depending on input type.
- `VpcConfig`: Specifies the VPC that you want your Amazon SageMaker training job to connect to.
- A builder for `VpcConfig`.
- An implementation for `VpcConfig`.