CfnPipe
- class aws_cdk.aws_pipes.CfnPipe(scope, id, *, role_arn, source, target, description=None, desired_state=None, enrichment=None, enrichment_parameters=None, name=None, source_parameters=None, tags=None, target_parameters=None)
Bases:
CfnResource
A CloudFormation AWS::Pipes::Pipe.
Create a pipe. Amazon EventBridge Pipes connect event sources to targets and reduce the need for specialized knowledge and integration code.
- CloudformationResource:
AWS::Pipes::Pipe
- Link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-pipes-pipe.html
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

cfn_pipe = pipes.CfnPipe(self, "MyCfnPipe",
    role_arn="roleArn",
    source="source",
    target="target",

    # the properties below are optional
    description="description",
    desired_state="desiredState",
    enrichment="enrichment",
    enrichment_parameters=pipes.CfnPipe.PipeEnrichmentParametersProperty(
        http_parameters=pipes.CfnPipe.PipeEnrichmentHttpParametersProperty(
            header_parameters={
                "header_parameters_key": "headerParameters"
            },
            path_parameter_values=["pathParameterValues"],
            query_string_parameters={
                "query_string_parameters_key": "queryStringParameters"
            }
        ),
        input_template="inputTemplate"
    ),
    name="name",
    source_parameters=pipes.CfnPipe.PipeSourceParametersProperty(
        active_mq_broker_parameters=pipes.CfnPipe.PipeSourceActiveMQBrokerParametersProperty(
            credentials=pipes.CfnPipe.MQBrokerAccessCredentialsProperty(
                basic_auth="basicAuth"
            ),
            queue_name="queueName",
            # the properties below are optional
            batch_size=123,
            maximum_batching_window_in_seconds=123
        ),
        dynamo_db_stream_parameters=pipes.CfnPipe.PipeSourceDynamoDBStreamParametersProperty(
            starting_position="startingPosition",
            # the properties below are optional
            batch_size=123,
            dead_letter_config=pipes.CfnPipe.DeadLetterConfigProperty(
                arn="arn"
            ),
            maximum_batching_window_in_seconds=123,
            maximum_record_age_in_seconds=123,
            maximum_retry_attempts=123,
            on_partial_batch_item_failure="onPartialBatchItemFailure",
            parallelization_factor=123
        ),
        filter_criteria=pipes.CfnPipe.FilterCriteriaProperty(
            filters=[pipes.CfnPipe.FilterProperty(
                pattern="pattern"
            )]
        ),
        kinesis_stream_parameters=pipes.CfnPipe.PipeSourceKinesisStreamParametersProperty(
            starting_position="startingPosition",
            # the properties below are optional
            batch_size=123,
            dead_letter_config=pipes.CfnPipe.DeadLetterConfigProperty(
                arn="arn"
            ),
            maximum_batching_window_in_seconds=123,
            maximum_record_age_in_seconds=123,
            maximum_retry_attempts=123,
            on_partial_batch_item_failure="onPartialBatchItemFailure",
            parallelization_factor=123,
            starting_position_timestamp="startingPositionTimestamp"
        ),
        managed_streaming_kafka_parameters=pipes.CfnPipe.PipeSourceManagedStreamingKafkaParametersProperty(
            topic_name="topicName",
            # the properties below are optional
            batch_size=123,
            consumer_group_id="consumerGroupId",
            credentials=pipes.CfnPipe.MSKAccessCredentialsProperty(
                client_certificate_tls_auth="clientCertificateTlsAuth",
                sasl_scram512_auth="saslScram512Auth"
            ),
            maximum_batching_window_in_seconds=123,
            starting_position="startingPosition"
        ),
        rabbit_mq_broker_parameters=pipes.CfnPipe.PipeSourceRabbitMQBrokerParametersProperty(
            credentials=pipes.CfnPipe.MQBrokerAccessCredentialsProperty(
                basic_auth="basicAuth"
            ),
            queue_name="queueName",
            # the properties below are optional
            batch_size=123,
            maximum_batching_window_in_seconds=123,
            virtual_host="virtualHost"
        ),
        self_managed_kafka_parameters=pipes.CfnPipe.PipeSourceSelfManagedKafkaParametersProperty(
            topic_name="topicName",
            # the properties below are optional
            additional_bootstrap_servers=["additionalBootstrapServers"],
            batch_size=123,
            consumer_group_id="consumerGroupId",
            credentials=pipes.CfnPipe.SelfManagedKafkaAccessConfigurationCredentialsProperty(
                basic_auth="basicAuth",
                client_certificate_tls_auth="clientCertificateTlsAuth",
                sasl_scram256_auth="saslScram256Auth",
                sasl_scram512_auth="saslScram512Auth"
            ),
            maximum_batching_window_in_seconds=123,
            server_root_ca_certificate="serverRootCaCertificate",
            starting_position="startingPosition",
            vpc=pipes.CfnPipe.SelfManagedKafkaAccessConfigurationVpcProperty(
                security_group=["securityGroup"],
                subnets=["subnets"]
            )
        ),
        sqs_queue_parameters=pipes.CfnPipe.PipeSourceSqsQueueParametersProperty(
            batch_size=123,
            maximum_batching_window_in_seconds=123
        )
    ),
    tags={
        "tags_key": "tags"
    },
    target_parameters=pipes.CfnPipe.PipeTargetParametersProperty(
        batch_job_parameters=pipes.CfnPipe.PipeTargetBatchJobParametersProperty(
            job_definition="jobDefinition",
            job_name="jobName",
            # the properties below are optional
            array_properties=pipes.CfnPipe.BatchArrayPropertiesProperty(
                size=123
            ),
            container_overrides=pipes.CfnPipe.BatchContainerOverridesProperty(
                command=["command"],
                environment=[pipes.CfnPipe.BatchEnvironmentVariableProperty(
                    name="name",
                    value="value"
                )],
                instance_type="instanceType",
                resource_requirements=[pipes.CfnPipe.BatchResourceRequirementProperty(
                    type="type",
                    value="value"
                )]
            ),
            depends_on=[pipes.CfnPipe.BatchJobDependencyProperty(
                job_id="jobId",
                type="type"
            )],
            parameters={
                "parameters_key": "parameters"
            },
            retry_strategy=pipes.CfnPipe.BatchRetryStrategyProperty(
                attempts=123
            )
        ),
        cloud_watch_logs_parameters=pipes.CfnPipe.PipeTargetCloudWatchLogsParametersProperty(
            log_stream_name="logStreamName",
            timestamp="timestamp"
        ),
        ecs_task_parameters=pipes.CfnPipe.PipeTargetEcsTaskParametersProperty(
            task_definition_arn="taskDefinitionArn",
            # the properties below are optional
            capacity_provider_strategy=[pipes.CfnPipe.CapacityProviderStrategyItemProperty(
                capacity_provider="capacityProvider",
                # the properties below are optional
                base=123,
                weight=123
            )],
            enable_ecs_managed_tags=False,
            enable_execute_command=False,
            group="group",
            launch_type="launchType",
            network_configuration=pipes.CfnPipe.NetworkConfigurationProperty(
                awsvpc_configuration=pipes.CfnPipe.AwsVpcConfigurationProperty(
                    subnets=["subnets"],
                    # the properties below are optional
                    assign_public_ip="assignPublicIp",
                    security_groups=["securityGroups"]
                )
            ),
            overrides=pipes.CfnPipe.EcsTaskOverrideProperty(
                container_overrides=[pipes.CfnPipe.EcsContainerOverrideProperty(
                    command=["command"],
                    cpu=123,
                    environment=[pipes.CfnPipe.EcsEnvironmentVariableProperty(
                        name="name",
                        value="value"
                    )],
                    environment_files=[pipes.CfnPipe.EcsEnvironmentFileProperty(
                        type="type",
                        value="value"
                    )],
                    memory=123,
                    memory_reservation=123,
                    name="name",
                    resource_requirements=[pipes.CfnPipe.EcsResourceRequirementProperty(
                        type="type",
                        value="value"
                    )]
                )],
                cpu="cpu",
                ephemeral_storage=pipes.CfnPipe.EcsEphemeralStorageProperty(
                    size_in_gi_b=123
                ),
                execution_role_arn="executionRoleArn",
                inference_accelerator_overrides=[pipes.CfnPipe.EcsInferenceAcceleratorOverrideProperty(
                    device_name="deviceName",
                    device_type="deviceType"
                )],
                memory="memory",
                task_role_arn="taskRoleArn"
            ),
            placement_constraints=[pipes.CfnPipe.PlacementConstraintProperty(
                expression="expression",
                type="type"
            )],
            placement_strategy=[pipes.CfnPipe.PlacementStrategyProperty(
                field="field",
                type="type"
            )],
            platform_version="platformVersion",
            propagate_tags="propagateTags",
            reference_id="referenceId",
            tags=[CfnTag(
                key="key",
                value="value"
            )],
            task_count=123
        ),
        event_bridge_event_bus_parameters=pipes.CfnPipe.PipeTargetEventBridgeEventBusParametersProperty(
            detail_type="detailType",
            endpoint_id="endpointId",
            resources=["resources"],
            source="source",
            time="time"
        ),
        http_parameters=pipes.CfnPipe.PipeTargetHttpParametersProperty(
            header_parameters={
                "header_parameters_key": "headerParameters"
            },
            path_parameter_values=["pathParameterValues"],
            query_string_parameters={
                "query_string_parameters_key": "queryStringParameters"
            }
        ),
        input_template="inputTemplate",
        kinesis_stream_parameters=pipes.CfnPipe.PipeTargetKinesisStreamParametersProperty(
            partition_key="partitionKey"
        ),
        lambda_function_parameters=pipes.CfnPipe.PipeTargetLambdaFunctionParametersProperty(
            invocation_type="invocationType"
        ),
        redshift_data_parameters=pipes.CfnPipe.PipeTargetRedshiftDataParametersProperty(
            database="database",
            sqls=["sqls"],
            # the properties below are optional
            db_user="dbUser",
            secret_manager_arn="secretManagerArn",
            statement_name="statementName",
            with_event=False
        ),
        sage_maker_pipeline_parameters=pipes.CfnPipe.PipeTargetSageMakerPipelineParametersProperty(
            pipeline_parameter_list=[pipes.CfnPipe.SageMakerPipelineParameterProperty(
                name="name",
                value="value"
            )]
        ),
        sqs_queue_parameters=pipes.CfnPipe.PipeTargetSqsQueueParametersProperty(
            message_deduplication_id="messageDeduplicationId",
            message_group_id="messageGroupId"
        ),
        step_function_state_machine_parameters=pipes.CfnPipe.PipeTargetStateMachineParametersProperty(
            invocation_type="invocationType"
        )
    )
)
Create a new AWS::Pipes::Pipe.
- Parameters:
scope (
Construct
) – scope in which this resource is defined.
id (
str
) – scoped id of the resource.
role_arn (
str
) – The ARN of the role that allows the pipe to send data to the target.source (
str
) – The ARN of the source resource.target (
str
) – The ARN of the target resource.description (
Optional
[str
]) – A description of the pipe.desired_state (
Optional
[str
]) – The state the pipe should be in.enrichment (
Optional
[str
]) – The ARN of the enrichment resource.enrichment_parameters (
Union
[PipeEnrichmentParametersProperty
,Dict
[str
,Any
],IResolvable
,None
]) – The parameters required to set up enrichment on your pipe.name (
Optional
[str
]) – The name of the pipe.source_parameters (
Union
[IResolvable
,PipeSourceParametersProperty
,Dict
[str
,Any
],None
]) – The parameters required to set up a source for your pipe.tags (
Optional
[Mapping
[str
,str
]]) – The list of key-value pairs to associate with the pipe.target_parameters (
Union
[IResolvable
,PipeTargetParametersProperty
,Dict
[str
,Any
],None
]) – The parameters required to set up a target for your pipe. For more information about pipe target parameters, including how to use dynamic path parameters, see Target parameters in the Amazon EventBridge User Guide .
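Beyond the generated placeholder example above, the following is a minimal sketch of a concrete pipe connecting one SQS queue to another; the role and queue ARNs are hypothetical and must already exist in your account:
import aws_cdk.aws_pipes as pipes

# Minimal SQS-to-SQS pipe; all ARNs are hypothetical placeholders.
pipe = pipes.CfnPipe(self, "SqsToSqsPipe",
    role_arn="arn:aws:iam::123456789012:role/my-pipe-role",
    source="arn:aws:sqs:us-east-1:123456789012:source-queue",
    target="arn:aws:sqs:us-east-1:123456789012:target-queue",
    source_parameters=pipes.CfnPipe.PipeSourceParametersProperty(
        sqs_queue_parameters=pipes.CfnPipe.PipeSourceSqsQueueParametersProperty(
            batch_size=10  # deliver up to 10 messages per invocation batch
        )
    )
)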
Methods
- add_deletion_override(path)
Syntactic sugar for
addOverride(path, undefined)
.- Parameters:
path (
str
) – The path of the value to delete.- Return type:
None
- add_depends_on(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (
CfnResource
)- Return type:
None
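For example, to provision the pipe only after another resource in the stack (my_cfn_queue is a hypothetical CfnResource):
# Hypothetical: delay the pipe until my_cfn_queue has been provisioned.
cfn_pipe.add_depends_on(my_cfn_queue)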
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (
str
)value (
Any
)
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
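A small usage sketch (the key and value here are arbitrary):
# Adds a Metadata entry to the synthesized AWS::Pipes::Pipe resource.
cfn_pipe.add_metadata("Purpose", "order-processing")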
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with “Properties.” (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides. Example:
"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    },
  ]
  ...
}
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (
str
) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (
Any
) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (
str
) – The path to the property.- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for
addOverride("Properties.<...>", value)
.- Parameters:
property_path (
str
) – The path of the property.value (
Any
) – The value.
- Return type:
None
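For example, DesiredState is a property of AWS::Pipes::Pipe, so a raw property override could look like this sketch:
# Equivalent to add_override("Properties.DesiredState", "STOPPED").
cfn_pipe.add_property_override("DesiredState", "STOPPED")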
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (
RemovalPolicy.DESTROY
), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN
).- Parameters:
policy (
Optional
[RemovalPolicy
])apply_to_update_replace_policy (
Optional
[bool
]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: truedefault (
Optional
[RemovalPolicy
]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- Return type:
None
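A usage sketch, assuming the CDK v1 core module is available as cdk:
import aws_cdk.core as cdk

# Keep the pipe in the account (DeletionPolicy: Retain) when it leaves the stack.
cfn_pipe.apply_removal_policy(cdk.RemovalPolicy.RETAIN)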
- get_att(attribute_name)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g.
resource.arn
), but this can be used for future compatibility in case there is no generated attribute.- Parameters:
attribute_name (
str
) – The name of the attribute.- Return type:
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (
str
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (
TreeInspector
) – tree inspector to collect and process attributes.
- Return type:
None
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (
str
) – The new logical ID to use for this stack element.- Return type:
None
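For example, to pin the logical ID (a common step when adopting an existing template); the ID below is hypothetical:
# Replace the hash-suffixed auto-generated logical ID.
cfn_pipe.override_logical_id("OrderEventsPipe")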
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::Pipes::Pipe'
- attr_arn
The ARN of the pipe.
- CloudformationAttribute:
Arn
- attr_creation_time
The time the pipe was created.
- CloudformationAttribute:
CreationTime
- attr_current_state
The state the pipe is in.
- CloudformationAttribute:
CurrentState
- attr_last_modified_time
When the pipe was last updated, in ISO-8601 format (YYYY-MM-DDThh:mm:ss.sTZD).
- CloudformationAttribute:
LastModifiedTime
- attr_state_reason
The reason the pipe is in its current state.
- CloudformationAttribute:
StateReason
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- description
A description of the pipe.
- desired_state
The state the pipe should be in.
- enrichment
The ARN of the enrichment resource.
- enrichment_parameters
The parameters required to set up enrichment on your pipe.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use
overrideLogicalId(newLogicalId)
.- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- name
The name of the pipe.
- node
The construct tree node associated with this construct.
- ref
Return a string that will be resolved to a CloudFormation
{ Ref }
for this element.If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through
Lazy.any({ produce: resource.ref })
.
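As a usage sketch (for AWS::Pipes::Pipe, Ref resolves to the pipe name):
import aws_cdk.core as cdk

# Surface the pipe's { Ref } as a stack output; it resolves to the pipe name.
cdk.CfnOutput(self, "PipeName", value=cfn_pipe.ref)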
- role_arn
The ARN of the role that allows the pipe to send data to the target.
- source
The ARN of the source resource.
- source_parameters
The parameters required to set up a source for your pipe.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
The list of key-value pairs to associate with the pipe.
- target
The ARN of the target resource.
- target_parameters
The parameters required to set up a target for your pipe.
For more information about pipe target parameters, including how to use dynamic path parameters, see Target parameters in the Amazon EventBridge User Guide .
Static Methods
- classmethod is_cfn_element(x)
Returns
true
if a construct is a stack element (i.e. part of the synthesized cloudformation template).Uses duck-typing instead of
instanceof
to allow stack elements from different versions of this library to be included in the same stack.- Parameters:
x (
Any
)- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(construct)
Check whether the given construct is a CfnResource.
- Parameters:
construct (
IConstruct
)- Return type:
bool
- classmethod is_construct(x)
Return whether the given object is a Construct.
- Parameters:
x (
Any
)- Return type:
bool
AwsVpcConfigurationProperty
- class CfnPipe.AwsVpcConfigurationProperty(*, subnets, assign_public_ip=None, security_groups=None)
Bases:
object
This structure specifies the VPC subnets and security groups for the task, and whether a public IP address is to be used.
This structure is relevant only for ECS tasks that use the
awsvpc
network mode.- Parameters:
subnets (
Sequence
[str
]) – Specifies the subnets associated with the task. These subnets must all be in the same VPC. You can specify as many as 16 subnets.assign_public_ip (
Optional
[str
]) – Specifies whether the task’s elastic network interface receives a public IP address. You can specifyENABLED
only whenLaunchType
inEcsParameters
is set toFARGATE
.security_groups (
Optional
[Sequence
[str
]]) – Specifies the security groups associated with the task. These security groups must all be in the same VPC. You can specify as many as five security groups. If you do not specify a security group, the default security group for the VPC is used.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

aws_vpc_configuration_property = pipes.CfnPipe.AwsVpcConfigurationProperty(
    subnets=["subnets"],

    # the properties below are optional
    assign_public_ip="assignPublicIp",
    security_groups=["securityGroups"]
)
Attributes
- assign_public_ip
Specifies whether the task’s elastic network interface receives a public IP address.
You can specify
ENABLED
only whenLaunchType
inEcsParameters
is set toFARGATE
.
- security_groups
Specifies the security groups associated with the task.
These security groups must all be in the same VPC. You can specify as many as five security groups. If you do not specify a security group, the default security group for the VPC is used.
- subnets
Specifies the subnets associated with the task.
These subnets must all be in the same VPC. You can specify as many as 16 subnets.
BatchArrayPropertiesProperty
- class CfnPipe.BatchArrayPropertiesProperty(*, size=None)
Bases:
object
The array properties for the submitted job, such as the size of the array.
The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. This parameter is used only if the target is an AWS Batch job.
- Parameters:
size (
Union
[int
,float
,None
]) – The size of the array, if this is an array batch job.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

batch_array_properties_property = pipes.CfnPipe.BatchArrayPropertiesProperty(
    size=123
)
Attributes
- size
The size of the array, if this is an array batch job.
BatchContainerOverridesProperty
- class CfnPipe.BatchContainerOverridesProperty(*, command=None, environment=None, instance_type=None, resource_requirements=None)
Bases:
object
The overrides that are sent to a container.
- Parameters:
command (
Optional
[Sequence
[str
]]) – The command to send to the container that overrides the default command from the Docker image or the task definition.environment (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,BatchEnvironmentVariableProperty
,Dict
[str
,Any
]]],None
]) – The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. .. epigraph:: Environment variables cannot start with “AWS Batch
“. This naming convention is reserved for variables that AWS Batch sets.instance_type (
Optional
[str
]) – The instance type to use for a multi-node parallel job. .. epigraph:: This parameter isn’t applicable to single-node container jobs or jobs that run on Fargate resources, and shouldn’t be provided.resource_requirements (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,BatchResourceRequirementProperty
,Dict
[str
,Any
]]],None
]) – The type and amount of resources to assign to a container. This overrides the settings in the job definition. The supported resources includeGPU
,MEMORY
, andVCPU
.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

batch_container_overrides_property = pipes.CfnPipe.BatchContainerOverridesProperty(
    command=["command"],
    environment=[pipes.CfnPipe.BatchEnvironmentVariableProperty(
        name="name",
        value="value"
    )],
    instance_type="instanceType",
    resource_requirements=[pipes.CfnPipe.BatchResourceRequirementProperty(
        type="type",
        value="value"
    )]
)
Attributes
- command
The command to send to the container that overrides the default command from the Docker image or the task definition.
- environment
The environment variables to send to the container.
You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition.
Environment variables cannot start with "AWS Batch". This naming convention is reserved for variables that AWS Batch sets.
- instance_type
The instance type to use for a multi-node parallel job.
This parameter isn’t applicable to single-node container jobs or jobs that run on Fargate resources, and shouldn’t be provided.
- resource_requirements
The type and amount of resources to assign to a container.
This overrides the settings in the job definition. The supported resources include
GPU
,MEMORY
, andVCPU
.
BatchEnvironmentVariableProperty
- class CfnPipe.BatchEnvironmentVariableProperty(*, name=None, value=None)
Bases:
object
The environment variables to send to the container.
You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition.
Environment variables cannot start with "AWS Batch". This naming convention is reserved for variables that AWS Batch sets.
- Parameters:
name (
Optional
[str
]) – The name of the key-value pair. For environment variables, this is the name of the environment variable.value (
Optional
[str
]) – The value of the key-value pair. For environment variables, this is the value of the environment variable.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

batch_environment_variable_property = pipes.CfnPipe.BatchEnvironmentVariableProperty(
    name="name",
    value="value"
)
Attributes
- name
The name of the key-value pair.
For environment variables, this is the name of the environment variable.
- value
The value of the key-value pair.
For environment variables, this is the value of the environment variable.
BatchJobDependencyProperty
- class CfnPipe.BatchJobDependencyProperty(*, job_id=None, type=None)
Bases:
object
An object that represents an AWS Batch job dependency.
- Parameters:
job_id (
Optional
[str
]) – The job ID of the AWS Batch job that’s associated with this dependency.type (
Optional
[str
]) – The type of the job dependency.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

batch_job_dependency_property = pipes.CfnPipe.BatchJobDependencyProperty(
    job_id="jobId",
    type="type"
)
Attributes
- job_id
The job ID of the AWS Batch job that’s associated with this dependency.
- type
The type of the job dependency.
BatchResourceRequirementProperty
- class CfnPipe.BatchResourceRequirementProperty(*, type, value)
Bases:
object
The type and amount of a resource to assign to a container.
The supported resources include
GPU
,MEMORY
, andVCPU
.- Parameters:
type (
str
) – The type of resource to assign to a container. The supported resources includeGPU
,MEMORY
, andVCPU
.value (
str
) –The quantity of the specified resource to reserve for the container. The values vary based on the
type
specified. - type=”GPU” - The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn’t exceed the number of available GPUs on the compute resource that the job is launched on. .. epigraph:: GPUs aren’t available for jobs that are running on Fargate resources. - type=”MEMORY” - The memory hard limit (in MiB) present to the container. This parameter is supported for jobs that are running on EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps toMemory
in the Create a container section of the Docker Remote API and the--memory
option to docker run . You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once. This parameter maps toMemory
in the Create a container section of the Docker Remote API and the--memory
option to docker run . .. epigraph:: If you’re trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Memory management in the AWS Batch User Guide . For jobs that are running on Fargate resources, thenvalue
is the hard limit (in MiB), and must match one of the supported values and theVCPU
values must be one of the values supported for that memory value. - value = 512 -VCPU
= 0.25 - value = 1024 -VCPU
= 0.25 or 0.5 - value = 2048 -VCPU
= 0.25, 0.5, or 1 - value = 3072 -VCPU
= 0.5, or 1 - value = 4096 -VCPU
= 0.5, 1, or 2 - value = 5120, 6144, or 7168 -VCPU
= 1 or 2 - value = 8192 -VCPU
= 1, 2, 4, or 8 - value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360 -VCPU
= 2 or 4 - value = 16384 -VCPU
= 2, 4, or 8 - value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720 -VCPU
= 4 - value = 20480, 24576, or 28672 -VCPU
= 4 or 8 - value = 36864, 45056, 53248, or 61440 -VCPU
= 8 - value = 32768, 40960, 49152, or 57344 -VCPU
= 8 or 16 - value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880 -VCPU
= 16 - type=”VCPU” - The number of vCPUs reserved for the container. This parameter maps toCpuShares
in the Create a container section of the Docker Remote API and the--cpu-shares
option to docker run . Each vCPU is equivalent to 1,024 CPU shares. For EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once. The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see AWS Fargate quotas in the AWS General Reference . For jobs that are running on Fargate resources, thenvalue
must match one of the supported values and theMEMORY
values must be one of the values supported for thatVCPU
value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16 - value = 0.25 -MEMORY
= 512, 1024, or 2048 - value = 0.5 -MEMORY
= 1024, 2048, 3072, or 4096 - value = 1 -MEMORY
= 2048, 3072, 4096, 5120, 6144, 7168, or 8192 - value = 2 -MEMORY
= 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384 - value = 4 -MEMORY
= 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720 - value = 8 -MEMORY
= 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440 - value = 16 -MEMORY
= 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

batch_resource_requirement_property = pipes.CfnPipe.BatchResourceRequirementProperty(
    type="type",
    value="value"
)
Attributes
- type
The type of resource to assign to a container.
The supported resources include
GPU
,MEMORY
, andVCPU
.
- value
The quantity of the specified resource to reserve for the container. The values vary based on the
type
specified.type=”GPU” - The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn’t exceed the number of available GPUs on the compute resource that the job is launched on.
GPUs aren’t available for jobs that are running on Fargate resources.
type=”MEMORY” - The memory hard limit (in MiB) present to the container. This parameter is supported for jobs that are running on EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. This parameter maps to
Memory
in the Create a container section of the Docker Remote API and the--memory
option to docker run . You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once. This parameter maps toMemory
in the Create a container section of the Docker Remote API and the--memory
option to docker run .
If you’re trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Memory management in the AWS Batch User Guide .
For jobs that are running on Fargate resources, then
value
is the hard limit (in MiB), and must match one of the supported values and theVCPU
values must be one of the values supported for that memory value.value = 512 -
VCPU
= 0.25value = 1024 -
VCPU
= 0.25 or 0.5value = 2048 -
VCPU
= 0.25, 0.5, or 1value = 3072 -
VCPU
= 0.5, or 1value = 4096 -
VCPU
= 0.5, 1, or 2value = 5120, 6144, or 7168 -
VCPU
= 1 or 2value = 8192 -
VCPU
= 1, 2, 4, or 8value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360 -
VCPU
= 2 or 4value = 16384 -
VCPU
= 2, 4, or 8value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720 -
VCPU
= 4value = 20480, 24576, or 28672 -
VCPU
= 4 or 8value = 36864, 45056, 53248, or 61440 -
VCPU
= 8value = 32768, 40960, 49152, or 57344 -
VCPU
= 8 or 16value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880 -
VCPU
= 16type=”VCPU” - The number of vCPUs reserved for the container. This parameter maps to
CpuShares
in the Create a container section of the Docker Remote API and the--cpu-shares
option to docker run . Each vCPU is equivalent to 1,024 CPU shares. For EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once.
The default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. For more information about Fargate quotas, see AWS Fargate quotas in the AWS General Reference .
For jobs that are running on Fargate resources, then
value
must match one of the supported values and theMEMORY
values must be one of the values supported for thatVCPU
value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16value = 0.25 -
MEMORY
= 512, 1024, or 2048value = 0.5 -
MEMORY
= 1024, 2048, 3072, or 4096value = 1 -
MEMORY
= 2048, 3072, 4096, 5120, 6144, 7168, or 8192value = 2 -
MEMORY
= 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384value = 4 -
MEMORY
= 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720value = 8 -
MEMORY
= 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440value = 16 -
MEMORY
= 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880
BatchRetryStrategyProperty
- class CfnPipe.BatchRetryStrategyProperty(*, attempts=None)
Bases:
object
The retry strategy that’s associated with a job.
For more information, see Automated job retries in the AWS Batch User Guide .
- Parameters:
attempts (
Union
[int
,float
,None
]) – The number of times to move a job to theRUNNABLE
status. If the value ofattempts
is greater than one, the job is retried on failure the same number of attempts as the value.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

batch_retry_strategy_property = pipes.CfnPipe.BatchRetryStrategyProperty(
    attempts=123
)
Attributes
- attempts
The number of times to move a job to the
RUNNABLE
status.If the value of
attempts
is greater than one, the job is retried on failure the same number of attempts as the value.
CapacityProviderStrategyItemProperty
- class CfnPipe.CapacityProviderStrategyItemProperty(*, capacity_provider, base=None, weight=None)
Bases:
object
The details of a capacity provider strategy.
To learn more, see CapacityProviderStrategyItem in the Amazon ECS API Reference.
- Parameters:
capacity_provider (
str
) – The short name of the capacity provider.base (
Union
[int
,float
,None
]) – The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.weight (
Union
[int
,float
,None
]) – The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

capacity_provider_strategy_item_property = pipes.CfnPipe.CapacityProviderStrategyItemProperty(
    capacity_provider="capacityProvider",

    # the properties below are optional
    base=123,
    weight=123
)
Attributes
- base
The base value designates how many tasks, at a minimum, to run on the specified capacity provider.
Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
- capacity_provider
The short name of the capacity provider.
- weight
The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider.
The weight value is taken into consideration after the base value, if defined, is satisfied.
DeadLetterConfigProperty
- class CfnPipe.DeadLetterConfigProperty(*, arn=None)
Bases:
object
A
DeadLetterConfig
object that contains information about a dead-letter queue configuration.- Parameters:
arn (
Optional
[str
]) – The ARN of the Amazon SQS queue specified as the target for the dead-letter queue.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

dead_letter_config_property = pipes.CfnPipe.DeadLetterConfigProperty(
    arn="arn"
)
Attributes
- arn
The ARN of the Amazon SQS queue specified as the target for the dead-letter queue.
EcsContainerOverrideProperty
- class CfnPipe.EcsContainerOverrideProperty(*, command=None, cpu=None, environment=None, environment_files=None, memory=None, memory_reservation=None, name=None, resource_requirements=None)
Bases:
object
The overrides that are sent to a container.
An empty container override can be passed in. An example of an empty container override is
{"containerOverrides": [ ] }
. If a non-empty container override is specified, thename
parameter must be included.- Parameters:
command (
Optional
[Sequence
[str
]]) – The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.cpu (
Union
[int
,float
,None
]) – The number ofcpu
units reserved for the container, instead of the default value from the task definition. You must also specify a container name.environment (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,EcsEnvironmentVariableProperty
,Dict
[str
,Any
]]],None
]) – The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.environment_files (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,EcsEnvironmentFileProperty
,Dict
[str
,Any
]]],None
]) – A list of files containing the environment variables to pass to a container, instead of the value from the container definition.memory (
Union
[int
,float
,None
]) – The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition. If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.memory_reservation (
Union
[int
,float
,None
]) – The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition. You must also specify a container name.name (
Optional
[str
]) – The name of the container that receives the override. This parameter is required if any override is specified.resource_requirements (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,EcsResourceRequirementProperty
,Dict
[str
,Any
]]],None
]) – The type and amount of a resource to assign to a container, instead of the default value from the task definition. The only supported resource is a GPU.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

ecs_container_override_property = pipes.CfnPipe.EcsContainerOverrideProperty(
    command=["command"],
    cpu=123,
    environment=[pipes.CfnPipe.EcsEnvironmentVariableProperty(
        name="name",
        value="value"
    )],
    environment_files=[pipes.CfnPipe.EcsEnvironmentFileProperty(
        type="type",
        value="value"
    )],
    memory=123,
    memory_reservation=123,
    name="name",
    resource_requirements=[pipes.CfnPipe.EcsResourceRequirementProperty(
        type="type",
        value="value"
    )]
)
Attributes
- command
The command to send to the container that overrides the default command from the Docker image or the task definition.
You must also specify a container name.
- cpu
The number of
cpu
units reserved for the container, instead of the default value from the task definition.You must also specify a container name.
- environment
The environment variables to send to the container.
You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.
- environment_files
A list of files containing the environment variables to pass to a container, instead of the value from the container definition.
- memory
The hard limit (in MiB) of memory to present to the container, instead of the default value from the task definition.
If your container attempts to exceed the memory specified here, the container is killed. You must also specify a container name.
- memory_reservation
The soft limit (in MiB) of memory to reserve for the container, instead of the default value from the task definition.
You must also specify a container name.
- name
The name of the container that receives the override.
This parameter is required if any override is specified.
- resource_requirements
The type and amount of a resource to assign to a container, instead of the default value from the task definition.
The only supported resource is a GPU.
EcsEnvironmentFileProperty
- class CfnPipe.EcsEnvironmentFileProperty(*, type, value)
Bases:
object
A list of files containing the environment variables to pass to a container.
You can specify up to ten environment files. The file must have a
.env
file extension. Each line in an environment file should contain an environment variable inVARIABLE=VALUE
format. Lines beginning with#
are treated as comments and are ignored. For more information about the environment variable file syntax, see Declare default environment variables in file .If there are environment variables specified using the
environment
parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they’re processed from the top down. We recommend that you use unique variable names. For more information, see Specifying environment variables in the Amazon Elastic Container Service Developer Guide .This parameter is only supported for tasks hosted on Fargate using the following platform versions:
Linux platform version
1.4.0
or later.Windows platform version
1.0.0
or later.
- Parameters:
type (
str
) – The file type to use. The only supported value iss3
.value (
str
) – The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

ecs_environment_file_property = pipes.CfnPipe.EcsEnvironmentFileProperty(
    type="type",
    value="value"
)
Attributes
- type
The file type to use.
The only supported value is
s3
.
- value
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
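Putting the two fields together, a sketch with a hypothetical S3 object ARN:
import aws_cdk.aws_pipes as pipes

env_file = pipes.CfnPipe.EcsEnvironmentFileProperty(
    type="s3",  # the only supported value
    value="arn:aws:s3:::my-config-bucket/app.env"  # hypothetical .env object ARN
)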
EcsEnvironmentVariableProperty
- class CfnPipe.EcsEnvironmentVariableProperty(*, name=None, value=None)
Bases:
object
The environment variables to send to the container.
You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the task definition. You must also specify a container name.
- Parameters:
name (
Optional
[str
]) – The name of the key-value pair. For environment variables, this is the name of the environment variable.value (
Optional
[str
]) – The value of the key-value pair. For environment variables, this is the value of the environment variable.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

ecs_environment_variable_property = pipes.CfnPipe.EcsEnvironmentVariableProperty(
    name="name",
    value="value"
)
Attributes
- name
The name of the key-value pair.
For environment variables, this is the name of the environment variable.
- value
The value of the key-value pair.
For environment variables, this is the value of the environment variable.
EcsEphemeralStorageProperty
- class CfnPipe.EcsEphemeralStorageProperty(*, size_in_gib)
Bases:
object
The amount of ephemeral storage to allocate for the task.
This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on Fargate. For more information, see Fargate task storage in the Amazon ECS User Guide for Fargate.
This parameter is only supported for tasks hosted on Fargate using Linux platform version 1.4.0 or later. This parameter is not supported for Windows containers on Fargate.
- Parameters:
size_in_gib (
Union
[int
,float
]) – The total amount, in GiB, of ephemeral storage to set for the task. The minimum supported value is 21 GiB and the maximum supported value is 200 GiB.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

ecs_ephemeral_storage_property = pipes.CfnPipe.EcsEphemeralStorageProperty(
    size_in_gi_b=123
)
Attributes
- size_in_gib
The total amount, in GiB, of ephemeral storage to set for the task.
The minimum supported value is
21
GiB and the maximum supported value is200
GiB.
EcsInferenceAcceleratorOverrideProperty
- class CfnPipe.EcsInferenceAcceleratorOverrideProperty(*, device_name=None, device_type=None)
Bases:
object
Details on an Elastic Inference accelerator task override.
This parameter is used to override the Elastic Inference accelerator specified in the task definition. For more information, see Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide .
- Parameters:
device_name (
Optional
[str
]) – The Elastic Inference accelerator device name to override for the task. This parameter must match adeviceName
specified in the task definition.device_type (
Optional
[str
]) – The Elastic Inference accelerator type to use.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

ecs_inference_accelerator_override_property = pipes.CfnPipe.EcsInferenceAcceleratorOverrideProperty(
    device_name="deviceName",
    device_type="deviceType"
)
Attributes
- device_name
The Elastic Inference accelerator device name to override for the task.
This parameter must match a
deviceName
specified in the task definition.
- device_type
The Elastic Inference accelerator type to use.
EcsResourceRequirementProperty
- class CfnPipe.EcsResourceRequirementProperty(*, type, value)
Bases:
object
The type and amount of a resource to assign to a container.
The supported resource types are GPUs and Elastic Inference accelerators. For more information, see Working with GPUs on Amazon ECS or Working with Amazon Elastic Inference on Amazon ECS in the Amazon Elastic Container Service Developer Guide
- Parameters:
type (
str
) – The type of resource to assign to a container. The supported values areGPU
orInferenceAccelerator
.value (
str
) – The value for the specified resource type. If theGPU
type is used, the value is the number of physicalGPUs
the Amazon ECS container agent reserves for the container. The number of GPUs that’s reserved for all containers in a task can’t exceed the number of available GPUs on the container instance that the task is launched on. If theInferenceAccelerator
type is used, thevalue
matches thedeviceName
for an InferenceAccelerator specified in a task definition.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

ecs_resource_requirement_property = pipes.CfnPipe.EcsResourceRequirementProperty(
    type="type",
    value="value"
)
Attributes
- type
The type of resource to assign to a container.
The supported values are
GPU
orInferenceAccelerator
.
- value
The value for the specified resource type.
If the
GPU
type is used, the value is the number of physicalGPUs
the Amazon ECS container agent reserves for the container. The number of GPUs that’s reserved for all containers in a task can’t exceed the number of available GPUs on the container instance that the task is launched on.If the
InferenceAccelerator
type is used, thevalue
matches thedeviceName
for an InferenceAccelerator specified in a task definition.
EcsTaskOverrideProperty
- class CfnPipe.EcsTaskOverrideProperty(*, container_overrides=None, cpu=None, ephemeral_storage=None, execution_role_arn=None, inference_accelerator_overrides=None, memory=None, task_role_arn=None)
Bases:
object
The overrides that are associated with a task.
- Parameters:
container_overrides (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,EcsContainerOverrideProperty
,Dict
[str
,Any
]]],None
]) – One or more container overrides that are sent to a task.cpu (
Optional
[str
]) – The cpu override for the task.ephemeral_storage (
Union
[IResolvable
,EcsEphemeralStorageProperty
,Dict
[str
,Any
],None
]) – The ephemeral storage setting override for the task. .. epigraph:: This parameter is only supported for tasks hosted on Fargate that use the following platform versions: - Linux platform version1.4.0
or later. - Windows platform version1.0.0
or later.execution_role_arn (
Optional
[str
]) – The Amazon Resource Name (ARN) of the task execution IAM role override for the task. For more information, see Amazon ECS task execution IAM role in the Amazon Elastic Container Service Developer Guide .inference_accelerator_overrides (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,EcsInferenceAcceleratorOverrideProperty
,Dict
[str
,Any
]]],None
]) – The Elastic Inference accelerator override for the task.memory (
Optional
[str
]) – The memory override for the task.task_role_arn (
Optional
[str
]) – The Amazon Resource Name (ARN) of the IAM role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the Amazon Elastic Container Service Developer Guide .
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

ecs_task_override_property = pipes.CfnPipe.EcsTaskOverrideProperty(
    container_overrides=[pipes.CfnPipe.EcsContainerOverrideProperty(
        command=["command"],
        cpu=123,
        environment=[pipes.CfnPipe.EcsEnvironmentVariableProperty(
            name="name",
            value="value"
        )],
        environment_files=[pipes.CfnPipe.EcsEnvironmentFileProperty(
            type="type",
            value="value"
        )],
        memory=123,
        memory_reservation=123,
        name="name",
        resource_requirements=[pipes.CfnPipe.EcsResourceRequirementProperty(
            type="type",
            value="value"
        )]
    )],
    cpu="cpu",
    ephemeral_storage=pipes.CfnPipe.EcsEphemeralStorageProperty(
        size_in_gi_b=123
    ),
    execution_role_arn="executionRoleArn",
    inference_accelerator_overrides=[pipes.CfnPipe.EcsInferenceAcceleratorOverrideProperty(
        device_name="deviceName",
        device_type="deviceType"
    )],
    memory="memory",
    task_role_arn="taskRoleArn"
)
Attributes
- container_overrides
One or more container overrides that are sent to a task.
- cpu
The cpu override for the task.
- ephemeral_storage
The ephemeral storage setting override for the task.
This parameter is only supported for tasks hosted on Fargate that use the following platform versions:
Linux platform version
1.4.0
or later.Windows platform version
1.0.0
or later.
- execution_role_arn
The Amazon Resource Name (ARN) of the task execution IAM role override for the task.
For more information, see Amazon ECS task execution IAM role in the Amazon Elastic Container Service Developer Guide .
- inference_accelerator_overrides
The Elastic Inference accelerator override for the task.
- memory
The memory override for the task.
- task_role_arn
The Amazon Resource Name (ARN) of the IAM role that containers in this task can assume.
All containers in this task are granted the permissions that are specified in this role. For more information, see IAM Role for Tasks in the Amazon Elastic Container Service Developer Guide .
FilterCriteriaProperty
- class CfnPipe.FilterCriteriaProperty(*, filters=None)
Bases:
object
The collection of event patterns used to filter events.
To remove a filter, specify a
FilterCriteria
object with an empty array ofFilter
objects.For more information, see Events and Event Patterns in the Amazon EventBridge User Guide .
- Parameters:
filters (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,FilterProperty
,Dict
[str
,Any
]]],None
]) – The event patterns.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

filter_criteria_property = pipes.CfnPipe.FilterCriteriaProperty(
    filters=[pipes.CfnPipe.FilterProperty(
        pattern="pattern"
    )]
)
Attributes
- filters
The event patterns.
FilterProperty
- class CfnPipe.FilterProperty(*, pattern=None)
Bases:
object
Filter events using an event pattern.
For more information, see Events and Event Patterns in the Amazon EventBridge User Guide .
- Parameters:
pattern (
Optional
[str
]) – The event pattern.- Link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-pipes-pipe-filter.html
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

filter_property = pipes.CfnPipe.FilterProperty(
    pattern="pattern"
)
Attributes
- pattern
The event pattern.
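Because the pattern is a JSON string, a concrete (hypothetical) filter that only passes events whose body has status "CREATED" could look like:
import aws_cdk.aws_pipes as pipes

# Hypothetical pattern; the matched fields depend on your source's event shape.
created_only = pipes.CfnPipe.FilterCriteriaProperty(
    filters=[pipes.CfnPipe.FilterProperty(
        pattern='{"body": {"status": ["CREATED"]}}'
    )]
)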
MQBrokerAccessCredentialsProperty
- class CfnPipe.MQBrokerAccessCredentialsProperty(*, basic_auth)
Bases:
object
The AWS Secrets Manager secret that stores your broker credentials.
- Parameters:
basic_auth (
str
) – The ARN of the Secrets Manager secret.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

m_qBroker_access_credentials_property = pipes.CfnPipe.MQBrokerAccessCredentialsProperty(
    basic_auth="basicAuth"
)
Attributes
- basic_auth
The ARN of the Secrets Manager secret.
MSKAccessCredentialsProperty
- class CfnPipe.MSKAccessCredentialsProperty(*, client_certificate_tls_auth=None, sasl_scram512_auth=None)
Bases:
object
The AWS Secrets Manager secret that stores your stream credentials.
- Parameters:
client_certificate_tls_auth (
Optional
[str
]) – The ARN of the Secrets Manager secret.sasl_scram512_auth (
Optional
[str
]) – The ARN of the Secrets Manager secret.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

m_sKAccess_credentials_property = pipes.CfnPipe.MSKAccessCredentialsProperty(
    client_certificate_tls_auth="clientCertificateTlsAuth",
    sasl_scram512_auth="saslScram512Auth"
)
Attributes
- client_certificate_tls_auth
The ARN of the Secrets Manager secret.
- sasl_scram512_auth
The ARN of the Secrets Manager secret.
NetworkConfigurationProperty
- class CfnPipe.NetworkConfigurationProperty(*, awsvpc_configuration=None)
Bases:
object
This structure specifies the network configuration for an Amazon ECS task.
- Parameters:
awsvpc_configuration (Union[IResolvable, AwsVpcConfigurationProperty, Dict[str, Any], None]) – Use this structure to specify the VPC subnets and security groups for the task, and whether a public IP address is to be used. This structure is relevant only for ECS tasks that use the awsvpc network mode.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes network_configuration_property = pipes.CfnPipe.NetworkConfigurationProperty( awsvpc_configuration=pipes.CfnPipe.AwsVpcConfigurationProperty( subnets=["subnets"], # the properties below are optional assign_public_ip="assignPublicIp", security_groups=["securityGroups"] ) )
Attributes
- awsvpc_configuration
Use this structure to specify the VPC subnets and security groups for the task, and whether a public IP address is to be used.
This structure is relevant only for ECS tasks that use the awsvpc network mode.
PipeEnrichmentHttpParametersProperty
- class CfnPipe.PipeEnrichmentHttpParametersProperty(*, header_parameters=None, path_parameter_values=None, query_string_parameters=None)
Bases:
object
These are custom parameters to be used when the target is an API Gateway REST API or EventBridge ApiDestination.
In the latter case, these are merged with any InvocationParameters specified on the Connection, with any values from the Connection taking precedence.
- Parameters:
header_parameters (Union[IResolvable, Mapping[str, str], None]) – The headers that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination.
path_parameter_values (Optional[Sequence[str]]) – The path parameter values to be used to populate API Gateway REST API or EventBridge ApiDestination path wildcards (“*”).
query_string_parameters (Union[IResolvable, Mapping[str, str], None]) – The query string keys/values that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_enrichment_http_parameters_property = pipes.CfnPipe.PipeEnrichmentHttpParametersProperty( header_parameters={ "header_parameters_key": "headerParameters" }, path_parameter_values=["pathParameterValues"], query_string_parameters={ "query_string_parameters_key": "queryStringParameters" } )
Attributes
- header_parameters
The headers that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination.
- path_parameter_values
The path parameter values to be used to populate API Gateway REST API or EventBridge ApiDestination path wildcards (“*”).
- query_string_parameters
The query string keys/values that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination.
PipeEnrichmentParametersProperty
- class CfnPipe.PipeEnrichmentParametersProperty(*, http_parameters=None, input_template=None)
Bases:
object
The parameters required to set up enrichment on your pipe.
- Parameters:
http_parameters (Union[IResolvable, PipeEnrichmentHttpParametersProperty, Dict[str, Any], None]) – Contains the HTTP parameters to use when the target is an API Gateway REST endpoint or EventBridge ApiDestination. If you specify an API Gateway REST API or EventBridge ApiDestination as a target, you can use this parameter to specify headers, path parameters, and query string keys/values as part of your target invoking request. If you’re using ApiDestinations, the corresponding Connection can also have these values configured. In case of any conflicting keys, values from the Connection take precedence.
input_template (Optional[str]) – Valid JSON text passed to the enrichment. In this case, nothing from the event itself is passed to the enrichment. For more information, see The JavaScript Object Notation (JSON) Data Interchange Format . To remove an input template, specify an empty string.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_enrichment_parameters_property = pipes.CfnPipe.PipeEnrichmentParametersProperty( http_parameters=pipes.CfnPipe.PipeEnrichmentHttpParametersProperty( header_parameters={ "header_parameters_key": "headerParameters" }, path_parameter_values=["pathParameterValues"], query_string_parameters={ "query_string_parameters_key": "queryStringParameters" } ), input_template="inputTemplate" )
Attributes
- http_parameters
Contains the HTTP parameters to use when the target is an API Gateway REST endpoint or EventBridge ApiDestination.
If you specify an API Gateway REST API or EventBridge ApiDestination as a target, you can use this parameter to specify headers, path parameters, and query string keys/values as part of your target invoking request. If you’re using ApiDestinations, the corresponding Connection can also have these values configured. In case of any conflicting keys, values from the Connection take precedence.
- input_template
Valid JSON text passed to the enrichment.
In this case, nothing from the event itself is passed to the enrichment. For more information, see The JavaScript Object Notation (JSON) Data Interchange Format .
To remove an input template, specify an empty string.
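As a sketch of the input_template behavior described above (the JSON content is purely illustrative): while a template is set, the enrichment receives that JSON instead of the event, and an empty string on a later update removes the template again.

import aws_cdk.aws_pipes as pipes

# The enrichment receives this JSON verbatim; nothing from the event itself
# is forwarded while input_template is set.
enrichment = pipes.CfnPipe.PipeEnrichmentParametersProperty(
    input_template='{"mode": "batch", "requested_by": "pipe"}'
)

# On a subsequent update, an empty string removes the template.
no_template = pipes.CfnPipe.PipeEnrichmentParametersProperty(input_template="")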
PipeSourceActiveMQBrokerParametersProperty
- class CfnPipe.PipeSourceActiveMQBrokerParametersProperty(*, credentials, queue_name, batch_size=None, maximum_batching_window_in_seconds=None)
Bases:
object
The parameters for using an Active MQ broker as a source.
- Parameters:
credentials (Union[IResolvable, MQBrokerAccessCredentialsProperty, Dict[str, Any]]) – The credentials needed to access the resource.
queue_name (str) – The name of the destination queue to consume.
batch_size (Union[int, float, None]) – The maximum number of records to include in each batch.
maximum_batching_window_in_seconds (Union[int, float, None]) – The maximum length of time to wait for events.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_source_active_mQBroker_parameters_property = pipes.CfnPipe.PipeSourceActiveMQBrokerParametersProperty( credentials=pipes.CfnPipe.MQBrokerAccessCredentialsProperty( basic_auth="basicAuth" ), queue_name="queueName", # the properties below are optional batch_size=123, maximum_batching_window_in_seconds=123 )
Attributes
- batch_size
The maximum number of records to include in each batch.
- credentials
The credentials needed to access the resource.
- maximum_batching_window_in_seconds
The maximum length of time to wait for events.
- queue_name
The name of the destination queue to consume.
PipeSourceDynamoDBStreamParametersProperty
- class CfnPipe.PipeSourceDynamoDBStreamParametersProperty(*, starting_position, batch_size=None, dead_letter_config=None, maximum_batching_window_in_seconds=None, maximum_record_age_in_seconds=None, maximum_retry_attempts=None, on_partial_batch_item_failure=None, parallelization_factor=None)
Bases:
object
The parameters for using a DynamoDB stream as a source.
- Parameters:
starting_position (str) – (Streams only) The position in a stream from which to start reading. Valid values: TRIM_HORIZON | LATEST
batch_size (Union[int, float, None]) – The maximum number of records to include in each batch.
dead_letter_config (Union[IResolvable, DeadLetterConfigProperty, Dict[str, Any], None]) – Define the target queue to send dead-letter queue events to.
maximum_batching_window_in_seconds (Union[int, float, None]) – The maximum length of time to wait for events.
maximum_record_age_in_seconds (Union[int, float, None]) – (Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, EventBridge never discards old records.
maximum_retry_attempts (Union[int, float, None]) – (Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, EventBridge retries failed records until the record expires in the event source.
on_partial_batch_item_failure (Optional[str]) – (Streams only) Define how to handle item process failures. AUTOMATIC_BISECT halves each batch and retries each half until all the records are processed or there is one failed message left in the batch.
parallelization_factor (Union[int, float, None]) – (Streams only) The number of batches to process concurrently from each shard. The default value is 1.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_source_dynamo_dBStream_parameters_property = pipes.CfnPipe.PipeSourceDynamoDBStreamParametersProperty( starting_position="startingPosition", # the properties below are optional batch_size=123, dead_letter_config=pipes.CfnPipe.DeadLetterConfigProperty( arn="arn" ), maximum_batching_window_in_seconds=123, maximum_record_age_in_seconds=123, maximum_retry_attempts=123, on_partial_batch_item_failure="onPartialBatchItemFailure", parallelization_factor=123 )
Attributes
- batch_size
The maximum number of records to include in each batch.
- dead_letter_config
Define the target queue to send dead-letter queue events to.
- maximum_batching_window_in_seconds
The maximum length of time to wait for events.
- maximum_record_age_in_seconds
(Streams only) Discard records older than the specified age.
The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, EventBridge never discards old records.
- maximum_retry_attempts
(Streams only) Discard records after the specified number of retries.
The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, EventBridge retries failed records until the record expires in the event source.
- on_partial_batch_item_failure
(Streams only) Define how to handle item process failures. AUTOMATIC_BISECT halves each batch and retries each half until all the records are processed or there is one failed message left in the batch.
- parallelization_factor
(Streams only) The number of batches to process concurrently from each shard.
The default value is 1.
- starting_position
(Streams only) The position in a stream from which to start reading.
Valid values: TRIM_HORIZON | LATEST
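A sketch of a stream source tuned for bounded retries, with a dead-letter queue for records that keep failing; the queue ARN is a placeholder:

import aws_cdk.aws_pipes as pipes

# Retry a failing batch at most 3 times, bisecting it to isolate the bad
# record, then route the failure metadata to a dead-letter queue instead of
# blocking the shard until the record ages out.
ddb_source = pipes.CfnPipe.PipeSourceDynamoDBStreamParametersProperty(
    starting_position="TRIM_HORIZON",
    batch_size=100,
    maximum_retry_attempts=3,
    on_partial_batch_item_failure="AUTOMATIC_BISECT",
    dead_letter_config=pipes.CfnPipe.DeadLetterConfigProperty(
        arn="arn:aws:sqs:us-east-1:111122223333:pipe-dlq"  # placeholder ARN
    ),
)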
PipeSourceKinesisStreamParametersProperty
- class CfnPipe.PipeSourceKinesisStreamParametersProperty(*, starting_position, batch_size=None, dead_letter_config=None, maximum_batching_window_in_seconds=None, maximum_record_age_in_seconds=None, maximum_retry_attempts=None, on_partial_batch_item_failure=None, parallelization_factor=None, starting_position_timestamp=None)
Bases:
object
The parameters for using a Kinesis stream as a source.
- Parameters:
starting_position (str) – (Streams only) The position in a stream from which to start reading.
batch_size (Union[int, float, None]) – The maximum number of records to include in each batch.
dead_letter_config (Union[IResolvable, DeadLetterConfigProperty, Dict[str, Any], None]) – Define the target queue to send dead-letter queue events to.
maximum_batching_window_in_seconds (Union[int, float, None]) – The maximum length of time to wait for events.
maximum_record_age_in_seconds (Union[int, float, None]) – (Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, EventBridge never discards old records.
maximum_retry_attempts (Union[int, float, None]) – (Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, EventBridge retries failed records until the record expires in the event source.
on_partial_batch_item_failure (Optional[str]) – (Streams only) Define how to handle item process failures. AUTOMATIC_BISECT halves each batch and retries each half until all the records are processed or there is one failed message left in the batch.
parallelization_factor (Union[int, float, None]) – (Streams only) The number of batches to process concurrently from each shard. The default value is 1.
starting_position_timestamp (Optional[str]) – With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_source_kinesis_stream_parameters_property = pipes.CfnPipe.PipeSourceKinesisStreamParametersProperty( starting_position="startingPosition", # the properties below are optional batch_size=123, dead_letter_config=pipes.CfnPipe.DeadLetterConfigProperty( arn="arn" ), maximum_batching_window_in_seconds=123, maximum_record_age_in_seconds=123, maximum_retry_attempts=123, on_partial_batch_item_failure="onPartialBatchItemFailure", parallelization_factor=123, starting_position_timestamp="startingPositionTimestamp" )
Attributes
- batch_size
The maximum number of records to include in each batch.
- dead_letter_config
Define the target queue to send dead-letter queue events to.
- maximum_batching_window_in_seconds
The maximum length of time to wait for events.
- maximum_record_age_in_seconds
(Streams only) Discard records older than the specified age.
The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, EventBridge never discards old records.
- maximum_retry_attempts
(Streams only) Discard records after the specified number of retries.
The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, EventBridge retries failed records until the record expires in the event source.
- on_partial_batch_item_failure
(Streams only) Define how to handle item process failures. AUTOMATIC_BISECT halves each batch and retries each half until all the records are processed or there is one failed message left in the batch.
- parallelization_factor
(Streams only) The number of batches to process concurrently from each shard.
The default value is 1.
- starting_position
(Streams only) The position in a stream from which to start reading.
- starting_position_timestamp
With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds.
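Kinesis is the only source here that also accepts AT_TIMESTAMP; a sketch of starting consumption at a fixed instant (the epoch value is illustrative):

import aws_cdk.aws_pipes as pipes

# Start reading at a specific instant rather than from the oldest record
# (TRIM_HORIZON) or only new records (LATEST); the timestamp is Unix time in
# seconds, passed as a string.
kinesis_source = pipes.CfnPipe.PipeSourceKinesisStreamParametersProperty(
    starting_position="AT_TIMESTAMP",
    starting_position_timestamp="1704067200",  # 2024-01-01T00:00:00Z, placeholder
    batch_size=50,
)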
PipeSourceManagedStreamingKafkaParametersProperty
- class CfnPipe.PipeSourceManagedStreamingKafkaParametersProperty(*, topic_name, batch_size=None, consumer_group_id=None, credentials=None, maximum_batching_window_in_seconds=None, starting_position=None)
Bases:
object
The parameters for using an MSK stream as a source.
- Parameters:
topic_name (str) – The name of the topic that the pipe will read from.
batch_size (Union[int, float, None]) – The maximum number of records to include in each batch.
consumer_group_id (Optional[str]) – The ID of the consumer group to use.
credentials (Union[IResolvable, MSKAccessCredentialsProperty, Dict[str, Any], None]) – The credentials needed to access the resource.
maximum_batching_window_in_seconds (Union[int, float, None]) – The maximum length of time to wait for events.
starting_position (Optional[str]) – (Streams only) The position in a stream from which to start reading.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_source_managed_streaming_kafka_parameters_property = pipes.CfnPipe.PipeSourceManagedStreamingKafkaParametersProperty( topic_name="topicName", # the properties below are optional batch_size=123, consumer_group_id="consumerGroupId", credentials=pipes.CfnPipe.MSKAccessCredentialsProperty( client_certificate_tls_auth="clientCertificateTlsAuth", sasl_scram512_auth="saslScram512Auth" ), maximum_batching_window_in_seconds=123, starting_position="startingPosition" )
Attributes
- batch_size
The maximum number of records to include in each batch.
- consumer_group_id
The ID of the consumer group to use.
- credentials
The credentials needed to access the resource.
- maximum_batching_window_in_seconds
The maximum length of time to wait for events.
- starting_position
(Streams only) The position in a stream from which to start reading.
- topic_name
The name of the topic that the pipe will read from.
PipeSourceParametersProperty
- class CfnPipe.PipeSourceParametersProperty(*, active_mq_broker_parameters=None, dynamo_db_stream_parameters=None, filter_criteria=None, kinesis_stream_parameters=None, managed_streaming_kafka_parameters=None, rabbit_mq_broker_parameters=None, self_managed_kafka_parameters=None, sqs_queue_parameters=None)
Bases:
object
The parameters required to set up a source for your pipe.
- Parameters:
active_mq_broker_parameters (Union[IResolvable, PipeSourceActiveMQBrokerParametersProperty, Dict[str, Any], None]) – The parameters for using an Active MQ broker as a source.
dynamo_db_stream_parameters (Union[IResolvable, PipeSourceDynamoDBStreamParametersProperty, Dict[str, Any], None]) – The parameters for using a DynamoDB stream as a source.
filter_criteria (Union[IResolvable, FilterCriteriaProperty, Dict[str, Any], None]) – The collection of event patterns used to filter events. To remove a filter, specify a FilterCriteria object with an empty array of Filter objects. For more information, see Events and Event Patterns in the Amazon EventBridge User Guide .
kinesis_stream_parameters (Union[IResolvable, PipeSourceKinesisStreamParametersProperty, Dict[str, Any], None]) – The parameters for using a Kinesis stream as a source.
managed_streaming_kafka_parameters (Union[IResolvable, PipeSourceManagedStreamingKafkaParametersProperty, Dict[str, Any], None]) – The parameters for using an MSK stream as a source.
rabbit_mq_broker_parameters (Union[IResolvable, PipeSourceRabbitMQBrokerParametersProperty, Dict[str, Any], None]) – The parameters for using a Rabbit MQ broker as a source.
self_managed_kafka_parameters (Union[IResolvable, PipeSourceSelfManagedKafkaParametersProperty, Dict[str, Any], None]) – The parameters for using a self-managed Apache Kafka stream as a source.
sqs_queue_parameters (Union[IResolvable, PipeSourceSqsQueueParametersProperty, Dict[str, Any], None]) – The parameters for using an Amazon SQS queue as a source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_source_parameters_property = pipes.CfnPipe.PipeSourceParametersProperty( active_mq_broker_parameters=pipes.CfnPipe.PipeSourceActiveMQBrokerParametersProperty( credentials=pipes.CfnPipe.MQBrokerAccessCredentialsProperty( basic_auth="basicAuth" ), queue_name="queueName", # the properties below are optional batch_size=123, maximum_batching_window_in_seconds=123 ), dynamo_db_stream_parameters=pipes.CfnPipe.PipeSourceDynamoDBStreamParametersProperty( starting_position="startingPosition", # the properties below are optional batch_size=123, dead_letter_config=pipes.CfnPipe.DeadLetterConfigProperty( arn="arn" ), maximum_batching_window_in_seconds=123, maximum_record_age_in_seconds=123, maximum_retry_attempts=123, on_partial_batch_item_failure="onPartialBatchItemFailure", parallelization_factor=123 ), filter_criteria=pipes.CfnPipe.FilterCriteriaProperty( filters=[pipes.CfnPipe.FilterProperty( pattern="pattern" )] ), kinesis_stream_parameters=pipes.CfnPipe.PipeSourceKinesisStreamParametersProperty( starting_position="startingPosition", # the properties below are optional batch_size=123, dead_letter_config=pipes.CfnPipe.DeadLetterConfigProperty( arn="arn" ), maximum_batching_window_in_seconds=123, maximum_record_age_in_seconds=123, maximum_retry_attempts=123, on_partial_batch_item_failure="onPartialBatchItemFailure", parallelization_factor=123, starting_position_timestamp="startingPositionTimestamp" ), managed_streaming_kafka_parameters=pipes.CfnPipe.PipeSourceManagedStreamingKafkaParametersProperty( topic_name="topicName", # the properties below are optional batch_size=123, consumer_group_id="consumerGroupId", credentials=pipes.CfnPipe.MSKAccessCredentialsProperty( client_certificate_tls_auth="clientCertificateTlsAuth", sasl_scram512_auth="saslScram512Auth" ), maximum_batching_window_in_seconds=123, starting_position="startingPosition" ), rabbit_mq_broker_parameters=pipes.CfnPipe.PipeSourceRabbitMQBrokerParametersProperty( credentials=pipes.CfnPipe.MQBrokerAccessCredentialsProperty( basic_auth="basicAuth" ), queue_name="queueName", # the properties below are optional batch_size=123, maximum_batching_window_in_seconds=123, virtual_host="virtualHost" ), self_managed_kafka_parameters=pipes.CfnPipe.PipeSourceSelfManagedKafkaParametersProperty( topic_name="topicName", # the properties below are optional additional_bootstrap_servers=["additionalBootstrapServers"], batch_size=123, consumer_group_id="consumerGroupId", credentials=pipes.CfnPipe.SelfManagedKafkaAccessConfigurationCredentialsProperty( basic_auth="basicAuth", client_certificate_tls_auth="clientCertificateTlsAuth", sasl_scram256_auth="saslScram256Auth", sasl_scram512_auth="saslScram512Auth" ), maximum_batching_window_in_seconds=123, server_root_ca_certificate="serverRootCaCertificate", starting_position="startingPosition", vpc=pipes.CfnPipe.SelfManagedKafkaAccessConfigurationVpcProperty( security_group=["securityGroup"], subnets=["subnets"] ) ), sqs_queue_parameters=pipes.CfnPipe.PipeSourceSqsQueueParametersProperty( batch_size=123, maximum_batching_window_in_seconds=123 ) )
Attributes
- active_mq_broker_parameters
The parameters for using an Active MQ broker as a source.
- dynamo_db_stream_parameters
The parameters for using a DynamoDB stream as a source.
- filter_criteria
The collection of event patterns used to filter events.
To remove a filter, specify a FilterCriteria object with an empty array of Filter objects. For more information, see Events and Event Patterns in the Amazon EventBridge User Guide .
- kinesis_stream_parameters
The parameters for using a Kinesis stream as a source.
- managed_streaming_kafka_parameters
The parameters for using an MSK stream as a source.
- rabbit_mq_broker_parameters
The parameters for using a Rabbit MQ broker as a source.
- self_managed_kafka_parameters
The parameters for using a self-managed Apache Kafka stream as a source.
- sqs_queue_parameters
The parameters for using an Amazon SQS queue as a source.
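Only the parameter block that matches the pipe's configured source should be populated; a sketch pairing an SQS source with a filter (the pattern is a placeholder):

import aws_cdk.aws_pipes as pipes

# For a pipe whose source is an SQS queue, set sqs_queue_parameters (plus an
# optional filter); the other source-specific blocks remain unset.
source_params = pipes.CfnPipe.PipeSourceParametersProperty(
    sqs_queue_parameters=pipes.CfnPipe.PipeSourceSqsQueueParametersProperty(
        batch_size=10,
        maximum_batching_window_in_seconds=30,
    ),
    filter_criteria=pipes.CfnPipe.FilterCriteriaProperty(
        filters=[pipes.CfnPipe.FilterProperty(
            pattern='{"body": {"status": ["ACTIVE"]}}'  # illustrative pattern
        )]
    ),
)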
PipeSourceRabbitMQBrokerParametersProperty
- class CfnPipe.PipeSourceRabbitMQBrokerParametersProperty(*, credentials, queue_name, batch_size=None, maximum_batching_window_in_seconds=None, virtual_host=None)
Bases:
object
The parameters for using a Rabbit MQ broker as a source.
- Parameters:
credentials (Union[IResolvable, MQBrokerAccessCredentialsProperty, Dict[str, Any]]) – The credentials needed to access the resource.
queue_name (str) – The name of the destination queue to consume.
batch_size (Union[int, float, None]) – The maximum number of records to include in each batch.
maximum_batching_window_in_seconds (Union[int, float, None]) – The maximum length of time to wait for events.
virtual_host (Optional[str]) – The name of the virtual host associated with the source broker.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_source_rabbit_mQBroker_parameters_property = pipes.CfnPipe.PipeSourceRabbitMQBrokerParametersProperty( credentials=pipes.CfnPipe.MQBrokerAccessCredentialsProperty( basic_auth="basicAuth" ), queue_name="queueName", # the properties below are optional batch_size=123, maximum_batching_window_in_seconds=123, virtual_host="virtualHost" )
Attributes
- batch_size
The maximum number of records to include in each batch.
- credentials
The credentials needed to access the resource.
- maximum_batching_window_in_seconds
The maximum length of time to wait for events.
- queue_name
The name of the destination queue to consume.
- virtual_host
The name of the virtual host associated with the source broker.
PipeSourceSelfManagedKafkaParametersProperty
- class CfnPipe.PipeSourceSelfManagedKafkaParametersProperty(*, topic_name, additional_bootstrap_servers=None, batch_size=None, consumer_group_id=None, credentials=None, maximum_batching_window_in_seconds=None, server_root_ca_certificate=None, starting_position=None, vpc=None)
Bases:
object
The parameters for using a self-managed Apache Kafka stream as a source.
- Parameters:
topic_name (str) – The name of the topic that the pipe will read from.
additional_bootstrap_servers (Optional[Sequence[str]]) – An array of server URLs.
batch_size (Union[int, float, None]) – The maximum number of records to include in each batch.
consumer_group_id (Optional[str]) – The ID of the consumer group to use.
credentials (Union[IResolvable, SelfManagedKafkaAccessConfigurationCredentialsProperty, Dict[str, Any], None]) – The credentials needed to access the resource.
maximum_batching_window_in_seconds (Union[int, float, None]) – The maximum length of time to wait for events.
server_root_ca_certificate (Optional[str]) – The ARN of the Secrets Manager secret used for certification.
starting_position (Optional[str]) – (Streams only) The position in a stream from which to start reading.
vpc (Union[IResolvable, SelfManagedKafkaAccessConfigurationVpcProperty, Dict[str, Any], None]) – This structure specifies the VPC subnets and security groups for the stream, and whether a public IP address is to be used.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_source_self_managed_kafka_parameters_property = pipes.CfnPipe.PipeSourceSelfManagedKafkaParametersProperty( topic_name="topicName", # the properties below are optional additional_bootstrap_servers=["additionalBootstrapServers"], batch_size=123, consumer_group_id="consumerGroupId", credentials=pipes.CfnPipe.SelfManagedKafkaAccessConfigurationCredentialsProperty( basic_auth="basicAuth", client_certificate_tls_auth="clientCertificateTlsAuth", sasl_scram256_auth="saslScram256Auth", sasl_scram512_auth="saslScram512Auth" ), maximum_batching_window_in_seconds=123, server_root_ca_certificate="serverRootCaCertificate", starting_position="startingPosition", vpc=pipes.CfnPipe.SelfManagedKafkaAccessConfigurationVpcProperty( security_group=["securityGroup"], subnets=["subnets"] ) )
Attributes
- additional_bootstrap_servers
An array of server URLs.
- batch_size
The maximum number of records to include in each batch.
- consumer_group_id
The ID of the consumer group to use.
- credentials
The credentials needed to access the resource.
- maximum_batching_window_in_seconds
The maximum length of time to wait for events.
- server_root_ca_certificate
The ARN of the Secrets Manager secret used for certification.
- starting_position
(Streams only) The position in a stream from which to start reading.
- topic_name
The name of the topic that the pipe will read from.
- vpc
This structure specifies the VPC subnets and security groups for the stream, and whether a public IP address is to be used.
PipeSourceSqsQueueParametersProperty
- class CfnPipe.PipeSourceSqsQueueParametersProperty(*, batch_size=None, maximum_batching_window_in_seconds=None)
Bases:
object
The parameters for using an Amazon SQS queue as a source.
- Parameters:
batch_size (Union[int, float, None]) – The maximum number of records to include in each batch.
maximum_batching_window_in_seconds (Union[int, float, None]) – The maximum length of time to wait for events.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_source_sqs_queue_parameters_property = pipes.CfnPipe.PipeSourceSqsQueueParametersProperty( batch_size=123, maximum_batching_window_in_seconds=123 )
Attributes
- batch_size
The maximum number of records to include in each batch.
- maximum_batching_window_in_seconds
The maximum length of time to wait for events.
PipeTargetBatchJobParametersProperty
- class CfnPipe.PipeTargetBatchJobParametersProperty(*, job_definition, job_name, array_properties=None, container_overrides=None, depends_on=None, parameters=None, retry_strategy=None)
Bases:
object
The parameters for using an AWS Batch job as a target.
- Parameters:
job_definition (str) – The job definition used by this job. This value can be one of name, name:revision, or the Amazon Resource Name (ARN) for the job definition. If name is specified without a revision then the latest active revision is used.
job_name (str) – The name of the job. It can be up to 128 letters long. The first character must be alphanumeric, can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
array_properties (Union[IResolvable, BatchArrayPropertiesProperty, Dict[str, Any], None]) – The array properties for the submitted job, such as the size of the array. The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. This parameter is used only if the target is an AWS Batch job.
container_overrides (Union[IResolvable, BatchContainerOverridesProperty, Dict[str, Any], None]) – The overrides that are sent to a container.
depends_on (Union[IResolvable, Sequence[Union[IResolvable, BatchJobDependencyProperty, Dict[str, Any]]], None]) – A list of dependencies for the job. A job can depend upon a maximum of 20 jobs. You can specify a SEQUENTIAL type dependency without specifying a job ID for array jobs so that each child array job completes sequentially, starting at index 0. You can also specify an N_TO_N type dependency with a job ID for array jobs. In that case, each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin.
parameters (Union[IResolvable, Mapping[str, str], None]) – Additional parameters passed to the job that replace parameter substitution placeholders that are set in the job definition. Parameters are specified as a key and value pair mapping. Parameters included here override any corresponding parameter defaults from the job definition.
retry_strategy (Union[IResolvable, BatchRetryStrategyProperty, Dict[str, Any], None]) – The retry strategy to use for failed jobs. When a retry strategy is specified here, it overrides the retry strategy defined in the job definition.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_target_batch_job_parameters_property = pipes.CfnPipe.PipeTargetBatchJobParametersProperty( job_definition="jobDefinition", job_name="jobName", # the properties below are optional array_properties=pipes.CfnPipe.BatchArrayPropertiesProperty( size=123 ), container_overrides=pipes.CfnPipe.BatchContainerOverridesProperty( command=["command"], environment=[pipes.CfnPipe.BatchEnvironmentVariableProperty( name="name", value="value" )], instance_type="instanceType", resource_requirements=[pipes.CfnPipe.BatchResourceRequirementProperty( type="type", value="value" )] ), depends_on=[pipes.CfnPipe.BatchJobDependencyProperty( job_id="jobId", type="type" )], parameters={ "parameters_key": "parameters" }, retry_strategy=pipes.CfnPipe.BatchRetryStrategyProperty( attempts=123 ) )
Attributes
- array_properties
The array properties for the submitted job, such as the size of the array.
The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. This parameter is used only if the target is an AWS Batch job.
- container_overrides
The overrides that are sent to a container.
- depends_on
A list of dependencies for the job.
A job can depend upon a maximum of 20 jobs. You can specify a SEQUENTIAL type dependency without specifying a job ID for array jobs so that each child array job completes sequentially, starting at index 0. You can also specify an N_TO_N type dependency with a job ID for array jobs. In that case, each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin.
- job_definition
The job definition used by this job.
This value can be one of name, name:revision, or the Amazon Resource Name (ARN) for the job definition. If name is specified without a revision then the latest active revision is used.
- job_name
The name of the job.
It can be up to 128 letters long. The first character must be alphanumeric, can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
- parameters
Additional parameters passed to the job that replace parameter substitution placeholders that are set in the job definition.
Parameters are specified as a key and value pair mapping. Parameters included here override any corresponding parameter defaults from the job definition.
- retry_strategy
The retry strategy to use for failed jobs.
When a retry strategy is specified here, it overrides the retry strategy defined in the job definition.
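A sketch of a Batch target that fills a job-definition substitution placeholder and caps retries; the definition, job, and parameter names are all placeholders:

import aws_cdk.aws_pipes as pipes

# Submits against the latest active revision of "my-job-def" and supplies a
# value for a Ref::inputBucket placeholder declared in the job definition.
batch_target = pipes.CfnPipe.PipeTargetBatchJobParametersProperty(
    job_definition="my-job-def",
    job_name="pipe-submitted-job",
    parameters={"inputBucket": "my-bucket"},
    retry_strategy=pipes.CfnPipe.BatchRetryStrategyProperty(attempts=2),
)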
PipeTargetCloudWatchLogsParametersProperty
- class CfnPipe.PipeTargetCloudWatchLogsParametersProperty(*, log_stream_name=None, timestamp=None)
Bases:
object
The parameters for using a CloudWatch Logs log stream as a target.
- Parameters:
log_stream_name (Optional[str]) – The name of the log stream.
timestamp (Optional[str]) – The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_target_cloud_watch_logs_parameters_property = pipes.CfnPipe.PipeTargetCloudWatchLogsParametersProperty( log_stream_name="logStreamName", timestamp="timestamp" )
Attributes
- log_stream_name
The name of the log stream.
- timestamp
The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.
PipeTargetEcsTaskParametersProperty
- class CfnPipe.PipeTargetEcsTaskParametersProperty(*, task_definition_arn, capacity_provider_strategy=None, enable_ecs_managed_tags=None, enable_execute_command=None, group=None, launch_type=None, network_configuration=None, overrides=None, placement_constraints=None, placement_strategy=None, platform_version=None, propagate_tags=None, reference_id=None, tags=None, task_count=None)
Bases:
object
The parameters for using an Amazon ECS task as a target.
- Parameters:
task_definition_arn (str) – The ARN of the task definition to use if the event target is an Amazon ECS task.
capacity_provider_strategy (Union[IResolvable, Sequence[Union[IResolvable, CapacityProviderStrategyItemProperty, Dict[str, Any]]], None]) – The capacity provider strategy to use for the task. If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
enable_ecs_managed_tags (Union[bool, IResolvable, None]) – Specifies whether to enable Amazon ECS managed tags for the task. For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
enable_execute_command (Union[bool, IResolvable, None]) – Whether or not to enable the execute command functionality for the containers in this task. If true, this enables execute command functionality on all containers in the task.
group (Optional[str]) – Specifies an Amazon ECS task group for the task. The maximum length is 255 characters.
launch_type (Optional[str]) – Specifies the launch type on which your task is running. The launch type that you specify here must match one of the launch type (compatibilities) of the target task. The FARGATE value is supported only in the Regions where AWS Fargate with Amazon ECS is supported. For more information, see AWS Fargate on Amazon ECS in the Amazon Elastic Container Service Developer Guide .
network_configuration (Union[IResolvable, NetworkConfigurationProperty, Dict[str, Any], None]) – Use this structure if the Amazon ECS task uses the awsvpc network mode. This structure specifies the VPC subnets and security groups associated with the task, and whether a public IP address is to be used. This structure is required if LaunchType is FARGATE because the awsvpc mode is required for Fargate tasks. If you specify NetworkConfiguration when the target ECS task does not use the awsvpc network mode, the task fails.
overrides (Union[IResolvable, EcsTaskOverrideProperty, Dict[str, Any], None]) – The overrides that are associated with a task.
placement_constraints (Union[IResolvable, Sequence[Union[IResolvable, PlacementConstraintProperty, Dict[str, Any]]], None]) – An array of placement constraint objects to use for the task. You can specify up to 10 constraints per task (including constraints in the task definition and those specified at runtime).
placement_strategy (Union[IResolvable, Sequence[Union[IResolvable, PlacementStrategyProperty, Dict[str, Any]]], None]) – The placement strategy objects to use for the task. You can specify a maximum of five strategy rules per task.
platform_version (Optional[str]) – Specifies the platform version for the task. Specify only the numeric portion of the platform version, such as 1.1.0. This structure is used only if LaunchType is FARGATE. For more information about valid platform versions, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide .
propagate_tags (Optional[str]) – Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags are not propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
reference_id (Optional[str]) – The reference ID to use for the task.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – The metadata that you apply to the task to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. To learn more, see RunTask in the Amazon ECS API Reference.
task_count (Union[int, float, None]) – The number of tasks to create based on TaskDefinition. The default is 1.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_target_ecs_task_parameters_property = pipes.CfnPipe.PipeTargetEcsTaskParametersProperty( task_definition_arn="taskDefinitionArn", # the properties below are optional capacity_provider_strategy=[pipes.CfnPipe.CapacityProviderStrategyItemProperty( capacity_provider="capacityProvider", # the properties below are optional base=123, weight=123 )], enable_ecs_managed_tags=False, enable_execute_command=False, group="group", launch_type="launchType", network_configuration=pipes.CfnPipe.NetworkConfigurationProperty( awsvpc_configuration=pipes.CfnPipe.AwsVpcConfigurationProperty( subnets=["subnets"], # the properties below are optional assign_public_ip="assignPublicIp", security_groups=["securityGroups"] ) ), overrides=pipes.CfnPipe.EcsTaskOverrideProperty( container_overrides=[pipes.CfnPipe.EcsContainerOverrideProperty( command=["command"], cpu=123, environment=[pipes.CfnPipe.EcsEnvironmentVariableProperty( name="name", value="value" )], environment_files=[pipes.CfnPipe.EcsEnvironmentFileProperty( type="type", value="value" )], memory=123, memory_reservation=123, name="name", resource_requirements=[pipes.CfnPipe.EcsResourceRequirementProperty( type="type", value="value" )] )], cpu="cpu", ephemeral_storage=pipes.CfnPipe.EcsEphemeralStorageProperty( size_in_gi_b=123 ), execution_role_arn="executionRoleArn", inference_accelerator_overrides=[pipes.CfnPipe.EcsInferenceAcceleratorOverrideProperty( device_name="deviceName", device_type="deviceType" )], memory="memory", task_role_arn="taskRoleArn" ), placement_constraints=[pipes.CfnPipe.PlacementConstraintProperty( expression="expression", type="type" )], placement_strategy=[pipes.CfnPipe.PlacementStrategyProperty( field="field", type="type" )], platform_version="platformVersion", propagate_tags="propagateTags", reference_id="referenceId", tags=[CfnTag( key="key", value="value" )], task_count=123 )
Attributes
- capacity_provider_strategy
The capacity provider strategy to use for the task.
If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used.
- enable_ecs_managed_tags
Specifies whether to enable Amazon ECS managed tags for the task.
For more information, see Tagging Your Amazon ECS Resources in the Amazon Elastic Container Service Developer Guide.
- enable_execute_command
Whether or not to enable the execute command functionality for the containers in this task.
If true, this enables execute command functionality on all containers in the task.
- group
Specifies an Amazon ECS task group for the task.
The maximum length is 255 characters.
- launch_type
Specifies the launch type on which your task is running.
The launch type that you specify here must match one of the launch type (compatibilities) of the target task. The FARGATE value is supported only in the Regions where AWS Fargate with Amazon ECS is supported. For more information, see AWS Fargate on Amazon ECS in the Amazon Elastic Container Service Developer Guide .
- network_configuration
Use this structure if the Amazon ECS task uses the awsvpc network mode.
This structure specifies the VPC subnets and security groups associated with the task, and whether a public IP address is to be used. This structure is required if LaunchType is FARGATE because the awsvpc mode is required for Fargate tasks.
If you specify NetworkConfiguration when the target ECS task does not use the awsvpc network mode, the task fails.
- overrides
The overrides that are associated with a task.
- placement_constraints
An array of placement constraint objects to use for the task.
You can specify up to 10 constraints per task (including constraints in the task definition and those specified at runtime).
- placement_strategy
The placement strategy objects to use for the task.
You can specify a maximum of five strategy rules per task.
- platform_version
Specifies the platform version for the task.
Specify only the numeric portion of the platform version, such as 1.1.0. This structure is used only if LaunchType is FARGATE. For more information about valid platform versions, see AWS Fargate Platform Versions in the Amazon Elastic Container Service Developer Guide .
- propagate_tags
Specifies whether to propagate the tags from the task definition to the task.
If no value is specified, the tags are not propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
- reference_id
The reference ID to use for the task.
- tags
The metadata that you apply to the task to help you categorize and organize them.
Each tag consists of a key and an optional value, both of which you define. To learn more, see RunTask in the Amazon ECS API Reference.
- task_count
The number of tasks to create based on TaskDefinition. The default is 1.
- task_definition_arn
The ARN of the task definition to use if the event target is an Amazon ECS task.
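Because launch_type="FARGATE" requires the awsvpc network mode, a Fargate target must carry a network configuration, as in this sketch (the ARN and subnet ID are placeholders):

import aws_cdk.aws_pipes as pipes

# Fargate tasks run in awsvpc mode, so subnets (and optionally security
# groups) must be provided; omitting them causes the task to fail.
ecs_target = pipes.CfnPipe.PipeTargetEcsTaskParametersProperty(
    task_definition_arn="arn:aws:ecs:us-east-1:111122223333:task-definition/my-task:1",
    launch_type="FARGATE",
    task_count=1,
    network_configuration=pipes.CfnPipe.NetworkConfigurationProperty(
        awsvpc_configuration=pipes.CfnPipe.AwsVpcConfigurationProperty(
            subnets=["subnet-0123456789abcdef0"],  # placeholder subnet ID
            assign_public_ip="DISABLED",
        )
    ),
)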
PipeTargetEventBridgeEventBusParametersProperty
- class CfnPipe.PipeTargetEventBridgeEventBusParametersProperty(*, detail_type=None, endpoint_id=None, resources=None, source=None, time=None)
Bases:
object
The parameters for using an EventBridge event bus as a target.
- Parameters:
detail_type (Optional[str]) – A free-form string, with a maximum of 128 characters, used to decide what fields to expect in the event detail.
endpoint_id (Optional[str]) – The URL subdomain of the endpoint. For example, if the URL for Endpoint is https://abcde.veo.endpoints.event.amazonaws.com, then the EndpointId is abcde.veo.
resources (Optional[Sequence[str]]) – AWS resources, identified by Amazon Resource Name (ARN), which the event primarily concerns. Any number, including zero, may be present.
source (Optional[str]) – The source of the event.
time (Optional[str]) – The time stamp of the event, per RFC3339 . If no time stamp is provided, the time stamp of the PutEvents call is used.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_target_event_bridge_event_bus_parameters_property = pipes.CfnPipe.PipeTargetEventBridgeEventBusParametersProperty( detail_type="detailType", endpoint_id="endpointId", resources=["resources"], source="source", time="time" )
Attributes
- detail_type
A free-form string, with a maximum of 128 characters, used to decide what fields to expect in the event detail.
- endpoint_id
The URL subdomain of the endpoint.
For example, if the URL for Endpoint is https://abcde.veo.endpoints.event.amazonaws.com, then the EndpointId is abcde.veo.
- resources
AWS resources, identified by Amazon Resource Name (ARN), which the event primarily concerns.
Any number, including zero, may be present.
- source
The source of the event.
- time
The time stamp of the event, per RFC3339 . If no time stamp is provided, the time stamp of the PutEvents call is used.
PipeTargetHttpParametersProperty
- class CfnPipe.PipeTargetHttpParametersProperty(*, header_parameters=None, path_parameter_values=None, query_string_parameters=None)
Bases:
object
These are custom parameters to be used when the target is an API Gateway REST API or EventBridge ApiDestination.
- Parameters:
header_parameters (Union[IResolvable, Mapping[str, str], None]) – The headers that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination.
path_parameter_values (Optional[Sequence[str]]) – The path parameter values to be used to populate API Gateway REST API or EventBridge ApiDestination path wildcards (“*”).
query_string_parameters (Union[IResolvable, Mapping[str, str], None]) – The query string keys/values that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_target_http_parameters_property = pipes.CfnPipe.PipeTargetHttpParametersProperty( header_parameters={ "header_parameters_key": "headerParameters" }, path_parameter_values=["pathParameterValues"], query_string_parameters={ "query_string_parameters_key": "queryStringParameters" } )
Attributes
- header_parameters
The headers that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination.
- path_parameter_values
The path parameter values to be used to populate API Gateway REST API or EventBridge ApiDestination path wildcards (“*”).
- query_string_parameters
The query string keys/values that need to be sent as part of the request invoking the API Gateway REST API or EventBridge ApiDestination.
PipeTargetKinesisStreamParametersProperty
- class CfnPipe.PipeTargetKinesisStreamParametersProperty(*, partition_key)
Bases:
object
The parameters for using a Kinesis stream as a target.
- Parameters:
partition_key (str) – Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_target_kinesis_stream_parameters_property = pipes.CfnPipe.PipeTargetKinesisStreamParametersProperty( partition_key="partitionKey" )
Attributes
- partition_key
Determines which shard in the stream the data record is assigned to.
Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
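Because records sharing a partition key map to the same shard, a per-entity key preserves per-entity ordering. A sketch using a dynamic path into the event (the path is illustrative; valid paths depend on the source event shape):

import aws_cdk.aws_pipes as pipes

# Pull the partition key out of each event so all records for one customer
# hash to the same shard and stay ordered relative to each other.
kinesis_target = pipes.CfnPipe.PipeTargetKinesisStreamParametersProperty(
    partition_key="$.detail.customerId"  # illustrative dynamic path
)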
PipeTargetLambdaFunctionParametersProperty
- class CfnPipe.PipeTargetLambdaFunctionParametersProperty(*, invocation_type=None)
Bases:
object
The parameters for using a Lambda function as a target.
- Parameters:
invocation_type (Optional[str]) – Specify whether to invoke the function synchronously or asynchronously. REQUEST_RESPONSE (default) - Invoke synchronously. This corresponds to the RequestResponse option in the InvocationType parameter for the Lambda Invoke API. FIRE_AND_FORGET - Invoke asynchronously. This corresponds to the Event option in the InvocationType parameter for the Lambda Invoke API. For more information, see Invocation types in the Amazon EventBridge User Guide .
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_pipes as pipes pipe_target_lambda_function_parameters_property = pipes.CfnPipe.PipeTargetLambdaFunctionParametersProperty( invocation_type="invocationType" )
Attributes
- invocation_type
Specify whether to invoke the function synchronously or asynchronously.
REQUEST_RESPONSE (default) - Invoke synchronously. This corresponds to the RequestResponse option in the InvocationType parameter for the Lambda Invoke API.
FIRE_AND_FORGET - Invoke asynchronously. This corresponds to the Event option in the InvocationType parameter for the Lambda Invoke API.
For more information, see Invocation types in the Amazon EventBridge User Guide .
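A sketch of the asynchronous option; with FIRE_AND_FORGET the pipe hands the event to Lambda and does not wait for the function's result:

import aws_cdk.aws_pipes as pipes

# Maps to the "Event" invocation type on the Lambda Invoke API.
lambda_target = pipes.CfnPipe.PipeTargetLambdaFunctionParametersProperty(
    invocation_type="FIRE_AND_FORGET"
)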
PipeTargetParametersProperty
- class CfnPipe.PipeTargetParametersProperty(*, batch_job_parameters=None, cloud_watch_logs_parameters=None, ecs_task_parameters=None, event_bridge_event_bus_parameters=None, http_parameters=None, input_template=None, kinesis_stream_parameters=None, lambda_function_parameters=None, redshift_data_parameters=None, sage_maker_pipeline_parameters=None, sqs_queue_parameters=None, step_function_state_machine_parameters=None)
Bases:
object
The parameters required to set up a target for your pipe.
For more information about pipe target parameters, including how to use dynamic path parameters, see Target parameters in the Amazon EventBridge User Guide .
- Parameters:
batch_job_parameters (Union[IResolvable, PipeTargetBatchJobParametersProperty, Dict[str, Any], None]) – The parameters for using an AWS Batch job as a target.
cloud_watch_logs_parameters (Union[IResolvable, PipeTargetCloudWatchLogsParametersProperty, Dict[str, Any], None]) – The parameters for using a CloudWatch Logs log stream as a target.
ecs_task_parameters (Union[IResolvable, PipeTargetEcsTaskParametersProperty, Dict[str, Any], None]) – The parameters for using an Amazon ECS task as a target.
event_bridge_event_bus_parameters (Union[IResolvable, PipeTargetEventBridgeEventBusParametersProperty, Dict[str, Any], None]) – The parameters for using an EventBridge event bus as a target.
http_parameters (Union[IResolvable, PipeTargetHttpParametersProperty, Dict[str, Any], None]) – These are custom parameters to be used when the target is an API Gateway REST API or EventBridge ApiDestination.
input_template (Optional[str]) – Valid JSON text passed to the target. In this case, nothing from the event itself is passed to the target. For more information, see The JavaScript Object Notation (JSON) Data Interchange Format . To remove an input template, specify an empty string.
kinesis_stream_parameters (Union[IResolvable, PipeTargetKinesisStreamParametersProperty, Dict[str, Any], None]) – The parameters for using a Kinesis stream as a target.
lambda_function_parameters (Union[IResolvable, PipeTargetLambdaFunctionParametersProperty, Dict[str, Any], None]) – The parameters for using a Lambda function as a target.
redshift_data_parameters (Union[IResolvable, PipeTargetRedshiftDataParametersProperty, Dict[str, Any], None]) – These are custom parameters to be used when the target is an Amazon Redshift cluster to invoke the Amazon Redshift Data API BatchExecuteStatement.
sage_maker_pipeline_parameters (Union[IResolvable, PipeTargetSageMakerPipelineParametersProperty, Dict[str, Any], None]) – The parameters for using a SageMaker pipeline as a target.
sqs_queue_parameters (Union[IResolvable, PipeTargetSqsQueueParametersProperty, Dict[str, Any], None]) – The parameters for using an Amazon SQS queue as a target.
step_function_state_machine_parameters (Union[IResolvable, PipeTargetStateMachineParametersProperty, Dict[str, Any], None]) – The parameters for using a Step Functions state machine as a target.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import CfnTag
import aws_cdk.aws_pipes as pipes

pipe_target_parameters_property = pipes.CfnPipe.PipeTargetParametersProperty(
    batch_job_parameters=pipes.CfnPipe.PipeTargetBatchJobParametersProperty(
        job_definition="jobDefinition",
        job_name="jobName",

        # the properties below are optional
        array_properties=pipes.CfnPipe.BatchArrayPropertiesProperty(
            size=123
        ),
        container_overrides=pipes.CfnPipe.BatchContainerOverridesProperty(
            command=["command"],
            environment=[pipes.CfnPipe.BatchEnvironmentVariableProperty(
                name="name",
                value="value"
            )],
            instance_type="instanceType",
            resource_requirements=[pipes.CfnPipe.BatchResourceRequirementProperty(
                type="type",
                value="value"
            )]
        ),
        depends_on=[pipes.CfnPipe.BatchJobDependencyProperty(
            job_id="jobId",
            type="type"
        )],
        parameters={
            "parameters_key": "parameters"
        },
        retry_strategy=pipes.CfnPipe.BatchRetryStrategyProperty(
            attempts=123
        )
    ),
    cloud_watch_logs_parameters=pipes.CfnPipe.PipeTargetCloudWatchLogsParametersProperty(
        log_stream_name="logStreamName",
        timestamp="timestamp"
    ),
    ecs_task_parameters=pipes.CfnPipe.PipeTargetEcsTaskParametersProperty(
        task_definition_arn="taskDefinitionArn",

        # the properties below are optional
        capacity_provider_strategy=[pipes.CfnPipe.CapacityProviderStrategyItemProperty(
            capacity_provider="capacityProvider",

            # the properties below are optional
            base=123,
            weight=123
        )],
        enable_ecs_managed_tags=False,
        enable_execute_command=False,
        group="group",
        launch_type="launchType",
        network_configuration=pipes.CfnPipe.NetworkConfigurationProperty(
            awsvpc_configuration=pipes.CfnPipe.AwsVpcConfigurationProperty(
                subnets=["subnets"],

                # the properties below are optional
                assign_public_ip="assignPublicIp",
                security_groups=["securityGroups"]
            )
        ),
        overrides=pipes.CfnPipe.EcsTaskOverrideProperty(
            container_overrides=[pipes.CfnPipe.EcsContainerOverrideProperty(
                command=["command"],
                cpu=123,
                environment=[pipes.CfnPipe.EcsEnvironmentVariableProperty(
                    name="name",
                    value="value"
                )],
                environment_files=[pipes.CfnPipe.EcsEnvironmentFileProperty(
                    type="type",
                    value="value"
                )],
                memory=123,
                memory_reservation=123,
                name="name",
                resource_requirements=[pipes.CfnPipe.EcsResourceRequirementProperty(
                    type="type",
                    value="value"
                )]
            )],
            cpu="cpu",
            ephemeral_storage=pipes.CfnPipe.EcsEphemeralStorageProperty(
                size_in_gi_b=123
            ),
            execution_role_arn="executionRoleArn",
            inference_accelerator_overrides=[pipes.CfnPipe.EcsInferenceAcceleratorOverrideProperty(
                device_name="deviceName",
                device_type="deviceType"
            )],
            memory="memory",
            task_role_arn="taskRoleArn"
        ),
        placement_constraints=[pipes.CfnPipe.PlacementConstraintProperty(
            expression="expression",
            type="type"
        )],
        placement_strategy=[pipes.CfnPipe.PlacementStrategyProperty(
            field="field",
            type="type"
        )],
        platform_version="platformVersion",
        propagate_tags="propagateTags",
        reference_id="referenceId",
        tags=[CfnTag(
            key="key",
            value="value"
        )],
        task_count=123
    ),
    event_bridge_event_bus_parameters=pipes.CfnPipe.PipeTargetEventBridgeEventBusParametersProperty(
        detail_type="detailType",
        endpoint_id="endpointId",
        resources=["resources"],
        source="source",
        time="time"
    ),
    http_parameters=pipes.CfnPipe.PipeTargetHttpParametersProperty(
        header_parameters={
            "header_parameters_key": "headerParameters"
        },
        path_parameter_values=["pathParameterValues"],
        query_string_parameters={
            "query_string_parameters_key": "queryStringParameters"
        }
    ),
    input_template="inputTemplate",
    kinesis_stream_parameters=pipes.CfnPipe.PipeTargetKinesisStreamParametersProperty(
        partition_key="partitionKey"
    ),
    lambda_function_parameters=pipes.CfnPipe.PipeTargetLambdaFunctionParametersProperty(
        invocation_type="invocationType"
    ),
    redshift_data_parameters=pipes.CfnPipe.PipeTargetRedshiftDataParametersProperty(
        database="database",
        sqls=["sqls"],

        # the properties below are optional
        db_user="dbUser",
        secret_manager_arn="secretManagerArn",
        statement_name="statementName",
        with_event=False
    ),
    sage_maker_pipeline_parameters=pipes.CfnPipe.PipeTargetSageMakerPipelineParametersProperty(
        pipeline_parameter_list=[pipes.CfnPipe.SageMakerPipelineParameterProperty(
            name="name",
            value="value"
        )]
    ),
    sqs_queue_parameters=pipes.CfnPipe.PipeTargetSqsQueueParametersProperty(
        message_deduplication_id="messageDeduplicationId",
        message_group_id="messageGroupId"
    ),
    step_function_state_machine_parameters=pipes.CfnPipe.PipeTargetStateMachineParametersProperty(
        invocation_type="invocationType"
    )
)
Attributes
- batch_job_parameters
The parameters for using an AWS Batch job as a target.
- cloud_watch_logs_parameters
The parameters for using a CloudWatch Logs log stream as a target.
- ecs_task_parameters
The parameters for using an Amazon ECS task as a target.
- event_bridge_event_bus_parameters
The parameters for using an EventBridge event bus as a target.
- http_parameters
These are custom parameters to be used when the target is an API Gateway REST API or an EventBridge API destination.
- input_template
Valid JSON text passed to the target.
In this case, nothing from the event itself is passed to the target. For more information, see The JavaScript Object Notation (JSON) Data Interchange Format.
To remove an input template, specify an empty string. (A usage sketch follows this attribute list.)
- kinesis_stream_parameters
The parameters for using a Kinesis stream as a target.
- lambda_function_parameters
The parameters for using a Lambda function as a target.
- redshift_data_parameters
These are custom parameters to be used when the target is an Amazon Redshift cluster to invoke the Amazon Redshift Data API BatchExecuteStatement.
- sage_maker_pipeline_parameters
The parameters for using a SageMaker pipeline as a target.
- sqs_queue_parameters
The parameters for using an Amazon SQS queue as a target.
- step_function_state_machine_parameters
The parameters for using a Step Functions state machine as a target.
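As noted under input_template above, the template can mix static JSON with placeholders that EventBridge resolves from the source event. A minimal hedged sketch follows; the <$.detail.order-id> path is a hypothetical field of your event, not part of this API:
import aws_cdk.aws_pipes as pipes

# A minimal sketch, assuming the source event carries a detail.order-id
# field; static JSON text in the template is passed to the target as-is.
target_parameters = pipes.CfnPipe.PipeTargetParametersProperty(
    input_template='{"orderId": <$.detail.order-id>, "status": "received"}'
)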
PipeTargetRedshiftDataParametersProperty
- class CfnPipe.PipeTargetRedshiftDataParametersProperty(*, database, sqls, db_user=None, secret_manager_arn=None, statement_name=None, with_event=None)
Bases:
object
These are custom parameters to be used when the target is an Amazon Redshift cluster to invoke the Amazon Redshift Data API BatchExecuteStatement.
- Parameters:
database (str) – The name of the database. Required when authenticating using temporary credentials.
sqls (Sequence[str]) – The SQL statement text to run.
db_user (Optional[str]) – The database user name. Required when authenticating using temporary credentials.
secret_manager_arn (Optional[str]) – The name or ARN of the secret that enables access to the database. Required when authenticating using Secrets Manager.
statement_name (Optional[str]) – The name of the SQL statement. You can name the SQL statement when you create it to identify the query.
with_event (Union[bool, IResolvable, None]) – Indicates whether to send an event back to EventBridge after the SQL statement runs.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

pipe_target_redshift_data_parameters_property = pipes.CfnPipe.PipeTargetRedshiftDataParametersProperty(
    database="database",
    sqls=["sqls"],

    # the properties below are optional
    db_user="dbUser",
    secret_manager_arn="secretManagerArn",
    statement_name="statementName",
    with_event=False
)
Attributes
- database
The name of the database.
Required when authenticating using temporary credentials.
- db_user
The database user name.
Required when authenticating using temporary credentials.
- secret_manager_arn
The name or ARN of the secret that enables access to the database.
Required when authenticating using Secrets Manager.
- sqls
The SQL statement text to run.
- statement_name
The name of the SQL statement.
You can name the SQL statement when you create it to identify the query.
- with_event
Indicates whether to send an event back to EventBridge after the SQL statement runs.
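To make the two authentication modes described above concrete, here is a hedged sketch of one instance using a Secrets Manager secret and one using temporary credentials via db_user; the ARN, database name, and SQL text are placeholders:
import aws_cdk.aws_pipes as pipes

# Authenticating with Secrets Manager (placeholder secret ARN):
redshift_with_secret = pipes.CfnPipe.PipeTargetRedshiftDataParametersProperty(
    database="dev",
    sqls=["INSERT INTO events SELECT * FROM staging_events"],
    secret_manager_arn="arn:aws:secretsmanager:us-east-1:111122223333:secret:redshift-creds",
    with_event=True
)

# Authenticating with temporary credentials; db_user is then required:
redshift_with_temporary_credentials = pipes.CfnPipe.PipeTargetRedshiftDataParametersProperty(
    database="dev",
    sqls=["INSERT INTO events SELECT * FROM staging_events"],
    db_user="awsuser"
)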
PipeTargetSageMakerPipelineParametersProperty
- class CfnPipe.PipeTargetSageMakerPipelineParametersProperty(*, pipeline_parameter_list=None)
Bases:
object
The parameters for using a SageMaker pipeline as a target.
- Parameters:
pipeline_parameter_list (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,SageMakerPipelineParameterProperty
,Dict
[str
,Any
]]],None
]) – List of Parameter names and values for SageMaker Model Building Pipeline execution.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

pipe_target_sage_maker_pipeline_parameters_property = pipes.CfnPipe.PipeTargetSageMakerPipelineParametersProperty(
    pipeline_parameter_list=[pipes.CfnPipe.SageMakerPipelineParameterProperty(
        name="name",
        value="value"
    )]
)
Attributes
- pipeline_parameter_list
List of Parameter names and values for SageMaker Model Building Pipeline execution.
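A hedged sketch of passing pipeline parameters follows; the parameter name and S3 URI are hypothetical and must match parameters actually defined on your SageMaker Model Building Pipeline:
import aws_cdk.aws_pipes as pipes

# Parameter names must match those declared on the target pipeline;
# "InputDataS3Uri" here is an illustrative placeholder.
sagemaker_parameters = pipes.CfnPipe.PipeTargetSageMakerPipelineParametersProperty(
    pipeline_parameter_list=[
        pipes.CfnPipe.SageMakerPipelineParameterProperty(
            name="InputDataS3Uri",
            value="s3://my-bucket/input/"
        )
    ]
)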
PipeTargetSqsQueueParametersProperty
- class CfnPipe.PipeTargetSqsQueueParametersProperty(*, message_deduplication_id=None, message_group_id=None)
Bases:
object
The parameters for using an Amazon SQS queue as a target.
- Parameters:
message_deduplication_id (Optional[str]) – This parameter applies only to FIFO (first-in-first-out) queues. The token used for deduplication of sent messages.
message_group_id (Optional[str]) – The FIFO message group ID to use as the target.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

pipe_target_sqs_queue_parameters_property = pipes.CfnPipe.PipeTargetSqsQueueParametersProperty(
    message_deduplication_id="messageDeduplicationId",
    message_group_id="messageGroupId"
)
Attributes
- message_deduplication_id
This parameter applies only to FIFO (first-in-first-out) queues.
The token used for deduplication of sent messages.
- message_group_id
The FIFO message group ID to use as the target.
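A hedged sketch for a FIFO queue target follows. Both fields are ignored for standard queues, and the "$.messageId" value assumes your source events carry such a field; Pipes target parameters can reference event fields with JSON path syntax:
import aws_cdk.aws_pipes as pipes

# A sketch for a FIFO queue target; both fields apply only to FIFO queues.
sqs_fifo_parameters = pipes.CfnPipe.PipeTargetSqsQueueParametersProperty(
    # One static group ID serializes processing of all messages on this pipe.
    message_group_id="order-events",
    # Hypothetical dynamic value resolved per event from the source payload.
    message_deduplication_id="$.messageId"
)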
PipeTargetStateMachineParametersProperty
- class CfnPipe.PipeTargetStateMachineParametersProperty(*, invocation_type=None)
Bases:
object
The parameters for using a Step Functions state machine as a target.
- Parameters:
invocation_type (
Optional
[str
]) –Specify whether to invoke the Step Functions state machine synchronously or asynchronously. -
REQUEST_RESPONSE
(default) - Invoke synchronously. For more information, see StartSyncExecution in the AWS Step Functions API Reference . .. epigraph::REQUEST_RESPONSE
is not supported forSTANDARD
state machine workflows. -FIRE_AND_FORGET
- Invoke asynchronously. For more information, see StartExecution in the AWS Step Functions API Reference . For more information, see Invocation types in the Amazon EventBridge User Guide .- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

pipe_target_state_machine_parameters_property = pipes.CfnPipe.PipeTargetStateMachineParametersProperty(
    invocation_type="invocationType"
)
Attributes
- invocation_type
Specify whether to invoke the Step Functions state machine synchronously or asynchronously.
REQUEST_RESPONSE (default) - Invoke synchronously. For more information, see StartSyncExecution in the AWS Step Functions API Reference. REQUEST_RESPONSE is not supported for STANDARD state machine workflows.
FIRE_AND_FORGET - Invoke asynchronously. For more information, see StartExecution in the AWS Step Functions API Reference.
For more information, see Invocation types in the Amazon EventBridge User Guide.
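As a concrete sketch of the two invocation types (recall from above that a STANDARD workflow must be invoked asynchronously):
import aws_cdk.aws_pipes as pipes

# Asynchronous invocation; the only option for STANDARD workflows.
async_target = pipes.CfnPipe.PipeTargetStateMachineParametersProperty(
    invocation_type="FIRE_AND_FORGET"
)

# Synchronous invocation; supported for EXPRESS workflows only.
sync_target = pipes.CfnPipe.PipeTargetStateMachineParametersProperty(
    invocation_type="REQUEST_RESPONSE"
)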
PlacementConstraintProperty
- class CfnPipe.PlacementConstraintProperty(*, expression=None, type=None)
Bases:
object
An object representing a constraint on task placement.
To learn more, see Task Placement Constraints in the Amazon Elastic Container Service Developer Guide.
- Parameters:
expression (
Optional
[str
]) – A cluster query language expression to apply to the constraint. You cannot specify an expression if the constraint type isdistinctInstance
. To learn more, see Cluster Query Language in the Amazon Elastic Container Service Developer Guide.type (
Optional
[str
]) – The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

placement_constraint_property = pipes.CfnPipe.PlacementConstraintProperty(
    expression="expression",
    type="type"
)
Attributes
- expression
A cluster query language expression to apply to the constraint.
You cannot specify an expression if the constraint type is distinctInstance. To learn more, see Cluster Query Language in the Amazon Elastic Container Service Developer Guide.
- type
The type of constraint.
Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
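A hedged sketch of both constraint types follows; the memberOf expression below is a standard ECS cluster query language example and is only illustrative:
import aws_cdk.aws_pipes as pipes

# distinctInstance takes no expression:
one_task_per_instance = pipes.CfnPipe.PlacementConstraintProperty(
    type="distinctInstance"
)

# memberOf restricts placement via a cluster query language expression:
t2_instances_only = pipes.CfnPipe.PlacementConstraintProperty(
    type="memberOf",
    expression="attribute:ecs.instance-type =~ t2.*"
)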
PlacementStrategyProperty
- class CfnPipe.PlacementStrategyProperty(*, field=None, type=None)
Bases:
object
The task placement strategy for a task or service.
To learn more, see Task Placement Strategies in the Amazon Elastic Container Service Developer Guide.
- Parameters:
field (
Optional
[str
]) – The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that is applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.type (
Optional
[str
]) – The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that is specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory (but still enough to run the task).
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

placement_strategy_property = pipes.CfnPipe.PlacementStrategyProperty(
    field="field",
    type="type"
)
Attributes
- field
The field to apply the placement strategy against.
For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that is applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
- type
The type of placement strategy.
The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that is specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory (but still enough to run the task).
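For example, a common combination (illustrative only, not prescribed by this API) spreads tasks across Availability Zones and then binpacks on memory within each zone; ECS applies the strategies in the order listed:
import aws_cdk.aws_pipes as pipes

# Spread across Availability Zones first, then pack tightly on memory.
placement_strategy = [
    pipes.CfnPipe.PlacementStrategyProperty(
        type="spread",
        field="attribute:ecs.availability-zone"
    ),
    pipes.CfnPipe.PlacementStrategyProperty(
        type="binpack",
        field="memory"
    ),
]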
SageMakerPipelineParameterProperty
- class CfnPipe.SageMakerPipelineParameterProperty(*, name, value)
Bases:
object
Name/Value pair of a parameter to start execution of a SageMaker Model Building Pipeline.
- Parameters:
name (
str
) – Name of parameter to start execution of a SageMaker Model Building Pipeline.value (
str
) – Value of parameter to start execution of a SageMaker Model Building Pipeline.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

sage_maker_pipeline_parameter_property = pipes.CfnPipe.SageMakerPipelineParameterProperty(
    name="name",
    value="value"
)
Attributes
- name
Name of parameter to start execution of a SageMaker Model Building Pipeline.
- value
Value of parameter to start execution of a SageMaker Model Building Pipeline.
SelfManagedKafkaAccessConfigurationCredentialsProperty
- class CfnPipe.SelfManagedKafkaAccessConfigurationCredentialsProperty(*, basic_auth=None, client_certificate_tls_auth=None, sasl_scram256_auth=None, sasl_scram512_auth=None)
Bases:
object
The AWS Secrets Manager secret that stores your stream credentials.
- Parameters:
basic_auth (
Optional
[str
]) – The ARN of the Secrets Manager secret.client_certificate_tls_auth (
Optional
[str
]) – The ARN of the Secrets Manager secret.sasl_scram256_auth (
Optional
[str
]) – The ARN of the Secrets Manager secret.sasl_scram512_auth (
Optional
[str
]) – The ARN of the Secrets Manager secret.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

self_managed_kafka_access_configuration_credentials_property = pipes.CfnPipe.SelfManagedKafkaAccessConfigurationCredentialsProperty(
    basic_auth="basicAuth",
    client_certificate_tls_auth="clientCertificateTlsAuth",
    sasl_scram256_auth="saslScram256Auth",
    sasl_scram512_auth="saslScram512Auth"
)
Attributes
- basic_auth
The ARN of the Secrets Manager secret.
- client_certificate_tls_auth
The ARN of the Secrets Manager secret.
- sasl_scram256_auth
The ARN of the Secrets Manager secret.
- sasl_scram512_auth
The ARN of the Secrets Manager secret.
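Although all four fields have the same type, each names a different authentication mechanism, and you would normally set only the one matching your broker configuration. A hedged sketch with a placeholder secret ARN:
import aws_cdk.aws_pipes as pipes

# Set the single field that matches the broker's auth mechanism;
# the secret ARN below is a placeholder.
kafka_credentials = pipes.CfnPipe.SelfManagedKafkaAccessConfigurationCredentialsProperty(
    sasl_scram512_auth="arn:aws:secretsmanager:us-east-1:111122223333:secret:kafka-scram512"
)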
SelfManagedKafkaAccessConfigurationVpcProperty
- class CfnPipe.SelfManagedKafkaAccessConfigurationVpcProperty(*, security_group=None, subnets=None)
Bases:
object
This structure specifies the VPC subnets and security groups for the stream, and whether a public IP address is to be used.
- Parameters:
security_group (
Optional
[Sequence
[str
]]) – Specifies the security groups associated with the stream. These security groups must all be in the same VPC. You can specify as many as five security groups. If you do not specify a security group, the default security group for the VPC is used.subnets (
Optional
[Sequence
[str
]]) – Specifies the subnets associated with the stream. These subnets must all be in the same VPC. You can specify as many as 16 subnets.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_pipes as pipes

self_managed_kafka_access_configuration_vpc_property = pipes.CfnPipe.SelfManagedKafkaAccessConfigurationVpcProperty(
    security_group=["securityGroup"],
    subnets=["subnets"]
)
Attributes
- security_group
Specifies the security groups associated with the stream.
These security groups must all be in the same VPC. You can specify as many as five security groups. If you do not specify a security group, the default security group for the VPC is used.
- subnets
Specifies the subnets associated with the stream.
These subnets must all be in the same VPC. You can specify as many as 16 subnets.
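A hedged sketch with placeholder subnet and security group IDs, respecting the limits stated above (at most 16 subnets and 5 security groups, all in one VPC):
import aws_cdk.aws_pipes as pipes

# Placeholder IDs; replace with subnets and security groups from your VPC.
kafka_vpc = pipes.CfnPipe.SelfManagedKafkaAccessConfigurationVpcProperty(
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    security_group=["sg-0123456789abcdef0"]
)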