CfnEventSourceMapping
- class aws_cdk.aws_lambda.CfnEventSourceMapping(scope, id, *, function_name, amazon_managed_kafka_event_source_config=None, batch_size=None, bisect_batch_on_function_error=None, destination_config=None, document_db_event_source_config=None, enabled=None, event_source_arn=None, filter_criteria=None, function_response_types=None, maximum_batching_window_in_seconds=None, maximum_record_age_in_seconds=None, maximum_retry_attempts=None, parallelization_factor=None, queues=None, scaling_config=None, self_managed_event_source=None, self_managed_kafka_event_source_config=None, source_access_configurations=None, starting_position=None, starting_position_timestamp=None, topics=None, tumbling_window_in_seconds=None)
Bases: CfnResource
A CloudFormation AWS::Lambda::EventSourceMapping.
The AWS::Lambda::EventSourceMapping resource creates a mapping between an event source and an AWS Lambda function. Lambda reads items from the event source and triggers the function.
For details about each event source type, see the following topics. In particular, each of the topics describes the required and optional parameters for the specific event source.
- CloudformationResource: AWS::Lambda::EventSourceMapping
- ExampleMetadata: fixture=_generated
Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

cfn_event_source_mapping = lambda_.CfnEventSourceMapping(self, "MyCfnEventSourceMapping",
    function_name="functionName",

    # the properties below are optional
    amazon_managed_kafka_event_source_config=lambda_.CfnEventSourceMapping.AmazonManagedKafkaEventSourceConfigProperty(
        consumer_group_id="consumerGroupId"
    ),
    batch_size=123,
    bisect_batch_on_function_error=False,
    destination_config=lambda_.CfnEventSourceMapping.DestinationConfigProperty(
        on_failure=lambda_.CfnEventSourceMapping.OnFailureProperty(
            destination="destination"
        )
    ),
    document_db_event_source_config=lambda_.CfnEventSourceMapping.DocumentDBEventSourceConfigProperty(
        collection_name="collectionName",
        database_name="databaseName",
        full_document="fullDocument"
    ),
    enabled=False,
    event_source_arn="eventSourceArn",
    filter_criteria=lambda_.CfnEventSourceMapping.FilterCriteriaProperty(
        filters=[lambda_.CfnEventSourceMapping.FilterProperty(
            pattern="pattern"
        )]
    ),
    function_response_types=["functionResponseTypes"],
    maximum_batching_window_in_seconds=123,
    maximum_record_age_in_seconds=123,
    maximum_retry_attempts=123,
    parallelization_factor=123,
    queues=["queues"],
    scaling_config=lambda_.CfnEventSourceMapping.ScalingConfigProperty(
        maximum_concurrency=123
    ),
    self_managed_event_source=lambda_.CfnEventSourceMapping.SelfManagedEventSourceProperty(
        endpoints=lambda_.CfnEventSourceMapping.EndpointsProperty(
            kafka_bootstrap_servers=["kafkaBootstrapServers"]
        )
    ),
    self_managed_kafka_event_source_config=lambda_.CfnEventSourceMapping.SelfManagedKafkaEventSourceConfigProperty(
        consumer_group_id="consumerGroupId"
    ),
    source_access_configurations=[lambda_.CfnEventSourceMapping.SourceAccessConfigurationProperty(
        type="type",
        uri="uri"
    )],
    starting_position="startingPosition",
    starting_position_timestamp=123,
    topics=["topics"],
    tumbling_window_in_seconds=123
)
Create a new AWS::Lambda::EventSourceMapping.
- Parameters:
scope (Construct) – scope in which this resource is defined.
id (str) – scoped id of the resource.
function_name (str) – The name of the Lambda function. Name formats: - Function name – MyFunction. - Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction. - Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD. - Partial ARN – 123456789012:function:MyFunction. The length constraint applies only to the full ARN. If you specify only the function name, it’s limited to 64 characters in length.
amazon_managed_kafka_event_source_config (Union[IResolvable, AmazonManagedKafkaEventSourceConfigProperty, Dict[str, Any], None]) – Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
batch_size (Union[int, float, None]) – The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB). - Amazon Kinesis – Default 100. Max 10,000. - Amazon DynamoDB Streams – Default 100. Max 10,000. - Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10. - Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000. - Self-managed Apache Kafka – Default 100. Max 10,000. - Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000. - DocumentDB – Default 100. Max 10,000.
bisect_batch_on_function_error (Union[bool, IResolvable, None]) – (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
destination_config (Union[IResolvable, DestinationConfigProperty, Dict[str, Any], None]) – (Kinesis and DynamoDB Streams only) An Amazon SQS queue or Amazon SNS topic destination for discarded records.
document_db_event_source_config (Union[IResolvable, DocumentDBEventSourceConfigProperty, Dict[str, Any], None]) – Specific configuration settings for a DocumentDB event source.
enabled (Union[bool, IResolvable, None]) – When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True
event_source_arn (Optional[str]) – The Amazon Resource Name (ARN) of the event source. - Amazon Kinesis – The ARN of the data stream or a stream consumer. - Amazon DynamoDB Streams – The ARN of the stream. - Amazon Simple Queue Service – The ARN of the queue. - Amazon Managed Streaming for Apache Kafka – The ARN of the cluster. - Amazon MQ – The ARN of the broker. - Amazon DocumentDB – The ARN of the DocumentDB change stream.
filter_criteria (Union[IResolvable, FilterCriteriaProperty, Dict[str, Any], None]) – An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
function_response_types (Optional[Sequence[str]]) – (Streams and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures
maximum_batching_window_in_seconds (Union[int, float, None]) – The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For Amazon SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
maximum_record_age_in_seconds (Union[int, float, None]) – (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. Note: the minimum valid value for maximum record age is 60s. Although values less than 60 and greater than -1 fall within the parameter’s absolute range, they are not allowed.
maximum_retry_attempts (Union[int, float, None]) – (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
parallelization_factor (Union[int, float, None]) – (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
queues (Optional[Sequence[str]]) – (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
scaling_config (Union[IResolvable, ScalingConfigProperty, Dict[str, Any], None]) – (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
self_managed_event_source (Union[IResolvable, SelfManagedEventSourceProperty, Dict[str, Any], None]) – The self-managed Apache Kafka cluster for your event source.
self_managed_kafka_event_source_config (Union[IResolvable, SelfManagedKafkaEventSourceConfigProperty, Dict[str, Any], None]) – Specific configuration settings for a self-managed Apache Kafka event source.
source_access_configurations (Union[IResolvable, Sequence[Union[IResolvable, SourceAccessConfigurationProperty, Dict[str, Any]]], None]) – An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
starting_position (Optional[str]) – The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB. - LATEST – Read only new records. - TRIM_HORIZON – Process all available records. - AT_TIMESTAMP – Specify a time from which to start reading records.
starting_position_timestamp (Union[int, float, None]) – With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds.
topics (Optional[Sequence[str]]) – The name of the Kafka topic.
tumbling_window_in_seconds (Union[int, float, None]) – (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
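The SQS-related constraints among these parameters (batch size limits per queue type, and the rule that a BatchSize greater than 10 requires MaximumBatchingWindowInSeconds of at least 1) can be sketched as a plain-Python check. This is an illustrative helper capturing the documented rules, not part of the CDK API:

```python
def validate_sqs_batching(batch_size, max_batching_window_s=0, fifo=False):
    """Check the documented SQS batching constraints (illustrative only).

    - FIFO queues: max batch size 10; standard queues: max 10,000.
    - A batch size greater than 10 requires a batching window of at least 1s.
    """
    max_batch = 10 if fifo else 10_000
    if not 1 <= batch_size <= max_batch:
        return False
    if batch_size > 10 and max_batching_window_s < 1:
        return False
    return True
```

For example, a standard queue with batch_size=100 passes only once a batching window of at least one second is also set.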
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_depends_on(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str) –
value (Any) –
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with “Properties.” (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:

cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")

would add the overrides:

"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    },
  ]
  ...
}

The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN).
- Parameters:
policy (Optional[RemovalPolicy]) –
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- Return type:
None
- get_att(attribute_name)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
attribute_name (str) – The name of the attribute.
- Return type:
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (str) –
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::Lambda::EventSourceMapping'
- amazon_managed_kafka_event_source_config
Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
- attr_id
The event source mapping’s ID.
- CloudformationAttribute:
Id
- batch_size
The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function.
Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).
Amazon Kinesis – Default 100. Max 10,000.
Amazon DynamoDB Streams – Default 100. Max 10,000.
Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
Self-managed Apache Kafka – Default 100. Max 10,000.
Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
DocumentDB – Default 100. Max 10,000.
- bisect_batch_on_function_error
(Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry.
The default value is false.
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- destination_config
(Kinesis and DynamoDB Streams only) An Amazon SQS queue or Amazon SNS topic destination for discarded records.
- document_db_event_source_config
Specific configuration settings for a DocumentDB event source.
- enabled
When true, the event source mapping is active. When false, Lambda pauses polling and invocation.
Default: True
- event_source_arn
The Amazon Resource Name (ARN) of the event source.
Amazon Kinesis – The ARN of the data stream or a stream consumer.
Amazon DynamoDB Streams – The ARN of the stream.
Amazon Simple Queue Service – The ARN of the queue.
Amazon Managed Streaming for Apache Kafka – The ARN of the cluster.
Amazon MQ – The ARN of the broker.
Amazon DocumentDB – The ARN of the DocumentDB change stream.
- filter_criteria
An object that defines the filter criteria that determine whether Lambda should process an event.
For more information, see Lambda event filtering .
- function_name
The name of the Lambda function.
Name formats:
- Function name – MyFunction
- Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction
- Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD
- Partial ARN – 123456789012:function:MyFunction
The length constraint applies only to the full ARN. If you specify only the function name, it’s limited to 64 characters in length.
- function_response_types
(Streams and SQS) A list of current response type enums applied to the event source mapping.
Valid Values:
ReportBatchItemFailures
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- maximum_batching_window_in_seconds
The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function.
Default (Kinesis, DynamoDB, Amazon SQS event sources): 0
Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms
Related setting: For Amazon SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
- maximum_record_age_in_seconds
(Kinesis and DynamoDB Streams only) Discard records older than the specified age.
The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records.
Note: the minimum valid value for maximum record age is 60s. Although values less than 60 and greater than -1 fall within the parameter’s absolute range, they are not allowed.
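The documented range rule (either -1 for infinite, or at least 60 seconds) reduces to a one-line check. This is an illustrative helper, not part of the CDK API:

```python
def valid_maximum_record_age(seconds):
    # -1 means infinite; otherwise the documented minimum is 60 seconds.
    return seconds == -1 or seconds >= 60
```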
- maximum_retry_attempts
(Kinesis and DynamoDB Streams only) Discard records after the specified number of retries.
The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
- node
The construct tree node associated with this construct.
- parallelization_factor
(Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard.
The default value is 1.
- queues
(Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- scaling_config
(Amazon SQS only) The scaling configuration for the event source.
For more information, see Configuring maximum concurrency for Amazon SQS event sources .
- self_managed_event_source
The self-managed Apache Kafka cluster for your event source.
- self_managed_kafka_event_source_config
Specific configuration settings for a self-managed Apache Kafka event source.
- source_access_configurations
An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- starting_position
The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.
LATEST - Read only new records.
TRIM_HORIZON - Process all available records.
AT_TIMESTAMP - Specify a time from which to start reading records.
- starting_position_timestamp
With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds.
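Since the timestamp is expressed in Unix time seconds, a hypothetical helper for producing that value from a UTC datetime might look like the following (illustrative only, not part of the CDK API):

```python
from datetime import datetime, timezone


def to_unix_seconds(dt):
    """Convert a naive-UTC or aware datetime to Unix time seconds (illustrative helper)."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)  # assume naive datetimes are UTC
    return int(dt.timestamp())
```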
- topics
The name of the Kafka topic.
- tumbling_window_in_seconds
(Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources.
A value of 0 seconds indicates no tumbling window.
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
x (Any) –
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(construct)
Check whether the given construct is a CfnResource.
- Parameters:
construct (IConstruct) –
- Return type:
bool
- classmethod is_construct(x)
Return whether the given object is a Construct.
- Parameters:
x (Any) –
- Return type:
bool
AmazonManagedKafkaEventSourceConfigProperty
- class CfnEventSourceMapping.AmazonManagedKafkaEventSourceConfigProperty(*, consumer_group_id=None)
Bases:
object
Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
- Parameters:
consumer_group_id (Optional[str]) – The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

amazon_managed_kafka_event_source_config_property = lambda_.CfnEventSourceMapping.AmazonManagedKafkaEventSourceConfigProperty(
    consumer_group_id="consumerGroupId"
)
Attributes
- consumer_group_id
The identifier for the Kafka consumer group to join.
The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID .
DestinationConfigProperty
- class CfnEventSourceMapping.DestinationConfigProperty(*, on_failure=None)
Bases:
object
A configuration object that specifies the destination of an event after Lambda processes it.
- Parameters:
on_failure (Union[IResolvable, OnFailureProperty, Dict[str, Any], None]) – The destination configuration for failed invocations.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

destination_config_property = lambda_.CfnEventSourceMapping.DestinationConfigProperty(
    on_failure=lambda_.CfnEventSourceMapping.OnFailureProperty(
        destination="destination"
    )
)
Attributes
- on_failure
The destination configuration for failed invocations.
DocumentDBEventSourceConfigProperty
- class CfnEventSourceMapping.DocumentDBEventSourceConfigProperty(*, collection_name=None, database_name=None, full_document=None)
Bases:
object
Specific configuration settings for a DocumentDB event source.
- Parameters:
collection_name (Optional[str]) – The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
database_name (Optional[str]) – The name of the database to consume within the DocumentDB cluster.
full_document (Optional[str]) – Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

document_db_event_source_config_property = lambda_.CfnEventSourceMapping.DocumentDBEventSourceConfigProperty(
    collection_name="collectionName",
    database_name="databaseName",
    full_document="fullDocument"
)
Attributes
- collection_name
The name of the collection to consume within the database.
If you do not specify a collection, Lambda consumes all collections.
- database_name
The name of the database to consume within the DocumentDB cluster.
- full_document
Determines what DocumentDB sends to your event stream during document update operations.
If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
EndpointsProperty
- class CfnEventSourceMapping.EndpointsProperty(*, kafka_bootstrap_servers=None)
Bases:
object
The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- Parameters:
kafka_bootstrap_servers (Optional[Sequence[str]]) – The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

endpoints_property = lambda_.CfnEventSourceMapping.EndpointsProperty(
    kafka_bootstrap_servers=["kafkaBootstrapServers"]
)
Attributes
- kafka_bootstrap_servers
The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
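Given the "host:port" entry shape documented above, a minimal sanity check on each bootstrap-server string could look like this. It is an illustrative sketch, not part of the CDK API, and deliberately only checks the shape:

```python
def looks_like_bootstrap_server(entry):
    """Return True if entry has the documented 'host:port' shape (illustrative check)."""
    host, _sep, port = entry.rpartition(":")
    return bool(host) and port.isdigit()
```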
FilterCriteriaProperty
- class CfnEventSourceMapping.FilterCriteriaProperty(*, filters=None)
Bases:
object
An object that contains the filters for an event source.
- Parameters:
filters (Union[IResolvable, Sequence[Union[IResolvable, FilterProperty, Dict[str, Any]]], None]) – A list of filters.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

filter_criteria_property = lambda_.CfnEventSourceMapping.FilterCriteriaProperty(
    filters=[lambda_.CfnEventSourceMapping.FilterProperty(
        pattern="pattern"
    )]
)
Attributes
FilterProperty
- class CfnEventSourceMapping.FilterProperty(*, pattern=None)
Bases:
object
A structure within a FilterCriteria object that defines an event filtering pattern.
- Parameters:
pattern (Optional[str]) – A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

filter_property = lambda_.CfnEventSourceMapping.FilterProperty(
    pattern="pattern"
)
Attributes
- pattern
A filter pattern.
For more information on the syntax of a filter pattern, see Filter rule syntax .
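To make the pattern semantics concrete, here is a simplified matcher for the exact-match subset of the filter rule syntax, where a leaf value is an array of allowed literals and the event matches if its value equals any of them. This is an illustrative sketch only; the real syntax also supports comparators such as prefix, numeric ranges, and exists:

```python
def pattern_matches(pattern, event):
    """Match an event dict against an exact-match filter pattern (simplified sketch)."""
    for key, rule in pattern.items():
        if key not in event:
            return False
        if isinstance(rule, dict):
            # Nested pattern: recurse into the corresponding event sub-object.
            if not isinstance(event[key], dict) or not pattern_matches(rule, event[key]):
                return False
        else:
            # Leaf rule: a list of allowed literal values.
            if event[key] not in rule:
                return False
    return True
```

For instance, the pattern {"metadata": {"region": ["us-east-1"]}} accepts an event whose metadata.region equals "us-east-1" and rejects anything else.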
OnFailureProperty
- class CfnEventSourceMapping.OnFailureProperty(*, destination=None)
Bases:
object
A destination for events that failed processing.
- Parameters:
destination (Optional[str]) – The Amazon Resource Name (ARN) of the destination resource.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

on_failure_property = lambda_.CfnEventSourceMapping.OnFailureProperty(
    destination="destination"
)
Attributes
- destination
The Amazon Resource Name (ARN) of the destination resource.
ScalingConfigProperty
- class CfnEventSourceMapping.ScalingConfigProperty(*, maximum_concurrency=None)
Bases:
object
(Amazon SQS only) The scaling configuration for the event source.
To remove the configuration, pass an empty value.
- Parameters:
maximum_concurrency (Union[int, float, None]) – Limits the number of concurrent instances that the Amazon SQS event source can invoke.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

scaling_config_property = lambda_.CfnEventSourceMapping.ScalingConfigProperty(
    maximum_concurrency=123
)
Attributes
- maximum_concurrency
Limits the number of concurrent instances that the Amazon SQS event source can invoke.
SelfManagedEventSourceProperty
- class CfnEventSourceMapping.SelfManagedEventSourceProperty(*, endpoints=None)
Bases:
object
The self-managed Apache Kafka cluster for your event source.
- Parameters:
endpoints (Union[IResolvable, EndpointsProperty, Dict[str, Any], None]) – The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

self_managed_event_source_property = lambda_.CfnEventSourceMapping.SelfManagedEventSourceProperty(
    endpoints=lambda_.CfnEventSourceMapping.EndpointsProperty(
        kafka_bootstrap_servers=["kafkaBootstrapServers"]
    )
)
Attributes
- endpoints
The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
SelfManagedKafkaEventSourceConfigProperty
- class CfnEventSourceMapping.SelfManagedKafkaEventSourceConfigProperty(*, consumer_group_id=None)
Bases:
object
Specific configuration settings for a self-managed Apache Kafka event source.
- Parameters:
consumer_group_id (Optional[str]) – The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

self_managed_kafka_event_source_config_property = lambda_.CfnEventSourceMapping.SelfManagedKafkaEventSourceConfigProperty(
    consumer_group_id="consumerGroupId"
)
Attributes
- consumer_group_id
The identifier for the Kafka consumer group to join.
The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID .
SourceAccessConfigurationProperty
- class CfnEventSourceMapping.SourceAccessConfigurationProperty(*, type=None, uri=None)
Bases:
object
An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
- Parameters:
type (Optional[str]) – The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".
- BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
- BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
- VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
- VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
- SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
- SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
- VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
- CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
- SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
uri (Optional[str]) – The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_lambda as lambda_

source_access_configuration_property = lambda_.CfnEventSourceMapping.SourceAccessConfigurationProperty(
    type="type",
    uri="uri"
)
Attributes
- type
The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".
- BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
- BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
- VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
- VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
- SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
- SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
- VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
- CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
- SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
- Link:
- Type:
- uri
The value for your chosen configuration in Type. For example:
"URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName"
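To illustrate how each Type pairs with a URI, below is a sketch of a source access configuration list for a self-managed Kafka cluster inside a VPC with SASL/SCRAM-512 authentication, modeled as plain dicts in the synthesized CloudFormation shape. The account ID, secret name, subnet, and security-group IDs are placeholders, and the "subnet:" / "security_group:" URI prefixes are taken from AWS's published examples for this resource.

```python
# Hypothetical SourceAccessConfigurations value (placeholders throughout):
# one entry per Type, each carrying its matching URI.
source_access_configurations = [
    {
        "Type": "SASL_SCRAM_512_AUTH",
        # Secrets Manager ARN holding the SASL/SCRAM credentials.
        "URI": "arn:aws:secretsmanager:us-east-1:111122223333:secret:MyBrokerSecretName",
    },
    # Subnets Lambda connects to in order to reach the brokers.
    {"Type": "VPC_SUBNET", "URI": "subnet:subnet-0123456789abcdef0"},
    # Security group governing access to the brokers.
    {"Type": "VPC_SECURITY_GROUP", "URI": "security_group:sg-0123456789abcdef0"},
]

# Every entry supplies both keys.
assert all({"Type", "URI"} <= cfg.keys() for cfg in source_access_configurations)
```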