KafkaEventSourceProps
- class aws_cdk.aws_lambda_event_sources.KafkaEventSourceProps(*, starting_position, batch_size=None, enabled=None, max_batching_window=None, provisioned_poller_config=None, topic, consumer_group_id=None, filter_encryption=None, filters=None, on_failure=None, secret=None)
Bases:
BaseStreamEventSourceProps
Properties for a Kafka event source.
- Parameters:
starting_position (StartingPosition) – Where to begin consuming the stream.
batch_size (Union[int, float, None]) – The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function. Your function receives an event with all the retrieved records. Valid Range: minimum value of 1; maximum value of 1000 for DynamoEventSource, 10000 for KinesisEventSource, ManagedKafkaEventSource and SelfManagedKafkaEventSource. Default: 100
enabled (Optional[bool]) – If the stream event source mapping should be enabled. Default: true
max_batching_window (Optional[Duration]) – The maximum amount of time to gather records before invoking the function. Maximum of Duration.minutes(5). Default: Duration.seconds(0) for Kinesis, DynamoDB, and SQS event sources; Duration.millis(500) for MSK, self-managed Kafka, and Amazon MQ.
provisioned_poller_config (Union[ProvisionedPollerConfig, Dict[str, Any], None]) – Configuration for provisioned pollers that read from the event source. When specified, allows control over the minimum and maximum number of pollers that can be provisioned to process events from the source. Default: no provisioned pollers
topic (str) – The Kafka topic to subscribe to.
consumer_group_id (Optional[str]) – The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 and match the pattern '[a-zA-Z0-9-/:_+=.@-]'. Default: none
filter_encryption (Optional[IKey]) – Add a customer-managed KMS key to encrypt the filter criteria. Default: none
filters (Optional[Sequence[Mapping[str, Any]]]) – Add filter criteria to the event source. Default: none
on_failure (Optional[IEventSourceDlq]) – Add an on-failure destination for this Kafka event source. SNS, SQS, and S3 are supported. Default: discarded records are ignored
secret (Optional[ISecret]) – The secret with the Kafka credentials; see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details. This field is required if your Kafka brokers are accessed over the Internet. Default: none
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk as cdk
from aws_cdk import aws_kms as kms
from aws_cdk import aws_lambda as lambda_
from aws_cdk import aws_lambda_event_sources as lambda_event_sources
from aws_cdk import aws_secretsmanager as secretsmanager

# event_source_dlq: lambda.IEventSourceDlq
# filters: Any
# key: kms.Key
# secret: secretsmanager.Secret

kafka_event_source_props = lambda_event_sources.KafkaEventSourceProps(
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
    topic="topic",

    # the properties below are optional
    batch_size=123,
    consumer_group_id="consumerGroupId",
    enabled=False,
    filter_encryption=key,
    filters=[{
        "filters_key": filters
    }],
    max_batching_window=cdk.Duration.minutes(5),  # maximum allowed is 5 minutes
    on_failure=event_source_dlq,
    provisioned_poller_config=lambda_event_sources.ProvisionedPollerConfig(
        maximum_pollers=123,
        minimum_pollers=123
    ),
    secret=secret
)
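These props are consumed by the concrete event source classes in this module. As a hedged sketch, the snippet below wires a handler to an Amazon MSK topic through ManagedKafkaEventSource, which accepts the same properties plus a cluster ARN; the function (my_function), the cluster ARN, and the topic name are placeholder assumptions.

import aws_cdk as cdk
from aws_cdk import aws_lambda as lambda_
from aws_cdk import aws_lambda_event_sources as lambda_event_sources

# my_function: lambda_.Function  (assumed to exist in your stack)
# The cluster ARN below is a placeholder.
my_function.add_event_source(lambda_event_sources.ManagedKafkaEventSource(
    cluster_arn="arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/abc-123",
    topic="topic",
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
    batch_size=500,
    max_batching_window=cdk.Duration.seconds(10)
))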
Attributes
- batch_size
The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function.
Your function receives an event with all the retrieved records.
Valid Range:
Minimum value of 1
Maximum value of:
1000 for DynamoEventSource
10000 for KinesisEventSource, ManagedKafkaEventSource and SelfManagedKafkaEventSource
- Default:
100
- consumer_group_id
The identifier for the Kafka consumer group to join.
The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 and match the pattern '[a-zA-Z0-9-/:_+=.@-]'.
- enabled
If the stream event source mapping should be enabled.
- Default:
true
- filter_encryption
Add a customer-managed KMS key to encrypt the filter criteria.
- filters
Add filter criteria to the event source.
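A hedged sketch of building filter criteria with lambda_.FilterCriteria and lambda_.FilterRule; for Kafka event sources the pattern is applied to the record value, and the "country" field is an illustrative assumption about your message payload:

from aws_cdk import aws_lambda as lambda_

# Only deliver records whose deserialized value has country == "US".
# "country" is a hypothetical field in your message schema.
filters = [lambda_.FilterCriteria.filter({
    "value": {
        "country": lambda_.FilterRule.is_equal("US")
    }
})]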
- max_batching_window
The maximum amount of time to gather records before invoking the function.
Maximum of Duration.minutes(5).
- Default:
Duration.seconds(0) for Kinesis, DynamoDB, and SQS event sources, Duration.millis(500) for MSK, self-managed Kafka, and Amazon MQ.
- on_failure
Add an on-failure destination for this Kafka event source.
SNS, SQS, and S3 are supported.
- Default:
discarded records are ignored
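A minimal sketch, assuming an existing SQS queue, using the SqsDlq destination from this module (SnsDlq and S3OnFailureDestination are the other provided implementations):

from aws_cdk import aws_lambda_event_sources as lambda_event_sources
from aws_cdk import aws_sqs as sqs

# dlq_queue: sqs.Queue  (assumed to exist in your stack)
on_failure = lambda_event_sources.SqsDlq(dlq_queue)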
- provisioned_poller_config
Configuration for provisioned pollers that read from the event source.
When specified, allows control over the minimum and maximum number of pollers that can be provisioned to process events from the source.
- Default:
no provisioned pollers
- secret
The secret with the Kafka credentials; see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details. This field is required if your Kafka brokers are accessed over the Internet.
- Default:
none
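A minimal sketch, assuming SASL/SCRAM credentials are already stored in Secrets Manager; the construct scope (self) and the secret name are placeholder assumptions:

from aws_cdk import aws_secretsmanager as secretsmanager

# Look up an existing secret by name; the name is a placeholder.
secret = secretsmanager.Secret.from_secret_name_v2(
    self, "KafkaCredentials", "AmazonMSK_my-cluster_credentials"
)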
- starting_position
Where to begin consuming the stream.
- topic
The Kafka topic to subscribe to.