SelfManagedKafkaEventSourceProps
- class aws_cdk.aws_lambda_event_sources.SelfManagedKafkaEventSourceProps(*, starting_position, batch_size=None, enabled=None, max_batching_window=None, provisioned_poller_config=None, topic, consumer_group_id=None, filter_encryption=None, filters=None, on_failure=None, secret=None, bootstrap_servers, authentication_method=None, root_ca_certificate=None, security_group=None, vpc=None, vpc_subnets=None)
Bases:
KafkaEventSourceProps
Properties for a self-managed Kafka cluster event source.
If your Kafka cluster is only reachable via VPC, make sure to configure it.
- Parameters:
  - starting_position (StartingPosition) – Where to begin consuming the stream.
  - batch_size (Union[int, float, None]) – The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function. Your function receives an event with all the retrieved records. Valid Range: Minimum value of 1. Maximum value of: 1000 for DynamoEventSource; 10000 for KinesisEventSource, ManagedKafkaEventSource and SelfManagedKafkaEventSource. Default: 100
  - enabled (Optional[bool]) – If the stream event source mapping should be enabled. Default: true
  - max_batching_window (Optional[Duration]) – The maximum amount of time to gather records before invoking the function. Maximum of Duration.minutes(5). Default: Duration.seconds(0) for Kinesis, DynamoDB, and SQS event sources; Duration.millis(500) for MSK, self-managed Kafka, and Amazon MQ.
  - provisioned_poller_config (Union[ProvisionedPollerConfig, Dict[str, Any], None]) – Configuration for provisioned pollers that read from the event source. When specified, allows control over the minimum and maximum number of pollers that can be provisioned to process events from the source. Default: no provisioned pollers
  - topic (str) – The Kafka topic to subscribe to.
  - consumer_group_id (Optional[str]) – The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 and match the pattern ‘[a-zA-Z0-9-/:_+=.@-]’. Default: none
  - filter_encryption (Optional[IKey]) – Add a customer-managed KMS key to encrypt the filter criteria. Default: none
  - filters (Optional[Sequence[Mapping[str, Any]]]) – Add filter criteria to the event source. Default: none
  - on_failure (Optional[IEventSourceDlq]) – Add an on-failure destination for this Kafka event source. SNS/SQS/S3 are supported. Default: discarded records are ignored
  - secret (Optional[ISecret]) – The secret with the Kafka credentials; see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details. This field is required if your Kafka brokers are accessed over the Internet. Default: none
  - bootstrap_servers (Sequence[str]) – The list of host and port pairs that are the addresses of the Kafka brokers in a “bootstrap” Kafka cluster that a Kafka client connects to initially to bootstrap itself. They are in the format abc.xyz.com:xxxx.
  - authentication_method (Optional[AuthenticationMethod]) – The authentication method for your Kafka cluster. Default: AuthenticationMethod.SASL_SCRAM_512_AUTH
  - root_ca_certificate (Optional[ISecret]) – The secret with the root CA certificate used by your Kafka brokers for TLS encryption. This field is required if your Kafka brokers use certificates signed by a private CA. Default: none
  - security_group (Optional[ISecurityGroup]) – If your Kafka brokers are only reachable via VPC, provide the security group here. Default: none, required if setting vpc
  - vpc (Optional[IVpc]) – If your Kafka brokers are only reachable via VPC, provide the VPC here. Default: none
  - vpc_subnets (Union[SubnetSelection, Dict[str, Any], None]) – If your Kafka brokers are only reachable via VPC, provide the subnets selection here. Default: none, required if setting vpc
- ExampleMetadata:
infused
Example:
```python
from aws_cdk.aws_secretsmanager import Secret
from aws_cdk.aws_lambda_event_sources import SelfManagedKafkaEventSource

# The secret that allows access to your self hosted Kafka cluster
# secret: Secret

# my_function: lambda.Function

# The list of Kafka brokers
bootstrap_servers = ["kafka-broker:9092"]

# The Kafka topic you want to subscribe to
topic = "some-cool-topic"

# (Optional) The consumer group id to use when connecting to the Kafka broker.
# If omitted the UUID of the event source mapping will be used.
consumer_group_id = "my-consumer-group-id"

my_function.add_event_source(SelfManagedKafkaEventSource(
    bootstrap_servers=bootstrap_servers,
    topic=topic,
    consumer_group_id=consumer_group_id,
    secret=secret,
    batch_size=100,  # default
    starting_position=lambda_.StartingPosition.TRIM_HORIZON
))
```
Attributes
- authentication_method
The authentication method for your Kafka cluster.
- Default:
AuthenticationMethod.SASL_SCRAM_512_AUTH
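As a minimal sketch of overriding the default, assuming the `my_function` and `secret` bindings from the example above (broker address and topic are placeholders):

```python
from aws_cdk import aws_lambda as lambda_
from aws_cdk.aws_lambda_event_sources import (
    AuthenticationMethod,
    SelfManagedKafkaEventSource,
)

# my_function: lambda.Function
# secret: Secret  # credentials matching the chosen authentication method

my_function.add_event_source(SelfManagedKafkaEventSource(
    bootstrap_servers=["kafka-broker:9092"],
    topic="some-cool-topic",
    secret=secret,
    # Override the SASL_SCRAM_512_AUTH default
    authentication_method=AuthenticationMethod.SASL_SCRAM_256_AUTH,
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
))
```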
- batch_size
The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function.
Your function receives an event with all the retrieved records.
Valid Range:
- Minimum value of 1
- Maximum value of: 1000 for DynamoEventSource; 10000 for KinesisEventSource, ManagedKafkaEventSource and SelfManagedKafkaEventSource
- Default:
100
- bootstrap_servers
The list of host and port pairs that are the addresses of the Kafka brokers in a “bootstrap” Kafka cluster that a Kafka client connects to initially to bootstrap itself.
They are in the format abc.xyz.com:xxxx.
- consumer_group_id
The identifier for the Kafka consumer group to join.
The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 and match the pattern ‘[a-zA-Z0-9-/:_+=.@-]’.
- enabled
If the stream event source mapping should be enabled.
- Default:
true
- filter_encryption
Add a customer-managed KMS key to encrypt the filter criteria.
- filters
Add filter criteria to the event source, as in the sketch below.
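A minimal sketch using the FilterCriteria and FilterRule helpers from aws_lambda; the rule key, match value, broker address, and topic are placeholders:

```python
from aws_cdk import aws_lambda as lambda_
from aws_cdk.aws_lambda_event_sources import SelfManagedKafkaEventSource

# my_function: lambda.Function
# secret: Secret

my_function.add_event_source(SelfManagedKafkaEventSource(
    bootstrap_servers=["kafka-broker:9092"],
    topic="some-cool-topic",
    secret=secret,
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
    # Only invoke the function for records matching the filter criteria
    filters=[
        lambda_.FilterCriteria.filter(
            {"string_equals": lambda_.FilterRule.is_equal("test")}
        )
    ],
))
```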
- max_batching_window
The maximum amount of time to gather records before invoking the function.
Maximum of Duration.minutes(5).
- Default:
Duration.seconds(0) for Kinesis, DynamoDB, and SQS event sources, Duration.millis(500) for MSK, self-managed Kafka, and Amazon MQ.
- on_failure
Add an on-failure destination for this Kafka event source.
SNS/SQS/S3 are supported.
- Default:
discarded records are ignored
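For illustration, a sketch routing discarded records to an S3 bucket via the S3OnFailureDestination class from this module (the `bucket` binding is an assumption):

```python
from aws_cdk import aws_lambda as lambda_
from aws_cdk import aws_s3 as s3
from aws_cdk.aws_lambda_event_sources import (
    S3OnFailureDestination,
    SelfManagedKafkaEventSource,
)

# my_function: lambda.Function
# secret: Secret
# bucket: s3.IBucket  # receives records that could not be processed

my_function.add_event_source(SelfManagedKafkaEventSource(
    bootstrap_servers=["kafka-broker:9092"],
    topic="some-cool-topic",
    secret=secret,
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
    on_failure=S3OnFailureDestination(bucket),
))
```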
- provisioned_poller_config
Configuration for provisioned pollers that read from the event source.
When specified, allows control over the minimum and maximum number of pollers that can be provisioned to process events from the source.
- Default:
no provisioned pollers
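A rough sketch passing the config as a dict, assuming the ProvisionedPollerConfig struct exposes minimum_pollers and maximum_pollers keys (the values 1 and 3 are placeholders):

```python
from aws_cdk import aws_lambda as lambda_
from aws_cdk.aws_lambda_event_sources import SelfManagedKafkaEventSource

# my_function: lambda.Function
# secret: Secret

my_function.add_event_source(SelfManagedKafkaEventSource(
    bootstrap_servers=["kafka-broker:9092"],
    topic="some-cool-topic",
    secret=secret,
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
    # Keep between 1 and 3 pollers provisioned for this source (assumed keys)
    provisioned_poller_config={
        "minimum_pollers": 1,
        "maximum_pollers": 3,
    },
))
```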
- root_ca_certificate
The secret with the root CA certificate used by your Kafka brokers for TLS encryption. This field is required if your Kafka brokers use certificates signed by a private CA.
- Default:
none
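A sketch combining this with mutual-TLS authentication; the two secret bindings are assumptions for illustration:

```python
from aws_cdk import aws_lambda as lambda_
from aws_cdk.aws_lambda_event_sources import (
    AuthenticationMethod,
    SelfManagedKafkaEventSource,
)

# my_function: lambda.Function
# client_cert_secret: Secret  # client certificate and private key (assumed name)
# root_ca_secret: Secret      # the private CA certificate (assumed name)

my_function.add_event_source(SelfManagedKafkaEventSource(
    bootstrap_servers=["kafka-broker:9092"],
    topic="some-cool-topic",
    secret=client_cert_secret,
    root_ca_certificate=root_ca_secret,
    authentication_method=AuthenticationMethod.CLIENT_CERTIFICATE_TLS_AUTH,
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
))
```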
- secret
The secret with the Kafka credentials; see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details. This field is required if your Kafka brokers are accessed over the Internet.
- Default:
none
- security_group
If your Kafka brokers are only reachable via VPC, provide the security group here.
- Default:
none, required if setting vpc
- starting_position
Where to begin consuming the stream.
- topic
The Kafka topic to subscribe to.
- vpc
If your Kafka brokers are only reachable via VPC, provide the VPC here.
- Default:
none
- vpc_subnets
If your Kafka brokers are only reachable via VPC, provide the subnets selection here.
- Default:
none, required if setting vpc
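Per the defaults above, vpc, vpc_subnets, and security_group go together when the brokers are only reachable from within a VPC. A minimal sketch, assuming existing `vpc` and `security_group` bindings:

```python
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_lambda as lambda_
from aws_cdk.aws_lambda_event_sources import SelfManagedKafkaEventSource

# my_function: lambda.Function
# secret: Secret
# vpc: ec2.IVpc                      # the VPC that can reach the brokers (assumed)
# security_group: ec2.ISecurityGroup # allows traffic to the broker ports (assumed)

my_function.add_event_source(SelfManagedKafkaEventSource(
    bootstrap_servers=["kafka-broker:9092"],
    topic="some-cool-topic",
    secret=secret,
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
    vpc=vpc,
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS),
    security_group=security_group,
))
```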