class EventSourceMapping (construct)
Language | Type name |
---|---|
.NET | Amazon.CDK.AWS.Lambda.EventSourceMapping |
Go | github.com/aws/aws-cdk-go/awscdk/v2/awslambda#EventSourceMapping |
Java | software.amazon.awscdk.services.lambda.EventSourceMapping |
Python | aws_cdk.aws_lambda.EventSourceMapping |
TypeScript (source) | aws-cdk-lib » aws_lambda » EventSourceMapping |
Implements
IConstruct, IDependable, IResource, IEventSourceMapping
Defines a Lambda EventSourceMapping resource.
Usually, you won't need to define the mapping yourself; event sources typically create it for you. For example, to add an SQS event source to a function:
import * as sqs from 'aws-cdk-lib/aws-sqs';
import * as eventsources from 'aws-cdk-lib/aws-lambda-event-sources';
declare const handler: lambda.Function;
declare const queue: sqs.Queue;
handler.addEventSource(new eventsources.SqsEventSource(queue));
The SqsEventSource
class will automatically create the mapping, and will also
modify the Lambda's execution role so it can consume messages from the queue.
Example
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import * as cdk from 'aws-cdk-lib';
import { aws_kms as kms } from 'aws-cdk-lib';
import { aws_lambda as lambda } from 'aws-cdk-lib';
declare const eventSourceDlq: lambda.IEventSourceDlq;
declare const filters: any;
declare const function_: lambda.Function;
declare const key: kms.Key;
declare const sourceAccessConfigurationType: lambda.SourceAccessConfigurationType;
const eventSourceMapping = new lambda.EventSourceMapping(this, 'MyEventSourceMapping', {
target: function_,
// the properties below are optional
batchSize: 123,
bisectBatchOnError: false,
enabled: false,
eventSourceArn: 'eventSourceArn',
filterEncryption: key,
filters: [{
filtersKey: filters,
}],
kafkaBootstrapServers: ['kafkaBootstrapServers'],
kafkaConsumerGroupId: 'kafkaConsumerGroupId',
kafkaTopic: 'kafkaTopic',
maxBatchingWindow: cdk.Duration.minutes(30),
maxConcurrency: 123,
maxRecordAge: cdk.Duration.minutes(30),
metricsConfig: {
metrics: [lambda.MetricType.EVENT_COUNT],
},
onFailure: eventSourceDlq,
parallelizationFactor: 123,
reportBatchItemFailures: false,
retryAttempts: 123,
sourceAccessConfigurations: [{
type: sourceAccessConfigurationType,
uri: 'uri',
}],
startingPosition: lambda.StartingPosition.TRIM_HORIZON,
startingPositionTimestamp: 123,
supportS3OnFailureDestination: false,
tumblingWindow: cdk.Duration.minutes(30),
});
Initializer
new EventSourceMapping(scope: Construct, id: string, props: EventSourceMappingProps)
Parameters
- scope: Construct
- id: string
- props: EventSourceMappingProps
Construct Props
Name | Type | Description |
---|---|---|
target | IFunction | The target AWS Lambda function. |
batchSize? | number | The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function. |
bisectBatchOnError? | boolean | If the function returns an error, split the batch in two and retry. |
enabled? | boolean | Set to false to disable the event source upon creation. |
eventSourceArn? | string | The Amazon Resource Name (ARN) of the event source. |
filterEncryption? | IKey | Add a customer-managed KMS key to encrypt filter criteria. |
filters? | { [string]: any }[] | Add filter criteria to the event source. |
kafkaBootstrapServers? | string[] | A list of host and port pairs that are the addresses of the Kafka brokers in a self-managed "bootstrap" Kafka cluster that a Kafka client connects to initially to bootstrap itself. |
kafkaConsumerGroupId? | string | The identifier for the Kafka consumer group to join. |
kafkaTopic? | string | The name of the Kafka topic. |
maxBatchingWindow? | Duration | The maximum amount of time to gather records before invoking the function. |
maxConcurrency? | number | The maximum concurrency setting limits the number of concurrent instances of the function that an Amazon SQS event source can invoke. |
maxRecordAge? | Duration | The maximum age of a record that Lambda sends to a function for processing. |
metricsConfig? | MetricsConfig | Configuration for enhanced monitoring metrics collection. When specified, enables collection of additional metrics for the stream event source. |
onFailure? | IEventSourceDlq | An Amazon SQS queue or Amazon SNS topic destination for discarded records. |
parallelizationFactor? | number | The number of batches to process from each shard concurrently. |
reportBatchItemFailures? | boolean | Allow functions to return partially successful responses for a batch of records. |
retryAttempts? | number | The maximum number of times to retry when the function returns an error. |
sourceAccessConfigurations? | SourceAccessConfiguration[] | Specific settings like the authentication protocol or the VPC components to secure access to your event source. |
startingPosition? | StartingPosition | The position in the DynamoDB, Kinesis or MSK stream where AWS Lambda should start reading. |
startingPositionTimestamp? | number | The time from which to start reading, in Unix time seconds. |
supportS3OnFailureDestination? | boolean | Whether to support an Amazon S3 on-failure destination. |
tumblingWindow? | Duration | The size of the tumbling windows to group records sent to DynamoDB or Kinesis. |
target
Type:
IFunction
The target AWS Lambda function.
batchSize?
Type:
number
(optional, default: 100 records for Amazon Kinesis, Amazon DynamoDB, and Amazon MSK.
The default for Amazon SQS is 10 messages. For standard SQS queues, the maximum is 10,000. For FIFO SQS queues, the maximum is 10.)
The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function.
Your function receives an event with all the retrieved records.
Valid Range: Minimum value of 1. Maximum value of 10000.
bisectBatchOnError?
Type:
boolean
(optional, default: false)
If the function returns an error, split the batch in two and retry.
enabled?
Type:
boolean
(optional, default: true)
Set to false to disable the event source upon creation.
eventSourceArn?
Type:
string
(optional, default: not set if using a self managed Kafka cluster, throws an error otherwise)
The Amazon Resource Name (ARN) of the event source.
Any record added to this stream can invoke the Lambda function.
filterEncryption?
Type:
IKey
(optional, default: none)
Add a customer-managed KMS key to encrypt filter criteria.
filters?
Type:
{ [string]: any }[]
(optional, default: none)
Add filter criteria to Event Source.
See also: https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html
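For illustration, a minimal sketch of attaching filter criteria to a mapping, assuming the FilterCriteria and FilterRule helpers from aws_lambda; the queue ARN, field name, and value are placeholders:
import * as lambda from 'aws-cdk-lib/aws-lambda';
declare const fn: lambda.Function;
// Invoke the function only for messages whose body carries eventType "order_created".
new lambda.EventSourceMapping(this, 'FilteredMapping', {
  target: fn,
  eventSourceArn: 'arn:aws:sqs:us-east-1:111122223333:my-queue', // placeholder ARN
  filters: [
    lambda.FilterCriteria.filter({
      body: {
        eventType: lambda.FilterRule.isEqual('order_created'),
      },
    }),
  ],
});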
kafkaBootstrapServers?
Type:
string[]
(optional, default: none)
A list of host and port pairs that are the addresses of the Kafka brokers in a self managed "bootstrap" Kafka cluster that a Kafka client connects to initially to bootstrap itself.
They are in the format abc.example.com:9096.
kafkaConsumerGroupId?
Type:
string
(optional, default: none)
The identifier for the Kafka consumer group to join.
The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 and fulfill the pattern '[a-zA-Z0-9-/:_+=.@-]'. For more information, see Customizable consumer group ID.
kafkaTopic?
Type:
string
(optional, default: no topic)
The name of the Kafka topic.
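As a sketch of how the Kafka-related properties fit together for a self-managed cluster (no eventSourceArn); the broker address, topic name, consumer group ID, and secret ARN are placeholders, and SASL/SCRAM authentication is assumed:
import * as lambda from 'aws-cdk-lib/aws-lambda';
declare const fn: lambda.Function;
// Map a self-managed Kafka topic onto the function.
new lambda.EventSourceMapping(this, 'KafkaMapping', {
  target: fn,
  kafkaBootstrapServers: ['abc.example.com:9096'], // placeholder broker address
  kafkaTopic: 'orders',                            // placeholder topic name
  kafkaConsumerGroupId: 'orders-consumer-group',   // placeholder; cannot be changed later
  startingPosition: lambda.StartingPosition.TRIM_HORIZON,
  sourceAccessConfigurations: [{
    type: lambda.SourceAccessConfigurationType.SASL_SCRAM_512_AUTH,
    uri: 'arn:aws:secretsmanager:us-east-1:111122223333:secret:kafka-creds', // placeholder secret ARN
  }],
});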
maxBatchingWindow?
Type:
Duration
(optional, default: Duration.seconds(0))
The maximum amount of time to gather records before invoking the function.
Maximum of Duration.minutes(5).
maxConcurrency?
Type:
number
(optional, default: No specific limit.)
The maximum concurrency setting limits the number of concurrent instances of the function that an Amazon SQS event source can invoke.
Valid Range: Minimum value of 2. Maximum value of 1000.
See also: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#events-sqs-max-concurrency
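A short sketch of capping SQS-driven concurrency on a mapping; the queue ARN and the chosen limits are placeholder values:
import * as lambda from 'aws-cdk-lib/aws-lambda';
declare const fn: lambda.Function;
// At most 5 concurrent function instances will be invoked by this SQS source.
new lambda.EventSourceMapping(this, 'SqsMapping', {
  target: fn,
  eventSourceArn: 'arn:aws:sqs:us-east-1:111122223333:my-queue', // placeholder ARN
  batchSize: 10,
  maxConcurrency: 5,
});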
maxRecordAge?
Type:
Duration
(optional, default: infinite or until the record expires.)
The maximum age of a record that Lambda sends to a function for processing.
Valid Range:
- Minimum value of 60 seconds
- Maximum value of 7 days
metricsConfig?
Type:
MetricsConfig
(optional, default: Enhanced monitoring is disabled)
Configuration for enhanced monitoring metrics collection. When specified, enables collection of additional metrics for the stream event source.
onFailure?
Type:
IEventSourceDlq
(optional, default: discarded records are ignored)
An Amazon SQS queue or Amazon SNS topic destination for discarded records.
parallelizationFactor?
Type:
number
(optional, default: 1)
The number of batches to process from each shard concurrently.
Valid Range:
- Minimum value of 1
- Maximum value of 10
reportBatchItemFailures?
Type:
boolean
(optional, default: false)
Allow functions to return partially successful responses for a batch of records.
retryAttempts?
Type:
number
(optional, default: infinite or until the record expires.)
The maximum number of times to retry when the function returns an error.
Set to undefined if you want Lambda to keep retrying infinitely or until the record expires.
Valid Range:
- Minimum value of 0
- Maximum value of 10000
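The error-handling properties (bisectBatchOnError, maxRecordAge, retryAttempts, reportBatchItemFailures, onFailure) are typically combined for stream sources. A sketch, assuming the SqsDlq destination from aws-lambda-event-sources and a placeholder Kinesis stream ARN:
import * as cdk from 'aws-cdk-lib';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as eventsources from 'aws-cdk-lib/aws-lambda-event-sources';
declare const fn: lambda.Function;
declare const deadLetterQueue: sqs.Queue;
// Retry failing batches a few times, then send record metadata to an SQS dead-letter destination.
new lambda.EventSourceMapping(this, 'StreamMapping', {
  target: fn,
  eventSourceArn: 'arn:aws:kinesis:us-east-1:111122223333:stream/my-stream', // placeholder ARN
  startingPosition: lambda.StartingPosition.LATEST,
  bisectBatchOnError: true,
  retryAttempts: 3,
  maxRecordAge: cdk.Duration.hours(1),
  reportBatchItemFailures: true,
  onFailure: new eventsources.SqsDlq(deadLetterQueue),
});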
sourceAccessConfigurations?
Type:
SourceAccessConfiguration[]
(optional, default: none)
Specific settings like the authentication protocol or the VPC components to secure access to your event source.
startingPosition?
Type:
StartingPosition
(optional, default: no starting position)
The position in the DynamoDB, Kinesis or MSK stream where AWS Lambda should start reading.
startingPositionTimestamp?
Type:
number
(optional, default: no timestamp)
The time from which to start reading, in Unix time seconds.
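A sketch of pairing startingPositionTimestamp with StartingPosition.AT_TIMESTAMP; the stream ARN and the timestamp are placeholders:
import * as lambda from 'aws-cdk-lib/aws-lambda';
declare const fn: lambda.Function;
// Start reading the stream from a fixed point in time (Unix seconds).
new lambda.EventSourceMapping(this, 'TimestampMapping', {
  target: fn,
  eventSourceArn: 'arn:aws:kinesis:us-east-1:111122223333:stream/my-stream', // placeholder ARN
  startingPosition: lambda.StartingPosition.AT_TIMESTAMP,
  startingPositionTimestamp: 1700000000, // placeholder Unix time in seconds
});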
supportS3OnFailureDestination?
Type:
boolean
(optional, default: false)
Whether to support an Amazon S3 on-failure destination.
Currently, only MSK and self-managed Kafka event sources support an S3 on-failure destination.
tumblingWindow?
Type:
Duration
(optional, default: None)
The size of the tumbling windows to group records sent to DynamoDB or Kinesis.
See also: https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html#services-ddb-windows
Valid Range: 0 - 15 minutes
Properties
Name | Type | Description |
---|---|---|
env | ResourceEnvironment | The environment this resource belongs to. |
eventSourceMappingArn | string | The ARN of the event source mapping (i.e. arn:aws:lambda:region:account-id:event-source-mapping/event-source-mapping-id). |
eventSourceMappingId | string | The identifier for this EventSourceMapping. |
node | Node | The tree node. |
stack | Stack | The stack in which this resource is defined. |
env
Type:
ResourceEnvironment
The environment this resource belongs to.
For resources that are created and managed by the CDK (generally, those created by creating new class instances like Role, Bucket, etc.), this is always the same as the environment of the stack they belong to; however, for imported resources (those obtained from static methods like fromRoleArn, fromBucketName, etc.), that might be different than the stack they were imported into.
eventSourceMappingArn
Type:
string
The ARN of the event source mapping (i.e. arn:aws:lambda:region:account-id:event-source-mapping/event-source-mapping-id).
eventSourceMappingId
Type:
string
The identifier for this EventSourceMapping.
node
Type:
Node
The tree node.
stack
Type:
Stack
The stack in which this resource is defined.
Methods
Name | Description |
---|---|
applyRemovalPolicy(policy) | Apply the given removal policy to this resource. |
toString() | Returns a string representation of this construct. |
static fromEventSourceMappingId(scope, id, eventSourceMappingId) | Import an event source into this stack from its event source id. |
applyRemovalPolicy(policy)
public applyRemovalPolicy(policy: RemovalPolicy): void
Parameters
- policy: RemovalPolicy
Apply the given removal policy to this resource.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you've removed it from the CDK application or because you've made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY
), or left in your AWS
account for data recovery and cleanup later (RemovalPolicy.RETAIN
).
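For example, a mapping could be retained when it is removed from the stack (a sketch; the variable name is a placeholder):
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
declare const mapping: lambda.EventSourceMapping;
// Leave the mapping in the account instead of deleting it with the stack.
mapping.applyRemovalPolicy(cdk.RemovalPolicy.RETAIN);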
toString()
public toString(): string
Returns
string
Returns a string representation of this construct.
static fromEventSourceMappingId(scope, id, eventSourceMappingId)
public static fromEventSourceMappingId(scope: Construct, id: string, eventSourceMappingId: string): IEventSourceMapping
Parameters
- scope: Construct
- id: string
- eventSourceMappingId: string
Returns
IEventSourceMapping
Import an event source into this stack from its event source id.
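A usage sketch; the mapping ID shown is a placeholder:
import * as lambda from 'aws-cdk-lib/aws-lambda';
// Reference an existing mapping that was created outside this stack.
const imported = lambda.EventSourceMapping.fromEventSourceMappingId(this, 'ImportedMapping', 'a1b2c3d4-5678-90ab-cdef-EXAMPLE11111');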