

# Configuring Amazon MSK event sources for Lambda
<a name="with-msk-configure"></a>

To use an Amazon MSK cluster as an event source for your Lambda function, you create an [event source mapping](invocation-eventsourcemapping.md) that connects the two resources. This page describes how to create an event source mapping for Amazon MSK.

This page assumes that you've already properly configured your MSK cluster and the [Amazon Virtual Private Cloud (VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) it resides in. If you need to set up your cluster or VPC, see [Configuring your Amazon MSK cluster and Amazon VPC network for Lambda](with-msk-cluster-network.md). To configure retry behavior for error handling, see [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md).

**Topics**
+ [Using an Amazon MSK cluster as an event source](#msk-esm-overview)
+ [Configuring Amazon MSK cluster authentication methods in Lambda](msk-cluster-auth.md)
+ [Creating a Lambda event source mapping for an Amazon MSK event source](msk-esm-create.md)
+ [Creating cross-account event source mappings in Lambda](msk-cross-account.md)
+ [All Amazon MSK event source configuration parameters in Lambda](msk-esm-parameters.md)

## Using an Amazon MSK cluster as an event source
<a name="msk-esm-overview"></a>

When you add your Apache Kafka or Amazon MSK cluster as a trigger for your Lambda function, the cluster is used as an [event source](invocation-eventsourcemapping.md).

Lambda reads event data from the Kafka topics that you specify as `Topics` in a [CreateEventSourceMapping](https://docs.aws.amazon.com/lambda/latest/api/API_CreateEventSourceMapping.html) request, based on the [starting position](kafka-starting-positions.md) that you specify. After successful processing, Lambda commits the offsets for that batch to your Kafka cluster.

Lambda reads messages sequentially for each Kafka topic partition. A single Lambda payload can contain messages from multiple partitions. When more records are available, Lambda continues processing records in batches, based on the `BatchSize` value that you specify in a [CreateEventSourceMapping](https://docs.aws.amazon.com/lambda/latest/api/API_CreateEventSourceMapping.html) request, until your function catches up with the topic.

After Lambda processes each batch, it commits the offsets of the messages in that batch. If your function returns an error for any of the messages in a batch, Lambda retries the whole batch of messages until processing succeeds or the messages expire. You can send records that fail all retry attempts to an on-failure destination for later processing.
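Because a failed record causes Lambda to retry the whole batch, your handler should be idempotent. The following is a minimal handler sketch; it assumes producers publish JSON-encoded message values, and the topic name is illustrative:

```
import base64
import json

def lambda_handler(event, context):
    """Process a batch of Amazon MSK records.

    Records arrive grouped by topic-partition; record keys and values
    are base64-encoded by the event source mapping.
    """
    for topic_partition, records in event["records"].items():
        for record in records:
            # Decode the base64-encoded payload, then parse it as JSON
            # (assumes producers publish JSON-encoded values).
            payload = json.loads(base64.b64decode(record["value"]))
            print(f"{topic_partition} offset {record['offset']}: {payload}")
    # Raising an exception here would cause Lambda to retry the whole batch.
    return {"batchProcessed": True}
```

If processing any record raises an exception, the entire batch is retried, so side effects (database writes, downstream calls) should tolerate reprocessing.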

**Note**  
While Lambda functions typically have a maximum timeout limit of 15 minutes, event source mappings for Amazon MSK, self-managed Apache Kafka, Amazon DocumentDB, and Amazon MQ for ActiveMQ and RabbitMQ only support functions with maximum timeout limits of 14 minutes.

# Configuring Amazon MSK cluster authentication methods in Lambda
<a name="msk-cluster-auth"></a>

Lambda needs permission to access your Amazon MSK cluster, retrieve records, and perform other tasks. Amazon MSK supports several ways to authenticate with your MSK cluster.

**Topics**
+ [Unauthenticated access](#msk-unauthenticated)
+ [SASL/SCRAM authentication](#msk-sasl-scram)
+ [Mutual TLS authentication](#msk-mtls)
+ [IAM authentication](#msk-iam-auth)
+ [How Lambda chooses a bootstrap broker](#msk-bootstrap-brokers)

## Unauthenticated access
<a name="msk-unauthenticated"></a>

If no clients access the cluster over the internet, you can use unauthenticated access.

## SASL/SCRAM authentication
<a name="msk-sasl-scram"></a>

Lambda supports [Simple Authentication and Security Layer/Salted Challenge Response Authentication Mechanism (SASL/SCRAM)](https://docs.aws.amazon.com/msk/latest/developerguide/msk-password-tutorial.html) authentication, with the SHA-512 hash function and Transport Layer Security (TLS) encryption. For Lambda to connect to the cluster, store the authentication credentials (username and password) in a Secrets Manager secret, and reference this secret when configuring your event source mapping.

For more information about using Secrets Manager, see [Sign-in credentials authentication with Secrets Manager](https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html) in the *Amazon Managed Streaming for Apache Kafka Developer Guide*.
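For reference, the secret is a JSON object containing the sign-in credentials. A minimal example with placeholder values (note that Amazon MSK requires secrets associated with a cluster to have names beginning with `AmazonMSK_`):

```
{
  "username": "my-cluster-user",
  "password": "my-cluster-password"
}
```

Store this secret in Secrets Manager and reference its ARN in your event source mapping.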

**Note**  
Amazon MSK doesn't support SASL/PLAIN authentication.

## Mutual TLS authentication
<a name="msk-mtls"></a>

Mutual TLS (mTLS) provides two-way authentication between the client and the server. The client sends a certificate to the server for the server to verify the client. The server also sends a certificate to the client for the client to verify the server.

For Amazon MSK integrations with Lambda, your MSK cluster acts as the server, and Lambda acts as the client.
+ For Lambda to verify your MSK cluster, you configure a client certificate as a secret in Secrets Manager, and reference this certificate in your event source mapping configuration. The client certificate must be signed by a certificate authority (CA) in the server's trust store.
+ The MSK cluster also sends a server certificate to Lambda. The server certificate must be signed by a certificate authority (CA) in the AWS trust store.

Amazon MSK doesn't support self-signed server certificates. All brokers in Amazon MSK use [public certificates](https://docs.aws.amazon.com/msk/latest/developerguide/msk-encryption.html) signed by [Amazon Trust Services CAs](https://www.amazontrust.com/repository/), which Lambda trusts by default.

### Configuring the mTLS secret
<a name="mtls-auth-secret"></a>

The `CLIENT_CERTIFICATE_TLS_AUTH` secret requires a certificate field and a private key field. For an encrypted private key, the secret also requires the private key password. Both the certificate and private key must be in PEM format.

**Note**  
Lambda supports the [PBES1](https://datatracker.ietf.org/doc/html/rfc2898/#section-6.1) private key encryption algorithm, but not PBES2.

The certificate field must contain a list of certificates, beginning with the client certificate, followed by any intermediate certificates, and ending with the root certificate. Each certificate must start on a new line with the following structure:

```
-----BEGIN CERTIFICATE-----
<certificate contents>
-----END CERTIFICATE-----
```

Secrets Manager supports secrets up to 65,536 bytes, which is enough space for long certificate chains.
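Before storing the secret, you can sanity-check that the certificate field contains well-formed PEM blocks. The following sketch only checks the BEGIN/END markers; it does not verify chain order or signatures:

```
import re

def split_pem_chain(pem_text):
    """Split a PEM bundle into individual certificate blocks and
    validate that each BEGIN marker has a matching END marker."""
    blocks = re.findall(
        r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
        pem_text,
        flags=re.DOTALL,
    )
    if not blocks:
        raise ValueError("no PEM certificate blocks found")
    # The chain should start with the client certificate and end with
    # the root; ordering can't be verified without parsing the certs.
    return blocks

chain = split_pem_chain(
    "-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n"
)
print(len(chain))  # number of certificate blocks found
```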

The private key must be in [PKCS #8](https://datatracker.ietf.org/doc/html/rfc5208) format, with the following structure:

```
-----BEGIN PRIVATE KEY-----
<private key contents>
-----END PRIVATE KEY-----
```

For an encrypted private key, use the following structure:

```
-----BEGIN ENCRYPTED PRIVATE KEY-----
<private key contents>
-----END ENCRYPTED PRIVATE KEY-----
```

The following example shows the contents of a secret for mTLS authentication using an encrypted private key. For an encrypted private key, you include the private key password in the secret.

```
{
 "privateKeyPassword": "testpassword",
 "certificate": "-----BEGIN CERTIFICATE-----
MIIE5DCCAsygAwIBAgIRAPJdwaFaNRrytHBto0j5BA0wDQYJKoZIhvcNAQELBQAw
...
j0Lh4/+1HfgyE2KlmII36dg4IMzNjAFEBZiCRoPimO40s1cRqtFHXoal0QQbIlxk
cmUuiAii9R0=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIFgjCCA2qgAwIBAgIQdjNZd6uFf9hbNC5RdfmHrzANBgkqhkiG9w0BAQsFADBb
...
rQoiowbbk5wXCheYSANQIfTZ6weQTgiCHCCbuuMKNVS95FkXm0vqVD/YpXKwA/no
c8PH3PSoAaRwMMgOSA2ALJvbRz8mpg==
-----END CERTIFICATE-----",
 "privateKey": "-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIFKzBVBgkqhkiG9w0BBQ0wSDAnBgkqhkiG9w0BBQwwGgQUiAFcK5hT/X7Kjmgp
...
QrSekqF+kWzmB6nAfSzgO9IaoAaytLvNgGTckWeUkWn/V0Ck+LdGUXzAC4RxZnoQ
zp2mwJn2NYB7AZ7+imp0azDZb+8YG2aUCiyqb6PnnA==
-----END ENCRYPTED PRIVATE KEY-----"
}
```

For more information about mTLS for Amazon MSK, and instructions on how to generate a client certificate, see [Mutual TLS client authentication for Amazon MSK](https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html) in the *Amazon Managed Streaming for Apache Kafka Developer Guide*.

## IAM authentication
<a name="msk-iam-auth"></a>

You can use AWS Identity and Access Management (IAM) to authenticate the identity of clients that connect to the MSK cluster. With IAM auth, Lambda relies on the permissions in your function's [execution role](lambda-intro-execution-role.md) to connect to the cluster, retrieve records, and perform other required actions. For a sample policy that contains the necessary permissions, see [Create authorization policies for the IAM role](https://docs.aws.amazon.com/msk/latest/developerguide/create-iam-access-control-policies.html) in the *Amazon Managed Streaming for Apache Kafka Developer Guide*.

If IAM auth is active on your MSK cluster, and you don't provide a secret, Lambda automatically defaults to using IAM auth.

For more information about IAM authentication in Amazon MSK, see [IAM access control](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html).
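As a rough sketch, the execution-role policy for IAM auth typically grants `kafka-cluster` actions like the following. The exact actions and resource ARNs depend on your cluster, topics, and consumer groups (the ARNs below are placeholders), so start from the sample policy in the linked guide:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:DescribeGroup",
        "kafka-cluster:AlterGroup",
        "kafka-cluster:DescribeTopic",
        "kafka-cluster:ReadData",
        "kafka-cluster:DescribeClusterDynamicConfiguration"
      ],
      "Resource": [
        "arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster-name/*",
        "arn:aws:kafka:us-east-1:111122223333:topic/my-cluster-name/*",
        "arn:aws:kafka:us-east-1:111122223333:group/my-cluster-name/*"
      ]
    }
  ]
}
```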

## How Lambda chooses a bootstrap broker
<a name="msk-bootstrap-brokers"></a>

Lambda chooses a [bootstrap broker](https://docs.aws.amazon.com/msk/latest/developerguide/msk-get-bootstrap-brokers.html) based on the authentication methods available on your cluster, and whether you provide a secret for authentication. If you provide a secret for mTLS or SASL/SCRAM, Lambda automatically chooses that auth method. If you don't provide a secret, Lambda selects the strongest auth method that's active on your cluster. The following is the order of priority in which Lambda selects a broker, from strongest to weakest auth:
+ mTLS (secret provided for mTLS)
+ SASL/SCRAM (secret provided for SASL/SCRAM)
+ SASL IAM (no secret provided, and IAM auth active)
+ Unauthenticated TLS (no secret provided, and IAM auth not active)
+ Plaintext (no secret provided, and both IAM auth and unauthenticated TLS are not active)

**Note**  
If Lambda can't connect to the most secure broker type, Lambda doesn't attempt to connect to a different (weaker) broker type. If you want Lambda to choose a weaker broker type, deactivate all stronger auth methods on your cluster.
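The priority order above can be modeled as a simple decision function. This is an illustrative sketch of the documented behavior, not Lambda's actual implementation:

```
def choose_auth_method(secret_type=None, iam_auth_active=False,
                       unauthenticated_tls_active=False):
    """Mirror the documented broker-selection priority, strongest first.

    secret_type: "mTLS", "SASL/SCRAM", or None if no secret is provided.
    """
    if secret_type == "mTLS":
        return "mTLS"
    if secret_type == "SASL/SCRAM":
        return "SASL/SCRAM"
    # No secret provided: fall back to the strongest active method.
    if iam_auth_active:
        return "SASL IAM"
    if unauthenticated_tls_active:
        return "Unauthenticated TLS"
    return "Plaintext"
```

Note that a provided secret always wins: per the priority list, a secret for mTLS or SASL/SCRAM takes precedence even when IAM auth is also active on the cluster.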

# Creating a Lambda event source mapping for an Amazon MSK event source
<a name="msk-esm-create"></a>

To create an event source mapping, you can use the Lambda console, the [AWS Command Line Interface (CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), or an [AWS SDK](https://aws.amazon.com/getting-started/tools-sdks/).

**Note**  
When you create the event source mapping, Lambda creates a [hyperplane ENI](configuration-vpc.md#configuration-vpc-enis) in the private subnet that contains your MSK cluster, allowing Lambda to establish a secure connection. This hyperplane ENI uses the subnet and security group configuration of your MSK cluster, not that of your Lambda function.

The following console steps add an Amazon MSK cluster as a trigger for your Lambda function. Under the hood, this creates an event source mapping resource.

**To add an Amazon MSK trigger to your Lambda function (console)**

1. Open the [Function page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the name of the Lambda function you want to add an Amazon MSK trigger to.

1. Under **Function overview**, choose **Add trigger**.

1. Under **Trigger configuration**, choose **MSK**.

1. To specify your Kafka cluster details, do the following:

   1. For **MSK cluster**, select your cluster.

   1. For **Topic name**, enter the name of the Kafka topic to consume messages from.

   1. For **Consumer group ID**, enter the ID of a Kafka consumer group to join, if applicable. For more information, see [Customizable consumer group ID in Lambda](kafka-consumer-group-id.md).

1. For **Cluster authentication**, make the necessary configurations. For more information about cluster authentication, see [Configuring Amazon MSK cluster authentication methods in Lambda](msk-cluster-auth.md).
   + Toggle on **Use authentication** if you want Lambda to perform authentication with your MSK cluster when establishing a connection. Authentication is recommended.
   + If you use authentication, for **Authentication method**, choose the authentication method to use.
   + If you use authentication, for **Secrets Manager key**, choose the Secrets Manager key that contains the authentication credentials needed to access your cluster.

1. Under **Event poller configuration**, make the necessary configurations.
   + Choose **Activate trigger** to enable the trigger immediately after creation.
   + Choose whether you want to **Configure provisioned mode** for your event source mapping. For more information, see [Apache Kafka event poller scaling modes in Lambda](kafka-scaling-modes.md).
     + If you configure provisioned mode, enter a value for **Minimum event pollers**, a value for **Maximum event pollers**, and an optional value for `PollerGroupName` to group multiple event source mappings within the same event source VPC.
   + For **Starting position**, choose how you want Lambda to start reading from your stream. For more information, see [Apache Kafka polling and stream starting positions in Lambda](kafka-starting-positions.md).

1. Under **Batching**, make the necessary configurations. For more information about batching, see [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching).

   1. For **Batch size**, enter the maximum number of messages to receive in a single batch.

   1. For **Batch window**, enter the maximum number of seconds that Lambda spends gathering records before invoking the function.

1. Under **Filtering**, make the necessary configurations. For more information about filtering, see [Filtering events from Amazon MSK and self-managed Apache Kafka event sources](kafka-filtering.md).
   + For **Filter criteria**, add filter criteria definitions to determine whether or not to process an event.

1. Under **Failure handling**, make the necessary configurations. For more information about failure handling, see [Capturing discarded batches for Amazon MSK and self-managed Apache Kafka event sources](kafka-on-failure.md).
   + For **On-failure destination**, specify the ARN of your on-failure destination.

1. For **Tags**, enter the tags to associate with this event source mapping.

1. To create the trigger, choose **Add**.

You can also create the event source mapping using the AWS CLI with the [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) command. The following example creates an event source mapping that maps the Lambda function `my-kafka-function` to the `AWSKafkaTopic` topic, starting from the `LATEST` message. This command also uses the [SourceAccessConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_SourceAccessConfiguration.html) object to instruct Lambda to use [SASL/SCRAM](msk-cluster-auth.md#msk-sasl-scram) authentication when connecting to the cluster.

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/fc2f5bdf-fd1b-45ad-85dd-15b4a5a6247e-2 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function \
  --source-access-configurations '[{"Type": "SASL_SCRAM_512_AUTH","URI": "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-secret"}]'
```

If the cluster uses [mTLS authentication](msk-cluster-auth.md#msk-mtls), include a [SourceAccessConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_SourceAccessConfiguration.html) object that specifies `CLIENT_CERTIFICATE_TLS_AUTH` and a Secrets Manager key ARN. This is shown in the following command:

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/fc2f5bdf-fd1b-45ad-85dd-15b4a5a6247e-2 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function \
  --source-access-configurations '[{"Type": "CLIENT_CERTIFICATE_TLS_AUTH","URI": "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-secret"}]'
```

When the cluster uses [IAM authentication](msk-cluster-auth.md#msk-iam-auth), you don't need a [SourceAccessConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_SourceAccessConfiguration.html) object. This is shown in the following command:

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/fc2f5bdf-fd1b-45ad-85dd-15b4a5a6247e-2 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function
```

# Creating cross-account event source mappings in Lambda
<a name="msk-cross-account"></a>

You can use [multi-VPC private connectivity](https://docs.aws.amazon.com/msk/latest/developerguide/aws-access-mult-vpc.html) to connect a Lambda function to a provisioned MSK cluster in a different AWS account. Multi-VPC connectivity uses AWS PrivateLink, which keeps all traffic within the AWS network.

**Note**  
You can't create cross-account event source mappings for serverless MSK clusters.

To create a cross-account event source mapping, you must first [configure multi-VPC connectivity for the MSK cluster](https://docs.aws.amazon.com/msk/latest/developerguide/aws-access-mult-vpc.html#mvpc-cluster-owner-action-turn-on). When you create the event source mapping, use the managed VPC connection ARN instead of the cluster ARN, as shown in the following examples. The [CreateEventSourceMapping](https://docs.aws.amazon.com/lambda/latest/api/API_CreateEventSourceMapping.html) operation also differs depending on which authentication type the MSK cluster uses.
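Since the main structural difference from a single-account mapping is the ARN you pass, it can help to verify that you're using the managed VPC connection ARN (resource type `vpc-connection`) rather than the cluster ARN. A small illustrative check, using placeholder ARNs in the style of the examples below:

```
def is_managed_vpc_connection_arn(arn):
    """Return True when the ARN names a managed VPC connection
    (resource type 'vpc-connection'), which cross-account event
    source mappings require instead of the cluster ARN."""
    resource = arn.split(":", 5)[5]
    return resource.startswith("vpc-connection/")

cluster_arn = ("arn:aws:kafka:us-east-1:111122223333:"
               "cluster/my-cluster/fc2f5bdf-fd1b-45ad-85dd-15b4a5a6247e-2")
connection_arn = ("arn:aws:kafka:us-east-1:111122223333:"
                  "vpc-connection/444455556666/my-cluster-name/"
                  "51jn98b4-0a61-46cc-b0a6-61g9a3d797d5-7")
```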

**Example — Create cross-account event source mapping for cluster that uses IAM authentication**  
When the cluster uses [IAM role-based authentication](msk-cluster-auth.md#msk-iam-auth), you don't need a [SourceAccessConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_SourceAccessConfiguration.html) object. Example:  

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:vpc-connection/444455556666/my-cluster-name/51jn98b4-0a61-46cc-b0a6-61g9a3d797d5-7 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function
```

**Example — Create cross-account event source mapping for cluster that uses SASL/SCRAM authentication**  
If the cluster uses [SASL/SCRAM authentication](msk-cluster-auth.md#msk-sasl-scram), you must include a [SourceAccessConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_SourceAccessConfiguration.html) object that specifies `SASL_SCRAM_512_AUTH` and a Secrets Manager secret ARN.  
There are two ways to use secrets for cross-account Amazon MSK event source mappings with SASL/SCRAM authentication:  
+ Create a secret in the Lambda function account and sync it with the cluster secret. [Create a rotation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) to keep the two secrets in sync. This option allows you to control the secret from the function account.
+ Use the secret that's associated with the MSK cluster. This secret must allow cross-account access to the Lambda function account. For more information, see [Permissions to AWS Secrets Manager secrets for users in a different account](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples_cross.html).

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:vpc-connection/444455556666/my-cluster-name/51jn98b4-0a61-46cc-b0a6-61g9a3d797d5-7 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function \
  --source-access-configurations '[{"Type": "SASL_SCRAM_512_AUTH","URI": "arn:aws:secretsmanager:us-east-1:444455556666:secret:my-secret"}]'
```

**Example — Create cross-account event source mapping for cluster that uses mTLS authentication**  
If the cluster uses [mTLS authentication](msk-cluster-auth.md#msk-mtls), you must include a [SourceAccessConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_SourceAccessConfiguration.html) object that specifies `CLIENT_CERTIFICATE_TLS_AUTH` and a Secrets Manager secret ARN. The secret can be stored in the cluster account or the Lambda function account.  

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:vpc-connection/444455556666/my-cluster-name/51jn98b4-0a61-46cc-b0a6-61g9a3d797d5-7 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function \
  --source-access-configurations '[{"Type": "CLIENT_CERTIFICATE_TLS_AUTH","URI": "arn:aws:secretsmanager:us-east-1:444455556666:secret:my-secret"}]'
```

# All Amazon MSK event source configuration parameters in Lambda
<a name="msk-esm-parameters"></a>

All Lambda event source types share the same [CreateEventSourceMapping](https://docs.aws.amazon.com/lambda/latest/api/API_CreateEventSourceMapping.html) and [UpdateEventSourceMapping](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateEventSourceMapping.html) API operations. However, only some of the parameters apply to Amazon MSK, as shown in the following table.


| Parameter | Required | Default | Notes | 
| --- | --- | --- | --- | 
|  AmazonManagedKafkaEventSourceConfig  |  N  |  Contains the ConsumerGroupId field, which defaults to a unique value.  |  Can set only on Create  | 
|  BatchSize  |  N  |  100  |  Maximum: 10,000  | 
|  DestinationConfig  |  N  |  N/A  |  [Capturing discarded batches for Amazon MSK and self-managed Apache Kafka event sources](kafka-on-failure.md)  | 
|  Enabled  |  N  |  True  |    | 
|  BisectBatchOnFunctionError  |  N  |  False  |  [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md)  | 
|  FunctionResponseTypes  |  N  |  N/A  |  [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md)  | 
|  MaximumRecordAgeInSeconds  |  N  |  -1 (infinite)  |  [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md)  | 
|  MaximumRetryAttempts  |  N  |  -1 (infinite)  |  [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md)  | 
|  EventSourceArn  |  Y  | N/A |  Can set only on Create  | 
|  FilterCriteria  |  N  |  N/A  |  [Control which events Lambda sends to your function](invocation-eventfiltering.md)  | 
|  FunctionName  |  Y  |  N/A  |    | 
|  KMSKeyArn  |  N  |  N/A  |  [Encryption of filter criteria](invocation-eventfiltering.md#filter-criteria-encryption)  | 
|  MaximumBatchingWindowInSeconds  |  N  |  500 ms  |  [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching)  | 
|  ProvisionedPollersConfig  |  N  |  `MinimumPollers`: 1 if not specified; `MaximumPollers`: 200 if not specified; `PollerGroupName`: N/A  |  [Provisioned mode](kafka-scaling-modes.md#kafka-provisioned-mode)  | 
|  SourceAccessConfigurations  |  N  |  No credentials  |  SASL/SCRAM (`SASL_SCRAM_512_AUTH`) or mTLS (`CLIENT_CERTIFICATE_TLS_AUTH`) authentication credentials for your event source  | 
|  StartingPosition  |  Y  | N/A |  `AT_TIMESTAMP`, `TRIM_HORIZON`, or `LATEST`. Can set only on Create  | 
|  StartingPositionTimestamp  |  N  |  N/A  |  Required if StartingPosition is set to `AT_TIMESTAMP`  | 
|  Tags  |  N  |  N/A  |  [Using tags on event source mappings](tags-esm.md)  | 
|  Topics  |  Y  | N/A |  Kafka topic name Can set only on Create  | 

**Note**  
When you specify a `PollerGroupName`, multiple event source mappings (ESMs) within the same Amazon VPC can share Event Poller Unit (EPU) capacity. You can use this option to optimize provisioned mode costs for your ESMs. Requirements for ESM grouping:  
+ ESMs must be within the same Amazon VPC.
+ A poller group can contain a maximum of 100 ESMs.
+ The aggregate maximum pollers across all ESMs in a group cannot exceed 2,000.

You can update the `PollerGroupName` to move an ESM to a different group, or remove an ESM from a group by setting `PollerGroupName` to an empty string ("").
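The same parameters apply when you use an AWS SDK. The following sketch builds a request for boto3 (the AWS SDK for Python) that mirrors the table above; the ARN and names are placeholders:

```
# Parameters for CreateEventSourceMapping that apply to Amazon MSK,
# mirroring the table above. The ARN and names are placeholders.
params = {
    "EventSourceArn": ("arn:aws:kafka:us-east-1:111122223333:"
                       "cluster/my-cluster/fc2f5bdf-fd1b-45ad-85dd-15b4a5a6247e-2"),
    "FunctionName": "my-kafka-function",
    "Topics": ["AWSKafkaTopic"],   # can set only on Create
    "StartingPosition": "LATEST",  # or TRIM_HORIZON / AT_TIMESTAMP
    "BatchSize": 100,              # default; maximum 10,000
    "MaximumBatchingWindowInSeconds": 5,
}

# With AWS credentials configured, you would create the mapping with:
#   import boto3
#   boto3.client("lambda").create_event_source_mapping(**params)
```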