

# What is MSK Serverless?
<a name="serverless"></a>

**Note**  
MSK Serverless is available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Frankfurt), Europe (Stockholm), Europe (Ireland), Europe (Paris), and Europe (London) Regions.

MSK Serverless is a cluster type for Amazon MSK that makes it possible for you to run Apache Kafka without having to manage and scale cluster capacity. It automatically provisions and scales capacity while managing the partitions in your topic, so you can stream data without thinking about right-sizing or scaling clusters. MSK Serverless offers a throughput-based pricing model, so you pay only for what you use. Consider using a serverless cluster if your applications need on-demand streaming capacity that scales up and down automatically.

MSK Serverless is fully compatible with Apache Kafka, so you can use any compatible client applications to produce and consume data. It also integrates with the following services:
+ AWS PrivateLink to provide private connectivity
+ AWS Identity and Access Management (IAM) for authentication and authorization using Java and non-Java languages. For instructions on configuring clients for IAM, see [Configure clients for IAM access control](configure-clients-for-iam-access-control.md).
+ AWS Glue Schema Registry for schema management
+ Amazon Managed Service for Apache Flink for Apache Flink-based stream processing
+ AWS Lambda for event processing

**Note**  
MSK Serverless requires IAM access control for all clusters. Apache Kafka access control lists (ACLs) are not supported. For more information, see [IAM access control](iam-access-control.md).  
For information about the service quotas that apply to MSK Serverless, see [MSK Serverless quota](limits.md#serverless-quota).

To help you get started with serverless clusters, and to learn more about configuration and monitoring options for serverless clusters, see the following.

**Topics**
+ [Use MSK Serverless clusters](serverless-getting-started.md)
+ [Configuration properties for MSK Serverless clusters](serverless-config.md)
+ [Configure dual-stack network type](serverless-config-dual-stack.md)
+ [Monitor MSK Serverless clusters](serverless-monitoring.md)

# Use MSK Serverless clusters
<a name="serverless-getting-started"></a>

This tutorial shows you an example of how you can create an MSK Serverless cluster, create a client machine that can access it, and use the client to create topics on the cluster and to write data to those topics. This exercise doesn't represent all the options that you can choose when you create a serverless cluster. In different parts of this exercise, we choose default options for simplicity. This doesn't mean that they're the only options that work for setting up a serverless cluster. You can also use the AWS CLI or the Amazon MSK API. For more information, see the [Amazon MSK API Reference 2.0](https://docs.aws.amazon.com/MSK/2.0/APIReference/what-is-msk.html).

**Topics**
+ [Create an MSK Serverless cluster](create-serverless-cluster.md)
+ [Create an IAM role for topics on MSK Serverless cluster](create-iam-role.md)
+ [Create a client machine to access MSK Serverless cluster](create-serverless-cluster-client.md)
+ [Create an Apache Kafka topic](msk-serverless-create-topic.md)
+ [Produce and consume data in MSK Serverless](msk-serverless-produce-consume.md)
+ [Delete resources that you created for MSK Serverless](delete-resources.md)

# Create an MSK Serverless cluster
<a name="create-serverless-cluster"></a>

In this step, you perform two tasks. First, you create an MSK Serverless cluster with default settings. Second, you gather information about the cluster. This is information that you need in later steps when you create a client that can send data to the cluster.

**To create a serverless cluster**

1. Sign in to the AWS Management Console, and open the Amazon MSK console at [https://console.aws.amazon.com/msk/home](https://console.aws.amazon.com/msk/home).

1. Choose **Create cluster**.

1. For **Creation method**, leave the **Quick create** option selected. The **Quick create** option lets you create a serverless cluster with default settings.

1. For **Cluster name**, enter a descriptive name, such as **msk-serverless-tutorial-cluster**.

1. For **General cluster properties**, choose **Serverless** as the **Cluster type**. Use the default values for the remaining **General cluster** properties.

1. Note the table under **All cluster settings**. This table lists the default values for important settings such as networking and availability, and indicates whether you can change each setting after you create the cluster. To change a setting before you create the cluster, you should choose the **Custom create** option under **Creation method**.
**Note**  
You can connect clients from up to five different VPCs with MSK Serverless clusters. To help client applications switch over to another Availability Zone in the event of an outage, you must specify at least two subnets in each VPC.

1. Choose **Create cluster**.

**To gather information about the cluster**

1. In the **Cluster summary** section, choose **View client information**. This button remains unavailable until Amazon MSK finishes creating the cluster, which might take a few minutes.

1. Copy the string under the label **Endpoint**. This is your bootstrap server string.

1. Choose the **Properties** tab.

1. Under the **Networking settings** section, copy the IDs of the subnets and the security group and save them because you need this information later to create a client machine.

1. Choose any of the subnets. This opens the Amazon VPC console. Find the ID of the Amazon VPC that is associated with the subnet. Save this Amazon VPC ID for later use.
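
You can also retrieve the same connection details with the Amazon MSK API instead of copying them from the console. The following is a minimal sketch that uses boto3's `get_bootstrap_brokers` call; the client is passed in as a parameter so the helper is easy to test, and the cluster ARN in the usage note is a placeholder for your own:

```python
def get_serverless_bootstrap_servers(kafka_client, cluster_arn):
    """Return the SASL/IAM bootstrap-server string for an MSK Serverless cluster.

    kafka_client is a boto3 "kafka" client (or any object with the same method).
    """
    response = kafka_client.get_bootstrap_brokers(ClusterArn=cluster_arn)
    # Serverless clusters expose a SASL/IAM endpoint (port 9098).
    return response["BootstrapBrokerStringSaslIam"]
```

With real credentials, you would pass `boto3.client("kafka")` and your cluster's ARN; the returned string is the same value shown under **Endpoint** in the console.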

**Next Step**

[Create an IAM role for topics on MSK Serverless cluster](create-iam-role.md)

# Create an IAM role for topics on MSK Serverless cluster
<a name="create-iam-role"></a>

In this step, you perform two tasks. The first task is to create an IAM policy that grants access to create topics on the cluster and to send data to those topics. The second task is to create an IAM role and associate this policy with it. In a later step, we create a client machine that assumes this role and uses it to create a topic on the cluster and to send data to that topic.

**To create an IAM policy that makes it possible to create topics and write to them**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. On the navigation pane, choose **Policies**.

1. Choose **Create Policy**.

1. Choose the **JSON** tab, then replace the JSON in the editor window with the following JSON. 

   In the following example, replace the following:
   + *region* with the code of the AWS Region where you created your cluster.
   + Example account ID, *123456789012*, with your AWS account ID.
   + *msk-serverless-tutorial-cluster*/*c07c74ea-5146-4a03-add1-9baa787a5b14-s3* in the cluster ARN with your cluster name and cluster ID, and *msk-serverless-tutorial-cluster* in the topic ARN with your cluster name.

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "kafka-cluster:Connect",
                   "kafka-cluster:DescribeCluster"
               ],
               "Resource": [
                   "arn:aws:kafka:us-east-1:123456789012:cluster/msk-serverless-tutorial-cluster/c07c74ea-5146-4a03-add1-9baa787a5b14-s3"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "kafka-cluster:CreateTopic",
                   "kafka-cluster:WriteData",
                   "kafka-cluster:DescribeTopic"
               ],
               "Resource": [
               "arn:aws:kafka:us-east-1:123456789012:topic/msk-serverless-tutorial-cluster/*"
               ]
           }
       ]
   }
   ```

------

   For instructions about how to write secure policies, see [IAM access control](iam-access-control.md).

1. Choose **Next: Tags**.

1. Choose **Next: Review**.

1. For the policy name, enter a descriptive name, such as **msk-serverless-tutorial-policy**.

1. Choose **Create policy**.
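
If you script your setup, the policy document can also be generated programmatically, which helps avoid typos in the ARNs. The following is an illustrative sketch; the region, account ID, and cluster identifiers are placeholders for your own values:

```python
import json

def build_tutorial_policy(region, account_id, cluster_name, cluster_id):
    """Build the tutorial's IAM policy: connect/describe on the cluster,
    and create/describe/write on any topic in that cluster."""
    cluster_arn = f"arn:aws:kafka:{region}:{account_id}:cluster/{cluster_name}/{cluster_id}"
    topic_arn = f"arn:aws:kafka:{region}:{account_id}:topic/{cluster_name}/*"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["kafka-cluster:Connect", "kafka-cluster:DescribeCluster"],
                "Resource": [cluster_arn],
            },
            {
                "Effect": "Allow",
                "Action": [
                    "kafka-cluster:CreateTopic",
                    "kafka-cluster:WriteData",
                    "kafka-cluster:DescribeTopic",
                ],
                "Resource": [topic_arn],
            },
        ],
    }

policy_json = json.dumps(
    build_tutorial_policy(
        "us-east-1",
        "123456789012",
        "msk-serverless-tutorial-cluster",
        "c07c74ea-5146-4a03-add1-9baa787a5b14-s3",
    ),
    indent=4,
)
```

You can paste the resulting `policy_json` into the console's JSON editor, or pass it to `aws iam create-policy --policy-document`.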

**To create an IAM role and attach the policy to it**

1. On the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. Under **Common use cases**, choose **EC2**, then choose **Next: Permissions**.

1. In the search box, enter the name of the policy that you previously created for this tutorial. Then select the box to the left of the policy.

1. Choose **Next: Tags**.

1. Choose **Next: Review**.

1. For the role name, enter a descriptive name, such as **msk-serverless-tutorial-role**.

1. Choose **Create role**.

**Next Step**

[Create a client machine to access MSK Serverless cluster](create-serverless-cluster-client.md)

# Create a client machine to access MSK Serverless cluster
<a name="create-serverless-cluster-client"></a>

In this step, you perform two tasks. The first task is to create an Amazon EC2 instance to use as an Apache Kafka client machine. The second task is to install Java and Apache Kafka tools on the machine.

**To create a client machine**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. Choose **Launch instance**.

1. Enter a descriptive **Name** for your client machine, such as **msk-serverless-tutorial-client**.

1. Leave the **Amazon Linux 2 AMI (HVM) - Kernel 5.10, SSD Volume Type** selected for **Amazon Machine Image (AMI) type**.

1. Leave the **t2.micro** instance type selected.

1. Under **Key pair (login)**, choose **Create a new key pair**. Enter **MSKServerlessKeyPair** for **Key pair name**. Then choose **Download Key Pair**. Alternatively, you can use an existing key pair.

1. For **Network settings**, choose **Edit**.

1. Under **VPC**, enter the ID of the virtual private cloud (VPC) for your serverless cluster. This is the VPC whose ID you saved after you created the cluster.

1. For **Subnet**, choose the subnet whose ID you saved after you created the cluster.

1. For **Firewall (security groups)**, select the security group associated with the cluster. This value works if that security group has an inbound rule that allows traffic from the security group to itself. With such a rule, members of the same security group can communicate with each other. For more information, see [Security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#SecurityGroupRules) in the *Amazon VPC User Guide*.

1. Expand the **Advanced details** section and choose the IAM role that you created in [Create an IAM role for topics on MSK Serverless cluster](create-iam-role.md).

1. Choose **Launch**.

1. In the left navigation pane, choose **Instances**. Then choose the check box in the row that represents your newly created Amazon EC2 instance. From this point forward, we call this instance the *client machine*.

1. Choose **Connect** and follow the instructions to connect to the client machine.

**To set up Apache Kafka client tools on the client machine**

1. To install Java, run the following command on the client machine:

   ```
   sudo yum -y install java-11
   ```

1. To get the Apache Kafka tools that we need to create topics and send data, run the following commands:

   ```
   wget https://archive.apache.org/dist/kafka/2.8.1/kafka_2.12-2.8.1.tgz
   ```

   ```
   tar -xzf kafka_2.12-2.8.1.tgz
   ```
**Note**  
After extracting the Kafka archive, make sure that the scripts in the `bin` directory have proper execute permissions. To do this, run the following command.  

   ```
   chmod +x kafka_2.12-2.8.1/bin/*.sh
   ```

1. Go to the `kafka_2.12-2.8.1/libs` directory, then run the following command to download the Amazon MSK IAM JAR file. The Amazon MSK IAM JAR makes it possible for the client machine to access the cluster.

   ```
   wget https://github.com/aws/aws-msk-iam-auth/releases/download/v2.3.0/aws-msk-iam-auth-2.3.0-all.jar
   ```

   You can also use this command to [download other or newer versions](https://github.com/aws/aws-msk-iam-auth/releases) of the Amazon MSK IAM JAR file.

1. Go to the `kafka_2.12-2.8.1/bin` directory. Copy the following property settings and paste them into a new file. Name the file `client.properties` and save it.

   ```
   security.protocol=SASL_SSL
   sasl.mechanism=AWS_MSK_IAM
   sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
   sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
   ```
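
If you automate your client setup, the same four properties can be written from a script. The following is a small sketch; the file path is an assumption for illustration:

```python
# The exact property lines that the Kafka CLI tools read via --command-config.
CLIENT_PROPERTIES = "\n".join([
    "security.protocol=SASL_SSL",
    "sasl.mechanism=AWS_MSK_IAM",
    "sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;",
    "sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler",
]) + "\n"

def write_client_properties(path):
    """Write the IAM auth settings to a client.properties file at the given path."""
    with open(path, "w") as f:
        f.write(CLIENT_PROPERTIES)
```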

**Next Step**

[Create an Apache Kafka topic](msk-serverless-create-topic.md)

# Create an Apache Kafka topic
<a name="msk-serverless-create-topic"></a>

In this step, you use the previously created client machine to create a topic on the serverless cluster.

**Topics**
+ [Setting up your environment for creating topics](#msk-serverless-create-topic-prerequisites)
+ [Creating a topic and writing data to it](#msk-serverless-create-topic-procedure)

## Setting up your environment for creating topics
<a name="msk-serverless-create-topic-prerequisites"></a>
+ Before creating a topic, make sure that you've downloaded the Amazon MSK IAM JAR file to your Kafka installation's `libs/` directory. If you haven't done this yet, run the following command in that directory.

  ```
  wget https://github.com/aws/aws-msk-iam-auth/releases/download/v2.3.0/aws-msk-iam-auth-2.3.0-all.jar
  ```

  This JAR file is required for IAM authentication with your MSK Serverless cluster.
+ When running Kafka commands, you might need to make sure the `classpath` includes the AWS MSK IAM JAR file. To do this, do one of the following:
  + Set the `CLASSPATH` environment variable to include your Kafka libraries as shown in the following example.

    ```
    export CLASSPATH=<path-to-your-kafka-installation>/libs/*:<path-to-your-kafka-installation>/libs/aws-msk-iam-auth-2.3.0-all.jar
    ```
  + Run Kafka commands using the full Java command with explicit `classpath`, as shown in the following example.

    ```
     java -cp "<path-to-your-kafka-installation>/libs/*:<path-to-your-kafka-installation>/libs/aws-msk-iam-auth-2.3.0-all.jar" kafka.admin.TopicCommand --bootstrap-server $BS --command-config client.properties --create --topic msk-serverless-tutorial --partitions 6
    ```

## Creating a topic and writing data to it
<a name="msk-serverless-create-topic-procedure"></a>

1. In the following `export` command, replace *my-endpoint* with the bootstrap server string that you saved after you created the cluster. Then, go to the `kafka_2.12-2.8.1/bin` directory on the client machine and run the `export` command.

   ```
   export BS=my-endpoint
   ```

1. Run the following command to create a topic called `msk-serverless-tutorial`.

   ```
   <path-to-your-kafka-installation>/bin/kafka-topics.sh --bootstrap-server $BS --command-config client.properties --create --topic msk-serverless-tutorial --partitions 6
   ```

**Next Step**

[Produce and consume data in MSK Serverless](msk-serverless-produce-consume.md)

# Produce and consume data in MSK Serverless
<a name="msk-serverless-produce-consume"></a>

In this step, you produce and consume data using the topic that you created in the previous step.

**To produce and consume messages**

1. Run the following command to create a console producer.

   ```
   <path-to-your-kafka-installation>/bin/kafka-console-producer.sh --broker-list $BS --producer.config client.properties --topic msk-serverless-tutorial
   ```

1. Enter any message that you want, and press **Enter**. Repeat this step two or three times. Every time you enter a line and press **Enter**, that line is sent to your cluster as a separate message.

1. Keep the connection to the client machine open, and then open a second, separate connection to that machine in a new window.

1. Use your second connection to the client machine to create a console consumer with the following command. Replace *my-endpoint* with the bootstrap server string that you saved after you created the cluster.

   ```
   <path-to-your-kafka-installation>/bin/kafka-console-consumer.sh --bootstrap-server my-endpoint --consumer.config client.properties --topic msk-serverless-tutorial --from-beginning
   ```

   You start seeing the messages you entered earlier when you used the console producer command.

1. Enter more messages in the producer window, and watch them appear in the consumer window.

If you encounter `classpath` issues while running these commands, make sure that you're running them from the correct directory. Also, make sure that the AWS MSK IAM JAR is in the `libs` directory. Alternatively, you can run Kafka commands using the full Java command with explicit `classpath`, as shown in the following example.

```
java -cp "kafka_2.12-2.8.1/libs/*:kafka_2.12-2.8.1/libs/aws-msk-iam-auth-2.3.0-all.jar" kafka.tools.ConsoleProducer --broker-list $BS --producer.config client.properties --topic msk-serverless-tutorial
```

**Next Step**

[Delete resources that you created for MSK Serverless](delete-resources.md)

# Delete resources that you created for MSK Serverless
<a name="delete-resources"></a>

In this step, you delete the resources that you created in this tutorial.

**To delete the cluster**

1. Open the Amazon MSK console at [https://console.aws.amazon.com/msk/home](https://console.aws.amazon.com/msk/home).

1. In the list of clusters, choose the cluster that you created for this tutorial.

1. For **Actions**, choose **Delete cluster**.

1. Enter `delete` in the field, then choose **Delete**.

**To stop the client machine**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the list of Amazon EC2 instances, choose the client machine that you created for this tutorial.

1. Choose **Instance state**, then choose **Terminate instance**.

1. Choose **Terminate**.

**To delete the IAM policy and role**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. On the navigation pane, choose **Roles**.

1. In the search box, enter the name of the IAM role that you created for this tutorial.

1. Choose the role. Then choose **Delete role**, and confirm the deletion.

1. On the navigation pane, choose **Policies**.

1. In the search box, enter the name of the policy that you created for this tutorial.

1. Choose the policy to open its summary page. On the policy's **Summary** page, choose **Delete policy**.

1. Choose **Delete**.

# Configuration properties for MSK Serverless clusters
<a name="serverless-config"></a>

Amazon MSK sets the broker configuration properties for serverless clusters, and you can't change them. However, you can set or modify the following topic-level configuration properties. All other topic-level configuration properties are fixed.



| Configuration property | Default | Editable | Maximum allowed value | 
| --- | --- | --- | --- | 
| [cleanup.policy](https://kafka.apache.org/documentation/#topicconfigs_cleanup.policy) | Delete | Yes, but only at topic creation time |  | 
|  [compression.type](https://kafka.apache.org/documentation/#topicconfigs_compression.type)  | Producer | Yes |  | 
|  [max.message.bytes](https://kafka.apache.org/documentation/#topicconfigs_max.message.bytes)  | 1048588 | Yes | 8388608 (8MiB) | 
|  [message.timestamp.difference.max.ms](https://kafka.apache.org/documentation/#topicconfigs_message.timestamp.difference.max.ms)  | long.max | Yes |  | 
|  [message.timestamp.type](https://kafka.apache.org/documentation/#topicconfigs_message.timestamp.type)  | CreateTime | Yes |  | 
|  [retention.bytes](https://kafka.apache.org/documentation/#topicconfigs_retention.bytes)  | 250 GiB | Yes | Unlimited; set it to -1 for unlimited retention | 
|  [retention.ms](https://kafka.apache.org/documentation/#topicconfigs_retention.ms)  | 7 days | Yes | Unlimited; set it to -1 for unlimited retention | 

To set or modify these topic-level configuration properties, you can use Apache Kafka command line tools. See [3.2 Topic-level Configs](https://kafka.apache.org/documentation/#topicconfigs) in the official Apache Kafka documentation for more information and examples of how to set them.
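
To make the table concrete, the editable properties and their limits can be expressed as a small validation helper. This is an illustrative sketch that mirrors the table above; it is not an official API, and the limits should be checked against the table before use:

```python
# Topic-level properties that MSK Serverless lets you set, per the table above.
# A value of None means the table lists no maximum; numeric values are caps in bytes.
EDITABLE_LIMITS = {
    "cleanup.policy": None,                  # editable only at topic creation
    "compression.type": None,
    "max.message.bytes": 8 * 1024 * 1024,    # 8388608 bytes (8 MiB)
    "message.timestamp.difference.max.ms": None,
    "message.timestamp.type": None,
    "retention.bytes": None,                 # -1 means unlimited retention
    "retention.ms": None,                    # -1 means unlimited retention
}

def check_topic_config(config):
    """Return a list of problems with a proposed topic-level config dict."""
    problems = []
    for key, value in config.items():
        if key not in EDITABLE_LIMITS:
            problems.append(f"{key}: not configurable on MSK Serverless")
            continue
        limit = EDITABLE_LIMITS[key]
        if limit is not None and int(value) > limit:
            problems.append(f"{key}: {value} exceeds maximum {limit}")
    return problems
```

For example, a config that sets `segment.bytes` would be flagged as not configurable, while `retention.bytes=-1` would pass.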

**Note**  
You can't modify the segment.bytes configuration for topics in MSK Serverless. However, a Kafka Streams application might attempt to create an internal topic with a segment.bytes configuration value, which is different from what MSK Serverless will allow. For information about configuring Kafka Streams with MSK Serverless, see [Using Kafka Streams with MSK Express brokers and MSK Serverless](use-kafka-streams-express-brokers-msk-serverless.md).

When using the Apache Kafka command line tools with Amazon MSK Serverless, make sure you completed steps 1-4 in the *To set up Apache Kafka client tools on the client machine* section of the [Amazon MSK Serverless Getting Started documentation](https://docs.aws.amazon.com/msk/latest/developerguide/create-serverless-cluster-client.html). Additionally, you must include the `--command-config client.properties` parameter in your commands.

For example, you can use the following command to set the `retention.bytes` topic configuration property to unlimited retention:

```
<path-to-your-kafka-client-installation>/bin/kafka-configs.sh --bootstrap-server <bootstrap_server_string> --command-config client.properties --entity-type topics --entity-name <topic_name> --alter --add-config retention.bytes=-1
```

In this example, replace *<bootstrap_server_string>* with the bootstrap server endpoint for your Amazon MSK Serverless cluster, and *<topic_name>* with the name of the topic that you want to modify.

The `--command-config client.properties` parameter ensures that the Kafka command line tool uses the appropriate configuration settings to communicate with your Amazon MSK Serverless cluster.

# Configure dual-stack network type
<a name="serverless-config-dual-stack"></a>

Amazon MSK supports the dual-stack network type for existing MSK Serverless clusters that use Apache Kafka version 3.6.0 or later, at no additional cost. With dual-stack networking, your clusters can use both IPv4 and IPv6 addresses. Dual-stack endpoints also support IPv4, which maintains backward compatibility. Amazon MSK provides IPv6 support through the dual-stack network type, not as IPv6-only.

By default, clients connect to Amazon MSK clusters using the IPv4 network type. All new clusters that you create also use IPv4 by default. To update a cluster's network type to dual-stack, make sure that you've fulfilled the prerequisites described in the following section. Then, use the [UpdateConnectivity](https://docs.aws.amazon.com/msk/1.0/apireference/clusters-clusterarn-connectivity.html#UpdateConnectivity) API to update connectivity to dual-stack.

**Note**  
Once you update your cluster to use the dual-stack network type, you can’t switch it back to the IPv4 network type.

**Topics**
+ [Prerequisites for using dual-stack network type](#msks-ipv6-prerequisites)
+ [IAM permissions for MSK Serverless](#msks-ipv6-iam-permissions)
+ [Use dual-stack network type for a cluster](#update-msks-network-type)
+ [Considerations for using dual-stack network type](#msks-dual-stack-considerations)

## Prerequisites for using dual-stack network type
<a name="msks-ipv6-prerequisites"></a>

Before you configure the dual-stack network type for your clusters, make sure that all subnets you provide during cluster creation support the dual-stack network type. If even one subnet in your cluster doesn't support dual-stack, you won't be able to update the network type for your cluster to dual-stack.
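
One way to check this prerequisite from code is to verify that every subnet has an associated IPv6 CIDR block, using the `ec2:DescribeSubnets` permission listed in the next section. The following is a sketch; the client is injected so the helper can be exercised without AWS credentials:

```python
def subnets_support_dual_stack(ec2_client, subnet_ids):
    """Return True only if every given subnet has at least one IPv6 CIDR block.

    ec2_client is a boto3 "ec2" client (or any object with describe_subnets).
    """
    response = ec2_client.describe_subnets(SubnetIds=subnet_ids)
    return all(
        subnet.get("Ipv6CidrBlockAssociationSet")  # empty or missing means IPv4-only
        for subnet in response["Subnets"]
    )
```

With real credentials, you would pass `boto3.client("ec2")` and the subnet IDs that you saved when you created the cluster.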

## IAM permissions for MSK Serverless
<a name="msks-ipv6-iam-permissions"></a>

You must have the following IAM permissions:
+  `ec2:DescribeSubnets` 
+  `ec2:ModifyVpcEndpoint` 

For a complete list of permissions required to perform all Amazon MSK actions, see the AWS managed policy [AmazonMSKFullAccess](https://docs.aws.amazon.com/msk/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonMSKFullAccess).

## Use dual-stack network type for a cluster
<a name="update-msks-network-type"></a>

You can update the network type for an MSK Serverless cluster using the AWS Management Console, AWS CLI, or AWS SDK.

------
#### [ Using AWS Management Console ]

1. Open the Amazon MSK console at [https://console.aws.amazon.com/msk/home?region=us-east-1#/home/](https://console.aws.amazon.com/msk/home?region=us-east-1#/home/).

1. Choose the MSK Serverless cluster for which you want to configure the dual-stack network type.

1. On the Cluster details page, choose **Properties**.

1. In **Network settings**, choose **Edit network type**.

1. For **Network type**, choose **Dual stack**.

1. Choose **Save changes**.

------
#### [ Using AWS CLI ]

You can use the [update-connectivity](https://docs.aws.amazon.com/cli/latest/reference/kafka/update-connectivity.html) command to update the network type of your existing MSK Serverless cluster to dual-stack. The following example uses the `update-connectivity` command to set the cluster's network type to dual-stack.

In the following example, replace the sample cluster ARN, arn:aws:kafka:*us-east-1*:*123456789012*:cluster/*myCluster*/*12345678-1234-1234-1234-123456789012-1*, with your actual MSK cluster ARN. To get the current cluster version, use the [describe-cluster](https://docs.aws.amazon.com/cli/latest/reference/kafka/describe-cluster.html) command.

```
aws kafka update-connectivity \
    --cluster-arn "arn:aws:kafka:us-east-1:123456789012:cluster/myCluster/12345678-1234-1234-1234-123456789012-1" \
    --current-version "KTVPDKIKX0DER" \
    --connectivity-info '{
        "networkType": "DUAL"
    }'
```

------
#### [ Using AWS SDK ]

The following example uses the [UpdateConnectivity](https://docs.aws.amazon.com/msk/1.0/apireference/clusters-clusterarn-connectivity.html#UpdateConnectivity) API to set the cluster’s network type to dual-stack.

In the following example, replace the sample cluster ARN, arn:aws:kafka:*us-east-1*:*123456789012*:cluster/*myCluster*/*12345678-1234-1234-1234-123456789012-1*, with your actual MSK cluster ARN. To get the current cluster version, use the [DescribeCluster](https://docs.aws.amazon.com/msk/1.0/apireference/clusters-clusterarn.html#DescribeCluster) API.

```
import boto3

client = boto3.client("kafka")

response = client.update_connectivity(
    ClusterArn="arn:aws:kafka:us-east-1:123456789012:cluster/myCluster/12345678-1234-1234-1234-123456789012-1",
    CurrentVersion="KTVPDKIKX0DER",
    ConnectivityInfo={
        "NetworkType": "DUAL"
    }
)
print("Connectivity update initiated:", response)
```

------

## Considerations for using dual-stack network type
<a name="msks-dual-stack-considerations"></a>
+ IPv6 support is currently available only in dual-stack mode (IPv4 and IPv6), not as IPv6-only.
+ Dual-stack network type is unavailable for multi-VPC private connectivity.
+ You can change the network type from IPv4 to dual-stack for an existing cluster only if all its subnets support the dual-stack network type.
+ You can't revert to the IPv4 network type after enabling dual-stack. To switch back, you must delete and recreate the cluster.
+ You must have the `ec2:DescribeSubnets` and `ec2:ModifyVpcEndpoint` IAM permissions.

# Monitor MSK Serverless clusters
<a name="serverless-monitoring"></a>

Amazon MSK integrates with Amazon CloudWatch so that you can collect, view, and analyze metrics for your MSK Serverless cluster. The metrics shown in the following table are available for all serverless clusters. Because these metrics are published as individual data points for each partition in a topic, we recommend viewing them with the 'SUM' statistic to get a topic-level view.

Amazon MSK publishes `PerSec` metrics to CloudWatch at a frequency of once per minute. This means that the 'SUM' statistic for a one-minute period accurately represents per-second data for `PerSec` metrics. To collect per-second data for a period of longer than one minute, use the following CloudWatch math expression: `m1 * 60/PERIOD(m1)`.
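
To see why this expression works: the 'SUM' statistic over an N-second period adds up N/60 once-per-minute datapoints, and multiplying by 60/N converts that sum back into an average per-second rate. The same arithmetic in Python, with made-up datapoint values:

```python
def per_second_rate(sum_over_period, period_seconds):
    """Mirror the CloudWatch math expression m1 * 60 / PERIOD(m1):
    turn a SUM of once-per-minute PerSec datapoints into a per-second rate."""
    return sum_over_period * 60 / period_seconds

# Five one-minute datapoints of a PerSec metric (illustrative values only).
datapoints = [100.0, 120.0, 80.0, 110.0, 90.0]
sum_stat = sum(datapoints)               # what CloudWatch SUM returns over a 300-second period
rate = per_second_rate(sum_stat, 300)    # 500 * 60 / 300 == 100.0, the average per-second rate
```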


**Metrics available at the DEFAULT monitoring level**  

| Name | When visible | Dimensions | Description | 
| --- | --- | --- | --- | 
| BytesInPerSec | After a producer writes to a topic | Cluster Name, Topic |  The number of bytes per second received from clients. This metric is available for each topic.  | 
| BytesOutPerSec | After a consumer group consumes from a topic | Cluster Name, Topic |  The number of bytes per second sent to clients. This metric is available for each topic.  | 
| FetchMessageConversionsPerSec | After a consumer group consumes from a topic | Cluster Name, Topic |  The number of fetch message conversions per second for the topic.  | 
| EstimatedMaxTimeLag | After a consumer group consumes from a topic | Cluster Name, Consumer Group, Topic  | A time estimate of the MaxOffsetLag metric. | 
| MaxOffsetLag | After a consumer group consumes from a topic | Cluster Name, Consumer Group, Topic  | The maximum offset lag across all partitions in a topic. | 
| MessagesInPerSec | After a producer writes to a topic | Cluster Name, Topic | The number of incoming messages per second for the topic. | 
| ProduceMessageConversionsPerSec | After a producer writes to a topic | Cluster Name, Topic | The number of produce message conversions per second for the topic. | 
| SumOffsetLag | After a consumer group consumes from a topic | Cluster Name, Consumer Group, Topic  | The aggregated offset lag for all the partitions in a topic. | 

**To view MSK Serverless metrics**

1. Sign in to the AWS Management Console and open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, under **Metrics**, choose **All metrics**.

1. In the search box, enter the term **kafka**.

1. Choose **AWS/Kafka / Cluster Name, Topic** or **AWS/Kafka / Cluster Name, Consumer Group, Topic** to see different metrics.