

# Amazon MSK Provisioned configuration
<a name="msk-configuration"></a>

Amazon MSK provides default configurations for brokers, topics, and metadata nodes. You can also create custom configurations and use them to create new MSK clusters or to update existing clusters. An MSK configuration consists of a set of properties and their corresponding values. Depending on the broker type you use in your cluster, the configuration defaults and the set of configurations you can modify differ. See the following sections for details on how to configure your Standard and Express brokers.

**Topics**
+ [Standard broker configurations](msk-configuration-standard.md)
+ [Express broker configurations](msk-configuration-express.md)
+ [Broker configuration operations](msk-configuration-operations.md)

# Standard broker configurations
<a name="msk-configuration-standard"></a>

This section describes configuration properties for Standard brokers.

**Topics**
+ [Custom Amazon MSK configurations](msk-configuration-properties.md)
+ [Default Amazon MSK configuration](msk-default-configuration.md)
+ [Guidelines for Amazon MSK tiered storage topic-level configuration](msk-guidelines-tiered-storage-topic-level-config.md)

# Custom Amazon MSK configurations
<a name="msk-configuration-properties"></a>

You can use Amazon MSK to create a custom MSK configuration where you set the following Apache Kafka configuration properties. Properties that you don't set explicitly get the values they have in [Default Amazon MSK configuration](msk-default-configuration.md). For more information about configuration properties, see [Apache Kafka Configuration](https://kafka.apache.org/documentation/#configuration).
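As a sketch of how this fits together, a custom configuration is a plain properties file that you register with Amazon MSK. The property values and configuration name below are hypothetical examples, not recommendations:

```shell
# Write a hypothetical custom configuration file; values are examples only.
cat > custom-msk-config.properties <<'EOF'
auto.create.topics.enable=false
default.replication.factor=3
min.insync.replicas=2
num.partitions=6
log.retention.ms=604800000
EOF

# Register the file as an MSK configuration (name and description are placeholders).
aws kafka create-configuration \
    --name "example-custom-config" \
    --description "Example custom configuration" \
    --server-properties fileb://custom-msk-config.properties
```

Any property you omit from the file keeps its default value.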


| Name | Description | 
| --- | --- | 
| allow.everyone.if.no.acl.found | If you want to set this property to false, first make sure you define Apache Kafka ACLs for your cluster. If you set this property to false and you don't first define Apache Kafka ACLs, you lose access to the cluster. If that happens, you can update the configuration again and set this property to true to regain access to the cluster. | 
| auto.create.topics.enable | Enables topic auto-creation on the server. | 
| compression.type | The final compression type for a given topic. You can set this property to the standard compression codecs (gzip, snappy, lz4, and zstd). It also accepts uncompressed, which is equivalent to no compression, and producer, which means retain the original compression codec that the producer sets. | 
|  connections.max.idle.ms  | Idle connections timeout in milliseconds. The server socket processor threads close the connections that are idle for more than the value that you set for this property. | 
| default.replication.factor | The default replication factor for automatically created topics. | 
| delete.topic.enable | Enables the delete topic operation. If you turn off this setting, you can't delete a topic through the admin tool. | 
| group.initial.rebalance.delay.ms | Amount of time the group coordinator waits for more data consumers to join a new group before the group coordinator performs the first rebalance. A longer delay means potentially fewer rebalances, but this increases the time until processing begins. | 
| group.max.session.timeout.ms | Maximum session timeout for registered consumers. Longer timeouts give consumers more time to process messages between heartbeats at the cost of a longer time to detect failures. | 
| group.min.session.timeout.ms | Minimum session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeats. This can overwhelm broker resources. | 
| leader.imbalance.per.broker.percentage | The ratio of leader imbalance allowed per broker. The controller triggers a leader rebalance if the imbalance exceeds this value for a broker. Specify this value as a percentage. | 
| log.cleaner.delete.retention.ms | Amount of time that you want Apache Kafka to retain deleted records. The minimum value is 0. | 
| log.cleaner.min.cleanable.ratio |  This configuration property can have values between 0 and 1. This value determines how frequently the log compactor attempts to clean the log (if log compaction is enabled). By default, Apache Kafka avoids cleaning a log if more than 50% of the log has been compacted. This ratio bounds the maximum space that  the log wastes with duplicates (at 50%, this means at most 50% of the log could be duplicates). A higher ratio means fewer, more efficient cleanings, but more wasted space in the log.  | 
| log.cleanup.policy | The default cleanup policy for segments beyond the retention window. A comma-separated list of valid policies. Valid policies are delete and compact. For Tiered Storage enabled clusters, valid policy is delete only. | 
| log.flush.interval.messages | Number of messages that accumulate on a log partition before messages are flushed to disk. | 
| log.flush.interval.ms | Maximum time in milliseconds that a message in any topic remains in memory before it is flushed to disk. If you don't set this value, the value in log.flush.scheduler.interval.ms is used. The minimum value is 0. | 
| log.message.timestamp.difference.max.ms | This configuration is deprecated in Kafka 3.6.0. Two configurations, log.message.timestamp.before.max.ms and log.message.timestamp.after.max.ms, have been added. The maximum time difference between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message is rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. | 
| log.message.timestamp.type | Specifies if the timestamp in the message is the message creation time or the log append time. The allowed values are CreateTime and LogAppendTime. | 
| log.retention.bytes | Maximum size of the log before deleting it. | 
| log.retention.hours | Number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property. | 
| log.retention.minutes | Number of minutes to keep a log file before deleting it, secondary to log.retention.ms property. If you don't set this value, the value in log.retention.hours is used. | 
| log.retention.ms | Number of milliseconds to keep a log file before deleting it. If you don't set this value, the value in log.retention.minutes is used. | 
| log.roll.ms | Maximum time before a new log segment is rolled out (in milliseconds). If you don't set this property, the value in log.roll.hours is used. The minimum possible value for this property is 1. | 
| log.segment.bytes | Maximum size of a single log file. | 
| max.incremental.fetch.session.cache.slots | Maximum number of incremental fetch sessions that are maintained. | 
| message.max.bytes |  Largest record batch size that Kafka allows. If you increase this value and there are consumers older than 0.10.2, you must also increase the fetch size of the consumers so that they can fetch record batches this large. The latest message format version always groups messages into batches for efficiency. Previous message format versions don't group uncompressed records into batches, and in such a case, this limit only applies to a single record. You can set this value per topic with the topic level max.message.bytes config.  | 
| min.insync.replicas |  When a producer sets acks to `"all"` (or `"-1"`), the value in min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, the producer raises an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). You can use values in min.insync.replicas and acks to enforce greater durability guarantees. For example, you might create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of `"all"`. This ensures that the producer raises an exception if a majority of replicas don't receive a write.  | 
| num.io.threads | The number of threads that the server uses for processing requests, which may include disk I/O. | 
| num.network.threads | The number of threads that the server uses to receive requests from the network and send responses to the network. | 
| num.partitions | Default number of log partitions per topic. | 
| num.recovery.threads.per.data.dir | The number of threads per data directory to be used to recover logs at startup and to flush them at shutdown. | 
| num.replica.fetchers | The number of fetcher threads used to replicate messages from a source broker. If you increase this value, you can increase the degree of I/O parallelism in the follower broker. | 
| offsets.retention.minutes | After a consumer group loses all its consumers (that is, it becomes empty) its offsets are kept for this retention period before getting discarded. For standalone consumers (that is, those that use manual assignment), offsets expire after the time of the last commit plus this retention period. | 
| offsets.topic.replication.factor | The replication factor for the offsets topic. Set this value higher to ensure availability. Internal topic creation fails until the cluster size meets this replication factor requirement. | 
| replica.fetch.max.bytes | Number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum. If the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch is returned to ensure progress. The message.max.bytes (broker config) or max.message.bytes (topic config) defines the maximum record batch size that the broker accepts. | 
| replica.fetch.response.max.bytes | The maximum number of bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure progress. This isn't an absolute maximum. The message.max.bytes (broker config) or max.message.bytes (topic config) properties specify the maximum record batch size that the broker accepts. | 
| replica.lag.time.max.ms | If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this number of milliseconds, the leader removes the follower from the ISR. Minimum value: 10000. Maximum value: 30000. | 
| replica.selector.class | The fully-qualified class name that implements ReplicaSelector. The broker uses this value to find the preferred read replica. If you use Apache Kafka version 2.4.1 or higher, and want to allow consumers to fetch from the closest replica, set this property to org.apache.kafka.common.replica.RackAwareReplicaSelector. For more information, see [Apache Kafka version 2.4.1 (use 2.4.1.1 instead)](supported-kafka-versions.md#2.4.1). | 
| replica.socket.receive.buffer.bytes | The socket receive buffer for network requests. | 
| socket.receive.buffer.bytes | The SO_RCVBUF buffer of the socket server sockets. The minimum value that you can set for this property is -1. If the value is -1, Amazon MSK uses the OS default. | 
| socket.request.max.bytes | The maximum number of bytes in a socket request. | 
| socket.send.buffer.bytes | The SO_SNDBUF buffer of the socket server sockets. The minimum value that you can set for this property is -1. If the value is -1, Amazon MSK uses the OS default. | 
| transaction.max.timeout.ms | Maximum timeout for transactions. If the requested transaction time of a client exceeds this value, the broker returns an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which can stall consumers that read from topics included in the transaction. | 
| transaction.state.log.min.isr | Overridden min.insync.replicas configuration for the transaction topic. | 
| transaction.state.log.replication.factor | The replication factor for the transaction topic. Set this property to a higher value to increase availability. Internal topic creation fails until the cluster size meets this replication factor requirement. | 
| transactional.id.expiration.ms | The time in milliseconds that the transaction coordinator waits to receive any transaction status updates for the current transaction before the coordinator expires its transactional ID. This setting also influences producer ID expiration because it causes producer IDs to expire when this time elapses after the last write with the given producer ID. Producer IDs might expire sooner if the last write from the producer ID is deleted because of the retention settings for the topic. The minimum value for this property is 1 millisecond. | 
| unclean.leader.election.enable | Indicates if replicas not in the ISR set should serve as leader as a last resort, even though this might result in data loss. | 
| zookeeper.connection.timeout.ms | ZooKeeper mode clusters. Maximum time that the client waits to establish a connection to ZooKeeper. If you don't set this value, the value in zookeeper.session.timeout.ms is used. Minimum value: 6000. Maximum value (inclusive): 18000. We recommend that you set this value to 10,000 on T3.small to avoid cluster downtime. | 
| zookeeper.session.timeout.ms | ZooKeeper mode clusters. The Apache ZooKeeper session timeout in milliseconds. Minimum value: 6000. Maximum value (inclusive): 18000. | 

To learn how you can create a custom MSK configuration, list all configurations, or describe them, see [Broker configuration operations](msk-configuration-operations.md). To create an MSK cluster with a custom MSK configuration, or to update a cluster with a new custom configuration, see [Amazon MSK key features and concepts](operations.md).

When you update your existing MSK cluster with a custom MSK configuration, Amazon MSK does rolling restarts when necessary, and uses best practices to minimize customer downtime. For example, after Amazon MSK restarts each broker, Amazon MSK tries to let the broker catch up on data that the broker might have missed during the configuration update before it moves to the next broker.

## Dynamic Amazon MSK configuration
<a name="msk-dynamic-confinguration"></a>

In addition to the configuration properties that Amazon MSK provides, you can dynamically set cluster-level and broker-level configuration properties that don't require a broker restart. You can dynamically set any configuration property that isn't marked as read-only in the table under [Broker Configs](https://kafka.apache.org/documentation/#brokerconfigs) in the Apache Kafka documentation. For information about dynamic configuration and example commands, see [Updating Broker Configs](https://kafka.apache.org/documentation/#dynamicbrokerconfigs) in the Apache Kafka documentation.

**Note**  
You can set the `advertised.listeners` property, but not the `listeners` property.
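As a hedged sketch, you can change a dynamic broker-level property with the kafka-configs.sh tool that ships with Apache Kafka. The bootstrap broker address, client properties file, and property value below are placeholders:

```shell
# Change a cluster-wide dynamic default without a broker restart.
# Bootstrap broker and client.properties (authentication settings) are placeholders.
bin/kafka-configs.sh \
    --bootstrap-server b-1.mycluster.example.us-east-1.amazonaws.com:9092 \
    --command-config client.properties \
    --entity-type brokers --entity-default \
    --alter --add-config log.cleaner.threads=2

# Describe the current dynamic overrides to verify the change.
bin/kafka-configs.sh \
    --bootstrap-server b-1.mycluster.example.us-east-1.amazonaws.com:9092 \
    --command-config client.properties \
    --entity-type brokers --entity-default --describe
```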

## Topic-level Amazon MSK configuration
<a name="msk-topic-confinguration"></a>

You can use Apache Kafka commands to set or modify topic-level configuration properties for new and existing topics. For more information on topic-level configuration properties and examples on how to set them, see [Topic-Level Configs](https://kafka.apache.org/documentation/#topicconfigs) in the Apache Kafka documentation.
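For example, a topic-level override can be applied with kafka-configs.sh; the topic name, bootstrap broker, and values below are illustrative only:

```shell
# Set topic-level overrides on an existing topic (names and values are placeholders).
bin/kafka-configs.sh \
    --bootstrap-server b-1.mycluster.example.us-east-1.amazonaws.com:9092 \
    --entity-type topics --entity-name example-topic \
    --alter --add-config retention.ms=259200000,max.message.bytes=1048576
```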

# Default Amazon MSK configuration
<a name="msk-default-configuration"></a>

When you create an MSK cluster and don't specify a custom MSK configuration, Amazon MSK creates and uses a default configuration with the values shown in the following table. For properties that aren't in this table, Amazon MSK uses the defaults associated with your version of Apache Kafka. For a list of these default values, see [Apache Kafka Configuration](https://kafka.apache.org/documentation/#configuration). 


| Name | Description | Default value for non-tiered storage cluster | Default value for tiered storage-enabled cluster | 
| --- | --- | --- | --- | 
| allow.everyone.if.no.acl.found | If no resource patterns match a specific resource, the resource has no associated ACLs. In this case, if you set this property to true, all users can access the resource, not just the super users. | true | true | 
| auto.create.topics.enable | Enables autocreation of a topic on the server. | false | false | 
| auto.leader.rebalance.enable | Enables auto leader balancing. A background thread checks and initiates leader balance at regular intervals, if necessary. | true | true | 
| default.replication.factor | Default replication factors for automatically created topics. | 3 for clusters in 3 Availability Zones, and 2 for clusters in 2 Availability Zones. | 3 for clusters in 3 Availability Zones, and 2 for clusters in 2 Availability Zones. | 
|  local.retention.bytes  |  The maximum size of local log segments for a partition before the broker deletes old segments. If you don't set this value, the value in log.retention.bytes is used. The effective value should always be less than or equal to the log.retention.bytes value. The default value of -2 indicates that there is no limit on local retention. This corresponds to the retention.ms/bytes setting of -1. The properties local.retention.ms and local.retention.bytes are similar to log.retention because they determine how long log segments remain in local storage. Existing log.retention.\* configurations are retention configurations for the topic partition, covering both local and remote storage. Valid values: integers in [-2; +Inf].  | -2 for unlimited | -2 for unlimited | 
|  local.retention.ms  | The number of milliseconds to retain the local log segment before deletion. If you don't set this value, Amazon MSK uses the value in log.retention.ms. The effective value should always be less than or equal to the log.retention.ms value. The default value of -2 indicates that there is no limit on local retention. This corresponds to the retention.ms/bytes setting of -1. The values local.retention.ms and local.retention.bytes are similar to log.retention. MSK uses this configuration to determine how long log segments remain in local storage. Existing log.retention.\* configurations are retention configurations for the topic partition, covering both local and remote storage. Valid values: integers in [-2; +Inf]. | -2 for unlimited | -2 for unlimited | 
|  log.message.timestamp.difference.max.ms  | This configuration is deprecated in Kafka 3.6.0. Two configurations, log.message.timestamp.before.max.ms and log.message.timestamp.after.max.ms, have been added. The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling. | 9223372036854775807 | 86400000 for Kafka 2.8.2.tiered and Kafka 3.7.x tiered. | 
| log.segment.bytes | The maximum size of a single log file. | 1073741824 | 134217728 | 
| min.insync.replicas |  When a producer sets the value of acks (acknowledgement producer gets from Kafka broker) to `"all"` (or `"-1"`), the value in min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this value doesn't meet this minimum, the producer raises an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When you use the values in min.insync.replicas and acks together, you can enforce greater durability guarantees. For example, you might create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of `"all"`. This ensures that the producer raises an exception if a majority of replicas don't receive a write.  | 2 for clusters in 3 Availability Zones, and 1 for clusters in 2 Availability Zones. | 2 for clusters in 3 Availability Zones, and 1 for clusters in 2 Availability Zones. | 
| num.io.threads | Number of threads that the server uses for processing requests, which may include disk I/O. | 8 | max(8, vCPUs) where vCPUs depends on the instance size of broker | 
| num.network.threads | Number of threads that the server uses to receive requests from the network and send responses to the network. | 5 | max(5, vCPUs / 2) where vCPUs depends on the instance size of broker | 
| num.partitions | Default number of log partitions per topic. | 1 | 1 | 
| num.replica.fetchers | Number of fetcher threads used to replicate messages from a source broker. If you increase this value, you can increase the degree of I/O parallelism in the follower broker. | 2 | max(2, vCPUs / 4) where vCPUs depends on the instance size of broker | 
|  remote.log.msk.disable.policy  |  Used with remote.storage.enable to disable tiered storage. Set this policy to Delete to indicate that data in tiered storage is deleted when you set remote.storage.enable to false.  | N/A | None | 
| remote.log.reader.threads | Remote log reader thread pool size, which is used in scheduling tasks to fetch data from remote storage. | N/A | max(10, vCPUs \* 0.67) where vCPUs depends on the instance size of broker | 
|  remote.storage.enable  | Enables tiered (remote) storage for a topic if set to true. Disables topic level tiered storage if set to false and remote.log.msk.disable.policy is set to Delete. When you disable tiered storage, you delete data from remote storage. When you disable tiered storage for a topic, you can't enable it again. | false | false | 
| replica.lag.time.max.ms | If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this number of milliseconds, the leader removes the follower from the ISR. | 30000 | 30000 | 
|  retention.ms  |  Mandatory field. The minimum time is 3 days. There is no default because the setting is mandatory. Amazon MSK uses the retention.ms value together with local.retention.ms to determine when data moves from local to tiered storage. The local.retention.ms value specifies how long data remains in local storage. The retention.ms value specifies when data is removed from tiered storage (that is, removed from the cluster). Valid values: integers in [-1; +Inf].  | Minimum 259,200,000 milliseconds (3 days). -1 for infinite retention. | Minimum 259,200,000 milliseconds (3 days). -1 for infinite retention. | 
| socket.receive.buffer.bytes | The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default is used. | 102400 | 102400 | 
| socket.request.max.bytes | Maximum number of bytes in a socket request. | 104857600 | 104857600 | 
| socket.send.buffer.bytes | The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default is used. | 102400 | 102400 | 
| unclean.leader.election.enable | Indicates if you want replicas not in the ISR set to serve as leader as a last resort, even though this might result in data loss. | true | false | 
| zookeeper.session.timeout.ms |  The Apache ZooKeeper session timeout in milliseconds.  | 18000 | 18000 | 
| zookeeper.set.acl | Set the client to use secure ACLs. | false | false | 

For information about how to specify custom configuration values, see [Custom Amazon MSK configurations](msk-configuration-properties.md).

# Guidelines for Amazon MSK tiered storage topic-level configuration
<a name="msk-guidelines-tiered-storage-topic-level-config"></a>

The following are default settings and limitations when you configure tiered storage at the topic level.
+ Amazon MSK doesn't support smaller log segment sizes for topics with tiered storage activated. For these topics, the minimum log segment size is 48 MiB and the minimum segment roll time is 10 minutes. These values map to the segment.bytes and segment.ms properties.
+ The value of local.retention.ms/bytes can't equal or exceed the value of retention.ms/bytes, which is the tiered storage retention setting.
+ The default value for local.retention.ms/bytes is -2. This means that the retention.ms value is used for local.retention.ms/bytes. In this case, data remains in both local storage and tiered storage (one copy in each), and both copies expire together. A copy of the local data is persisted to remote storage, and consume traffic reads data from local storage.
+ The default value for retention.ms is 7 days. There is no default size limit for retention.bytes.
+ The minimum value for retention.ms/bytes is -1. This means infinite retention.
+ The minimum value for local.retention.ms/bytes is -2. This means infinite retention for local storage, and corresponds to the retention.ms/bytes setting of -1.
+ The topic-level configuration retention.ms is mandatory for topics with tiered storage activated. The minimum retention.ms is 3 days.
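The guidelines above can be sketched as a topic-creation command; the topic name, bootstrap broker, and retention values are illustrative placeholders (data leaves local storage after 1 day and is removed from tiered storage after 7 days):

```shell
# Create a tiered storage topic (names and values are examples only).
bin/kafka-topics.sh \
    --bootstrap-server b-1.mycluster.example.us-east-1.amazonaws.com:9092 \
    --create --topic example-tiered-topic \
    --partitions 6 --replication-factor 3 \
    --config remote.storage.enable=true \
    --config local.retention.ms=86400000 \
    --config retention.ms=604800000
```

Note that retention.ms is required here and must be at least 3 days (259,200,000 ms) or -1, and local.retention.ms must stay below retention.ms.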

For more information about tiered storage constraints, see [Tiered storage constraints and limitations for Amazon MSK clusters](msk-tiered-storage.md#msk-tiered-storage-constraints).

# Express broker configurations
<a name="msk-configuration-express"></a>

Apache Kafka has hundreds of broker configurations that you can use to tune the performance of your MSK Provisioned cluster. Setting erroneous or sub-optimal values can affect cluster reliability and performance. Express brokers improve the availability and durability of your MSK Provisioned clusters by setting optimal values for critical configurations and protecting them from common misconfiguration. There are three categories of configurations based on read and write access: [read/write (editable)](msk-configuration-express-read-write.md), [read only](msk-configuration-express-read-only.md), and non-read/write configurations. Some configurations still use Apache Kafka’s default value for the Apache Kafka version the cluster is running. We mark those as Apache Kafka Default.

**Topics**
+ [Custom MSK Express broker configurations (Read/Write access)](msk-configuration-express-read-write.md)
+ [Express brokers read-only configurations](msk-configuration-express-read-only.md)

# Custom MSK Express broker configurations (Read/Write access)
<a name="msk-configuration-express-read-write"></a>

You can update read/write broker configurations either by using Amazon MSK’s [update configuration feature](msk-update-cluster-config.md) or using Apache Kafka’s AlterConfig API. Apache Kafka broker configurations are either static or dynamic. Static configurations require a broker restart for the configuration to be applied, while dynamic configurations do not need a broker restart. For more information about configuration properties and update modes, see [Updating broker configs](https://kafka.apache.org/documentation/#dynamicbrokerconfigs).
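As a hedged sketch of the first path, an existing cluster can be pointed at a new configuration revision with the AWS CLI; all ARNs, the revision number, and the cluster version string below are placeholders:

```shell
# Apply a configuration revision to an existing cluster (all values are placeholders).
aws kafka update-cluster-configuration \
    --cluster-arn "arn:aws:kafka:us-east-1:123456789012:cluster/example-cluster/abcd1234" \
    --configuration-info Arn="arn:aws:kafka:us-east-1:123456789012:configuration/example-custom-config/efgh5678",Revision=2 \
    --current-version "K3AEGXETSR30VB"
```

You can retrieve the current version string for the `--current-version` parameter with `aws kafka describe-cluster`.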

**Topics**
+ [Static configurations on MSK Express brokers](#msk-configuration-express-static-configuration)
+ [Dynamic configurations on Express Brokers](#msk-configuration-express-dynamic-configuration)
+ [Topic-level configurations on Express Brokers](#msk-configuration-express-topic-configuration)

## Static configurations on MSK Express brokers
<a name="msk-configuration-express-static-configuration"></a>

You can use Amazon MSK to create a custom MSK configuration file to set the following static properties. Amazon MSK sets and manages all other properties that you do not set. You can create and update static configuration files from the MSK console or using the [configurations command](msk-configuration-operations-create.md).


| Property | Description | Default Value | 
| --- | --- | --- | 
|  allow.everyone.if.no.acl.found  |  If you want to set this property to false, first make sure you define Apache Kafka ACLs for your cluster. If you set this property to false and you don't first define Apache Kafka ACLs, you lose access to the cluster. If that happens, you can update the configuration again and set this property to true to regain access to the cluster.  |  true  | 
|  auto.create.topics.enable  |  Enables autocreation of a topic on the server.  |  false  | 
| compression.type |  Specify the final compression type for a given topic. This configuration accepts the standard compression codecs: gzip, snappy, lz4, zstd. This configuration additionally accepts `uncompressed`, which is equivalent to no compression; and `producer`, which means retain the original compression codec set by the producer. | Apache Kafka Default | 
|  connections.max.idle.ms  |  Idle connections timeout in milliseconds. The server socket processor threads close the connections that are idle for more than the value that you set for this property.  |  Apache Kafka Default  | 
|  delete.topic.enable  |  Enables the delete topic operation. If you turn off this setting, you can't delete a topic through the admin tool.  |  Apache Kafka Default  | 
|  group.initial.rebalance.delay.ms  |   Amount of time the group coordinator waits for more data consumers to join a new group before the group coordinator performs the first rebalance. A longer delay means potentially fewer rebalances, but this increases the time until processing begins.  |  Apache Kafka Default  | 
|  group.max.session.timeout.ms  |  Maximum session timeout for registered consumers. Longer timeouts give consumers more time to process messages between heartbeats at the cost of a longer time to detect failures.  |  Apache Kafka Default  | 
|  leader.imbalance.per.broker.percentage  |  The ratio of leader imbalance allowed per broker. The controller triggers a leader balance if it exceeds this value per broker. This value is specified in percentage.  |  Apache Kafka Default  | 
| log.cleanup.policy | The default cleanup policy for segments beyond the retention window. A comma-separated list of valid policies. Valid policies are delete and compact. For tiered storage-enabled clusters, valid policy is delete only. | Apache Kafka Default | 
| log.message.timestamp.after.max.ms |  The allowable timestamp difference between the message timestamp and the broker's timestamp. The message timestamp can be later than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If `log.message.timestamp.type=CreateTime`, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if `log.message.timestamp.type=LogAppendTime`.  | 86400000 (24 \* 60 \* 60 \* 1000 ms, that is, 1 day) | 
| log.message.timestamp.before.max.ms |  The allowable timestamp difference between the broker's timestamp and the message timestamp. The message timestamp can be earlier than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If `log.message.timestamp.type=CreateTime`, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if `log.message.timestamp.type=LogAppendTime`.  | 86400000 (24 \* 60 \* 60 \* 1000 ms, that is, 1 day) | 
| log.message.timestamp.type | Specifies if the timestamp in the message is the message creation time or the log append time. The allowed values are CreateTime and LogAppendTime. | Apache Kafka Default | 
| log.retention.bytes | Maximum size of the log before deleting it. | Apache Kafka Default | 
| log.retention.ms | Number of milliseconds to keep a log file before deleting it. | Apache Kafka Default | 
| max.connections.per.ip | The maximum number of connections allowed from each IP address. This can be set to 0 if there are overrides configured using the max.connections.per.ip.overrides property. New connections from the IP address are dropped if the limit is reached. | Apache Kafka Default | 
|  max.incremental.fetch.session.cache.slots  |  Maximum number of incremental fetch sessions that are maintained.  |  Apache Kafka Default  | 
| message.max.bytes |  Largest record batch size that Kafka allows. If you increase this value and there are consumers older than 0.10.2, you must also increase the fetch size of the consumers so that they can fetch record batches this large. The latest message format version always groups messages into batches for efficiency. Previous message format versions don't group uncompressed records into batches, and in such a case, this limit only applies to a single record. You can set this value per topic with the topic level `max.message.bytes` config.  | Apache Kafka Default | 
|  num.partitions  |  Default number of partitions per topic.  |  1  | 
|  offsets.retention.minutes  |  After a consumer group loses all its consumers (that is, it becomes empty) its offsets are kept for this retention period before getting discarded. For standalone consumers (that is, those that use manual assignment), offsets expire after the time of the last commit plus this retention period.  |  Apache Kafka Default  | 
|  replica.fetch.max.bytes  |  Number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum. If the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch is returned to ensure progress. The message.max.bytes (broker config) or max.message.bytes (topic config) defines the maximum record batch size that the broker accepts.  |  Apache Kafka Default  | 
|  replica.selector.class  |  The fully-qualified class name that implements ReplicaSelector. The broker uses this value to find the preferred read replica. If you want to allow consumers to fetch from the closest replica, set this property to `org.apache.kafka.common.replica.RackAwareReplicaSelector`.  |  Apache Kafka Default  | 
|  socket.receive.buffer.bytes  |  The SO\_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default is used.  |  102400  | 
|  socket.request.max.bytes  |  Maximum number of bytes in a socket request.  |  104857600  | 
|  socket.send.buffer.bytes  |  The SO\_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default is used.  |  102400  | 
|  transaction.max.timeout.ms  |  Maximum timeout for transactions. If the requested transaction time of a client exceeds this value, the broker returns an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which can stall consumers that read from topics included in the transaction.  |  Apache Kafka Default  | 
|  transactional.id.expiration.ms  |  The time in milliseconds that the transaction coordinator waits to receive any transaction status updates for the current transaction before the coordinator expires its transactional ID. This setting also influences producer ID expiration because it causes producer IDs to expire when this time elapses after the last write with the given producer ID. Producer IDs might expire sooner if the last write from the producer ID is deleted because of the retention settings for the topic. The minimum value for this property is 1 millisecond.  |  Apache Kafka Default  | 
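
The two `log.message.timestamp` bounds above can be pictured as an acceptance window around the broker's clock. The following is a minimal sketch of the `CreateTime` rejection rule using the default one-day bounds; the timestamps are illustrative values, not broker source code:

```shell
# Illustrative CreateTime acceptance window using the default values of
# log.message.timestamp.before.max.ms and log.message.timestamp.after.max.ms.
before_max=86400000       # 1 day in ms
after_max=86400000        # 1 day in ms
broker_ts=1700000000000   # hypothetical broker clock, epoch milliseconds

accepts() {
  msg_ts=$1
  if [ $((broker_ts - msg_ts)) -le "$before_max" ] && \
     [ $((msg_ts - broker_ts)) -le "$after_max" ]; then
    echo accepted
  else
    echo rejected
  fi
}

accepts $((broker_ts + 1000))        # slightly ahead of the broker clock
accepts $((broker_ts - 172800000))   # two days old: outside the window
```

A message is rejected only when `log.message.timestamp.type=CreateTime`; with `LogAppendTime`, the broker stamps the message itself and both bounds are ignored.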

## Dynamic configurations on Express Brokers
<a name="msk-configuration-express-dynamic-configuration"></a>

You can use the Apache Kafka AlterConfigs API or the kafka-configs.sh tool to edit the following dynamic configurations. Amazon MSK sets and manages all other properties that you do not set. You can dynamically set cluster-level and broker-level configuration properties that don't require a broker restart.


| Property | Description | Default value | 
| --- | --- | --- | 
|  advertised.listeners  |  Listeners to publish for clients to use, if different than the `listeners` config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address. Also unlike `listeners`, there can be duplicated ports in this property, so that one listener can be configured to advertise another listener's address. This can be useful in some cases where external load balancers are used. This property is set at a per-broker level.  |  null  | 
|  compression.type  |  The final compression type for a given topic. You can set this property to the standard compression codecs (`gzip`, `snappy`, `lz4`, and `zstd`). It additionally accepts `uncompressed`. This value is equivalent to no compression. If you set the value to `producer`, it means retain the original compression codec that the producer sets.  | Apache Kafka Default | 
| log.cleaner.delete.retention.ms | The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage. Else, delete tombstones might be collected before they complete their scan. | 86400000 (24 \* 60 \* 60 \* 1000 ms, that is, 1 day), Apache Kafka Default | 
| log.cleaner.min.compaction.lag.ms | The minimum time a message will remain uncompacted in the log. This setting is only applicable for logs that are being compacted. | 0, Apache Kafka Default | 
| log.cleaner.max.compaction.lag.ms | The maximum time a message will remain ineligible for compaction in the log. This setting is only applicable for logs that are being compacted. This configuration would be bounded in the range of [7 days, Long.Max]. | 9223372036854775807, Apache Kafka Default | 
|  log.cleanup.policy  |  The default cleanup policy for segments beyond the retention window. A comma-separated list of valid policies. Valid policies are `delete` and `compact`. For tiered storage-enabled clusters, valid policy is `delete` only.  | Apache Kafka Default | 
|  log.message.timestamp.after.max.ms  |  The allowable timestamp difference between the message timestamp and the broker's timestamp. The message timestamp can be later than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If `log.message.timestamp.type=CreateTime`, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if `log.message.timestamp.type=LogAppendTime`.  | 86400000 (24 \* 60 \* 60 \* 1000 ms, that is, 1 day) | 
|  log.message.timestamp.before.max.ms  |  The allowable timestamp difference between the broker's timestamp and the message timestamp. The message timestamp can be earlier than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If `log.message.timestamp.type=CreateTime`, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if `log.message.timestamp.type=LogAppendTime`.  | 86400000 (24 \* 60 \* 60 \* 1000 ms, that is, 1 day) | 
|  log.message.timestamp.type  |  Specifies if the timestamp in the message is the message creation time or the log append time. The allowed values are `CreateTime` and `LogAppendTime`.  | Apache Kafka Default | 
|  log.retention.bytes  |  Maximum size of the log before deleting it.  |  Apache Kafka Default  | 
|  log.retention.ms  |  Number of milliseconds to keep a log file before deleting it.  |  Apache Kafka Default  | 
|  max.connection.creation.rate  |  The maximum connection creation rate allowed in the broker at any time.  |  Apache Kafka Default  | 
|  max.connections  |  The maximum number of connections allowed in the broker at any time. This limit is applied in addition to any per-ip limits configured using `max.connections.per.ip`.  |  Apache Kafka Default  | 
|  max.connections.per.ip  |  The maximum number of connections allowed from each IP address. This can be set to `0` if there are overrides configured using the max.connections.per.ip.overrides property. New connections from the IP address are dropped if the limit is reached.  |  Apache Kafka Default  | 
|  max.connections.per.ip.overrides  |  A comma-separated list of per-IP or hostname overrides to the default maximum number of connections. An example value is `hostName:100,127.0.0.1:200`  | Apache Kafka Default | 
|  message.max.bytes  |  Largest record batch size that Kafka allows. If you increase this value and there are consumers older than 0.10.2, you must also increase the fetch size of the consumers so that they can fetch record batches this large. The latest message format version always groups messages into batches for efficiency. Previous message format versions don't group uncompressed records into batches, and in such a case, this limit only applies to a single record. You can set this value per topic with the topic level `max.message.bytes` config.  | Apache Kafka Default | 
|  producer.id.expiration.ms  |  The time in ms that a topic partition leader will wait before expiring producer IDs. Producer IDs will not expire while a transaction associated to them is still ongoing. Note that producer IDs may expire sooner if the last write from the producer ID is deleted due to the topic's retention settings. Setting this value the same or higher than `delivery.timeout.ms` can help prevent expiration during retries and protect against message duplication, but the default should be reasonable for most use cases.  | Apache Kafka Default | 
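
As an example of editing one of these properties at runtime, the following sketch builds the kafka-configs.sh invocation that would raise `max.connections` cluster-wide. The bootstrap string and `client.properties` file are placeholders for your own cluster and authentication settings; the sketch prints the command rather than executing it, because it requires a reachable cluster:

```shell
# Placeholder: substitute your cluster's bootstrap broker string.
BOOTSTRAP_SERVERS="b-1.mycluster.abc123.c2.kafka.us-east-1.amazonaws.com:9098"

CMD="kafka-configs.sh --bootstrap-server $BOOTSTRAP_SERVERS \
--command-config client.properties \
--entity-type brokers --entity-default \
--alter --add-config max.connections=2000"

# Printed here so the sketch runs without a live cluster; against a real
# cluster you would run the command itself.
echo "$CMD"
```

Using `--entity-default` applies the change at the cluster level; `--entity-name` with a broker ID targets a single broker instead.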

## Topic-level configurations on Express Brokers
<a name="msk-configuration-express-topic-configuration"></a>

You can use Apache Kafka commands to set or modify topic-level configuration properties for new and existing topics. If you don't set a topic-level configuration, Amazon MSK uses the broker default. As with broker-level configurations, Amazon MSK protects some of the topic-level configuration properties from change. Examples include replication factor, `min.insync.replicas`, and `unclean.leader.election.enable`. If you try to create a topic with a replication factor value other than `3`, Amazon MSK will create the topic with a replication factor of `3` by default. For more information on topic-level configuration properties and examples on how to set them, see [Topic-Level Configs](https://kafka.apache.org/documentation/#topicconfigs) in the Apache Kafka documentation.


| Property | Description | 
| --- | --- | 
|  cleanup.policy  |  This config designates the retention policy to use on log segments. The "delete" policy (which is the default) will discard old segments when their retention time or size limit has been reached. The "compact" policy will enable log compaction, which retains the latest value for each key. It is also possible to specify both policies in a comma-separated list (for example, "delete,compact"). In this case, old segments will be discarded per the retention time and size configuration, while retained segments will be compacted. Compaction on Express brokers is triggered after the data in a partition reaches 256 MB.  | 
|  compression.type  |  Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (`gzip`, `snappy`, `lz4`, `zstd`). It additionally accepts `uncompressed` which is equivalent to no compression; and `producer` which means retain the original compression codec set by the producer.  | 
| delete.retention.ms |  The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage. Else, delete tombstones might be collected before they complete their scan. The default value for this setting is 86400000 (24 \$1 60 \$1 60 \$1 1000 ms, that is, 1 day), Apache Kafka Default  | 
|  max.message.bytes  |  The largest record batch size allowed by Kafka (after compression, if compression is enabled). If this is increased and there are consumers older than `0.10.2`, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level `max.message.bytes` config.  | 
|  message.timestamp.after.max.ms  |  This configuration sets the allowable timestamp difference between the message timestamp and the broker's timestamp. The message timestamp can be later than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If `message.timestamp.type=CreateTime`, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if `message.timestamp.type=LogAppendTime`.  | 
|  message.timestamp.before.max.ms  |  This configuration sets the allowable timestamp difference between the broker's timestamp and the message timestamp. The message timestamp can be earlier than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If `message.timestamp.type=CreateTime`, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if `message.timestamp.type=LogAppendTime`.  | 
|  message.timestamp.type  |  Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`  | 
| min.compaction.lag.ms |  The minimum time a message will remain uncompacted in the log. This setting is only applicable for logs that are being compacted. The default value for this setting is 0, Apache Kafka Default  | 
| max.compaction.lag.ms |  The maximum time a message will remain ineligible for compaction in the log. This setting is only applicable for logs that are being compacted. This configuration would be bounded in the range of [7 days, Long.Max]. The default value for this setting is 9223372036854775807, Apache Kafka Default.  | 
|  retention.bytes  |  This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes. Additionally, the `retention.bytes` configuration operates independently of the `segment.ms` and `segment.bytes` configurations. Moreover, it triggers the rolling of a new segment if `retention.bytes` is configured to zero.  | 
|  retention.ms  |  This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to `-1`, no time limit is applied. Additionally, `retention.ms` configuration operates independently of `segment.ms` and `segment.bytes` configurations. Moreover, it triggers the rolling of new segment if the `retention.ms` condition is satisfied.  | 
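
Because `retention.bytes` is enforced per partition, the effective topic-level bound is the per-partition value multiplied by the partition count, as noted in the table. A quick illustration with made-up numbers:

```shell
# Hypothetical topic: 6 partitions, 1 GiB retention.bytes per partition.
retention_bytes=1073741824
partitions=6

# Topic-level retention is the per-partition limit times the partition count.
topic_retention_bytes=$((retention_bytes * partitions))
echo "$topic_retention_bytes"
```

With these illustrative values, the topic as a whole can hold up to 6 GiB before the "delete" policy starts discarding old segments.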

# Express brokers read-only configurations
<a name="msk-configuration-express-read-only"></a>

Amazon MSK sets the values for these configurations and protects them from changes that may affect the availability of your cluster. These values may change depending on the Apache Kafka version running on the cluster, so remember to check the values for your specific cluster.

The following table lists the read-only configurations for Express brokers.


| Property | Description | Express Broker Value | 
| --- | --- | --- | 
| broker.id | The broker id for this server. | 1,2,3... | 
| broker.rack | Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: `RACK1`, `us-east-1d` | AZ ID or Subnet ID | 
|  default.replication.factor  |  Default replication factors for all topics.  |  3  | 
| fetch.max.bytes | The maximum number of bytes we will return for a fetch request. | Apache Kafka Default | 
| group.max.size | The maximum number of consumers that a single consumer group can accommodate. | Apache Kafka Default | 
| inter.broker.listener.name | Name of listener used for communication between brokers. | REPLICATION\_SECURE or REPLICATION | 
| inter.broker.protocol.version | Specifies which version of the inter-broker protocol is used. | Apache Kafka Default | 
| listeners | Listener List - Comma-separated list of URIs we will listen on and the listener names. You can set the advertised.listeners property, but not the listeners property. | MSK-generated | 
| log.message.format.version | Specify the message format version the broker will use to append messages to the logs. | Apache Kafka Default | 
| min.insync.replicas | When a producer sets acks to `all` (or `-1`), the value in `min.insync.replicas` specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, the producer raises an exception (either `NotEnoughReplicas` or `NotEnoughReplicasAfterAppend`). You can use the value of acks from your producer together with this setting to enforce greater durability guarantees. For example, producing with acks set to `all` ensures that the producer raises an exception if a majority of replicas don't receive a write. | 2 | 
| num.io.threads | Number of threads that the server uses to process requests, which may include disk I/O: (m7g.large, 8), (m7g.xlarge, 8), (m7g.2xlarge, 16), (m7g.4xlarge, 32), (m7g.8xlarge, 64), (m7g.12xlarge, 96), (m7g.16xlarge, 128) | Based on instance type: Math.max(8, 2 \* vCPUs) | 
| num.network.threads | Number of threads that the server uses to receive requests from the network and send responses to the network: (m7g.large, 8), (m7g.xlarge, 8), (m7g.2xlarge, 8), (m7g.4xlarge, 16), (m7g.8xlarge, 32), (m7g.12xlarge, 48), (m7g.16xlarge, 64) | Based on instance type: Math.max(8, vCPUs) | 
| replica.fetch.response.max.bytes | The maximum number of bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure progress. This isn't an absolute maximum. The message.max.bytes (broker config) or max.message.bytes (topic config) properties specify the maximum record batch size that the broker accepts. | Apache Kafka Default | 
| request.timeout.ms | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted. | Apache Kafka Default | 
| transaction.state.log.min.isr | Overridden min.insync.replicas configuration for the transaction topic. | 2 | 
| transaction.state.log.replication.factor | The replication factor for the transaction topic. | Apache Kafka Default | 
| unclean.leader.election.enable | Allows replicas not in the ISR set to serve as leader as a last resort, even though this might result in data loss. | FALSE | 
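
The `num.io.threads` and `num.network.threads` values in the table follow the formulas Math.max(8, 2 \* vCPUs) and Math.max(8, vCPUs). The sketch below reproduces them; the vCPU count per m7g size is an assumption taken from EC2 instance specifications, not from this document:

```shell
# num.io.threads = max(8, 2 * vCPUs); num.network.threads = max(8, vCPUs)
io_threads()      { n=$((2 * $1)); [ "$n" -lt 8 ] && n=8; echo "$n"; }
network_threads() { n=$1;          [ "$n" -lt 8 ] && n=8; echo "$n"; }

# size:vCPUs pairs for m7g instances (assumed from EC2 specs)
for entry in large:2 xlarge:4 2xlarge:8 4xlarge:16 8xlarge:32 12xlarge:48 16xlarge:64; do
  size=${entry%%:*}
  vcpus=${entry##*:}
  echo "m7g.$size io=$(io_threads "$vcpus") network=$(network_threads "$vcpus")"
done
```

Note the floor of 8 in both formulas: the smaller instance sizes all get 8 threads even though the raw formula would yield fewer.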

# Broker configuration operations
<a name="msk-configuration-operations"></a>

Apache Kafka broker configurations are either static or dynamic. Static configurations require a broker restart for the configuration to be applied. Dynamic configurations do not need a broker restart for the configuration to be updated. For more information about configuration properties and update modes, see [Apache Kafka Configuration](https://kafka.apache.org/documentation/#configuration).

This topic describes how to create custom MSK configurations and how to perform operations on them. For information about how to use MSK configurations to create or update clusters, see [Amazon MSK key features and concepts](operations.md).

**Topics**
+ [

# Create a configuration
](msk-configuration-operations-create.md)
+ [

# Update configuration
](msk-configuration-operations-update.md)
+ [

# Delete configuration
](msk-configuration-operations-delete.md)
+ [

# Get configuration metadata
](msk-configuration-operations-describe.md)
+ [

# Get details about configuration revision
](msk-configuration-operations-describe-revision.md)
+ [

# List configurations in your account for the current Region
](msk-configuration-operations-list.md)
+ [

# Amazon MSK configuration states
](msk-configuration-states.md)

# Create a configuration
<a name="msk-configuration-operations-create"></a>

This process describes how to create a custom Amazon MSK configuration.

1. Create a file where you specify the configuration properties that you want to set and the values that you want to assign to them. The following are the contents of an example configuration file.

   ```
   auto.create.topics.enable = true
   
   log.roll.ms = 604800000
   ```

1. Run the following AWS CLI command, and replace *config-file-path* with the path to the file where you saved your configuration in the previous step.
**Note**  
The name that you choose for your configuration must match the following regex: "^[0-9A-Za-z][0-9A-Za-z-]{0,}$".

   ```
   aws kafka create-configuration --name "ExampleConfigurationName" --description "Example configuration description." --kafka-versions "1.1.1" --server-properties fileb://config-file-path
   ```

   The following is an example of a successful response after you run this command.

   ```
   {
       "Arn": "arn:aws:kafka:us-east-1:123456789012:configuration/SomeTest/abcdabcd-1234-abcd-1234-abcd123e8e8e-1",
       "CreationTime": "2019-05-21T19:37:40.626Z",
       "LatestRevision": {
           "CreationTime": "2019-05-21T19:37:40.626Z",
           "Description": "Example configuration description.",
           "Revision": 1
       },
       "Name": "ExampleConfigurationName"
   }
   ```

1. The previous command returns an Amazon Resource Name (ARN) for your new configuration. Save this ARN because you need it to refer to this configuration in other commands. If you lose your configuration ARN, you can list all the configurations in your account to find it again.
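
Before calling `create-configuration`, you can check a proposed name locally against the documented pattern (a leading alphanumeric character followed by any number of alphanumerics or hyphens). A small sketch; the helper function name is our own, not part of the AWS CLI:

```shell
# Returns success when the name matches ^[0-9A-Za-z][0-9A-Za-z-]{0,}$
valid_config_name() {
  printf '%s' "$1" | grep -Eq '^[0-9A-Za-z][0-9A-Za-z-]{0,}$'
}

valid_config_name "ExampleConfigurationName" && echo "valid"
valid_config_name "-starts-with-hyphen" || echo "invalid"
```

Checking the name up front avoids a round trip that would fail with a validation error.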

# Update configuration
<a name="msk-configuration-operations-update"></a>

This process describes how to update a custom Amazon MSK configuration.

1. Create a file where you specify the configuration properties that you want to update and the values that you want to assign to them. The following are the contents of an example configuration file.

   ```
   auto.create.topics.enable = true
   
   min.insync.replicas = 2
   ```

1. Run the following AWS CLI command, and replace *config-file-path* with the path to the file where you saved your configuration in the previous step.

   Replace *configuration-arn* with the ARN that you obtained when you created the configuration. If you didn't save the ARN when you created the configuration, you can use the `list-configurations` command to list all configurations in your account. The configuration that you want appears in the response, along with its ARN.

   ```
   aws kafka update-configuration --arn configuration-arn --description "Example configuration revision description." --server-properties fileb://config-file-path
   ```

1. The following is an example of a successful response after you run this command.

   ```
   {
       "Arn": "arn:aws:kafka:us-east-1:123456789012:configuration/SomeTest/abcdabcd-1234-abcd-1234-abcd123e8e8e-1",
       "LatestRevision": {
           "CreationTime": "2020-08-27T19:37:40.626Z",
           "Description": "Example configuration revision description.",
           "Revision": 2
       }
   }
   ```

# Delete configuration
<a name="msk-configuration-operations-delete"></a>

The following procedure shows how to delete a configuration that isn't attached to a cluster. You can't delete a configuration that's attached to a cluster.

1. To run this example, replace *configuration-arn* with the ARN that you obtained when you created the configuration. If you didn't save the ARN when you created the configuration, you can use the `list-configurations` command to list all configurations in your account. The configuration that you want appears in the response, along with its ARN.

   ```
   aws kafka delete-configuration --arn configuration-arn
   ```

1. The following is an example of a successful response after you run this command.

   ```
   {
       "arn": " arn:aws:kafka:us-east-1:123456789012:configuration/SomeTest/abcdabcd-1234-abcd-1234-abcd123e8e8e-1",
       "state": "DELETING"
   }
   ```

# Get configuration metadata
<a name="msk-configuration-operations-describe"></a>

The following procedure shows how to describe an Amazon MSK configuration to get metadata about the configuration.

1. The following command returns metadata about the configuration. To get a detailed description of the configuration, run the `describe-configuration-revision` command.

   To run this example, replace *configuration-arn* with the ARN that you obtained when you created the configuration. If you didn't save the ARN when you created the configuration, you can use the `list-configurations` command to list all configurations in your account. The configuration that you want appears in the response, along with its ARN.

   ```
   aws kafka describe-configuration --arn configuration-arn
   ```

1. The following is an example of a successful response after you run this command.

   ```
   {
       "Arn": "arn:aws:kafka:us-east-1:123456789012:configuration/SomeTest/abcdabcd-abcd-1234-abcd-abcd123e8e8e-1",
       "CreationTime": "2019-05-21T00:54:23.591Z",
       "Description": "Example configuration description.",
       "KafkaVersions": [
           "1.1.1"
       ],
       "LatestRevision": {
           "CreationTime": "2019-05-21T00:54:23.591Z",
           "Description": "Example configuration description.",
           "Revision": 1
       },
       "Name": "SomeTest"
   }
   ```

# Get details about configuration revision
<a name="msk-configuration-operations-describe-revision"></a>

This process gets you a detailed description of the Amazon MSK configuration revision.

If you use the `describe-configuration` command to describe an MSK configuration, you see the metadata of the configuration. To get a detailed description of the configuration, use the `describe-configuration-revision` command.
+ Run the following command and replace *configuration-arn* with the ARN that you obtained when you created the configuration. If you didn't save the ARN when you created the configuration, you can use the `list-configurations` command to list all configurations in your account. The configuration that you want appears in the response, along with its ARN.

  ```
  aws kafka describe-configuration-revision --arn configuration-arn --revision 1
  ```

  The following is an example of a successful response after you run this command.

  ```
  {
      "Arn": "arn:aws:kafka:us-east-1:123456789012:configuration/SomeTest/abcdabcd-abcd-1234-abcd-abcd123e8e8e-1",
      "CreationTime": "2019-05-21T00:54:23.591Z",
      "Description": "Example configuration description.",
      "Revision": 1,
      "ServerProperties": "YXV0by5jcmVhdGUudG9waWNzLmVuYWJsZSA9IHRydWUKCgp6b29rZWVwZXIuY29ubmVjdGlvbi50aW1lb3V0Lm1zID0gMTAwMAoKCmxvZy5yb2xsLm1zID0gNjA0ODAwMDAw"
  }
  ```

  The value of `ServerProperties` is encoded with base64. If you use a base64 decoder (for example, https://www.base64decode.org/) to decode it manually, you get the contents of the original configuration file that you used to create the custom configuration. In this case, you get the following:

  ```
  auto.create.topics.enable = true
  
  zookeeper.connection.timeout.ms = 1000
  
  log.roll.ms = 604800000
  ```
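
  You can also decode the value directly from the command line. The following uses the `ServerProperties` string from the sample response above:

  ```shell
  # ServerProperties value copied from the sample describe-configuration-revision response.
  props_b64="YXV0by5jcmVhdGUudG9waWNzLmVuYWJsZSA9IHRydWUKCgp6b29rZWVwZXIuY29ubmVjdGlvbi50aW1lb3V0Lm1zID0gMTAwMAoKCmxvZy5yb2xsLm1zID0gNjA0ODAwMDAw"

  # Decode it back into the original configuration file contents.
  printf '%s' "$props_b64" | base64 --decode
  ```

  With the AWS CLI, you can pipe the field straight into the decoder, for example `aws kafka describe-configuration-revision --arn configuration-arn --revision 1 --query ServerProperties --output text | base64 --decode`.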

# List configurations in your account for the current Region
<a name="msk-configuration-operations-list"></a>

This process describes how to list all Amazon MSK configurations in your account for the current AWS Region.
+ Run the following command.

  ```
  aws kafka list-configurations
  ```

  The following is an example of a successful response after you run this command.

  ```
  {
      "Configurations": [
          {
              "Arn": "arn:aws:kafka:us-east-1:123456789012:configuration/SomeTest/abcdabcd-abcd-1234-abcd-abcd123e8e8e-1",
              "CreationTime": "2019-05-21T00:54:23.591Z",
              "Description": "Example configuration description.",
              "KafkaVersions": [
                  "1.1.1"
              ],
              "LatestRevision": {
                  "CreationTime": "2019-05-21T00:54:23.591Z",
                  "Description": "Example configuration description.",
                  "Revision": 1
              },
              "Name": "SomeTest"
          },
          {
              "Arn": "arn:aws:kafka:us-east-1:123456789012:configuration/SomeTest/abcdabcd-1234-abcd-1234-abcd123e8e8e-1",
              "CreationTime": "2019-05-03T23:08:29.446Z",
              "Description": "Example configuration description.",
              "KafkaVersions": [
                  "1.1.1"
              ],
              "LatestRevision": {
                  "CreationTime": "2019-05-03T23:08:29.446Z",
                  "Description": "Example configuration description.",
                  "Revision": 1
              },
              "Name": "ExampleConfigurationName"
          }
      ]
  }
  ```

# Amazon MSK configuration states
<a name="msk-configuration-states"></a>

An Amazon MSK configuration can be in one of the following states. To perform an operation on a configuration, the configuration must be in the `ACTIVE` or `DELETE_FAILED` state:
+ `ACTIVE`
+ `DELETING`
+ `DELETE_FAILED`