Package software.amazon.awscdk.services.msk.alpha
Amazon Managed Streaming for Apache Kafka Construct Library
---
The APIs of higher level constructs in this module are experimental and under active development. They are subject to non-backward compatible changes or removal in any future version. These are not subject to the Semantic Versioning model and breaking changes will be announced in the release notes. This means that while you may use them, you may need to update your source code when upgrading to a newer version of this package.
Amazon MSK is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data.
The following example creates an MSK Cluster.
```java
Vpc vpc;

Cluster cluster = Cluster.Builder.create(this, "Cluster")
        .clusterName("myCluster")
        .kafkaVersion(KafkaVersion.V2_8_1)
        .vpc(vpc)
        .build();
```
Allowing Connections
To control who can access the Cluster, use the .connections
attribute. For a list of ports used by MSK, refer to the MSK documentation.
```java
Vpc vpc;

Cluster cluster = Cluster.Builder.create(this, "Cluster")
        .clusterName("myCluster")
        .kafkaVersion(KafkaVersion.V2_8_1)
        .vpc(vpc)
        .build();

cluster.connections.allowFrom(Peer.ipv4("1.2.3.4/8"), Port.tcp(2181));
cluster.connections.allowFrom(Peer.ipv4("1.2.3.4/8"), Port.tcp(9094));
```
Cluster Endpoints
You can use the following attributes to get a list of the Kafka broker or ZooKeeper node endpoints:
```java
Cluster cluster;

CfnOutput.Builder.create(this, "BootstrapBrokers").value(cluster.getBootstrapBrokers()).build();
CfnOutput.Builder.create(this, "BootstrapBrokersTls").value(cluster.getBootstrapBrokersTls()).build();
CfnOutput.Builder.create(this, "BootstrapBrokersSaslScram").value(cluster.getBootstrapBrokersSaslScram()).build();
CfnOutput.Builder.create(this, "BootstrapBrokerStringSaslIam").value(cluster.getBootstrapBrokersSaslIam()).build();
CfnOutput.Builder.create(this, "ZookeeperConnection").value(cluster.getZookeeperConnectionString()).build();
CfnOutput.Builder.create(this, "ZookeeperConnectionTls").value(cluster.getZookeeperConnectionStringTls()).build();
```
Importing an existing Cluster
To import an existing MSK cluster into your CDK app, use the .fromClusterArn()
method.
```java
ICluster cluster = Cluster.fromClusterArn(this, "Cluster",
        "arn:aws:kafka:us-west-2:1234567890:cluster/a-cluster/11111111-1111-1111-1111-111111111111-1");
```
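An imported cluster is an ICluster, so it exposes the interface's attributes. A small sketch, assuming only the clusterArn and clusterName attributes that ICluster declares:

```java
// Sketch: referencing an imported cluster elsewhere in the app.
ICluster cluster = Cluster.fromClusterArn(this, "Cluster",
        "arn:aws:kafka:us-west-2:1234567890:cluster/a-cluster/11111111-1111-1111-1111-111111111111-1");

// The imported cluster's name and ARN can be used like any other attribute,
// e.g. surfaced as stack outputs.
CfnOutput.Builder.create(this, "ImportedClusterName").value(cluster.getClusterName()).build();
CfnOutput.Builder.create(this, "ImportedClusterArn").value(cluster.getClusterArn()).build();
```

Note that an imported cluster only carries the attributes encoded in its ARN; runtime details such as bootstrap broker endpoints are not available on it.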
Client Authentication
MSK supports the following authentication mechanisms.
TLS
To enable client authentication with TLS, set the certificateAuthorities
property to reference your ACM Private CA. More info on Private CAs.
```java
import software.amazon.awscdk.services.acmpca.*;

Vpc vpc;

Cluster cluster = Cluster.Builder.create(this, "Cluster")
        .clusterName("myCluster")
        .kafkaVersion(KafkaVersion.V2_8_1)
        .vpc(vpc)
        .encryptionInTransit(EncryptionInTransitConfig.builder()
                .clientBroker(ClientBrokerEncryption.TLS)
                .build())
        .clientAuthentication(ClientAuthentication.tls(TlsAuthProps.builder()
                .certificateAuthorities(List.of(CertificateAuthority.fromCertificateAuthorityArn(this, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111")))
                .build()))
        .build();
```
SASL/SCRAM
Enable client authentication with SASL/SCRAM:
```java
Vpc vpc;

Cluster cluster = Cluster.Builder.create(this, "cluster")
        .clusterName("myCluster")
        .kafkaVersion(KafkaVersion.V2_8_1)
        .vpc(vpc)
        .encryptionInTransit(EncryptionInTransitConfig.builder()
                .clientBroker(ClientBrokerEncryption.TLS)
                .build())
        .clientAuthentication(ClientAuthentication.sasl(SaslAuthProps.builder()
                .scram(true)
                .build()))
        .build();
```
SASL/IAM
Enable client authentication with IAM:
```java
Vpc vpc;

Cluster cluster = Cluster.Builder.create(this, "cluster")
        .clusterName("myCluster")
        .kafkaVersion(KafkaVersion.V2_8_1)
        .vpc(vpc)
        .encryptionInTransit(EncryptionInTransitConfig.builder()
                .clientBroker(ClientBrokerEncryption.TLS)
                .build())
        .clientAuthentication(ClientAuthentication.sasl(SaslAuthProps.builder()
                .iam(true)
                .build()))
        .build();
```
SASL/IAM + TLS
Enable client authentication with IAM as well as with TLS by setting the certificateAuthorities
property to reference your ACM Private CA. More info on Private CAs.
```java
import software.amazon.awscdk.services.acmpca.*;

Vpc vpc;

Cluster cluster = Cluster.Builder.create(this, "Cluster")
        .clusterName("myCluster")
        .kafkaVersion(KafkaVersion.V2_8_1)
        .vpc(vpc)
        .encryptionInTransit(EncryptionInTransitConfig.builder()
                .clientBroker(ClientBrokerEncryption.TLS)
                .build())
        .clientAuthentication(ClientAuthentication.saslTls(SaslTlsAuthProps.builder()
                .iam(true)
                .certificateAuthorities(List.of(CertificateAuthority.fromCertificateAuthorityArn(this, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111")))
                .build()))
        .build();
```
Logging
You can deliver Apache Kafka broker logs to one or more of the following destination types: Amazon CloudWatch Logs, Amazon S3, Amazon Kinesis Data Firehose.
To configure logs to be sent to an S3 bucket, provide a bucket in the logging
config.
```java
Vpc vpc;
IBucket bucket;

Cluster cluster = Cluster.Builder.create(this, "cluster")
        .clusterName("myCluster")
        .kafkaVersion(KafkaVersion.V2_8_1)
        .vpc(vpc)
        .logging(BrokerLogging.builder()
                .s3(S3LoggingConfiguration.builder()
                        .bucket(bucket)
                        .build())
                .build())
        .build();
```
When the S3 destination is configured, AWS will automatically create an S3 bucket policy
that allows the service to write logs to the bucket. This makes it impossible to later update
that bucket policy. To have CDK create the bucket policy so that future updates can be made,
the @aws-cdk/aws-s3:createDefaultLoggingPolicy
feature flag can be used. This can be set
in the cdk.json
file.
```json
{
  "context": {
    "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true
  }
}
```
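The other two destination types are configured through the same logging config. A minimal sketch, assuming the cloudwatchLogGroup and firehoseDeliveryStream properties of BrokerLogging, which mirror the S3 option above (the stream name is a placeholder):

```java
import software.amazon.awscdk.services.logs.ILogGroup;

Vpc vpc;
ILogGroup logGroup;

// Sketch: delivering broker logs to a CloudWatch log group and an existing
// Kinesis Data Firehose delivery stream (referenced by name).
Cluster cluster = Cluster.Builder.create(this, "cluster")
        .clusterName("myCluster")
        .kafkaVersion(KafkaVersion.V2_8_1)
        .vpc(vpc)
        .logging(BrokerLogging.builder()
                .cloudwatchLogGroup(logGroup)
                .firehoseDeliveryStream("my-delivery-stream")
                .build())
        .build();
```

Any combination of the three destinations can be enabled at once; verify the property names against the module version you are using, as this module is experimental.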
Storage Mode
You can configure an MSK cluster storage mode using the storageMode
property.
Tiered storage is a low-cost storage tier for Amazon MSK that scales to virtually unlimited storage, making it cost-effective to build streaming data applications.
Visit Tiered storage to see the list of compatible Kafka versions and for more details.
```java
Vpc vpc;

Cluster cluster = Cluster.Builder.create(this, "cluster")
        .clusterName("myCluster")
        .kafkaVersion(KafkaVersion.V3_6_0)
        .vpc(vpc)
        .storageMode(StorageMode.TIERED)
        .build();
```
Class Summary

| Class | Description |
| --- | --- |
| BrokerLogging | (experimental) Configuration details related to broker logs. |
| BrokerLogging.Builder | A builder for BrokerLogging |
| BrokerLogging.Jsii$Proxy | An implementation for BrokerLogging |
| ClientAuthentication | (experimental) Configuration properties for client authentication. |
| ClientBrokerEncryption | (experimental) Indicates the encryption setting for data in transit between clients and brokers. |
| Cluster | (experimental) Create a MSK Cluster. |
| Cluster.Builder | (experimental) A fluent builder for Cluster. |
| ClusterConfigurationInfo | (experimental) The Amazon MSK configuration to use for the cluster. |
| ClusterConfigurationInfo.Builder | A builder for ClusterConfigurationInfo |
| ClusterConfigurationInfo.Jsii$Proxy | An implementation for ClusterConfigurationInfo |
| ClusterMonitoringLevel | (experimental) The level of monitoring for the MSK cluster. |
| ClusterProps | (experimental) Properties for a MSK Cluster. |
| ClusterProps.Builder | A builder for ClusterProps |
| ClusterProps.Jsii$Proxy | An implementation for ClusterProps |
| EbsStorageInfo | (experimental) EBS volume information. |
| EbsStorageInfo.Builder | A builder for EbsStorageInfo |
| EbsStorageInfo.Jsii$Proxy | An implementation for EbsStorageInfo |
| EncryptionInTransitConfig | (experimental) The settings for encrypting data in transit. |
| EncryptionInTransitConfig.Builder | A builder for EncryptionInTransitConfig |
| EncryptionInTransitConfig.Jsii$Proxy | An implementation for EncryptionInTransitConfig |
| ICluster | (experimental) Represents a MSK Cluster. |
| ICluster.Jsii$Default | Internal default implementation for ICluster. |
| ICluster.Jsii$Proxy | A proxy class which represents a concrete javascript instance of this type. |
| KafkaVersion | (experimental) Kafka cluster version. |
| MonitoringConfiguration | (experimental) Monitoring Configuration. |
| MonitoringConfiguration.Builder | A builder for MonitoringConfiguration |
| MonitoringConfiguration.Jsii$Proxy | An implementation for MonitoringConfiguration |
| S3LoggingConfiguration | (experimental) Details of the Amazon S3 destination for broker logs. |
| S3LoggingConfiguration.Builder | A builder for S3LoggingConfiguration |
| S3LoggingConfiguration.Jsii$Proxy | An implementation for S3LoggingConfiguration |
| SaslAuthProps | (experimental) SASL authentication properties. |
| SaslAuthProps.Builder | A builder for SaslAuthProps |
| SaslAuthProps.Jsii$Proxy | An implementation for SaslAuthProps |
| SaslTlsAuthProps | (experimental) SASL + TLS authentication properties. |
| SaslTlsAuthProps.Builder | A builder for SaslTlsAuthProps |
| SaslTlsAuthProps.Jsii$Proxy | An implementation for SaslTlsAuthProps |
| StorageMode | (experimental) The storage mode for the cluster brokers. |
| TlsAuthProps | (experimental) TLS authentication properties. |
| TlsAuthProps.Builder | A builder for TlsAuthProps |
| TlsAuthProps.Jsii$Proxy | An implementation for TlsAuthProps |