Interface CfnEndpoint.KafkaSettingsProperty
- All Superinterfaces:
software.amazon.jsii.JsiiSerializable
- All Known Implementing Classes:
CfnEndpoint.KafkaSettingsProperty.Jsii$Proxy
- Enclosing class:
CfnEndpoint
Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data information. For more information about other available settings, see Using object mapping to migrate data to a Kafka topic in the AWS Database Migration Service User Guide.
Example:
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import software.amazon.awscdk.services.dms.*;

KafkaSettingsProperty kafkaSettingsProperty = KafkaSettingsProperty.builder()
        .broker("broker")
        .includeControlDetails(false)
        .includeNullAndEmpty(false)
        .includePartitionValue(false)
        .includeTableAlterOperations(false)
        .includeTransactionDetails(false)
        .messageFormat("messageFormat")
        .messageMaxBytes(123)
        .noHexPrefix(false)
        .partitionIncludeSchemaTable(false)
        .saslPassword("saslPassword")
        .saslUserName("saslUserName")
        .securityProtocol("securityProtocol")
        .sslCaCertificateArn("sslCaCertificateArn")
        .sslClientCertificateArn("sslClientCertificateArn")
        .sslClientKeyArn("sslClientKeyArn")
        .sslClientKeyPassword("sslClientKeyPassword")
        .topic("topic")
        .build();
- See Also:
-
Nested Class Summary
Modifier and Type    Interface    Description
static final class   CfnEndpoint.KafkaSettingsProperty.Builder
                         A builder for CfnEndpoint.KafkaSettingsProperty.
static final class   CfnEndpoint.KafkaSettingsProperty.Jsii$Proxy
                         An implementation for CfnEndpoint.KafkaSettingsProperty.
-
Method Summary
Modifier and Type    Method    Description
static CfnEndpoint.KafkaSettingsProperty.Builder    builder()
default String    getBroker()
    A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance.
default Object    getIncludeControlDetails()
    Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output.
default Object    getIncludeNullAndEmpty()
    Include NULL and empty columns for records migrated to the endpoint.
default Object    getIncludePartitionValue()
    Shows the partition value within the Kafka message output unless the partition type is schema-table-type.
default Object    getIncludeTableAlterOperations()
    Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column.
default Object    getIncludeTransactionDetails()
    Provides detailed transaction information from the source database.
default String    getMessageFormat()
    The output format for the records created on the endpoint.
default Number    getMessageMaxBytes()
    The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
default Object    getNoHexPrefix()
    Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format.
default Object    getPartitionIncludeSchemaTable()
    Prefixes schema and table names to partition values, when the partition type is primary-key-type.
default String    getSaslPassword()
    The secure password that you created when you first set up your Amazon MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
default String    getSaslUserName()
    The secure user name you created when you first set up your Amazon MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
default String    getSecurityProtocol()
    Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS).
default String    getSslCaCertificateArn()
    The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that AWS DMS uses to securely connect to your Kafka target endpoint.
default String    getSslClientCertificateArn()
    The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
default String    getSslClientKeyArn()
    The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
default String    getSslClientKeyPassword()
    The password for the client private key used to securely connect to a Kafka target endpoint.
default String    getTopic()
    The topic to which you migrate the data.

Methods inherited from interface software.amazon.jsii.JsiiSerializable
$jsii$toJson
-
Method Details
-
getBroker

default String getBroker()

A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance.

Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for AWS Database Migration Service in the AWS Database Migration Service User Guide.

- See Also:
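As an illustrative sketch only (the hostnames, ports, and topic below are placeholder values, not part of the official documentation), a comma-separated broker list might be passed to the builder like this:

```java
import software.amazon.awscdk.services.dms.CfnEndpoint;

public class BrokerListExample {
    public static void main(String[] args) {
        // Placeholder endpoints; substitute your cluster's
        // broker-hostname-or-ip:port pairs, comma-separated.
        CfnEndpoint.KafkaSettingsProperty settings =
                CfnEndpoint.KafkaSettingsProperty.builder()
                        .broker("broker1.example.com:9092,broker2.example.com:9092")
                        .topic("my-migration-topic")
                        .build();
    }
}
```

This is a configuration fragment for use inside a CDK app; it requires the aws-cdk-lib dependency on the classpath.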
-
getIncludeControlDetails

default Object getIncludeControlDetails()

Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

- See Also:
-
getIncludeNullAndEmpty

default Object getIncludeNullAndEmpty()

Include NULL and empty columns for records migrated to the endpoint. The default is false.

- See Also:
-
getIncludePartitionValue

default Object getIncludePartitionValue()

Shows the partition value within the Kafka message output unless the partition type is schema-table-type. The default is false.

- See Also:
-
getIncludeTableAlterOperations

default Object getIncludeTableAlterOperations()

Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

- See Also:
-
getIncludeTransactionDetails

default Object getIncludeTransactionDetails()

Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

- See Also:
-
getMessageFormat

default String getMessageFormat()

The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

- See Also:
-
getMessageMaxBytes

default Number getMessageMaxBytes()

The maximum size in bytes for records created on the endpoint. The default is 1,000,000.

- See Also:
-
getNoHexPrefix

default Object getNoHexPrefix()

Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format.

For example, by default, AWS DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.

- See Also:
-
getPartitionIncludeSchemaTable

default Object getPartitionIncludeSchemaTable()

Prefixes schema and table names to partition values, when the partition type is primary-key-type.

Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

- See Also:
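As a hedged sketch of the combination described above, enabling partition values and prefixing them with schema and table names might look like this (assuming the builder methods shown in the example at the top of this page):

```java
import software.amazon.awscdk.services.dms.CfnEndpoint;

public class PartitionDistributionExample {
    public static void main(String[] args) {
        // Spread records from many tables across Kafka partitions by
        // prefixing schema/table names to primary-key partition values.
        CfnEndpoint.KafkaSettingsProperty settings =
                CfnEndpoint.KafkaSettingsProperty.builder()
                        .includePartitionValue(true)
                        .partitionIncludeSchemaTable(true)
                        .build();
    }
}
```

This is a configuration fragment for a CDK app and requires the aws-cdk-lib dependency.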
-
getSaslPassword

default String getSaslPassword()

The secure password that you created when you first set up your Amazon MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.

- See Also:
-
getSaslUserName

default String getSaslUserName()

The secure user name you created when you first set up your Amazon MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.

- See Also:
-
getSecurityProtocol

default String getSecurityProtocol()

Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.

- See Also:
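A minimal sketch of a sasl-ssl configuration, assuming credentials already created for an Amazon MSK cluster (the user name and password below are placeholders):

```java
import software.amazon.awscdk.services.dms.CfnEndpoint;

public class SaslSslExample {
    public static void main(String[] args) {
        // sasl-ssl requires both SaslUserName and SaslPassword.
        // Placeholder credentials; in practice, avoid hard-coding
        // secrets in source code.
        CfnEndpoint.KafkaSettingsProperty settings =
                CfnEndpoint.KafkaSettingsProperty.builder()
                        .securityProtocol("sasl-ssl")
                        .saslUserName("msk-user")
                        .saslPassword("msk-password")
                        .build();
    }
}
```

This is a configuration fragment for a CDK app and requires the aws-cdk-lib dependency.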
-
getSslCaCertificateArn

default String getSslCaCertificateArn()

The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that AWS DMS uses to securely connect to your Kafka target endpoint.

- See Also:
-
getSslClientCertificateArn

default String getSslClientCertificateArn()

The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.

- See Also:
-
getSslClientKeyArn

default String getSslClientKeyArn()

The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.

- See Also:
-
getSslClientKeyPassword

default String getSslClientKeyPassword()

The password for the client private key used to securely connect to a Kafka target endpoint.

- See Also:
-
getTopic

default String getTopic()

The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.

- See Also:
-
builder

static CfnEndpoint.KafkaSettingsProperty.Builder builder()

- Returns:
a CfnEndpoint.KafkaSettingsProperty.Builder of CfnEndpoint.KafkaSettingsProperty