Interface CfnPipe.PipeSourceParametersProperty
- All Superinterfaces:
software.amazon.jsii.JsiiSerializable
- All Known Implementing Classes:
CfnPipe.PipeSourceParametersProperty.Jsii$Proxy
- Enclosing class:
CfnPipe
@Stability(Stable)
public static interface CfnPipe.PipeSourceParametersProperty
extends software.amazon.jsii.JsiiSerializable
The parameters required to set up a source for your pipe.
Example:
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import software.amazon.awscdk.services.pipes.*;

PipeSourceParametersProperty pipeSourceParametersProperty = PipeSourceParametersProperty.builder()
        .activeMqBrokerParameters(PipeSourceActiveMQBrokerParametersProperty.builder()
                .credentials(MQBrokerAccessCredentialsProperty.builder()
                        .basicAuth("basicAuth")
                        .build())
                .queueName("queueName")
                // the properties below are optional
                .batchSize(123)
                .maximumBatchingWindowInSeconds(123)
                .build())
        .dynamoDbStreamParameters(PipeSourceDynamoDBStreamParametersProperty.builder()
                .startingPosition("startingPosition")
                // the properties below are optional
                .batchSize(123)
                .deadLetterConfig(DeadLetterConfigProperty.builder()
                        .arn("arn")
                        .build())
                .maximumBatchingWindowInSeconds(123)
                .maximumRecordAgeInSeconds(123)
                .maximumRetryAttempts(123)
                .onPartialBatchItemFailure("onPartialBatchItemFailure")
                .parallelizationFactor(123)
                .build())
        .filterCriteria(FilterCriteriaProperty.builder()
                .filters(List.of(FilterProperty.builder()
                        .pattern("pattern")
                        .build()))
                .build())
        .kinesisStreamParameters(PipeSourceKinesisStreamParametersProperty.builder()
                .startingPosition("startingPosition")
                // the properties below are optional
                .batchSize(123)
                .deadLetterConfig(DeadLetterConfigProperty.builder()
                        .arn("arn")
                        .build())
                .maximumBatchingWindowInSeconds(123)
                .maximumRecordAgeInSeconds(123)
                .maximumRetryAttempts(123)
                .onPartialBatchItemFailure("onPartialBatchItemFailure")
                .parallelizationFactor(123)
                .startingPositionTimestamp("startingPositionTimestamp")
                .build())
        .managedStreamingKafkaParameters(PipeSourceManagedStreamingKafkaParametersProperty.builder()
                .topicName("topicName")
                // the properties below are optional
                .batchSize(123)
                .consumerGroupId("consumerGroupId")
                .credentials(MSKAccessCredentialsProperty.builder()
                        .clientCertificateTlsAuth("clientCertificateTlsAuth")
                        .saslScram512Auth("saslScram512Auth")
                        .build())
                .maximumBatchingWindowInSeconds(123)
                .startingPosition("startingPosition")
                .build())
        .rabbitMqBrokerParameters(PipeSourceRabbitMQBrokerParametersProperty.builder()
                .credentials(MQBrokerAccessCredentialsProperty.builder()
                        .basicAuth("basicAuth")
                        .build())
                .queueName("queueName")
                // the properties below are optional
                .batchSize(123)
                .maximumBatchingWindowInSeconds(123)
                .virtualHost("virtualHost")
                .build())
        .selfManagedKafkaParameters(PipeSourceSelfManagedKafkaParametersProperty.builder()
                .topicName("topicName")
                // the properties below are optional
                .additionalBootstrapServers(List.of("additionalBootstrapServers"))
                .batchSize(123)
                .consumerGroupId("consumerGroupId")
                .credentials(SelfManagedKafkaAccessConfigurationCredentialsProperty.builder()
                        .basicAuth("basicAuth")
                        .clientCertificateTlsAuth("clientCertificateTlsAuth")
                        .saslScram256Auth("saslScram256Auth")
                        .saslScram512Auth("saslScram512Auth")
                        .build())
                .maximumBatchingWindowInSeconds(123)
                .serverRootCaCertificate("serverRootCaCertificate")
                .startingPosition("startingPosition")
                .vpc(SelfManagedKafkaAccessConfigurationVpcProperty.builder()
                        .securityGroup(List.of("securityGroup"))
                        .subnets(List.of("subnets"))
                        .build())
                .build())
        .sqsQueueParameters(PipeSourceSqsQueueParametersProperty.builder()
                .batchSize(123)
                .maximumBatchingWindowInSeconds(123)
                .build())
        .build();
Nested Class Summary
static final class CfnPipe.PipeSourceParametersProperty.Builder
A builder for CfnPipe.PipeSourceParametersProperty
static final class CfnPipe.PipeSourceParametersProperty.Jsii$Proxy
An implementation for CfnPipe.PipeSourceParametersProperty
Method Summary
static CfnPipe.PipeSourceParametersProperty.Builder builder()
default Object getActiveMqBrokerParameters()
The parameters for using an Active MQ broker as a source.
default Object getDynamoDbStreamParameters()
The parameters for using a DynamoDB stream as a source.
default Object getFilterCriteria()
The collection of event patterns used to filter events.
default Object getKinesisStreamParameters()
The parameters for using a Kinesis stream as a source.
default Object getManagedStreamingKafkaParameters()
The parameters for using an MSK stream as a source.
default Object getRabbitMqBrokerParameters()
The parameters for using a Rabbit MQ broker as a source.
default Object getSelfManagedKafkaParameters()
The parameters for using a self-managed Apache Kafka stream as a source.
default Object getSqsQueueParameters()
The parameters for using an Amazon SQS queue as a source.
Methods inherited from interface software.amazon.jsii.JsiiSerializable:
$jsii$toJson
Method Details
getActiveMqBrokerParameters
The parameters for using an Active MQ broker as a source.
getDynamoDbStreamParameters
The parameters for using a DynamoDB stream as a source.
getFilterCriteria
The collection of event patterns used to filter events.
To remove a filter, specify a FilterCriteria object with an empty array of Filter objects.
For more information, see Events and Event Patterns in the Amazon EventBridge User Guide.
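The filter-removal behavior described above can be sketched with the builders from this module. This is a minimal fragment rather than a full stack definition, and the event pattern shown is a placeholder, not a value from this page:

```java
import software.amazon.awscdk.services.pipes.CfnPipe;

import java.util.List;

public class FilterCriteriaSketch {
    public static void main(String[] args) {
        // Passing an empty filters list removes any previously configured
        // filter, so the pipe once again receives all events from the source.
        CfnPipe.FilterCriteriaProperty clearFilters =
                CfnPipe.FilterCriteriaProperty.builder()
                        .filters(List.of())
                        .build();

        // By contrast, a non-empty pattern restricts which events are
        // delivered. The pattern string here is a placeholder example.
        CfnPipe.FilterCriteriaProperty onlyOrders =
                CfnPipe.FilterCriteriaProperty.builder()
                        .filters(List.of(CfnPipe.FilterProperty.builder()
                                .pattern("{\"source\":[\"my.app.orders\"]}")
                                .build()))
                        .build();
    }
}
```

Either property value would then be passed to `filterCriteria(...)` on the PipeSourceParametersProperty builder, as in the example at the top of this page.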
getKinesisStreamParameters
The parameters for using a Kinesis stream as a source.
getManagedStreamingKafkaParameters
The parameters for using an MSK stream as a source.
getRabbitMqBrokerParameters
The parameters for using a Rabbit MQ broker as a source.
getSelfManagedKafkaParameters
The parameters for using a self-managed Apache Kafka stream as a source.
A self-managed cluster refers to any Apache Kafka cluster not hosted by AWS. This includes both clusters you manage yourself and those hosted by a third-party provider, such as Confluent Cloud, CloudKarafka, or Redpanda. For more information, see Apache Kafka streams as a source in the Amazon EventBridge User Guide.
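A minimal sketch of pointing a pipe at a third-party-hosted cluster follows. The broker endpoint, topic name, and consumer group ID are placeholder assumptions, not values from this page; only topicName is required:

```java
import software.amazon.awscdk.services.pipes.CfnPipe;

import java.util.List;

public class SelfManagedKafkaSketch {
    public static void main(String[] args) {
        // Placeholder broker endpoint and topic for a Kafka cluster
        // hosted outside AWS (e.g. a third-party provider).
        CfnPipe.PipeSourceSelfManagedKafkaParametersProperty kafkaSource =
                CfnPipe.PipeSourceSelfManagedKafkaParametersProperty.builder()
                        .topicName("orders")  // required; placeholder topic
                        .additionalBootstrapServers(
                                List.of("kafka-1.example.com:9092"))
                        .consumerGroupId("pipe-consumers")
                        .startingPosition("TRIM_HORIZON")
                        .build();
    }
}
```

The resulting property value is passed to `selfManagedKafkaParameters(...)` on the PipeSourceParametersProperty builder, as shown in the full example above.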
getSqsQueueParameters
The parameters for using an Amazon SQS queue as a source.
builder
static CfnPipe.PipeSourceParametersProperty.Builder builder()
Returns a builder for CfnPipe.PipeSourceParametersProperty.