CfnFlowPropsMixin
- class aws_cdk.mixins_preview.aws_appflow.mixins.CfnFlowPropsMixin(props, *, strategy=None)
Bases:
Mixin
The AWS::AppFlow::Flow resource is an Amazon AppFlow resource type that specifies a new flow.
If you want to use CloudFormation to create a connector profile for connectors that implement OAuth (such as Salesforce, Slack, Zendesk, and Google Analytics), you must fetch the access and refresh tokens. You can do this by implementing your own UI for OAuth, or by retrieving the tokens from elsewhere. Alternatively, you can use the Amazon AppFlow console to create the connector profile, and then use that connector profile in the flow creation CloudFormation template.
- See:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-appflow-flow.html
- CloudformationResource:
AWS::AppFlow::Flow
- Mixin:
true
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import CfnTag  # needed for the tags example below
from aws_cdk.mixins_preview import mixins
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

cfn_flow_props_mixin = appflow_mixins.CfnFlowPropsMixin(appflow_mixins.CfnFlowMixinProps(
    description="description",
    destination_flow_config_list=[appflow_mixins.CfnFlowPropsMixin.DestinationFlowConfigProperty(
        api_version="apiVersion",
        connector_profile_name="connectorProfileName",
        connector_type="connectorType",
        destination_connector_properties=appflow_mixins.CfnFlowPropsMixin.DestinationConnectorPropertiesProperty(
            custom_connector=appflow_mixins.CfnFlowPropsMixin.CustomConnectorDestinationPropertiesProperty(
                custom_properties={"custom_properties_key": "customProperties"},
                entity_name="entityName",
                error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
                id_field_names=["idFieldNames"],
                write_operation_type="writeOperationType"
            ),
            event_bridge=appflow_mixins.CfnFlowPropsMixin.EventBridgeDestinationPropertiesProperty(
                error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
                object="object"
            ),
            lookout_metrics=appflow_mixins.CfnFlowPropsMixin.LookoutMetricsDestinationPropertiesProperty(object="object"),
            marketo=appflow_mixins.CfnFlowPropsMixin.MarketoDestinationPropertiesProperty(
                error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
                object="object"
            ),
            redshift=appflow_mixins.CfnFlowPropsMixin.RedshiftDestinationPropertiesProperty(
                bucket_prefix="bucketPrefix",
                error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
                intermediate_bucket_name="intermediateBucketName",
                object="object"
            ),
            s3=appflow_mixins.CfnFlowPropsMixin.S3DestinationPropertiesProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                s3_output_format_config=appflow_mixins.CfnFlowPropsMixin.S3OutputFormatConfigProperty(
                    aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty(aggregation_type="aggregationType", target_file_size=123),
                    file_type="fileType",
                    prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType"),
                    preserve_source_data_typing=False
                )
            ),
            salesforce=appflow_mixins.CfnFlowPropsMixin.SalesforceDestinationPropertiesProperty(
                data_transfer_api="dataTransferApi",
                error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
                id_field_names=["idFieldNames"],
                object="object",
                write_operation_type="writeOperationType"
            ),
            sapo_data=appflow_mixins.CfnFlowPropsMixin.SAPODataDestinationPropertiesProperty(
                error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
                id_field_names=["idFieldNames"],
                object_path="objectPath",
                success_response_handling_config=appflow_mixins.CfnFlowPropsMixin.SuccessResponseHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix"),
                write_operation_type="writeOperationType"
            ),
            snowflake=appflow_mixins.CfnFlowPropsMixin.SnowflakeDestinationPropertiesProperty(
                bucket_prefix="bucketPrefix",
                error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
                intermediate_bucket_name="intermediateBucketName",
                object="object"
            ),
            upsolver=appflow_mixins.CfnFlowPropsMixin.UpsolverDestinationPropertiesProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                s3_output_format_config=appflow_mixins.CfnFlowPropsMixin.UpsolverS3OutputFormatConfigProperty(
                    aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty(aggregation_type="aggregationType", target_file_size=123),
                    file_type="fileType",
                    prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType")
                )
            ),
            zendesk=appflow_mixins.CfnFlowPropsMixin.ZendeskDestinationPropertiesProperty(
                error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
                id_field_names=["idFieldNames"],
                object="object",
                write_operation_type="writeOperationType"
            )
        )
    )],
    flow_name="flowName",
    flow_status="flowStatus",
    kms_arn="kmsArn",
    metadata_catalog_config=appflow_mixins.CfnFlowPropsMixin.MetadataCatalogConfigProperty(
        glue_data_catalog=appflow_mixins.CfnFlowPropsMixin.GlueDataCatalogProperty(database_name="databaseName", role_arn="roleArn", table_prefix="tablePrefix")
    ),
    source_flow_config=appflow_mixins.CfnFlowPropsMixin.SourceFlowConfigProperty(
        api_version="apiVersion",
        connector_profile_name="connectorProfileName",
        connector_type="connectorType",
        incremental_pull_config=appflow_mixins.CfnFlowPropsMixin.IncrementalPullConfigProperty(datetime_type_field_name="datetimeTypeFieldName"),
        source_connector_properties=appflow_mixins.CfnFlowPropsMixin.SourceConnectorPropertiesProperty(
            amplitude=appflow_mixins.CfnFlowPropsMixin.AmplitudeSourcePropertiesProperty(object="object"),
            custom_connector=appflow_mixins.CfnFlowPropsMixin.CustomConnectorSourcePropertiesProperty(
                custom_properties={"custom_properties_key": "customProperties"},
                data_transfer_api=appflow_mixins.CfnFlowPropsMixin.DataTransferApiProperty(name="name", type="type"),
                entity_name="entityName"
            ),
            datadog=appflow_mixins.CfnFlowPropsMixin.DatadogSourcePropertiesProperty(object="object"),
            dynatrace=appflow_mixins.CfnFlowPropsMixin.DynatraceSourcePropertiesProperty(object="object"),
            google_analytics=appflow_mixins.CfnFlowPropsMixin.GoogleAnalyticsSourcePropertiesProperty(object="object"),
            infor_nexus=appflow_mixins.CfnFlowPropsMixin.InforNexusSourcePropertiesProperty(object="object"),
            marketo=appflow_mixins.CfnFlowPropsMixin.MarketoSourcePropertiesProperty(object="object"),
            pardot=appflow_mixins.CfnFlowPropsMixin.PardotSourcePropertiesProperty(object="object"),
            s3=appflow_mixins.CfnFlowPropsMixin.S3SourcePropertiesProperty(
                bucket_name="bucketName",
                bucket_prefix="bucketPrefix",
                s3_input_format_config=appflow_mixins.CfnFlowPropsMixin.S3InputFormatConfigProperty(s3_input_file_type="s3InputFileType")
            ),
            salesforce=appflow_mixins.CfnFlowPropsMixin.SalesforceSourcePropertiesProperty(
                data_transfer_api="dataTransferApi",
                enable_dynamic_field_update=False,
                include_deleted_records=False,
                object="object"
            ),
            sapo_data=appflow_mixins.CfnFlowPropsMixin.SAPODataSourcePropertiesProperty(
                object_path="objectPath",
                pagination_config=appflow_mixins.CfnFlowPropsMixin.SAPODataPaginationConfigProperty(max_page_size=123),
                parallelism_config=appflow_mixins.CfnFlowPropsMixin.SAPODataParallelismConfigProperty(max_parallelism=123)
            ),
            service_now=appflow_mixins.CfnFlowPropsMixin.ServiceNowSourcePropertiesProperty(object="object"),
            singular=appflow_mixins.CfnFlowPropsMixin.SingularSourcePropertiesProperty(object="object"),
            slack=appflow_mixins.CfnFlowPropsMixin.SlackSourcePropertiesProperty(object="object"),
            trendmicro=appflow_mixins.CfnFlowPropsMixin.TrendmicroSourcePropertiesProperty(object="object"),
            veeva=appflow_mixins.CfnFlowPropsMixin.VeevaSourcePropertiesProperty(
                document_type="documentType",
                include_all_versions=False,
                include_renditions=False,
                include_source_files=False,
                object="object"
            ),
            zendesk=appflow_mixins.CfnFlowPropsMixin.ZendeskSourcePropertiesProperty(object="object")
        )
    ),
    tags=[CfnTag(key="key", value="value")],
    tasks=[appflow_mixins.CfnFlowPropsMixin.TaskProperty(
        connector_operator=appflow_mixins.CfnFlowPropsMixin.ConnectorOperatorProperty(
            amplitude="amplitude",
            custom_connector="customConnector",
            datadog="datadog",
            dynatrace="dynatrace",
            google_analytics="googleAnalytics",
            infor_nexus="inforNexus",
            marketo="marketo",
            pardot="pardot",
            s3="s3",
            salesforce="salesforce",
            sapo_data="sapoData",
            service_now="serviceNow",
            singular="singular",
            slack="slack",
            trendmicro="trendmicro",
            veeva="veeva",
            zendesk="zendesk"
        ),
        destination_field="destinationField",
        source_fields=["sourceFields"],
        task_properties=[appflow_mixins.CfnFlowPropsMixin.TaskPropertiesObjectProperty(key="key", value="value")],
        task_type="taskType"
    )],
    trigger_config=appflow_mixins.CfnFlowPropsMixin.TriggerConfigProperty(
        trigger_properties=appflow_mixins.CfnFlowPropsMixin.ScheduledTriggerPropertiesProperty(
            data_pull_mode="dataPullMode",
            first_execution_from=123,
            flow_error_deactivation_threshold=123,
            schedule_end_time=123,
            schedule_expression="scheduleExpression",
            schedule_offset=123,
            schedule_start_time=123,
            time_zone="timeZone"
        ),
        trigger_type="triggerType"
    )
), strategy=mixins.PropertyMergeStrategy.OVERRIDE)
Create a mixin to apply properties to AWS::AppFlow::Flow.
- Parameters:
  - props (Union[CfnFlowMixinProps, Dict[str, Any]]) – L1 properties to apply.
  - strategy (Optional[PropertyMergeStrategy]) – (experimental) Strategy for merging nested properties. Default: PropertyMergeStrategy.MERGE
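A minimal sketch of constructing the mixin (hedged: the dict form follows the Union[CfnFlowMixinProps, Dict[str, Any]] signature, and the key names follow the CFN_PROPERTY_KEYS listed below; the flowStatus value is illustrative):
from aws_cdk.mixins_preview import mixins
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

# A plain dict can stand in for CfnFlowMixinProps; keys use the L1 (camelCase) names.
suspend_mixin = appflow_mixins.CfnFlowPropsMixin(
    {"flowStatus": "Suspended"},
    strategy=mixins.PropertyMergeStrategy.MERGE,  # the default; OVERRIDE replaces nested values wholesale
)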
Methods
- apply_to(construct)
  Apply the mixin properties to the construct.
  - Parameters:
    construct (IConstruct)
  - Return type:
    None
- supports(construct)
  Check if this mixin supports the given construct.
  - Parameters:
    construct (IConstruct)
  - Return type:
    bool
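As a hedged usage sketch, applying a mixin is a check-then-apply (reusing suspend_mixin from the sketch above; cfn_flow is an assumed name for an existing L1 flow construct):
# cfn_flow is assumed: an existing aws_cdk.aws_appflow.CfnFlow defined elsewhere in the stack.
if suspend_mixin.supports(cfn_flow):
    suspend_mixin.apply_to(cfn_flow)  # merges {"flowStatus": "Suspended"} into the construct's L1 properties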
Attributes
- CFN_PROPERTY_KEYS = ['description', 'destinationFlowConfigList', 'flowName', 'flowStatus', 'kmsArn', 'metadataCatalogConfig', 'sourceFlowConfig', 'tags', 'tasks', 'triggerConfig']
Static Methods
- classmethod is_mixin(x)
  (experimental) Checks if x is a Mixin.
  - Parameters:
    x (Any) – Any object.
  - Return type:
    bool
  - Returns:
    true if x is an object created from a class which extends Mixin.
  - Stability:
    experimental
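A small sketch of the type check, reusing suspend_mixin from the constructor sketch above:
appflow_mixins.CfnFlowPropsMixin.is_mixin(suspend_mixin)                # True: built from a class that extends Mixin
appflow_mixins.CfnFlowPropsMixin.is_mixin({"flowStatus": "Suspended"})  # False: a plain dict is not a Mixin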
AggregationConfigProperty
- class CfnFlowPropsMixin.AggregationConfigProperty(*, aggregation_type=None, target_file_size=None)
Bases:
object
The aggregation settings that you can use to customize the output format of your flow data.
- Parameters:
  - aggregation_type (Optional[str]) – Specifies whether Amazon AppFlow aggregates the flow records into a single file, or leaves them unaggregated.
  - target_file_size (Union[int, float, None]) – The desired file size, in MB, for each output file that Amazon AppFlow writes to the flow destination. For each file, Amazon AppFlow attempts to achieve the size that you specify. The actual file sizes might differ from this target based on the number and size of the records that each file contains.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

aggregation_config_property = appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty(
    aggregation_type="aggregationType",
    target_file_size=123
)
Attributes
- aggregation_type
Specifies whether Amazon AppFlow aggregates the flow records into a single file, or leaves them unaggregated.
- target_file_size
The desired file size, in MB, for each output file that Amazon AppFlow writes to the flow destination.
For each file, Amazon AppFlow attempts to achieve the size that you specify. The actual file sizes might differ from this target based on the number and size of the records that each file contains.
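As a sketch with more realistic values (the "SingleFile" value is assumed from the AppFlow AggregationType enum, where "None" leaves records unaggregated; verify against the service documentation):
aggregation = appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty(
    aggregation_type="SingleFile",  # assumed enum value; aggregate all records into one file
    target_file_size=128            # target size in MB for each output file
)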
AmplitudeSourcePropertiesProperty
- class CfnFlowPropsMixin.AmplitudeSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Amplitude is being used as a source.
- Parameters:
  object (Optional[str]) – The object specified in the Amplitude flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

amplitude_source_properties_property = appflow_mixins.CfnFlowPropsMixin.AmplitudeSourcePropertiesProperty(
    object="object"
)
Attributes
- object
The object specified in the Amplitude flow source.
ConnectorOperatorProperty
- class CfnFlowPropsMixin.ConnectorOperatorProperty(*, amplitude=None, custom_connector=None, datadog=None, dynatrace=None, google_analytics=None, infor_nexus=None, marketo=None, pardot=None, s3=None, salesforce=None, sapo_data=None, service_now=None, singular=None, slack=None, trendmicro=None, veeva=None, zendesk=None)
Bases:
object
The operation to be performed on the provided source fields.
- Parameters:
  - amplitude (Optional[str]) – The operation to be performed on the provided Amplitude source fields.
  - custom_connector (Optional[str]) – Operators supported by the custom connector.
  - datadog (Optional[str]) – The operation to be performed on the provided Datadog source fields.
  - dynatrace (Optional[str]) – The operation to be performed on the provided Dynatrace source fields.
  - google_analytics (Optional[str]) – The operation to be performed on the provided Google Analytics source fields.
  - infor_nexus (Optional[str]) – The operation to be performed on the provided Infor Nexus source fields.
  - marketo (Optional[str]) – The operation to be performed on the provided Marketo source fields.
  - pardot (Optional[str]) – The operation to be performed on the provided Salesforce Pardot source fields.
  - s3 (Optional[str]) – The operation to be performed on the provided Amazon S3 source fields.
  - salesforce (Optional[str]) – The operation to be performed on the provided Salesforce source fields.
  - sapo_data (Optional[str]) – The operation to be performed on the provided SAPOData source fields.
  - service_now (Optional[str]) – The operation to be performed on the provided ServiceNow source fields.
  - singular (Optional[str]) – The operation to be performed on the provided Singular source fields.
  - slack (Optional[str]) – The operation to be performed on the provided Slack source fields.
  - trendmicro (Optional[str]) – The operation to be performed on the provided Trend Micro source fields.
  - veeva (Optional[str]) – The operation to be performed on the provided Veeva source fields.
  - zendesk (Optional[str]) – The operation to be performed on the provided Zendesk source fields.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

connector_operator_property = appflow_mixins.CfnFlowPropsMixin.ConnectorOperatorProperty(
    amplitude="amplitude",
    custom_connector="customConnector",
    datadog="datadog",
    dynatrace="dynatrace",
    google_analytics="googleAnalytics",
    infor_nexus="inforNexus",
    marketo="marketo",
    pardot="pardot",
    s3="s3",
    salesforce="salesforce",
    sapo_data="sapoData",
    service_now="serviceNow",
    singular="singular",
    slack="slack",
    trendmicro="trendmicro",
    veeva="veeva",
    zendesk="zendesk"
)
Attributes
- amplitude
The operation to be performed on the provided Amplitude source fields.
- custom_connector
Operators supported by the custom connector.
- datadog
The operation to be performed on the provided Datadog source fields.
- dynatrace
The operation to be performed on the provided Dynatrace source fields.
- google_analytics
The operation to be performed on the provided Google Analytics source fields.
- infor_nexus
The operation to be performed on the provided Infor Nexus source fields.
- marketo
The operation to be performed on the provided Marketo source fields.
- pardot
The operation to be performed on the provided Salesforce Pardot source fields.
- s3
The operation to be performed on the provided Amazon S3 source fields.
- salesforce
The operation to be performed on the provided Salesforce source fields.
- sapo_data
The operation to be performed on the provided SAPOData source fields.
- service_now
The operation to be performed on the provided ServiceNow source fields.
- singular
The operation to be performed on the provided Singular source fields.
- slack
The operation to be performed on the provided Slack source fields.
- trendmicro
The operation to be performed on the provided Trend Micro source fields.
- veeva
The operation to be performed on the provided Veeva source fields.
- zendesk
The operation to be performed on the provided Zendesk source fields.
CustomConnectorDestinationPropertiesProperty
- class CfnFlowPropsMixin.CustomConnectorDestinationPropertiesProperty(*, custom_properties=None, entity_name=None, error_handling_config=None, id_field_names=None, write_operation_type=None)
Bases:
object
The properties that are applied when the custom connector is being used as a destination.
- Parameters:
  - custom_properties (Union[Mapping[str, str], IResolvable, None]) – The custom properties that are specific to the connector when it’s used as a destination in the flow.
  - entity_name (Optional[str]) – The entity specified in the custom connector as a destination in the flow.
  - error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the custom connector as destination.
  - id_field_names (Optional[Sequence[str]]) – The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update, delete, or upsert.
  - write_operation_type (Optional[str]) – Specifies the type of write operation to be performed in the custom connector when it’s used as destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

custom_connector_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.CustomConnectorDestinationPropertiesProperty(
    custom_properties={"custom_properties_key": "customProperties"},
    entity_name="entityName",
    error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    id_field_names=["idFieldNames"],
    write_operation_type="writeOperationType"
)
Attributes
- custom_properties
The custom properties that are specific to the connector when it’s used as a destination in the flow.
- entity_name
The entity specified in the custom connector as a destination in the flow.
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the custom connector as destination.
- id_field_names
The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update, delete, or upsert.
- write_operation_type
Specifies the type of write operation to be performed in the custom connector when it’s used as destination.
CustomConnectorSourcePropertiesProperty
- class CfnFlowPropsMixin.CustomConnectorSourcePropertiesProperty(*, custom_properties=None, data_transfer_api=None, entity_name=None)
Bases:
object
The properties that are applied when the custom connector is being used as a source.
- Parameters:
  - custom_properties (Union[Mapping[str, str], IResolvable, None]) – Custom properties that are required to use the custom connector as a source.
  - data_transfer_api (Union[IResolvable, DataTransferApiProperty, Dict[str, Any], None]) – The API of the connector application that Amazon AppFlow uses to transfer your data.
  - entity_name (Optional[str]) – The entity specified in the custom connector as a source in the flow.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

custom_connector_source_properties_property = appflow_mixins.CfnFlowPropsMixin.CustomConnectorSourcePropertiesProperty(
    custom_properties={"custom_properties_key": "customProperties"},
    data_transfer_api=appflow_mixins.CfnFlowPropsMixin.DataTransferApiProperty(
        name="name",
        type="type"
    ),
    entity_name="entityName"
)
Attributes
- custom_properties
Custom properties that are required to use the custom connector as a source.
- data_transfer_api
The API of the connector application that Amazon AppFlow uses to transfer your data.
- entity_name
The entity specified in the custom connector as a source in the flow.
DataTransferApiProperty
- class CfnFlowPropsMixin.DataTransferApiProperty(*, name=None, type=None)
Bases:
object
The API of the connector application that Amazon AppFlow uses to transfer your data.
- Parameters:
  - name (Optional[str]) – The name of the connector application API.
  - type (Optional[str]) – You can specify one of the following types: AUTOMATIC - The default. Optimizes a flow for datasets that fluctuate in size from small to large. For each flow run, Amazon AppFlow chooses to use the SYNC or ASYNC API type based on the amount of data that the run transfers. SYNC - A synchronous API. This type of API optimizes a flow for small to medium-sized datasets. ASYNC - An asynchronous API. This type of API optimizes a flow for large datasets.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

data_transfer_api_property = appflow_mixins.CfnFlowPropsMixin.DataTransferApiProperty(
    name="name",
    type="type"
)
Attributes
- name
The name of the connector application API.
- type
  You can specify one of the following types:
  - AUTOMATIC - The default. Optimizes a flow for datasets that fluctuate in size from small to large. For each flow run, Amazon AppFlow chooses to use the SYNC or ASYNC API type based on the amount of data that the run transfers.
  - SYNC - A synchronous API. This type of API optimizes a flow for small to medium-sized datasets.
  - ASYNC - An asynchronous API. This type of API optimizes a flow for large datasets.
- See:
DatadogSourcePropertiesProperty
- class CfnFlowPropsMixin.DatadogSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Datadog is being used as a source.
- Parameters:
  object (Optional[str]) – The object specified in the Datadog flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

datadog_source_properties_property = appflow_mixins.CfnFlowPropsMixin.DatadogSourcePropertiesProperty(
    object="object"
)
Attributes
- object
The object specified in the Datadog flow source.
DestinationConnectorPropertiesProperty
- class CfnFlowPropsMixin.DestinationConnectorPropertiesProperty(*, custom_connector=None, event_bridge=None, lookout_metrics=None, marketo=None, redshift=None, s3=None, salesforce=None, sapo_data=None, snowflake=None, upsolver=None, zendesk=None)
Bases:
object
This stores the information that is required to query a particular connector.
- Parameters:
  - custom_connector (Union[IResolvable, CustomConnectorDestinationPropertiesProperty, Dict[str, Any], None]) – The properties that are required to query the custom Connector.
  - event_bridge (Union[IResolvable, EventBridgeDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Amazon EventBridge.
  - lookout_metrics (Union[IResolvable, LookoutMetricsDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Amazon Lookout for Metrics.
  - marketo (Union[IResolvable, MarketoDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Marketo.
  - redshift (Union[IResolvable, RedshiftDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Amazon Redshift.
  - s3 (Union[IResolvable, S3DestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Amazon S3.
  - salesforce (Union[IResolvable, SalesforceDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Salesforce.
  - sapo_data (Union[IResolvable, SAPODataDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query SAPOData.
  - snowflake (Union[IResolvable, SnowflakeDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Snowflake.
  - upsolver (Union[IResolvable, UpsolverDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Upsolver.
  - zendesk (Union[IResolvable, ZendeskDestinationPropertiesProperty, Dict[str, Any], None]) – The properties required to query Zendesk.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

destination_connector_properties_property = appflow_mixins.CfnFlowPropsMixin.DestinationConnectorPropertiesProperty(
    custom_connector=appflow_mixins.CfnFlowPropsMixin.CustomConnectorDestinationPropertiesProperty(
        custom_properties={"custom_properties_key": "customProperties"},
        entity_name="entityName",
        error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
        id_field_names=["idFieldNames"],
        write_operation_type="writeOperationType"
    ),
    event_bridge=appflow_mixins.CfnFlowPropsMixin.EventBridgeDestinationPropertiesProperty(
        error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
        object="object"
    ),
    lookout_metrics=appflow_mixins.CfnFlowPropsMixin.LookoutMetricsDestinationPropertiesProperty(object="object"),
    marketo=appflow_mixins.CfnFlowPropsMixin.MarketoDestinationPropertiesProperty(
        error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
        object="object"
    ),
    redshift=appflow_mixins.CfnFlowPropsMixin.RedshiftDestinationPropertiesProperty(
        bucket_prefix="bucketPrefix",
        error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
        intermediate_bucket_name="intermediateBucketName",
        object="object"
    ),
    s3=appflow_mixins.CfnFlowPropsMixin.S3DestinationPropertiesProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        s3_output_format_config=appflow_mixins.CfnFlowPropsMixin.S3OutputFormatConfigProperty(
            aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty(aggregation_type="aggregationType", target_file_size=123),
            file_type="fileType",
            prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType"),
            preserve_source_data_typing=False
        )
    ),
    salesforce=appflow_mixins.CfnFlowPropsMixin.SalesforceDestinationPropertiesProperty(
        data_transfer_api="dataTransferApi",
        error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
        id_field_names=["idFieldNames"],
        object="object",
        write_operation_type="writeOperationType"
    ),
    sapo_data=appflow_mixins.CfnFlowPropsMixin.SAPODataDestinationPropertiesProperty(
        error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
        id_field_names=["idFieldNames"],
        object_path="objectPath",
        success_response_handling_config=appflow_mixins.CfnFlowPropsMixin.SuccessResponseHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix"),
        write_operation_type="writeOperationType"
    ),
    snowflake=appflow_mixins.CfnFlowPropsMixin.SnowflakeDestinationPropertiesProperty(
        bucket_prefix="bucketPrefix",
        error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
        intermediate_bucket_name="intermediateBucketName",
        object="object"
    ),
    upsolver=appflow_mixins.CfnFlowPropsMixin.UpsolverDestinationPropertiesProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        s3_output_format_config=appflow_mixins.CfnFlowPropsMixin.UpsolverS3OutputFormatConfigProperty(
            aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty(aggregation_type="aggregationType", target_file_size=123),
            file_type="fileType",
            prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType")
        )
    ),
    zendesk=appflow_mixins.CfnFlowPropsMixin.ZendeskDestinationPropertiesProperty(
        error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
        id_field_names=["idFieldNames"],
        object="object",
        write_operation_type="writeOperationType"
    )
)
Attributes
- custom_connector
The properties that are required to query the custom Connector.
- event_bridge
The properties required to query Amazon EventBridge.
- lookout_metrics
The properties required to query Amazon Lookout for Metrics.
- marketo
The properties required to query Marketo.
- redshift
The properties required to query Amazon Redshift.
- s3
The properties required to query Amazon S3.
- salesforce
The properties required to query Salesforce.
- sapo_data
The properties required to query SAPOData.
- snowflake
The properties required to query Snowflake.
- upsolver
The properties required to query Upsolver.
- zendesk
The properties required to query Zendesk.
DestinationFlowConfigProperty
- class CfnFlowPropsMixin.DestinationFlowConfigProperty(*, api_version=None, connector_profile_name=None, connector_type=None, destination_connector_properties=None)
Bases:
object
Contains information about the configuration of destination connectors present in the flow.
- Parameters:
  - api_version (Optional[str]) – The API version that the destination connector uses.
  - connector_profile_name (Optional[str]) – The name of the connector profile. This name must be unique for each connector profile in the AWS account.
  - connector_type (Optional[str]) – The type of destination connector, such as Salesforce, Amazon S3, and so on.
  - destination_connector_properties (Union[IResolvable, DestinationConnectorPropertiesProperty, Dict[str, Any], None]) – This stores the information that is required to query a particular connector.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

destination_flow_config_property = appflow_mixins.CfnFlowPropsMixin.DestinationFlowConfigProperty(
    api_version="apiVersion",
    connector_profile_name="connectorProfileName",
    connector_type="connectorType",
    destination_connector_properties=appflow_mixins.CfnFlowPropsMixin.DestinationConnectorPropertiesProperty(
        custom_connector=appflow_mixins.CfnFlowPropsMixin.CustomConnectorDestinationPropertiesProperty(
            custom_properties={"custom_properties_key": "customProperties"},
            entity_name="entityName",
            error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
            id_field_names=["idFieldNames"],
            write_operation_type="writeOperationType"
        ),
        event_bridge=appflow_mixins.CfnFlowPropsMixin.EventBridgeDestinationPropertiesProperty(
            error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
            object="object"
        ),
        lookout_metrics=appflow_mixins.CfnFlowPropsMixin.LookoutMetricsDestinationPropertiesProperty(object="object"),
        marketo=appflow_mixins.CfnFlowPropsMixin.MarketoDestinationPropertiesProperty(
            error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
            object="object"
        ),
        redshift=appflow_mixins.CfnFlowPropsMixin.RedshiftDestinationPropertiesProperty(
            bucket_prefix="bucketPrefix",
            error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
            intermediate_bucket_name="intermediateBucketName",
            object="object"
        ),
        s3=appflow_mixins.CfnFlowPropsMixin.S3DestinationPropertiesProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            s3_output_format_config=appflow_mixins.CfnFlowPropsMixin.S3OutputFormatConfigProperty(
                aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty(aggregation_type="aggregationType", target_file_size=123),
                file_type="fileType",
                prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType"),
                preserve_source_data_typing=False
            )
        ),
        salesforce=appflow_mixins.CfnFlowPropsMixin.SalesforceDestinationPropertiesProperty(
            data_transfer_api="dataTransferApi",
            error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
            id_field_names=["idFieldNames"],
            object="object",
            write_operation_type="writeOperationType"
        ),
        sapo_data=appflow_mixins.CfnFlowPropsMixin.SAPODataDestinationPropertiesProperty(
            error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
            id_field_names=["idFieldNames"],
            object_path="objectPath",
            success_response_handling_config=appflow_mixins.CfnFlowPropsMixin.SuccessResponseHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix"),
            write_operation_type="writeOperationType"
        ),
        snowflake=appflow_mixins.CfnFlowPropsMixin.SnowflakeDestinationPropertiesProperty(
            bucket_prefix="bucketPrefix",
            error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
            intermediate_bucket_name="intermediateBucketName",
            object="object"
        ),
        upsolver=appflow_mixins.CfnFlowPropsMixin.UpsolverDestinationPropertiesProperty(
            bucket_name="bucketName",
            bucket_prefix="bucketPrefix",
            s3_output_format_config=appflow_mixins.CfnFlowPropsMixin.UpsolverS3OutputFormatConfigProperty(
                aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty(aggregation_type="aggregationType", target_file_size=123),
                file_type="fileType",
                prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType")
            )
        ),
        zendesk=appflow_mixins.CfnFlowPropsMixin.ZendeskDestinationPropertiesProperty(
            error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False),
            id_field_names=["idFieldNames"],
            object="object",
            write_operation_type="writeOperationType"
        )
    )
)
Attributes
- api_version
The API version that the destination connector uses.
- connector_profile_name
The name of the connector profile.
This name must be unique for each connector profile in the AWS account.
- connector_type
The type of destination connector, such as Salesforce, Amazon S3, and so on.
- destination_connector_properties
This stores the information that is required to query a particular connector.
DynatraceSourcePropertiesProperty
- class CfnFlowPropsMixin.DynatraceSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Dynatrace is being used as a source.
- Parameters:
  object (Optional[str]) – The object specified in the Dynatrace flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

dynatrace_source_properties_property = appflow_mixins.CfnFlowPropsMixin.DynatraceSourcePropertiesProperty(
    object="object"
)
Attributes
- object
The object specified in the Dynatrace flow source.
ErrorHandlingConfigProperty
- class CfnFlowPropsMixin.ErrorHandlingConfigProperty(*, bucket_name=None, bucket_prefix=None, fail_on_first_error=None)
Bases:
object
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- Parameters:
  - bucket_name (Optional[str]) – Specifies the name of the Amazon S3 bucket.
  - bucket_prefix (Optional[str]) – Specifies the Amazon S3 bucket prefix.
  - fail_on_first_error (Union[bool, IResolvable, None]) – Specifies if the flow should fail after the first instance of a failure when attempting to place data in the destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

error_handling_config_property = appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(
    bucket_name="bucketName",
    bucket_prefix="bucketPrefix",
    fail_on_first_error=False
)
Attributes
- bucket_name
Specifies the name of the Amazon S3 bucket.
- bucket_prefix
Specifies the Amazon S3 bucket prefix.
- fail_on_first_error
Specifies if the flow should fail after the first instance of a failure when attempting to place data in the destination.
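As a sketch with realistic values (the bucket name and prefix are hypothetical):
error_handling = appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(
    bucket_name="my-appflow-error-records",  # hypothetical S3 bucket that receives failed records
    bucket_prefix="errors/marketing-flow",   # hypothetical key prefix within that bucket
    fail_on_first_error=True                 # stop the flow run on the first failed insertion
)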
EventBridgeDestinationPropertiesProperty
- class CfnFlowPropsMixin.EventBridgeDestinationPropertiesProperty(*, error_handling_config=None, object=None)
Bases:
object
The properties that are applied when Amazon EventBridge is being used as a destination.
- Parameters:
  - error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon EventBridge destination.
  - object (Optional[str]) – The object specified in the Amazon EventBridge flow destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

event_bridge_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.EventBridgeDestinationPropertiesProperty(
    error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    object="object"
)
Attributes
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon EventBridge destination.
- object
The object specified in the Amazon EventBridge flow destination.
GlueDataCatalogProperty
- class CfnFlowPropsMixin.GlueDataCatalogProperty(*, database_name=None, role_arn=None, table_prefix=None)
Bases:
object
Specifies the configuration that Amazon AppFlow uses when it catalogs your data with the AWS Glue Data Catalog.
- Parameters:
  - database_name (Optional[str]) – The name of an existing AWS Glue database that stores the metadata tables that Amazon AppFlow creates.
  - role_arn (Optional[str]) – The ARN of an IAM role that grants Amazon AppFlow the permissions it needs to create Data Catalog tables, databases, and partitions.
  - table_prefix (Optional[str]) – A naming prefix for each Data Catalog table that Amazon AppFlow creates for the flow.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

glue_data_catalog_property = appflow_mixins.CfnFlowPropsMixin.GlueDataCatalogProperty(
    database_name="databaseName",
    role_arn="roleArn",
    table_prefix="tablePrefix"
)
Attributes
- database_name
The name of an existing AWS Glue database that stores the metadata tables that Amazon AppFlow creates.
- role_arn
The ARN of an IAM role that grants Amazon AppFlow the permissions it needs to create Data Catalog tables, databases, and partitions.
- table_prefix
A naming prefix for each Data Catalog table that Amazon AppFlow creates for the flow.
GoogleAnalyticsSourcePropertiesProperty
- class CfnFlowPropsMixin.GoogleAnalyticsSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Google Analytics is being used as a source.
- Parameters:
  object (Optional[str]) – The object specified in the Google Analytics flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

google_analytics_source_properties_property = appflow_mixins.CfnFlowPropsMixin.GoogleAnalyticsSourcePropertiesProperty(
    object="object"
)
Attributes
- object
The object specified in the Google Analytics flow source.
IncrementalPullConfigProperty
- class CfnFlowPropsMixin.IncrementalPullConfigProperty(*, datetime_type_field_name=None)
Bases:
object
Specifies the configuration used when importing incremental records from the source.
- Parameters:
  datetime_type_field_name (Optional[str]) – A field that specifies the date time or timestamp field as the criteria to use when importing incremental records from the source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

incremental_pull_config_property = appflow_mixins.CfnFlowPropsMixin.IncrementalPullConfigProperty(
    datetime_type_field_name="datetimeTypeFieldName"
)
Attributes
- datetime_type_field_name
A field that specifies the date time or timestamp field as the criteria to use when importing incremental records from the source.
InforNexusSourcePropertiesProperty
- class CfnFlowPropsMixin.InforNexusSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Infor Nexus is being used as a source.
- Parameters:
  object (Optional[str]) – The object specified in the Infor Nexus flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

infor_nexus_source_properties_property = appflow_mixins.CfnFlowPropsMixin.InforNexusSourcePropertiesProperty(
    object="object"
)
Attributes
- object
The object specified in the Infor Nexus flow source.
LookoutMetricsDestinationPropertiesProperty
- class CfnFlowPropsMixin.LookoutMetricsDestinationPropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Amazon Lookout for Metrics is used as a destination.
- Parameters:
  object (Optional[str]) – The object specified in the Amazon Lookout for Metrics flow destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

lookout_metrics_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.LookoutMetricsDestinationPropertiesProperty(
    object="object"
)
Attributes
- object
The object specified in the Amazon Lookout for Metrics flow destination.
MarketoDestinationPropertiesProperty
- class CfnFlowPropsMixin.MarketoDestinationPropertiesProperty(*, error_handling_config=None, object=None)
Bases:
object
The properties that Amazon AppFlow applies when you use Marketo as a flow destination.
- Parameters:
  - error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
  - object (Optional[str]) – The object specified in the Marketo flow destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

marketo_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.MarketoDestinationPropertiesProperty(
    error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    object="object"
)
Attributes
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- object
The object specified in the Marketo flow destination.
MarketoSourcePropertiesProperty
- class CfnFlowPropsMixin.MarketoSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Marketo is being used as a source.
- Parameters:
  object (Optional[str]) – The object specified in the Marketo flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

marketo_source_properties_property = appflow_mixins.CfnFlowPropsMixin.MarketoSourcePropertiesProperty(
    object="object"
)
Attributes
- object
The object specified in the Marketo flow source.
MetadataCatalogConfigProperty
- class CfnFlowPropsMixin.MetadataCatalogConfigProperty(*, glue_data_catalog=None)
Bases:
object
Specifies the configuration that Amazon AppFlow uses when it catalogs your data.
When Amazon AppFlow catalogs your data, it stores metadata in a data catalog.
- Parameters:
  glue_data_catalog (Union[IResolvable, GlueDataCatalogProperty, Dict[str, Any], None]) – Specifies the configuration that Amazon AppFlow uses when it catalogs your data with the AWS Glue Data Catalog.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

metadata_catalog_config_property = appflow_mixins.CfnFlowPropsMixin.MetadataCatalogConfigProperty(
    glue_data_catalog=appflow_mixins.CfnFlowPropsMixin.GlueDataCatalogProperty(
        database_name="databaseName",
        role_arn="roleArn",
        table_prefix="tablePrefix"
    )
)
Attributes
- glue_data_catalog
Specifies the configuration that Amazon AppFlow uses when it catalogs your data with the AWS Glue Data Catalog.
PardotSourcePropertiesProperty
- class CfnFlowPropsMixin.PardotSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Salesforce Pardot is being used as a source.
- Parameters:
  object (Optional[str]) – The object specified in the Salesforce Pardot flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

pardot_source_properties_property = appflow_mixins.CfnFlowPropsMixin.PardotSourcePropertiesProperty(
    object="object"
)
Attributes
- object
The object specified in the Salesforce Pardot flow source.
PrefixConfigProperty
- class CfnFlowPropsMixin.PrefixConfigProperty(*, path_prefix_hierarchy=None, prefix_format=None, prefix_type=None)
Bases:
object
Specifies elements that Amazon AppFlow includes in the file and folder names in the flow destination.
- Parameters:
  - path_prefix_hierarchy (Optional[Sequence[str]]) – Specifies whether the destination file path includes either or both of the following elements: EXECUTION_ID - The ID that Amazon AppFlow assigns to the flow run. SCHEMA_VERSION - The version number of your data schema. Amazon AppFlow assigns this version number. The version number increases by one when you change any of the following settings in your flow configuration: source-to-destination field mappings, field data types, partition keys.
  - prefix_format (Optional[str]) – Determines the level of granularity for the date and time that’s included in the prefix.
  - prefix_type (Optional[str]) – Determines the format of the prefix, and whether it applies to the file name, file path, or both.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

prefix_config_property = appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(
    path_prefix_hierarchy=["pathPrefixHierarchy"],
    prefix_format="prefixFormat",
    prefix_type="prefixType"
)
Attributes
- path_prefix_hierarchy
  Specifies whether the destination file path includes either or both of the following elements:
  - EXECUTION_ID - The ID that Amazon AppFlow assigns to the flow run.
  - SCHEMA_VERSION - The version number of your data schema. Amazon AppFlow assigns this version number. The version number increases by one when you change any of the following settings in your flow configuration:
    - Source-to-destination field mappings
    - Field data types
    - Partition keys
- See:
- prefix_format
Determines the level of granularity for the date and time that’s included in the prefix.
- prefix_type
Determines the format of the prefix, and whether it applies to the file name, file path, or both.
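A hedged sketch with concrete values (EXECUTION_ID comes from the element list above; "DAY" and "PATH_AND_FILENAME" are assumed from the AppFlow PrefixFormat and PrefixType enums and should be verified against the service documentation):
prefix_config = appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(
    path_prefix_hierarchy=["EXECUTION_ID"],  # include the flow-run ID in the destination path
    prefix_format="DAY",                     # assumed enum value; date granularity of the prefix
    prefix_type="PATH_AND_FILENAME"          # assumed enum value; prefix both the path and the file name
)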
RedshiftDestinationPropertiesProperty
- class CfnFlowPropsMixin.RedshiftDestinationPropertiesProperty(*, bucket_prefix=None, error_handling_config=None, intermediate_bucket_name=None, object=None)
Bases:
object
The properties that are applied when Amazon Redshift is being used as a destination.
- Parameters:
  - bucket_prefix (Optional[str]) – The object key for the bucket in which Amazon AppFlow places the destination files.
  - error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon Redshift destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
  - intermediate_bucket_name (Optional[str]) – The intermediate bucket that Amazon AppFlow uses when moving data into Amazon Redshift.
  - object (Optional[str]) – The object specified in the Amazon Redshift flow destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

redshift_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.RedshiftDestinationPropertiesProperty(
    bucket_prefix="bucketPrefix",
    error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(
        bucket_name="bucketName",
        bucket_prefix="bucketPrefix",
        fail_on_first_error=False
    ),
    intermediate_bucket_name="intermediateBucketName",
    object="object"
)
Attributes
- bucket_prefix
The object key for the bucket in which Amazon AppFlow places the destination files.
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon Redshift destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- intermediate_bucket_name
The intermediate bucket that Amazon AppFlow uses when moving data into Amazon Redshift.
- object
The object specified in the Amazon Redshift flow destination.
S3DestinationPropertiesProperty
- class CfnFlowPropsMixin.S3DestinationPropertiesProperty(*, bucket_name=None, bucket_prefix=None, s3_output_format_config=None)
Bases:
object
The properties that are applied when Amazon S3 is used as a destination.
- Parameters:
  - bucket_name (Optional[str]) – The Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
  - bucket_prefix (Optional[str]) – The object key for the destination bucket in which Amazon AppFlow places the files.
  - s3_output_format_config (Union[IResolvable, S3OutputFormatConfigProperty, Dict[str, Any], None]) – The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

s3_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.S3DestinationPropertiesProperty(
    bucket_name="bucketName",
    bucket_prefix="bucketPrefix",
    s3_output_format_config=appflow_mixins.CfnFlowPropsMixin.S3OutputFormatConfigProperty(
        aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty(
            aggregation_type="aggregationType",
            target_file_size=123
        ),
        file_type="fileType",
        prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(
            path_prefix_hierarchy=["pathPrefixHierarchy"],
            prefix_format="prefixFormat",
            prefix_type="prefixType"
        ),
        preserve_source_data_typing=False
    )
)
Attributes
- bucket_name
The Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
- bucket_prefix
The object key for the destination bucket in which Amazon AppFlow places the files.
- s3_output_format_config
The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.
S3InputFormatConfigProperty
- class CfnFlowPropsMixin.S3InputFormatConfigProperty(*, s3_input_file_type=None)
Bases:
object
When you use Amazon S3 as the source, the configuration format that you provide for the flow input data.
- Parameters:
  s3_input_file_type (Optional[str]) – The file type that Amazon AppFlow gets from your Amazon S3 bucket.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

s3_input_format_config_property = appflow_mixins.CfnFlowPropsMixin.S3InputFormatConfigProperty(
    s3_input_file_type="s3InputFileType"
)
Attributes
- s3_input_file_type
The file type that Amazon AppFlow gets from your Amazon S3 bucket.
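As a sketch with a concrete value (the "CSV" value is assumed from the AppFlow S3InputFileType enum, where "JSON" is the other documented option; verify before use):
input_format = appflow_mixins.CfnFlowPropsMixin.S3InputFormatConfigProperty(
    s3_input_file_type="CSV"  # assumed enum value; the format of the source files in the bucket
)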
S3OutputFormatConfigProperty
- class CfnFlowPropsMixin.S3OutputFormatConfigProperty(*, aggregation_config=None, file_type=None, prefix_config=None, preserve_source_data_typing=None)
Bases:
object
The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.
- Parameters:
  - aggregation_config (Union[IResolvable, AggregationConfigProperty, Dict[str, Any], None]) – The aggregation settings that you can use to customize the output format of your flow data.
  - file_type (Optional[str]) – Indicates the file type that Amazon AppFlow places in the Amazon S3 bucket.
  - prefix_config (Union[IResolvable, PrefixConfigProperty, Dict[str, Any], None]) – Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket. You can name folders according to the flow frequency and date.
  - preserve_source_data_typing (Union[bool, IResolvable, None]) – If your file output format is Parquet, use this parameter to set whether Amazon AppFlow preserves the data types in your source data when it writes the output to Amazon S3. true: Amazon AppFlow preserves the data types when it writes to Amazon S3. For example, an integer of 1 in your source data is still an integer in your output. false: Amazon AppFlow converts all of the source data into strings when it writes to Amazon S3. For example, an integer of 1 in your source data becomes the string "1" in the output.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins s3_output_format_config_property = appflow_mixins.CfnFlowPropsMixin.S3OutputFormatConfigProperty( aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType", prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), preserve_source_data_typing=False )
Attributes
- aggregation_config
The aggregation settings that you can use to customize the output format of your flow data.
- file_type
Indicates the file type that Amazon AppFlow places in the Amazon S3 bucket.
- prefix_config
Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket.
You can name folders according to the flow frequency and date.
- preserve_source_data_typing
If your file output format is Parquet, use this parameter to set whether Amazon AppFlow preserves the data types in your source data when it writes the output to Amazon S3.
true: Amazon AppFlow preserves the data types when it writes to Amazon S3. For example, an integer of 1 in your source data is still an integer in your output. false: Amazon AppFlow converts all of the source data into strings when it writes to Amazon S3. For example, an integer of 1 in your source data becomes the string "1" in the output.
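As a concrete sketch, the configuration below writes Parquet output and preserves source typing. The string values PARQUET, DAY, and PATH are assumptions based on the documented enumerations rather than values taken from this reference, so verify them against the CloudFormation documentation:
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

# Hypothetical Parquet output configuration. "PARQUET", "DAY", and "PATH"
# are assumed enumeration values; check the CloudFormation reference.
parquet_output_format = appflow_mixins.CfnFlowPropsMixin.S3OutputFormatConfigProperty(
    file_type="PARQUET",
    # Only takes effect for Parquet output: keep an integer 1 as an
    # integer instead of converting it to the string "1".
    preserve_source_data_typing=True,
    prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty(
        prefix_format="DAY",
        prefix_type="PATH"
    )
)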
S3SourcePropertiesProperty
- class CfnFlowPropsMixin.S3SourcePropertiesProperty(*, bucket_name=None, bucket_prefix=None, s3_input_format_config=None)
Bases:
object
The properties that are applied when Amazon S3 is being used as the flow source.
- Parameters:
bucket_name (Optional[str]) – The Amazon S3 bucket name where the source files are stored.
bucket_prefix (Optional[str]) – The object key for the Amazon S3 bucket in which the source files are stored.
s3_input_format_config (Union[IResolvable, S3InputFormatConfigProperty, Dict[str, Any], None]) – When you use Amazon S3 as the source, the configuration format in which you provide the flow input data.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins s3_source_properties_property = appflow_mixins.CfnFlowPropsMixin.S3SourcePropertiesProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", s3_input_format_config=appflow_mixins.CfnFlowPropsMixin.S3InputFormatConfigProperty( s3_input_file_type="s3InputFileType" ) )
Attributes
- bucket_name
The Amazon S3 bucket name where the source files are stored.
- bucket_prefix
The object key for the Amazon S3 bucket in which the source files are stored.
- s3_input_format_config
When you use Amazon S3 as the source, the configuration format in which you provide the flow input data.
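For example, here is a minimal sketch of an S3 source reading CSV files. The bucket name and prefix are placeholders, and CSV is assumed to be one of the documented input file types:
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

# Hypothetical S3 source: read CSV objects under s3://my-flow-source/exports/.
s3_source = appflow_mixins.CfnFlowPropsMixin.S3SourcePropertiesProperty(
    bucket_name="my-flow-source",      # placeholder bucket name
    bucket_prefix="exports",           # placeholder object key prefix
    s3_input_format_config=appflow_mixins.CfnFlowPropsMixin.S3InputFormatConfigProperty(
        s3_input_file_type="CSV"       # assumed enumeration value
    )
)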
SAPODataDestinationPropertiesProperty
- class CfnFlowPropsMixin.SAPODataDestinationPropertiesProperty(*, error_handling_config=None, id_field_names=None, object_path=None, success_response_handling_config=None, write_operation_type=None)
Bases:
object
The properties that are applied when using SAPOData as a flow destination.
- Parameters:
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
id_field_names (Optional[Sequence[str]]) – A list of field names that can be used as an ID field when performing a write operation.
object_path (Optional[str]) – The object path specified in the SAPOData flow destination.
success_response_handling_config (Union[IResolvable, SuccessResponseHandlingConfigProperty, Dict[str, Any], None]) – Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data. For example, this setting would determine where to write the response from a destination connector upon a successful insert operation.
write_operation_type (Optional[str]) – The possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins s_aPOData_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.SAPODataDestinationPropertiesProperty( error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], object_path="objectPath", success_response_handling_config=appflow_mixins.CfnFlowPropsMixin.SuccessResponseHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix" ), write_operation_type="writeOperationType" )
Attributes
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- id_field_names
A list of field names that can be used as an ID field when performing a write operation.
- object_path
The object path specified in the SAPOData flow destination.
- success_response_handling_config
Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data.
For example, this setting would determine where to write the response from a destination connector upon a successful insert operation.
- write_operation_type
The possible write operations in the destination connector.
When this value is not provided, this defaults to the INSERT operation.
SAPODataPaginationConfigProperty
- class CfnFlowPropsMixin.SAPODataPaginationConfigProperty(*, max_page_size=None)
Bases:
object
Sets the page size for each concurrent process that transfers OData records from your SAP instance.
A concurrent process is a query that retrieves a batch of records as part of a flow run. Amazon AppFlow can run multiple concurrent processes in parallel to transfer data faster.
- Parameters:
max_page_size (Union[int, float, None]) – The maximum number of records that Amazon AppFlow receives in each page of the response from your SAP application. For transfers of OData records, the maximum page size is 3,000. For transfers of data that comes from an ODP provider, the maximum page size is 10,000.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins s_aPOData_pagination_config_property = appflow_mixins.CfnFlowPropsMixin.SAPODataPaginationConfigProperty( max_page_size=123 )
Attributes
- max_page_size
The maximum number of records that Amazon AppFlow receives in each page of the response from your SAP application.
For transfers of OData records, the maximum page size is 3,000. For transfers of data that comes from an ODP provider, the maximum page size is 10,000.
SAPODataParallelismConfigProperty
- class CfnFlowPropsMixin.SAPODataParallelismConfigProperty(*, max_parallelism=None)
Bases:
object
Sets the number of concurrent processes that transfer OData records from your SAP instance.
A concurrent process is a query that retrieves a batch of records as part of a flow run. Amazon AppFlow can run multiple concurrent processes in parallel to transfer data faster.
- Parameters:
max_parallelism (Union[int, float, None]) – The maximum number of processes that Amazon AppFlow runs at the same time when it retrieves your data from your SAP application.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins s_aPOData_parallelism_config_property = appflow_mixins.CfnFlowPropsMixin.SAPODataParallelismConfigProperty( max_parallelism=123 )
Attributes
- max_parallelism
The maximum number of processes that Amazon AppFlow runs at the same time when it retrieves your data from your SAP application.
SAPODataSourcePropertiesProperty
- class CfnFlowPropsMixin.SAPODataSourcePropertiesProperty(*, object_path=None, pagination_config=None, parallelism_config=None)
Bases:
object
The properties that are applied when using SAPOData as a flow source.
- Parameters:
object_path (Optional[str]) – The object path specified in the SAPOData flow source.
pagination_config (Union[IResolvable, SAPODataPaginationConfigProperty, Dict[str, Any], None]) – Sets the page size for each concurrent process that transfers OData records from your SAP instance.
parallelism_config (Union[IResolvable, SAPODataParallelismConfigProperty, Dict[str, Any], None]) – Sets the number of concurrent processes that transfer OData records from your SAP instance.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins s_aPOData_source_properties_property = appflow_mixins.CfnFlowPropsMixin.SAPODataSourcePropertiesProperty( object_path="objectPath", pagination_config=appflow_mixins.CfnFlowPropsMixin.SAPODataPaginationConfigProperty( max_page_size=123 ), parallelism_config=appflow_mixins.CfnFlowPropsMixin.SAPODataParallelismConfigProperty( max_parallelism=123 ) )
Attributes
- object_path
The object path specified in the SAPOData flow source.
- pagination_config
Sets the page size for each concurrent process that transfers OData records from your SAP instance.
- parallelism_config
Sets the number of concurrent processes that transfer OData records from your SAP instance.
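To make the tuning knobs concrete, here is a sketch of an SAPOData source configured near the documented OData limits. The object path is a placeholder, and the numbers simply echo the maximums stated above:
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

# Hypothetical high-throughput SAPOData source. 3000 is the documented
# maximum page size for OData records; max_parallelism depends on what
# your SAP instance can sustain.
sapo_source = appflow_mixins.CfnFlowPropsMixin.SAPODataSourcePropertiesProperty(
    object_path="/sap/opu/odata/sap/EXAMPLE_SRV/Orders",  # placeholder path
    pagination_config=appflow_mixins.CfnFlowPropsMixin.SAPODataPaginationConfigProperty(
        max_page_size=3000
    ),
    parallelism_config=appflow_mixins.CfnFlowPropsMixin.SAPODataParallelismConfigProperty(
        max_parallelism=4
    )
)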
SalesforceDestinationPropertiesProperty
- class CfnFlowPropsMixin.SalesforceDestinationPropertiesProperty(*, data_transfer_api=None, error_handling_config=None, id_field_names=None, object=None, write_operation_type=None)
Bases:
object
The properties that are applied when Salesforce is being used as a destination.
- Parameters:
data_transfer_api (Optional[str]) – Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data to Salesforce. - AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers to Salesforce. If your flow transfers fewer than 1,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0. Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900 records, and it might use Bulk API 2.0 on the next day to transfer 1,100 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields. By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output. - BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers. Note that Bulk API 2.0 does not transfer Salesforce compound fields. - REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Salesforce destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
id_field_names (Optional[Sequence[str]]) – The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.
object (Optional[str]) – The object specified in the Salesforce flow destination.
write_operation_type (Optional[str]) – This specifies the type of write operation to be performed in Salesforce. When the value is UPSERT, then idFieldNames is required.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins salesforce_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.SalesforceDestinationPropertiesProperty( data_transfer_api="dataTransferApi", error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], object="object", write_operation_type="writeOperationType" )
Attributes
- data_transfer_api
Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data to Salesforce.
AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers to Salesforce. If your flow transfers fewer than 1,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0.
Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900 records, and it might use Bulk API 2.0 on the next day to transfer 1,100 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields.
By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output.
BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers.
Note that Bulk API 2.0 does not transfer Salesforce compound fields.
REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the Salesforce destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- id_field_names
The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.
- object
The object specified in the Salesforce flow destination.
- write_operation_type
This specifies the type of write operation to be performed in Salesforce.
When the value is UPSERT, then idFieldNames is required.
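Putting those constraints together, here is a sketch of a Salesforce destination that upserts Contact records over Bulk API 2.0. The object and field names are placeholders, and id_field_names is supplied because the write operation is UPSERT:
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

# Hypothetical upsert destination. BULKV2 trades small-transfer speed for
# consistent output formatting; UPSERT requires id_field_names.
salesforce_dest = appflow_mixins.CfnFlowPropsMixin.SalesforceDestinationPropertiesProperty(
    object="Contact",                  # placeholder Salesforce object
    data_transfer_api="BULKV2",
    write_operation_type="UPSERT",
    id_field_names=["Email"],          # placeholder external-ID field
    error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty(
        bucket_name="my-flow-errors",  # placeholder bucket for failed records
        fail_on_first_error=False
    )
)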
SalesforceSourcePropertiesProperty
- class CfnFlowPropsMixin.SalesforceSourcePropertiesProperty(*, data_transfer_api=None, enable_dynamic_field_update=None, include_deleted_records=None, object=None)
Bases:
object
The properties that are applied when Salesforce is being used as a source.
- Parameters:
data_transfer_api (Optional[str]) – Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data from Salesforce. - AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers from Salesforce. If your flow transfers fewer than 1,000,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0. Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900,000 records, and it might use Bulk API 2.0 on the next day to transfer 1,100,000 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields. By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output. - BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers. Note that Bulk API 2.0 does not transfer Salesforce compound fields. - REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.
enable_dynamic_field_update (Union[bool, IResolvable, None]) – The flag that enables dynamic fetching of new (recently added) fields in the Salesforce objects while running a flow.
include_deleted_records (Union[bool, IResolvable, None]) – Indicates whether Amazon AppFlow includes deleted files in the flow run.
object (Optional[str]) – The object specified in the Salesforce flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins salesforce_source_properties_property = appflow_mixins.CfnFlowPropsMixin.SalesforceSourcePropertiesProperty( data_transfer_api="dataTransferApi", enable_dynamic_field_update=False, include_deleted_records=False, object="object" )
Attributes
- data_transfer_api
Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data from Salesforce.
AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers from Salesforce. If your flow transfers fewer than 1,000,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0.
Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900,000 records, and it might use Bulk API 2.0 on the next day to transfer 1,100,000 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields.
By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output.
BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers.
Note that Bulk API 2.0 does not transfer Salesforce compound fields.
REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.
- enable_dynamic_field_update
The flag that enables dynamic fetching of new (recently added) fields in the Salesforce objects while running a flow.
- include_deleted_records
Indicates whether Amazon AppFlow includes deleted files in the flow run.
- object
The object specified in the Salesforce flow source.
ScheduledTriggerPropertiesProperty
- class CfnFlowPropsMixin.ScheduledTriggerPropertiesProperty(*, data_pull_mode=None, first_execution_from=None, flow_error_deactivation_threshold=None, schedule_end_time=None, schedule_expression=None, schedule_offset=None, schedule_start_time=None, time_zone=None)
Bases:
object
Specifies the configuration details of a schedule-triggered flow as defined by the user.
Currently, these settings only apply to the Scheduled trigger type.
- Parameters:
data_pull_mode (Optional[str]) – Specifies whether a scheduled flow has an incremental data transfer or a complete data transfer for each flow run.
first_execution_from (Union[int, float, None]) – Specifies the date range for the records to import from the connector in the first flow run.
flow_error_deactivation_threshold (Union[int, float, None]) – Defines how many times a scheduled flow fails consecutively before Amazon AppFlow deactivates it.
schedule_end_time (Union[int, float, None]) – The time at which the scheduled flow ends. The time is formatted as a timestamp that follows the ISO 8601 standard, such as 2022-04-27T13:00:00-07:00.
schedule_expression (Optional[str]) – The scheduling expression that determines the rate at which the schedule will run, for example rate(5minutes).
schedule_offset (Union[int, float, None]) – Specifies the optional offset that is added to the time interval for a schedule-triggered flow.
schedule_start_time (Union[int, float, None]) – The time at which the scheduled flow starts. The time is formatted as a timestamp that follows the ISO 8601 standard, such as 2022-04-26T13:00:00-07:00.
time_zone (Optional[str]) – Specifies the time zone used when referring to the dates and times of a scheduled flow, such as America/New_York. This time zone is only a descriptive label. It doesn’t affect how Amazon AppFlow interprets the timestamps that you specify to schedule the flow. If you want to schedule a flow by using times in a particular time zone, indicate the time zone as a UTC offset in your timestamps. For example, the UTC offsets for the America/New_York timezone are -04:00 EDT and -05:00 EST.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins scheduled_trigger_properties_property = appflow_mixins.CfnFlowPropsMixin.ScheduledTriggerPropertiesProperty( data_pull_mode="dataPullMode", first_execution_from=123, flow_error_deactivation_threshold=123, schedule_end_time=123, schedule_expression="scheduleExpression", schedule_offset=123, schedule_start_time=123, time_zone="timeZone" )
Attributes
- data_pull_mode
Specifies whether a scheduled flow has an incremental data transfer or a complete data transfer for each flow run.
- first_execution_from
Specifies the date range for the records to import from the connector in the first flow run.
- flow_error_deactivation_threshold
Defines how many times a scheduled flow fails consecutively before Amazon AppFlow deactivates it.
- schedule_end_time
The time at which the scheduled flow ends.
The time is formatted as a timestamp that follows the ISO 8601 standard, such as
2022-04-27T13:00:00-07:00.
- schedule_expression
The scheduling expression that determines the rate at which the schedule will run, for example
rate(5minutes).
- schedule_offset
Specifies the optional offset that is added to the time interval for a schedule-triggered flow.
- schedule_start_time
The time at which the scheduled flow starts.
The time is formatted as a timestamp that follows the ISO 8601 standard, such as
2022-04-26T13:00:00-07:00.
- time_zone
Specifies the time zone used when referring to the dates and times of a scheduled flow, such as
America/New_York.This time zone is only a descriptive label. It doesn’t affect how Amazon AppFlow interprets the timestamps that you specify to schedule the flow.
If you want to schedule a flow by using times in a particular time zone, indicate the time zone as a UTC offset in your timestamps. For example, the UTC offsets for the
America/New_Yorktimezone are-04:00EDT and-05:00 EST.
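As a sketch of how these fields fit together, the configuration below schedules a daily incremental pull. Note that although the descriptions show ISO 8601 examples, schedule_start_time and schedule_end_time are numeric properties, so this sketch passes a Unix epoch timestamp; that reading, and the rate() expression form, are assumptions worth verifying against your deployed template:
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

# Hypothetical daily incremental schedule. 1651089600 is the epoch
# equivalent of 2022-04-27T13:00:00-07:00; the -07:00 offset does the
# scheduling work, while time_zone is only a descriptive label.
scheduled_trigger = appflow_mixins.CfnFlowPropsMixin.ScheduledTriggerPropertiesProperty(
    data_pull_mode="Incremental",          # documented alternative: Complete
    schedule_expression="rate(1days)",     # assumed rate() syntax, as in rate(5minutes)
    schedule_start_time=1651089600,        # 2022-04-27T13:00:00-07:00 as epoch seconds
    time_zone="America/New_York"
)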
ServiceNowSourcePropertiesProperty
- class CfnFlowPropsMixin.ServiceNowSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when ServiceNow is being used as a source.
- Parameters:
object (Optional[str]) – The object specified in the ServiceNow flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins service_now_source_properties_property = appflow_mixins.CfnFlowPropsMixin.ServiceNowSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the ServiceNow flow source.
SingularSourcePropertiesProperty
- class CfnFlowPropsMixin.SingularSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Singular is being used as a source.
- Parameters:
object (Optional[str]) – The object specified in the Singular flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins singular_source_properties_property = appflow_mixins.CfnFlowPropsMixin.SingularSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Singular flow source.
SlackSourcePropertiesProperty
- class CfnFlowPropsMixin.SlackSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Slack is being used as a source.
- Parameters:
object (Optional[str]) – The object specified in the Slack flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins slack_source_properties_property = appflow_mixins.CfnFlowPropsMixin.SlackSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Slack flow source.
SnowflakeDestinationPropertiesProperty
- class CfnFlowPropsMixin.SnowflakeDestinationPropertiesProperty(*, bucket_prefix=None, error_handling_config=None, intermediate_bucket_name=None, object=None)
Bases:
object
The properties that are applied when Snowflake is being used as a destination.
- Parameters:
bucket_prefix (Optional[str]) – The object key for the destination bucket in which Amazon AppFlow places the files.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Snowflake destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
intermediate_bucket_name (Optional[str]) – The intermediate bucket that Amazon AppFlow uses when moving data into Snowflake.
object (Optional[str]) – The object specified in the Snowflake flow destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins snowflake_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.SnowflakeDestinationPropertiesProperty( bucket_prefix="bucketPrefix", error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), intermediate_bucket_name="intermediateBucketName", object="object" )
Attributes
- bucket_prefix
The object key for the destination bucket in which Amazon AppFlow places the files.
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the Snowflake destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- intermediate_bucket_name
The intermediate bucket that Amazon AppFlow uses when moving data into Snowflake.
- object
The object specified in the Snowflake flow destination.
SourceConnectorPropertiesProperty
- class CfnFlowPropsMixin.SourceConnectorPropertiesProperty(*, amplitude=None, custom_connector=None, datadog=None, dynatrace=None, google_analytics=None, infor_nexus=None, marketo=None, pardot=None, s3=None, salesforce=None, sapo_data=None, service_now=None, singular=None, slack=None, trendmicro=None, veeva=None, zendesk=None)
Bases:
object
Specifies the information that is required to query a particular connector.
- Parameters:
amplitude (Union[IResolvable, AmplitudeSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Amplitude.
custom_connector (Union[IResolvable, CustomConnectorSourcePropertiesProperty, Dict[str, Any], None]) – The properties that are applied when the custom connector is being used as a source.
datadog (Union[IResolvable, DatadogSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Datadog.
dynatrace (Union[IResolvable, DynatraceSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Dynatrace.
google_analytics (Union[IResolvable, GoogleAnalyticsSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Google Analytics.
infor_nexus (Union[IResolvable, InforNexusSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Infor Nexus.
marketo (Union[IResolvable, MarketoSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Marketo.
pardot (Union[IResolvable, PardotSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Salesforce Pardot.
s3 (Union[IResolvable, S3SourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Amazon S3.
salesforce (Union[IResolvable, SalesforceSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Salesforce.
sapo_data (Union[IResolvable, SAPODataSourcePropertiesProperty, Dict[str, Any], None]) – The properties that are applied when using SAPOData as a flow source.
service_now (Union[IResolvable, ServiceNowSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying ServiceNow.
singular (Union[IResolvable, SingularSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Singular.
slack (Union[IResolvable, SlackSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Slack.
trendmicro (Union[IResolvable, TrendmicroSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Trend Micro.
veeva (Union[IResolvable, VeevaSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Veeva.
zendesk (Union[IResolvable, ZendeskSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Zendesk.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins source_connector_properties_property = appflow_mixins.CfnFlowPropsMixin.SourceConnectorPropertiesProperty( amplitude=appflow_mixins.CfnFlowPropsMixin.AmplitudeSourcePropertiesProperty( object="object" ), custom_connector=appflow_mixins.CfnFlowPropsMixin.CustomConnectorSourcePropertiesProperty( custom_properties={ "custom_properties_key": "customProperties" }, data_transfer_api=appflow_mixins.CfnFlowPropsMixin.DataTransferApiProperty( name="name", type="type" ), entity_name="entityName" ), datadog=appflow_mixins.CfnFlowPropsMixin.DatadogSourcePropertiesProperty( object="object" ), dynatrace=appflow_mixins.CfnFlowPropsMixin.DynatraceSourcePropertiesProperty( object="object" ), google_analytics=appflow_mixins.CfnFlowPropsMixin.GoogleAnalyticsSourcePropertiesProperty( object="object" ), infor_nexus=appflow_mixins.CfnFlowPropsMixin.InforNexusSourcePropertiesProperty( object="object" ), marketo=appflow_mixins.CfnFlowPropsMixin.MarketoSourcePropertiesProperty( object="object" ), pardot=appflow_mixins.CfnFlowPropsMixin.PardotSourcePropertiesProperty( object="object" ), s3=appflow_mixins.CfnFlowPropsMixin.S3SourcePropertiesProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", s3_input_format_config=appflow_mixins.CfnFlowPropsMixin.S3InputFormatConfigProperty( s3_input_file_type="s3InputFileType" ) ), salesforce=appflow_mixins.CfnFlowPropsMixin.SalesforceSourcePropertiesProperty( data_transfer_api="dataTransferApi", enable_dynamic_field_update=False, include_deleted_records=False, object="object" ), sapo_data=appflow_mixins.CfnFlowPropsMixin.SAPODataSourcePropertiesProperty( object_path="objectPath", pagination_config=appflow_mixins.CfnFlowPropsMixin.SAPODataPaginationConfigProperty( max_page_size=123 ), parallelism_config=appflow_mixins.CfnFlowPropsMixin.SAPODataParallelismConfigProperty( max_parallelism=123 ) ), service_now=appflow_mixins.CfnFlowPropsMixin.ServiceNowSourcePropertiesProperty( object="object" ), singular=appflow_mixins.CfnFlowPropsMixin.SingularSourcePropertiesProperty( object="object" ), slack=appflow_mixins.CfnFlowPropsMixin.SlackSourcePropertiesProperty( object="object" ), trendmicro=appflow_mixins.CfnFlowPropsMixin.TrendmicroSourcePropertiesProperty( object="object" ), veeva=appflow_mixins.CfnFlowPropsMixin.VeevaSourcePropertiesProperty( document_type="documentType", include_all_versions=False, include_renditions=False, include_source_files=False, object="object" ), zendesk=appflow_mixins.CfnFlowPropsMixin.ZendeskSourcePropertiesProperty( object="object" ) )
Attributes
- amplitude
Specifies the information that is required for querying Amplitude.
- custom_connector
The properties that are applied when the custom connector is being used as a source.
- datadog
Specifies the information that is required for querying Datadog.
- dynatrace
Specifies the information that is required for querying Dynatrace.
- google_analytics
Specifies the information that is required for querying Google Analytics.
- infor_nexus
Specifies the information that is required for querying Infor Nexus.
- marketo
Specifies the information that is required for querying Marketo.
- pardot
Specifies the information that is required for querying Salesforce Pardot.
- s3
Specifies the information that is required for querying Amazon S3.
- salesforce
Specifies the information that is required for querying Salesforce.
- sapo_data
The properties that are applied when using SAPOData as a flow source.
- service_now
Specifies the information that is required for querying ServiceNow.
- singular
Specifies the information that is required for querying Singular.
- slack
Specifies the information that is required for querying Slack.
- trendmicro
Specifies the information that is required for querying Trend Micro.
- veeva
Specifies the information that is required for querying Veeva.
- zendesk
Specifies the information that is required for querying Zendesk.
SourceFlowConfigProperty
- class CfnFlowPropsMixin.SourceFlowConfigProperty(*, api_version=None, connector_profile_name=None, connector_type=None, incremental_pull_config=None, source_connector_properties=None)
Bases:
object
Contains information about the configuration of the source connector used in the flow.
- Parameters:
api_version (Optional[str]) – The API version of the connector when it’s used as a source in the flow.
connector_profile_name (Optional[str]) – The name of the connector profile. This name must be unique for each connector profile in the AWS account.
connector_type (Optional[str]) – The type of connector, such as Salesforce, Amplitude, and so on.
incremental_pull_config (Union[IResolvable, IncrementalPullConfigProperty, Dict[str, Any], None]) – Defines the configuration for a scheduled incremental data pull. If a valid configuration is provided, the fields specified in the configuration are used when querying for the incremental data pull.
source_connector_properties (Union[IResolvable, SourceConnectorPropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required to query a particular source connector.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins source_flow_config_property = appflow_mixins.CfnFlowPropsMixin.SourceFlowConfigProperty( api_version="apiVersion", connector_profile_name="connectorProfileName", connector_type="connectorType", incremental_pull_config=appflow_mixins.CfnFlowPropsMixin.IncrementalPullConfigProperty( datetime_type_field_name="datetimeTypeFieldName" ), source_connector_properties=appflow_mixins.CfnFlowPropsMixin.SourceConnectorPropertiesProperty( amplitude=appflow_mixins.CfnFlowPropsMixin.AmplitudeSourcePropertiesProperty( object="object" ), custom_connector=appflow_mixins.CfnFlowPropsMixin.CustomConnectorSourcePropertiesProperty( custom_properties={ "custom_properties_key": "customProperties" }, data_transfer_api=appflow_mixins.CfnFlowPropsMixin.DataTransferApiProperty( name="name", type="type" ), entity_name="entityName" ), datadog=appflow_mixins.CfnFlowPropsMixin.DatadogSourcePropertiesProperty( object="object" ), dynatrace=appflow_mixins.CfnFlowPropsMixin.DynatraceSourcePropertiesProperty( object="object" ), google_analytics=appflow_mixins.CfnFlowPropsMixin.GoogleAnalyticsSourcePropertiesProperty( object="object" ), infor_nexus=appflow_mixins.CfnFlowPropsMixin.InforNexusSourcePropertiesProperty( object="object" ), marketo=appflow_mixins.CfnFlowPropsMixin.MarketoSourcePropertiesProperty( object="object" ), pardot=appflow_mixins.CfnFlowPropsMixin.PardotSourcePropertiesProperty( object="object" ), s3=appflow_mixins.CfnFlowPropsMixin.S3SourcePropertiesProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", s3_input_format_config=appflow_mixins.CfnFlowPropsMixin.S3InputFormatConfigProperty( s3_input_file_type="s3InputFileType" ) ), salesforce=appflow_mixins.CfnFlowPropsMixin.SalesforceSourcePropertiesProperty( data_transfer_api="dataTransferApi", enable_dynamic_field_update=False, include_deleted_records=False, object="object" ), sapo_data=appflow_mixins.CfnFlowPropsMixin.SAPODataSourcePropertiesProperty( object_path="objectPath", pagination_config=appflow_mixins.CfnFlowPropsMixin.SAPODataPaginationConfigProperty( max_page_size=123 ), parallelism_config=appflow_mixins.CfnFlowPropsMixin.SAPODataParallelismConfigProperty( max_parallelism=123 ) ), service_now=appflow_mixins.CfnFlowPropsMixin.ServiceNowSourcePropertiesProperty( object="object" ), singular=appflow_mixins.CfnFlowPropsMixin.SingularSourcePropertiesProperty( object="object" ), slack=appflow_mixins.CfnFlowPropsMixin.SlackSourcePropertiesProperty( object="object" ), trendmicro=appflow_mixins.CfnFlowPropsMixin.TrendmicroSourcePropertiesProperty( object="object" ), veeva=appflow_mixins.CfnFlowPropsMixin.VeevaSourcePropertiesProperty( document_type="documentType", include_all_versions=False, include_renditions=False, include_source_files=False, object="object" ), zendesk=appflow_mixins.CfnFlowPropsMixin.ZendeskSourcePropertiesProperty( object="object" ) ) )
Attributes
- api_version
The API version of the connector when it’s used as a source in the flow.
- connector_profile_name
The name of the connector profile.
This name must be unique for each connector profile in the AWS account.
- connector_type
The type of connector, such as Salesforce, Amplitude, and so on.
- incremental_pull_config
Defines the configuration for a scheduled incremental data pull.
If a valid configuration is provided, the fields specified in the configuration are used when querying for the incremental data pull.
- source_connector_properties
Specifies the information that is required to query a particular source connector.
SuccessResponseHandlingConfigProperty
- class CfnFlowPropsMixin.SuccessResponseHandlingConfigProperty(*, bucket_name=None, bucket_prefix=None)
Bases:
object
Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data.
For example, this setting would determine where to write the response from the destination connector upon a successful insert operation.
- Parameters:
bucket_name (Optional[str]) – The name of the Amazon S3 bucket.
bucket_prefix (Optional[str]) – The Amazon S3 bucket prefix.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins success_response_handling_config_property = appflow_mixins.CfnFlowPropsMixin.SuccessResponseHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix" )
Attributes
- bucket_name
The name of the Amazon S3 bucket.
- bucket_prefix
The Amazon S3 bucket prefix.
TaskPropertiesObjectProperty
- class CfnFlowPropsMixin.TaskPropertiesObjectProperty(*, key=None, value=None)
Bases:
object
A map used to store task-related information.
The execution service looks for particular information based on the TaskType.
- Parameters:
key (Optional[str]) – The task property key.
value (Optional[str]) – The task property value.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins task_properties_object_property = appflow_mixins.CfnFlowPropsMixin.TaskPropertiesObjectProperty( key="key", value="value" )
Attributes
- key
The task property key.
- value
The task property value.
TaskProperty
- class CfnFlowPropsMixin.TaskProperty(*, connector_operator=None, destination_field=None, source_fields=None, task_properties=None, task_type=None)
Bases:
object
A class for modeling different types of tasks.
Task implementation varies based on the TaskType.
- Parameters:
connector_operator (Union[IResolvable, ConnectorOperatorProperty, Dict[str, Any], None]) – The operation to be performed on the provided source fields.
destination_field (Optional[str]) – A field in a destination connector, or a field value against which Amazon AppFlow validates a source field.
source_fields (Optional[Sequence[str]]) – The source fields to which a particular task is applied.
task_properties (Union[IResolvable, Sequence[Union[IResolvable, TaskPropertiesObjectProperty, Dict[str, Any]]], None]) – A map used to store task-related information. The execution service looks for particular information based on the TaskType.
task_type (Optional[str]) – Specifies the particular task implementation that Amazon AppFlow performs. Allowed values: Arithmetic | Filter | Map | Map_all | Mask | Merge | Truncate | Validate
- See:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-task.html
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins task_property = appflow_mixins.CfnFlowPropsMixin.TaskProperty( connector_operator=appflow_mixins.CfnFlowPropsMixin.ConnectorOperatorProperty( amplitude="amplitude", custom_connector="customConnector", datadog="datadog", dynatrace="dynatrace", google_analytics="googleAnalytics", infor_nexus="inforNexus", marketo="marketo", pardot="pardot", s3="s3", salesforce="salesforce", sapo_data="sapoData", service_now="serviceNow", singular="singular", slack="slack", trendmicro="trendmicro", veeva="veeva", zendesk="zendesk" ), destination_field="destinationField", source_fields=["sourceFields"], task_properties=[appflow_mixins.CfnFlowPropsMixin.TaskPropertiesObjectProperty( key="key", value="value" )], task_type="taskType" )
Attributes
- connector_operator
The operation to be performed on the provided source fields.
- destination_field
A field in a destination connector, or a field value against which Amazon AppFlow validates a source field.
- source_fields
The source fields to which a particular task is applied.
- task_properties
A map used to store task-related information.
The execution service looks for particular information based on the
TaskType.
- task_type
Specifies the particular task implementation that Amazon AppFlow performs.
Allowed values: Arithmetic | Filter | Map | Map_all | Mask | Merge | Truncate | Validate
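For instance, here is a sketch of a Map task that copies a source field into a differently named destination field. The field names are placeholders, and NO_OP is used as the connector operator on the assumption that no transformation is applied during the mapping:
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

# Hypothetical Map task: project the Salesforce "Email" field onto a
# destination field named "email_address" without transforming it.
map_task = appflow_mixins.CfnFlowPropsMixin.TaskProperty(
    task_type="Map",
    source_fields=["Email"],               # placeholder source field
    destination_field="email_address",     # placeholder destination field
    connector_operator=appflow_mixins.CfnFlowPropsMixin.ConnectorOperatorProperty(
        salesforce="NO_OP"                 # assumed pass-through operator
    )
)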
TrendmicroSourcePropertiesProperty
- class CfnFlowPropsMixin.TrendmicroSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when using Trend Micro as a flow source.
- Parameters:
object (Optional[str]) – The object specified in the Trend Micro flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins trendmicro_source_properties_property = appflow_mixins.CfnFlowPropsMixin.TrendmicroSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Trend Micro flow source.
TriggerConfigProperty
- class CfnFlowPropsMixin.TriggerConfigProperty(*, trigger_properties=None, trigger_type=None)
Bases:
object
The trigger settings that determine how and when Amazon AppFlow runs the specified flow.
- Parameters:
trigger_properties (Union[IResolvable, ScheduledTriggerPropertiesProperty, Dict[str, Any], None]) – Specifies the configuration details of a schedule-triggered flow as defined by the user. Currently, these settings only apply to the Scheduled trigger type.
trigger_type (Optional[str]) – Specifies the type of flow trigger. This can be OnDemand, Scheduled, or Event.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins trigger_config_property = appflow_mixins.CfnFlowPropsMixin.TriggerConfigProperty( trigger_properties=appflow_mixins.CfnFlowPropsMixin.ScheduledTriggerPropertiesProperty( data_pull_mode="dataPullMode", first_execution_from=123, flow_error_deactivation_threshold=123, schedule_end_time=123, schedule_expression="scheduleExpression", schedule_offset=123, schedule_start_time=123, time_zone="timeZone" ), trigger_type="triggerType" )
Attributes
- trigger_properties
Specifies the configuration details of a schedule-triggered flow as defined by the user.
Currently, these settings only apply to the
Scheduled trigger type.
- trigger_type
Specifies the type of flow trigger.
This can be
OnDemand, Scheduled, or Event.
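A sketch of the simplest case, an on-demand trigger, which needs no trigger_properties because those only apply to the Scheduled type:
from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins

# On-demand flows run only when started explicitly, so no
# ScheduledTriggerPropertiesProperty is attached.
on_demand_trigger = appflow_mixins.CfnFlowPropsMixin.TriggerConfigProperty(
    trigger_type="OnDemand"
)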
UpsolverDestinationPropertiesProperty
- class CfnFlowPropsMixin.UpsolverDestinationPropertiesProperty(*, bucket_name=None, bucket_prefix=None, s3_output_format_config=None)
Bases:
object
The properties that are applied when Upsolver is used as a destination.
- Parameters:
bucket_name (Optional[str]) – The Upsolver Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
bucket_prefix (Optional[str]) – The object key for the destination Upsolver Amazon S3 bucket in which Amazon AppFlow places the files.
s3_output_format_config (Union[IResolvable, UpsolverS3OutputFormatConfigProperty, Dict[str, Any], None]) – The configuration that determines how data is formatted when Upsolver is used as the flow destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins upsolver_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.UpsolverDestinationPropertiesProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", s3_output_format_config=appflow_mixins.CfnFlowPropsMixin.UpsolverS3OutputFormatConfigProperty( aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType", prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ) ) )
Attributes
- bucket_name
The Upsolver Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
- bucket_prefix
The object key for the destination Upsolver Amazon S3 bucket in which Amazon AppFlow places the files.
- s3_output_format_config
The configuration that determines how data is formatted when Upsolver is used as the flow destination.
UpsolverS3OutputFormatConfigProperty
- class CfnFlowPropsMixin.UpsolverS3OutputFormatConfigProperty(*, aggregation_config=None, file_type=None, prefix_config=None)
Bases:
object
The configuration that determines how Amazon AppFlow formats the flow output data when Upsolver is used as the destination.
- Parameters:
aggregation_config (Union[IResolvable, AggregationConfigProperty, Dict[str, Any], None]) – The aggregation settings that you can use to customize the output format of your flow data.
file_type (Optional[str]) – Indicates the file type that Amazon AppFlow places in the Upsolver Amazon S3 bucket.
prefix_config (Union[IResolvable, PrefixConfigProperty, Dict[str, Any], None]) – Specifies elements that Amazon AppFlow includes in the file and folder names in the flow destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins upsolver_s3_output_format_config_property = appflow_mixins.CfnFlowPropsMixin.UpsolverS3OutputFormatConfigProperty( aggregation_config=appflow_mixins.CfnFlowPropsMixin.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType", prefix_config=appflow_mixins.CfnFlowPropsMixin.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ) )
Attributes
- aggregation_config
The aggregation settings that you can use to customize the output format of your flow data.
- file_type
Indicates the file type that Amazon AppFlow places in the Upsolver Amazon S3 bucket.
- prefix_config
Specifies elements that Amazon AppFlow includes in the file and folder names in the flow destination.
VeevaSourcePropertiesProperty
- class CfnFlowPropsMixin.VeevaSourcePropertiesProperty(*, document_type=None, include_all_versions=None, include_renditions=None, include_source_files=None, object=None)
Bases:
object
The properties that are applied when using Veeva as a flow source.
- Parameters:
document_type (Optional[str]) – The document type specified in the Veeva document extract flow.
include_all_versions (Union[bool, IResolvable, None]) – Boolean value to include all versions of files in the Veeva document extract flow.
include_renditions (Union[bool, IResolvable, None]) – Boolean value to include file renditions in the Veeva document extract flow.
include_source_files (Union[bool, IResolvable, None]) – Boolean value to include source files in the Veeva document extract flow.
object (Optional[str]) – The object specified in the Veeva flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins veeva_source_properties_property = appflow_mixins.CfnFlowPropsMixin.VeevaSourcePropertiesProperty( document_type="documentType", include_all_versions=False, include_renditions=False, include_source_files=False, object="object" )
Attributes
- document_type
The document type specified in the Veeva document extract flow.
- include_all_versions
Boolean value to include all versions of files in the Veeva document extract flow.
- include_renditions
Boolean value to include file renditions in the Veeva document extract flow.
- include_source_files
Boolean value to include source files in the Veeva document extract flow.
- object
The object specified in the Veeva flow source.
ZendeskDestinationPropertiesProperty
- class CfnFlowPropsMixin.ZendeskDestinationPropertiesProperty(*, error_handling_config=None, id_field_names=None, object=None, write_operation_type=None)
Bases:
object
The properties that are applied when Zendesk is used as a destination.
- Parameters:
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
id_field_names (Optional[Sequence[str]]) – A list of field names that can be used as an ID field when performing a write operation.
object (Optional[str]) – The object specified in the Zendesk flow destination.
write_operation_type (Optional[str]) – The possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins zendesk_destination_properties_property = appflow_mixins.CfnFlowPropsMixin.ZendeskDestinationPropertiesProperty( error_handling_config=appflow_mixins.CfnFlowPropsMixin.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], object="object", write_operation_type="writeOperationType" )
Attributes
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig is a part of the destination connector details.
- id_field_names
A list of field names that can be used as an ID field when performing a write operation.
- object
The object specified in the Zendesk flow destination.
- write_operation_type
The possible write operations in the destination connector.
When this value is not provided, this defaults to the INSERT operation.
ZendeskSourcePropertiesProperty
- class CfnFlowPropsMixin.ZendeskSourcePropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when using Zendesk as a flow source.
- Parameters:
object (Optional[str]) – The object specified in the Zendesk flow source.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_appflow import mixins as appflow_mixins zendesk_source_properties_property = appflow_mixins.CfnFlowPropsMixin.ZendeskSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Zendesk flow source.