CfnFlow
- class aws_cdk.aws_appflow.CfnFlow(scope, id, *, destination_flow_config_list, flow_name, source_flow_config, tasks, trigger_config, description=None, flow_status=None, kms_arn=None, metadata_catalog_config=None, tags=None)
Bases:
CfnResource
A CloudFormation AWS::AppFlow::Flow.
The AWS::AppFlow::Flow resource is an Amazon AppFlow resource type that specifies a new flow.
If you want to use AWS CloudFormation to create a connector profile for connectors that implement OAuth (such as Salesforce, Slack, Zendesk, and Google Analytics), you must fetch the access and refresh tokens. You can do this by implementing your own UI for OAuth, or by retrieving the tokens from elsewhere. Alternatively, you can use the Amazon AppFlow console to create the connector profile, and then use that connector profile in the flow creation CloudFormation template.
- CloudformationResource:
AWS::AppFlow::Flow
- Link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-appflow-flow.html
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow cfn_flow = appflow.CfnFlow(self, "MyCfnFlow", destination_flow_config_list=[appflow.CfnFlow.DestinationFlowConfigProperty( connector_type="connectorType", destination_connector_properties=appflow.CfnFlow.DestinationConnectorPropertiesProperty( custom_connector=appflow.CfnFlow.CustomConnectorDestinationPropertiesProperty( entity_name="entityName", # the properties below are optional custom_properties={ "custom_properties_key": "customProperties" }, error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" ), event_bridge=appflow.CfnFlow.EventBridgeDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), lookout_metrics=appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty( object="object" ), marketo=appflow.CfnFlow.MarketoDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), redshift=appflow.CfnFlow.RedshiftDestinationPropertiesProperty( intermediate_bucket_name="intermediateBucketName", object="object", # the properties below are optional bucket_prefix="bucketPrefix", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), s3=appflow.CfnFlow.S3DestinationPropertiesProperty( bucket_name="bucketName", # the properties below are optional bucket_prefix="bucketPrefix", s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty( aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType", prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), preserve_source_data_typing=False ) ), salesforce=appflow.CfnFlow.SalesforceDestinationPropertiesProperty( object="object", # the properties below are optional data_transfer_api="dataTransferApi", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" ), sapo_data=appflow.CfnFlow.SAPODataDestinationPropertiesProperty( object_path="objectPath", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix" ), write_operation_type="writeOperationType" ), snowflake=appflow.CfnFlow.SnowflakeDestinationPropertiesProperty( intermediate_bucket_name="intermediateBucketName", object="object", # the properties below are optional bucket_prefix="bucketPrefix", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( 
bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), upsolver=appflow.CfnFlow.UpsolverDestinationPropertiesProperty( bucket_name="bucketName", s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty( prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), # the properties below are optional aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType" ), # the properties below are optional bucket_prefix="bucketPrefix" ), zendesk=appflow.CfnFlow.ZendeskDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" ) ), # the properties below are optional api_version="apiVersion", connector_profile_name="connectorProfileName" )], flow_name="flowName", source_flow_config=appflow.CfnFlow.SourceFlowConfigProperty( connector_type="connectorType", source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty( amplitude=appflow.CfnFlow.AmplitudeSourcePropertiesProperty( object="object" ), custom_connector=appflow.CfnFlow.CustomConnectorSourcePropertiesProperty( entity_name="entityName", # the properties below are optional custom_properties={ "custom_properties_key": "customProperties" } ), datadog=appflow.CfnFlow.DatadogSourcePropertiesProperty( object="object" ), dynatrace=appflow.CfnFlow.DynatraceSourcePropertiesProperty( object="object" ), google_analytics=appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty( object="object" ), infor_nexus=appflow.CfnFlow.InforNexusSourcePropertiesProperty( object="object" ), marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty( object="object" ), pardot=appflow.CfnFlow.PardotSourcePropertiesProperty( object="object" ), s3=appflow.CfnFlow.S3SourcePropertiesProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", # the properties below are optional s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty( s3_input_file_type="s3InputFileType" ) ), salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty( object="object", # the properties below are optional data_transfer_api="dataTransferApi", enable_dynamic_field_update=False, include_deleted_records=False ), sapo_data=appflow.CfnFlow.SAPODataSourcePropertiesProperty( object_path="objectPath" ), service_now=appflow.CfnFlow.ServiceNowSourcePropertiesProperty( object="object" ), singular=appflow.CfnFlow.SingularSourcePropertiesProperty( object="object" ), slack=appflow.CfnFlow.SlackSourcePropertiesProperty( object="object" ), trendmicro=appflow.CfnFlow.TrendmicroSourcePropertiesProperty( object="object" ), veeva=appflow.CfnFlow.VeevaSourcePropertiesProperty( object="object", # the properties below are optional document_type="documentType", include_all_versions=False, include_renditions=False, include_source_files=False ), zendesk=appflow.CfnFlow.ZendeskSourcePropertiesProperty( object="object" ) ), # the properties below are optional api_version="apiVersion", connector_profile_name="connectorProfileName", incremental_pull_config=appflow.CfnFlow.IncrementalPullConfigProperty( datetime_type_field_name="datetimeTypeFieldName" ) ), tasks=[appflow.CfnFlow.TaskProperty( 
source_fields=["sourceFields"], task_type="taskType", # the properties below are optional connector_operator=appflow.CfnFlow.ConnectorOperatorProperty( amplitude="amplitude", custom_connector="customConnector", datadog="datadog", dynatrace="dynatrace", google_analytics="googleAnalytics", infor_nexus="inforNexus", marketo="marketo", pardot="pardot", s3="s3", salesforce="salesforce", sapo_data="sapoData", service_now="serviceNow", singular="singular", slack="slack", trendmicro="trendmicro", veeva="veeva", zendesk="zendesk" ), destination_field="destinationField", task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty( key="key", value="value" )] )], trigger_config=appflow.CfnFlow.TriggerConfigProperty( trigger_type="triggerType", # the properties below are optional trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty( schedule_expression="scheduleExpression", # the properties below are optional data_pull_mode="dataPullMode", first_execution_from=123, flow_error_deactivation_threshold=123, schedule_end_time=123, schedule_offset=123, schedule_start_time=123, time_zone="timeZone" ) ), # the properties below are optional description="description", flow_status="flowStatus", kms_arn="kmsArn", metadata_catalog_config=appflow.CfnFlow.MetadataCatalogConfigProperty( glue_data_catalog=appflow.CfnFlow.GlueDataCatalogProperty( database_name="databaseName", role_arn="roleArn", table_prefix="tablePrefix" ) ), tags=[CfnTag( key="key", value="value" )] )
Create a new
AWS::AppFlow::Flow
.- Parameters:
scope (
Construct
) –scope in which this resource is defined.
id (
str
) –scoped id of the resource.
destination_flow_config_list (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,DestinationFlowConfigProperty
,Dict
[str
,Any
]]]]) – The configuration that controls how Amazon AppFlow places data in the destination connector.flow_name (
str
) – The specified name of the flow. Spaces are not allowed. Use underscores (_) or hyphens (-) only.source_flow_config (
Union
[IResolvable
,SourceFlowConfigProperty
,Dict
[str
,Any
]]) – Contains information about the configuration of the source connector used in the flow.tasks (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,TaskProperty
,Dict
[str
,Any
]]]]) – A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.trigger_config (
Union
[IResolvable
,TriggerConfigProperty
,Dict
[str
,Any
]]) – The trigger settings that determine how and when Amazon AppFlow runs the specified flow.description (
Optional
[str
]) – A user-entered description of the flow.flow_status (
Optional
[str
]) – Sets the status of the flow. You can specify one of the following values: - Active - The flow runs based on the trigger settings that you defined. Active scheduled flows run as scheduled, and active event-triggered flows run when the specified change event occurs. However, active on-demand flows run only when you manually start them by using Amazon AppFlow. - Suspended - You can use this option to deactivate an active flow. Scheduled and event-triggered flows will cease to run until you reactivate them. This value only affects scheduled and event-triggered flows. It has no effect for on-demand flows. If you omit the FlowStatus parameter, Amazon AppFlow creates the flow with a default status. The default status for on-demand flows is Active. The default status for scheduled and event-triggered flows is Draft, which means they’re not yet active.kms_arn (
Optional
[str
]) – The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption. This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don’t provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.metadata_catalog_config (
Union
[IResolvable
,MetadataCatalogConfigProperty
,Dict
[str
,Any
],None
]) –AWS::AppFlow::Flow.MetadataCatalogConfig
.tags (
Optional
[Sequence
[Union
[CfnTag
,Dict
[str
,Any
]]]]) – The tags used to organize, track, or control access for your flow.
Methods
- add_deletion_override(path)
Syntactic sugar for
addOverride(path, undefined)
.- Parameters:
path (
str
) – The path of the value to delete.- Return type:
None
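For example, a minimal sketch (assuming cfn_flow is the CfnFlow instance from the example above and that its Description property was set there):
# Remove the optional Description property from the synthesized resource.
cfn_flow.add_deletion_override("Properties.Description")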
- add_depends_on(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (
CfnResource
)- Return type:
None
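A hypothetical sketch (assuming cfn_flow from the example above and an S3 bucket defined in the same stack; the bucket is illustrative only):
import aws_cdk.aws_s3 as s3

# Make the flow wait until the error-handling bucket has been created.
error_bucket = s3.CfnBucket(self, "ErrorHandlingBucket")
cfn_flow.add_depends_on(error_bucket)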
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (
str
)value (
Any
)
- See:
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
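A minimal sketch (assuming cfn_flow from the example above; the key and value are placeholders):
# Attach metadata that appears under the resource in the synthesized template.
cfn_flow.add_metadata("Purpose", "sales-data-sync")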
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use
addPropertyOverride
or prefix path with “Properties.” (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal
.
in the property name, prefix with a\
. In most programming languages you will need to write this as"\\."
because the\
itself will need to be escaped.For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"]) cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides Example:
"Properties": { "GlobalSecondaryIndexes": [ { "Projection": { "NonKeyAttributes": [ "myattribute" ] ... } ... }, { "ProjectionType": "INCLUDE" ... }, ] ... }
The
value
argument toaddOverride
will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.- Parameters:
path (
str
) –The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (
Any
) –The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (
str
) – The path to the property.- Return type:
None
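For example (assuming cfn_flow from the example above; the property name is illustrative):
# Equivalent to add_deletion_override("Properties.Description").
cfn_flow.add_property_deletion_override("Description")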
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for
addOverride("Properties.<...>", value)
.- Parameters:
property_path (
str
) – The path of the property.value (
Any
) – The value.
- Return type:
None
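A minimal sketch (assuming cfn_flow from the example above; note that CloudFormation property names are PascalCase):
# Force the flow to synthesize with FlowStatus set to Suspended, regardless of the construct props.
cfn_flow.add_property_override("FlowStatus", "Suspended")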
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (
RemovalPolicy.DESTROY
), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN
).- Parameters:
policy (
Optional
[RemovalPolicy
])apply_to_update_replace_policy (
Optional
[bool
]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: truedefault (
Optional
[RemovalPolicy
]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- Return type:
None
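A minimal sketch (assuming cfn_flow from the example above and the CDK v1-style core import used alongside this module):
from aws_cdk.core import RemovalPolicy

# Keep the flow in the account if it stops being managed by this stack.
cfn_flow.apply_removal_policy(RemovalPolicy.RETAIN)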
- get_att(attribute_name)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g.
resource.arn
), but this can be used for future compatibility in case there is no generated attribute.- Parameters:
attribute_name (
str
) – The name of the attribute.- Return type:
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (
str
)- See:
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (
TreeInspector
) –tree inspector to collect and process attributes.
- Return type:
None
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (
str
) – The new logical ID to use for this stack element.- Return type:
None
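For example (assuming cfn_flow from the example above; the new logical ID is a placeholder):
# Pin the logical ID so the synthesized template uses a stable, human-readable name.
cfn_flow.override_logical_id("SalesDataFlow")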
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::AppFlow::Flow'
- attr_flow_arn
The flow’s Amazon Resource Name (ARN).
- CloudformationAttribute:
FlowArn
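For example, the attribute can be surfaced as a stack output (a minimal sketch assuming cfn_flow from the example above and the CDK v1-style core import; the output ID is a placeholder):
from aws_cdk.core import CfnOutput

CfnOutput(self, "FlowArnOutput", value=cfn_flow.attr_flow_arn)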
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
return:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- description
A user-entered description of the flow.
- destination_flow_config_list
The configuration that controls how Amazon AppFlow places data in the destination connector.
- flow_name
The specified name of the flow.
Spaces are not allowed. Use underscores (_) or hyphens (-) only.
- flow_status
Sets the status of the flow. You can specify one of the following values:
Active - The flow runs based on the trigger settings that you defined. Active scheduled flows run as scheduled, and active event-triggered flows run when the specified change event occurs. However, active on-demand flows run only when you manually start them by using Amazon AppFlow.
Suspended - You can use this option to deactivate an active flow. Scheduled and event-triggered flows will cease to run until you reactivate them. This value only affects scheduled and event-triggered flows. It has no effect for on-demand flows.
If you omit the FlowStatus parameter, Amazon AppFlow creates the flow with a default status. The default status for on-demand flows is Active. The default status for scheduled and event-triggered flows is Draft, which means they’re not yet active.
- Link:
- kms_arn
The ARN (Amazon Resource Name) of the Key Management Service (KMS) key you provide for encryption.
This is required if you do not want to use the Amazon AppFlow-managed KMS key. If you don’t provide anything here, Amazon AppFlow uses the Amazon AppFlow-managed KMS key.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use
overrideLogicalId(newLogicalId)
.- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- metadata_catalog_config
AWS::AppFlow::Flow.MetadataCatalogConfig
.
- node
The construct tree node associated with this construct.
- ref
Return a string that will be resolved to a CloudFormation
{ Ref }
for this element.If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through
Lazy.any({ produce: resource.ref })
.
- source_flow_config
Contains information about the configuration of the source connector used in the flow.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
The tags used to organize, track, or control access for your flow.
- tasks
A list of tasks that Amazon AppFlow performs while transferring the data in the flow run.
- trigger_config
The trigger settings that determine how and when Amazon AppFlow runs the specified flow.
Static Methods
- classmethod is_cfn_element(x)
Returns
true
if a construct is a stack element (i.e. part of the synthesized cloudformation template).Uses duck-typing instead of
instanceof
to allow stack elements from different versions of this library to be included in the same stack.- Parameters:
x (
Any
)- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(construct)
Check whether the given construct is a CfnResource.
- Parameters:
construct (
IConstruct
)- Return type:
bool
- classmethod is_construct(x)
Return whether the given object is a Construct.
- Parameters:
x (
Any
)- Return type:
bool
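A minimal sketch of the duck-typed checks (assuming cfn_flow from the example above; all three calls return True for it):
appflow.CfnFlow.is_construct(cfn_flow)
appflow.CfnFlow.is_cfn_resource(cfn_flow)
appflow.CfnFlow.is_cfn_element(cfn_flow)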
AggregationConfigProperty
- class CfnFlow.AggregationConfigProperty(*, aggregation_type=None, target_file_size=None)
Bases:
object
The aggregation settings that you can use to customize the output format of your flow data.
- Parameters:
aggregation_type (
Optional
[str
]) – Specifies whether Amazon AppFlow aggregates the flow records into a single file, or leaves them unaggregated.target_file_size (
Union
[int
,float
,None
]) –CfnFlow.AggregationConfigProperty.TargetFileSize
.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow aggregation_config_property = appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 )
Attributes
- aggregation_type
Specifies whether Amazon AppFlow aggregates the flow records into a single file, or leaves them unaggregated.
- target_file_size
CfnFlow.AggregationConfigProperty.TargetFileSize
.
AmplitudeSourcePropertiesProperty
- class CfnFlow.AmplitudeSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when Amplitude is being used as a source.
- Parameters:
object (
str
) – The object specified in the Amplitude flow source.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow amplitude_source_properties_property = appflow.CfnFlow.AmplitudeSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Amplitude flow source.
ConnectorOperatorProperty
- class CfnFlow.ConnectorOperatorProperty(*, amplitude=None, custom_connector=None, datadog=None, dynatrace=None, google_analytics=None, infor_nexus=None, marketo=None, pardot=None, s3=None, salesforce=None, sapo_data=None, service_now=None, singular=None, slack=None, trendmicro=None, veeva=None, zendesk=None)
Bases:
object
The operation to be performed on the provided source fields.
- Parameters:
amplitude (
Optional
[str
]) – The operation to be performed on the provided Amplitude source fields.custom_connector (
Optional
[str
]) – Operators supported by the custom connector.datadog (
Optional
[str
]) – The operation to be performed on the provided Datadog source fields.dynatrace (
Optional
[str
]) – The operation to be performed on the provided Dynatrace source fields.google_analytics (
Optional
[str
]) – The operation to be performed on the provided Google Analytics source fields.infor_nexus (
Optional
[str
]) – The operation to be performed on the provided Infor Nexus source fields.marketo (
Optional
[str
]) – The operation to be performed on the provided Marketo source fields.pardot (
Optional
[str
]) –CfnFlow.ConnectorOperatorProperty.Pardot
.s3 (
Optional
[str
]) – The operation to be performed on the provided Amazon S3 source fields.salesforce (
Optional
[str
]) – The operation to be performed on the provided Salesforce source fields.sapo_data (
Optional
[str
]) – The operation to be performed on the provided SAPOData source fields.service_now (
Optional
[str
]) – The operation to be performed on the provided ServiceNow source fields.singular (
Optional
[str
]) – The operation to be performed on the provided Singular source fields.slack (
Optional
[str
]) – The operation to be performed on the provided Slack source fields.trendmicro (
Optional
[str
]) – The operation to be performed on the provided Trend Micro source fields.veeva (
Optional
[str
]) – The operation to be performed on the provided Veeva source fields.zendesk (
Optional
[str
]) – The operation to be performed on the provided Zendesk source fields.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow connector_operator_property = appflow.CfnFlow.ConnectorOperatorProperty( amplitude="amplitude", custom_connector="customConnector", datadog="datadog", dynatrace="dynatrace", google_analytics="googleAnalytics", infor_nexus="inforNexus", marketo="marketo", pardot="pardot", s3="s3", salesforce="salesforce", sapo_data="sapoData", service_now="serviceNow", singular="singular", slack="slack", trendmicro="trendmicro", veeva="veeva", zendesk="zendesk" )
Attributes
- amplitude
The operation to be performed on the provided Amplitude source fields.
- custom_connector
Operators supported by the custom connector.
- datadog
The operation to be performed on the provided Datadog source fields.
- dynatrace
The operation to be performed on the provided Dynatrace source fields.
- google_analytics
The operation to be performed on the provided Google Analytics source fields.
- infor_nexus
The operation to be performed on the provided Infor Nexus source fields.
- marketo
The operation to be performed on the provided Marketo source fields.
- pardot
CfnFlow.ConnectorOperatorProperty.Pardot
.
- s3
The operation to be performed on the provided Amazon S3 source fields.
- salesforce
The operation to be performed on the provided Salesforce source fields.
- sapo_data
The operation to be performed on the provided SAPOData source fields.
- service_now
The operation to be performed on the provided ServiceNow source fields.
- singular
The operation to be performed on the provided Singular source fields.
- slack
The operation to be performed on the provided Slack source fields.
- trendmicro
The operation to be performed on the provided Trend Micro source fields.
- veeva
The operation to be performed on the provided Veeva source fields.
- zendesk
The operation to be performed on the provided Zendesk source fields.
CustomConnectorDestinationPropertiesProperty
- class CfnFlow.CustomConnectorDestinationPropertiesProperty(*, entity_name, custom_properties=None, error_handling_config=None, id_field_names=None, write_operation_type=None)
Bases:
object
The properties that are applied when the custom connector is being used as a destination.
- Parameters:
entity_name (
str
) – The entity specified in the custom connector as a destination in the flow.custom_properties (
Union
[IResolvable
,Mapping
[str
,str
],None
]) – The custom properties that are specific to the connector when it’s used as a destination in the flow.error_handling_config (
Union
[IResolvable
,ErrorHandlingConfigProperty
,Dict
[str
,Any
],None
]) – The settings that determine how Amazon AppFlow handles an error when placing data in the custom connector as destination.id_field_names (
Optional
[Sequence
[str
]]) – The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update, delete, or upsert.write_operation_type (
Optional
[str
]) – Specifies the type of write operation to be performed in the custom connector when it’s used as destination.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow custom_connector_destination_properties_property = appflow.CfnFlow.CustomConnectorDestinationPropertiesProperty( entity_name="entityName", # the properties below are optional custom_properties={ "custom_properties_key": "customProperties" }, error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" )
Attributes
- custom_properties
The custom properties that are specific to the connector when it’s used as a destination in the flow.
- entity_name
The entity specified in the custom connector as a destination in the flow.
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the custom connector as destination.
- id_field_names
The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update, delete, or upsert.
- write_operation_type
Specifies the type of write operation to be performed in the custom connector when it’s used as destination.
CustomConnectorSourcePropertiesProperty
- class CfnFlow.CustomConnectorSourcePropertiesProperty(*, entity_name, custom_properties=None)
Bases:
object
The properties that are applied when the custom connector is being used as a source.
- Parameters:
entity_name (
str
) – The entity specified in the custom connector as a source in the flow.custom_properties (
Union
[IResolvable
,Mapping
[str
,str
],None
]) – Custom properties that are required to use the custom connector as a source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow custom_connector_source_properties_property = appflow.CfnFlow.CustomConnectorSourcePropertiesProperty( entity_name="entityName", # the properties below are optional custom_properties={ "custom_properties_key": "customProperties" } )
Attributes
- custom_properties
Custom properties that are required to use the custom connector as a source.
- entity_name
The entity specified in the custom connector as a source in the flow.
DatadogSourcePropertiesProperty
- class CfnFlow.DatadogSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when Datadog is being used as a source.
- Parameters:
object (
str
) – The object specified in the Datadog flow source.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow datadog_source_properties_property = appflow.CfnFlow.DatadogSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Datadog flow source.
DestinationConnectorPropertiesProperty
- class CfnFlow.DestinationConnectorPropertiesProperty(*, custom_connector=None, event_bridge=None, lookout_metrics=None, marketo=None, redshift=None, s3=None, salesforce=None, sapo_data=None, snowflake=None, upsolver=None, zendesk=None)
Bases:
object
This stores the information that is required to query a particular connector.
- Parameters:
custom_connector (
Union
[IResolvable
,CustomConnectorDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties that are required to query the custom connector.event_bridge (
Union
[IResolvable
,EventBridgeDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query Amazon EventBridge.lookout_metrics (
Union
[IResolvable
,LookoutMetricsDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query Amazon Lookout for Metrics.marketo (
Union
[IResolvable
,MarketoDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query Marketo.redshift (
Union
[IResolvable
,RedshiftDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query Amazon Redshift.s3 (
Union
[IResolvable
,S3DestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query Amazon S3.salesforce (
Union
[IResolvable
,SalesforceDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query Salesforce.sapo_data (
Union
[IResolvable
,SAPODataDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query SAPOData.snowflake (
Union
[IResolvable
,SnowflakeDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query Snowflake.upsolver (
Union
[IResolvable
,UpsolverDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query Upsolver.zendesk (
Union
[IResolvable
,ZendeskDestinationPropertiesProperty
,Dict
[str
,Any
],None
]) – The properties required to query Zendesk.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow destination_connector_properties_property = appflow.CfnFlow.DestinationConnectorPropertiesProperty( custom_connector=appflow.CfnFlow.CustomConnectorDestinationPropertiesProperty( entity_name="entityName", # the properties below are optional custom_properties={ "custom_properties_key": "customProperties" }, error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" ), event_bridge=appflow.CfnFlow.EventBridgeDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), lookout_metrics=appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty( object="object" ), marketo=appflow.CfnFlow.MarketoDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), redshift=appflow.CfnFlow.RedshiftDestinationPropertiesProperty( intermediate_bucket_name="intermediateBucketName", object="object", # the properties below are optional bucket_prefix="bucketPrefix", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), s3=appflow.CfnFlow.S3DestinationPropertiesProperty( bucket_name="bucketName", # the properties below are optional bucket_prefix="bucketPrefix", s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty( aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType", prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), preserve_source_data_typing=False ) ), salesforce=appflow.CfnFlow.SalesforceDestinationPropertiesProperty( object="object", # the properties below are optional data_transfer_api="dataTransferApi", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" ), sapo_data=appflow.CfnFlow.SAPODataDestinationPropertiesProperty( object_path="objectPath", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix" ), write_operation_type="writeOperationType" ), snowflake=appflow.CfnFlow.SnowflakeDestinationPropertiesProperty( intermediate_bucket_name="intermediateBucketName", object="object", # the properties below are optional bucket_prefix="bucketPrefix", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), upsolver=appflow.CfnFlow.UpsolverDestinationPropertiesProperty( 
bucket_name="bucketName", s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty( prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), # the properties below are optional aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType" ), # the properties below are optional bucket_prefix="bucketPrefix" ), zendesk=appflow.CfnFlow.ZendeskDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" ) )
Attributes
- custom_connector
The properties that are required to query the custom connector.
- event_bridge
The properties required to query Amazon EventBridge.
- lookout_metrics
The properties required to query Amazon Lookout for Metrics.
- marketo
The properties required to query Marketo.
- redshift
The properties required to query Amazon Redshift.
- s3
The properties required to query Amazon S3.
- salesforce
The properties required to query Salesforce.
- sapo_data
The properties required to query SAPOData.
- snowflake
The properties required to query Snowflake.
- upsolver
The properties required to query Upsolver.
- zendesk
The properties required to query Zendesk.
DestinationFlowConfigProperty
- class CfnFlow.DestinationFlowConfigProperty(*, connector_type, destination_connector_properties, api_version=None, connector_profile_name=None)
Bases:
object
Contains information about the configuration of destination connectors present in the flow.
- Parameters:
connector_type (
str
) – The type of destination connector, such as Salesforce, Amazon S3, and so on. Allowed Values :EventBridge | Redshift | S3 | Salesforce | Snowflake
destination_connector_properties (
Union
[IResolvable
,DestinationConnectorPropertiesProperty
,Dict
[str
,Any
]]) – This stores the information that is required to query a particular connector.api_version (
Optional
[str
]) – The API version that the destination connector uses.connector_profile_name (
Optional
[str
]) – The name of the connector profile. This name must be unique for each connector profile in the AWS account.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow destination_flow_config_property = appflow.CfnFlow.DestinationFlowConfigProperty( connector_type="connectorType", destination_connector_properties=appflow.CfnFlow.DestinationConnectorPropertiesProperty( custom_connector=appflow.CfnFlow.CustomConnectorDestinationPropertiesProperty( entity_name="entityName", # the properties below are optional custom_properties={ "custom_properties_key": "customProperties" }, error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" ), event_bridge=appflow.CfnFlow.EventBridgeDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), lookout_metrics=appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty( object="object" ), marketo=appflow.CfnFlow.MarketoDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), redshift=appflow.CfnFlow.RedshiftDestinationPropertiesProperty( intermediate_bucket_name="intermediateBucketName", object="object", # the properties below are optional bucket_prefix="bucketPrefix", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), s3=appflow.CfnFlow.S3DestinationPropertiesProperty( bucket_name="bucketName", # the properties below are optional bucket_prefix="bucketPrefix", s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty( aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType", prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), preserve_source_data_typing=False ) ), salesforce=appflow.CfnFlow.SalesforceDestinationPropertiesProperty( object="object", # the properties below are optional data_transfer_api="dataTransferApi", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" ), sapo_data=appflow.CfnFlow.SAPODataDestinationPropertiesProperty( object_path="objectPath", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix" ), write_operation_type="writeOperationType" ), snowflake=appflow.CfnFlow.SnowflakeDestinationPropertiesProperty( intermediate_bucket_name="intermediateBucketName", object="object", # the properties below are optional bucket_prefix="bucketPrefix", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", 
bucket_prefix="bucketPrefix", fail_on_first_error=False ) ), upsolver=appflow.CfnFlow.UpsolverDestinationPropertiesProperty( bucket_name="bucketName", s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty( prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), # the properties below are optional aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType" ), # the properties below are optional bucket_prefix="bucketPrefix" ), zendesk=appflow.CfnFlow.ZendeskDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" ) ), # the properties below are optional api_version="apiVersion", connector_profile_name="connectorProfileName" )
Attributes
- api_version
The API version that the destination connector uses.
- connector_profile_name
The name of the connector profile.
This name must be unique for each connector profile in the AWS account.
- connector_type
The type of destination connector, such as Salesforce, Amazon S3, and so on.
Allowed Values :
EventBridge | Redshift | S3 | Salesforce | Snowflake
- destination_connector_properties
This stores the information that is required to query a particular connector.
DynatraceSourcePropertiesProperty
- class CfnFlow.DynatraceSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when Dynatrace is being used as a source.
- Parameters:
object (
str
) – The object specified in the Dynatrace flow source.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow dynatrace_source_properties_property = appflow.CfnFlow.DynatraceSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Dynatrace flow source.
ErrorHandlingConfigProperty
- class CfnFlow.ErrorHandlingConfigProperty(*, bucket_name=None, bucket_prefix=None, fail_on_first_error=None)
Bases:
object
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig
is a part of the destination connector details.- Parameters:
bucket_name (
Optional
[str
]) – Specifies the name of the Amazon S3 bucket.bucket_prefix (
Optional
[str
]) – Specifies the Amazon S3 bucket prefix.fail_on_first_error (
Union
[bool
,IResolvable
,None
]) – Specifies if the flow should fail after the first instance of a failure when attempting to place data in the destination.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow error_handling_config_property = appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False )
Attributes
- bucket_name
Specifies the name of the Amazon S3 bucket.
- bucket_prefix
Specifies the Amazon S3 bucket prefix.
- fail_on_first_error
Specifies if the flow should fail after the first instance of a failure when attempting to place data in the destination.
EventBridgeDestinationPropertiesProperty
- class CfnFlow.EventBridgeDestinationPropertiesProperty(*, object, error_handling_config=None)
Bases:
object
The properties that are applied when Amazon EventBridge is being used as a destination.
- Parameters:
object (
str
) – The object specified in the Amazon EventBridge flow destination.error_handling_config (
Union
[IResolvable
,ErrorHandlingConfigProperty
,Dict
[str
,Any
],None
]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon EventBridge destination.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow event_bridge_destination_properties_property = appflow.CfnFlow.EventBridgeDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) )
Attributes
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon EventBridge destination.
- object
The object specified in the Amazon EventBridge flow destination.
GlueDataCatalogProperty
- class CfnFlow.GlueDataCatalogProperty(*, database_name, role_arn, table_prefix)
Bases:
object
- Parameters:
database_name (
str
) –CfnFlow.GlueDataCatalogProperty.DatabaseName
.role_arn (
str
) –CfnFlow.GlueDataCatalogProperty.RoleArn
.table_prefix (
str
) –CfnFlow.GlueDataCatalogProperty.TablePrefix
.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow glue_data_catalog_property = appflow.CfnFlow.GlueDataCatalogProperty( database_name="databaseName", role_arn="roleArn", table_prefix="tablePrefix" )
Attributes
- database_name
CfnFlow.GlueDataCatalogProperty.DatabaseName
.
- role_arn
CfnFlow.GlueDataCatalogProperty.RoleArn
.
- table_prefix
CfnFlow.GlueDataCatalogProperty.TablePrefix
.
GoogleAnalyticsSourcePropertiesProperty
- class CfnFlow.GoogleAnalyticsSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when Google Analytics is being used as a source.
- Parameters:
object (
str
) – The object specified in the Google Analytics flow source.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow google_analytics_source_properties_property = appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Google Analytics flow source.
IncrementalPullConfigProperty
- class CfnFlow.IncrementalPullConfigProperty(*, datetime_type_field_name=None)
Bases:
object
Specifies the configuration used when importing incremental records from the source.
- Parameters:
datetime_type_field_name (
Optional
[str
]) – A field that specifies the date time or timestamp field as the criteria to use when importing incremental records from the source.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow incremental_pull_config_property = appflow.CfnFlow.IncrementalPullConfigProperty( datetime_type_field_name="datetimeTypeFieldName" )
Attributes
- datetime_type_field_name
A field that specifies the date time or timestamp field as the criteria to use when importing incremental records from the source.
InforNexusSourcePropertiesProperty
- class CfnFlow.InforNexusSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when Infor Nexus is being used as a source.
- Parameters:
object (
str
) – The object specified in the Infor Nexus flow source.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow infor_nexus_source_properties_property = appflow.CfnFlow.InforNexusSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Infor Nexus flow source.
LookoutMetricsDestinationPropertiesProperty
- class CfnFlow.LookoutMetricsDestinationPropertiesProperty(*, object=None)
Bases:
object
The properties that are applied when Amazon Lookout for Metrics is used as a destination.
- Parameters:
object (
Optional
[str
]) – The object specified in the Amazon Lookout for Metrics flow destination.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow lookout_metrics_destination_properties_property = appflow.CfnFlow.LookoutMetricsDestinationPropertiesProperty( object="object" )
Attributes
- object
The object specified in the Amazon Lookout for Metrics flow destination.
MarketoDestinationPropertiesProperty
- class CfnFlow.MarketoDestinationPropertiesProperty(*, object, error_handling_config=None)
Bases:
object
The properties that Amazon AppFlow applies when you use Marketo as a flow destination.
- Parameters:
object (
str
) – The object specified in the Marketo flow destination.error_handling_config (
Union
[IResolvable
,ErrorHandlingConfigProperty
,Dict
[str
,Any
],None
]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.ErrorHandlingConfig
is a part of the destination connector details.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow marketo_destination_properties_property = appflow.CfnFlow.MarketoDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) )
Attributes
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig
is a part of the destination connector details.
- object
The object specified in the Marketo flow destination.
MarketoSourcePropertiesProperty
- class CfnFlow.MarketoSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when Marketo is being used as a source.
- Parameters:
object (
str
) – The object specified in the Marketo flow source.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow marketo_source_properties_property = appflow.CfnFlow.MarketoSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Marketo flow source.
MetadataCatalogConfigProperty
- class CfnFlow.MetadataCatalogConfigProperty(*, glue_data_catalog=None)
Bases:
object
- Parameters:
glue_data_catalog (
Union
[IResolvable
,GlueDataCatalogProperty
,Dict
[str
,Any
],None
]) –CfnFlow.MetadataCatalogConfigProperty.GlueDataCatalog
.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow metadata_catalog_config_property = appflow.CfnFlow.MetadataCatalogConfigProperty( glue_data_catalog=appflow.CfnFlow.GlueDataCatalogProperty( database_name="databaseName", role_arn="roleArn", table_prefix="tablePrefix" ) )
Attributes
- glue_data_catalog
CfnFlow.MetadataCatalogConfigProperty.GlueDataCatalog
.
PardotSourcePropertiesProperty
- class CfnFlow.PardotSourcePropertiesProperty(*, object)
Bases:
object
- Parameters:
object (
str
) –CfnFlow.PardotSourcePropertiesProperty.Object
.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow pardot_source_properties_property = appflow.CfnFlow.PardotSourcePropertiesProperty( object="object" )
Attributes
- object
CfnFlow.PardotSourcePropertiesProperty.Object
.
PrefixConfigProperty
- class CfnFlow.PrefixConfigProperty(*, path_prefix_hierarchy=None, prefix_format=None, prefix_type=None)
Bases:
object
Specifies elements that Amazon AppFlow includes in the file and folder names in the flow destination.
- Parameters:
path_prefix_hierarchy (
Optional
[Sequence
[str
]]) –CfnFlow.PrefixConfigProperty.PathPrefixHierarchy
.prefix_format (
Optional
[str
]) – Determines the level of granularity for the date and time that’s included in the prefix.prefix_type (
Optional
[str
]) – Determines the format of the prefix, and whether it applies to the file name, file path, or both.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow prefix_config_property = appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" )
Attributes
- path_prefix_hierarchy
CfnFlow.PrefixConfigProperty.PathPrefixHierarchy
.
- prefix_format
Determines the level of granularity for the date and time that’s included in the prefix.
- prefix_type
Determines the format of the prefix, and whether it applies to the file name, file path, or both.
RedshiftDestinationPropertiesProperty
- class CfnFlow.RedshiftDestinationPropertiesProperty(*, intermediate_bucket_name, object, bucket_prefix=None, error_handling_config=None)
Bases:
object
The properties that are applied when Amazon Redshift is being used as a destination.
- Parameters:
intermediate_bucket_name (str) – The intermediate bucket that Amazon AppFlow uses when moving data into Amazon Redshift.
object (str) – The object specified in the Amazon Redshift flow destination.
bucket_prefix (Optional[str]) – The object key for the bucket in which Amazon AppFlow places the destination files.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon Redshift destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow redshift_destination_properties_property = appflow.CfnFlow.RedshiftDestinationPropertiesProperty( intermediate_bucket_name="intermediateBucketName", object="object", # the properties below are optional bucket_prefix="bucketPrefix", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) )
Attributes
- bucket_prefix
The object key for the bucket in which Amazon AppFlow places the destination files.
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the Amazon Redshift destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig
is a part of the destination connector details.
- intermediate_bucket_name
The intermediate bucket that Amazon AppFlow uses when moving data into Amazon Redshift.
- object
The object specified in the Amazon Redshift flow destination.
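As a concrete sketch of the fail-fast behavior described above, the configuration below stages records in an intermediate S3 bucket and stops the flow on the first insertion error. The bucket names and the target table name are hypothetical placeholders.
import aws_cdk.aws_appflow as appflow

# A sketch of a Redshift destination with strict error handling; all names are
# hypothetical placeholders.
redshift_destination = appflow.CfnFlow.RedshiftDestinationPropertiesProperty(
    intermediate_bucket_name="my-appflow-staging-bucket",
    object="public.account",
    bucket_prefix="redshift-staging",
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="my-appflow-error-bucket",
        bucket_prefix="redshift-errors",
        fail_on_first_error=True
    )
)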
S3DestinationPropertiesProperty
- class CfnFlow.S3DestinationPropertiesProperty(*, bucket_name, bucket_prefix=None, s3_output_format_config=None)
Bases:
object
The properties that are applied when Amazon S3 is used as a destination.
- Parameters:
bucket_name (str) – The Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
bucket_prefix (Optional[str]) – The object key for the destination bucket in which Amazon AppFlow places the files.
s3_output_format_config (Union[IResolvable, S3OutputFormatConfigProperty, Dict[str, Any], None]) – The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow s3_destination_properties_property = appflow.CfnFlow.S3DestinationPropertiesProperty( bucket_name="bucketName", # the properties below are optional bucket_prefix="bucketPrefix", s3_output_format_config=appflow.CfnFlow.S3OutputFormatConfigProperty( aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType", prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), preserve_source_data_typing=False ) )
Attributes
- bucket_name
The Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
- bucket_prefix
The object key for the destination bucket in which Amazon AppFlow places the files.
- s3_output_format_config
The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.
S3InputFormatConfigProperty
- class CfnFlow.S3InputFormatConfigProperty(*, s3_input_file_type=None)
Bases:
object
When you use Amazon S3 as the source, the configuration format in which you provide the flow input data.
- Parameters:
s3_input_file_type (Optional[str]) – The file type that Amazon AppFlow gets from your Amazon S3 bucket.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow s3_input_format_config_property = appflow.CfnFlow.S3InputFormatConfigProperty( s3_input_file_type="s3InputFileType" )
Attributes
- s3_input_file_type
The file type that Amazon AppFlow gets from your Amazon S3 bucket.
S3OutputFormatConfigProperty
- class CfnFlow.S3OutputFormatConfigProperty(*, aggregation_config=None, file_type=None, prefix_config=None, preserve_source_data_typing=None)
Bases:
object
The configuration that determines how Amazon AppFlow should format the flow output data when Amazon S3 is used as the destination.
- Parameters:
aggregation_config (Union[IResolvable, AggregationConfigProperty, Dict[str, Any], None]) – The aggregation settings that you can use to customize the output format of your flow data.
file_type (Optional[str]) – Indicates the file type that Amazon AppFlow places in the Amazon S3 bucket.
prefix_config (Union[IResolvable, PrefixConfigProperty, Dict[str, Any], None]) – Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket. You can name folders according to the flow frequency and date.
preserve_source_data_typing (Union[bool, IResolvable, None]) – CfnFlow.S3OutputFormatConfigProperty.PreserveSourceDataTyping.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow s3_output_format_config_property = appflow.CfnFlow.S3OutputFormatConfigProperty( aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType", prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), preserve_source_data_typing=False )
Attributes
- aggregation_config
The aggregation settings that you can use to customize the output format of your flow data.
- file_type
Indicates the file type that Amazon AppFlow places in the Amazon S3 bucket.
- prefix_config
Determines the prefix that Amazon AppFlow applies to the folder name in the Amazon S3 bucket.
You can name folders according to the flow frequency and date.
- preserve_source_data_typing
CfnFlow.S3OutputFormatConfigProperty.PreserveSourceDataTyping.
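The sketch below combines the pieces described above: a columnar output file type, preserved source typing, a daily folder prefix, and single-file aggregation. The string values "PARQUET", "PATH", "DAY", and "SingleFile" are assumed allowed values rather than values stated in the descriptions above.
import aws_cdk.aws_appflow as appflow

# A sketch of an S3 output format: Parquet files, source data types preserved,
# daily folder prefixes, and output aggregated into a single file. All enum-like
# string values here are assumptions; verify them against the CloudFormation
# reference for AWS::AppFlow::Flow.
s3_output_format = appflow.CfnFlow.S3OutputFormatConfigProperty(
    file_type="PARQUET",
    preserve_source_data_typing=True,
    prefix_config=appflow.CfnFlow.PrefixConfigProperty(
        prefix_type="PATH",
        prefix_format="DAY"
    ),
    aggregation_config=appflow.CfnFlow.AggregationConfigProperty(
        aggregation_type="SingleFile"
    )
)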
S3SourcePropertiesProperty
- class CfnFlow.S3SourcePropertiesProperty(*, bucket_name, bucket_prefix, s3_input_format_config=None)
Bases:
object
The properties that are applied when Amazon S3 is being used as the flow source.
- Parameters:
bucket_name (str) – The Amazon S3 bucket name where the source files are stored.
bucket_prefix (str) – The object key for the Amazon S3 bucket in which the source files are stored.
s3_input_format_config (Union[IResolvable, S3InputFormatConfigProperty, Dict[str, Any], None]) – When you use Amazon S3 as the source, the configuration format in which you provide the flow input data.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow s3_source_properties_property = appflow.CfnFlow.S3SourcePropertiesProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", # the properties below are optional s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty( s3_input_file_type="s3InputFileType" ) )
Attributes
- bucket_name
The Amazon S3 bucket name where the source files are stored.
- bucket_prefix
The object key for the Amazon S3 bucket in which the source files are stored.
- s3_input_format_config
When you use Amazon S3 as the source, the configuration format in which you provide the flow input data.
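For reference, here is a sketch of an S3 source that reads files from a fixed prefix. The bucket name and prefix are hypothetical placeholders, and "CSV" is an assumed allowed value for the input file type.
import aws_cdk.aws_appflow as appflow

# A sketch of an S3 source reading CSV files from a prefix; names and the file
# type value are assumptions.
s3_source = appflow.CfnFlow.S3SourcePropertiesProperty(
    bucket_name="my-appflow-source-bucket",
    bucket_prefix="incoming/orders",
    s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty(
        s3_input_file_type="CSV"
    )
)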
SAPODataDestinationPropertiesProperty
- class CfnFlow.SAPODataDestinationPropertiesProperty(*, object_path, error_handling_config=None, id_field_names=None, success_response_handling_config=None, write_operation_type=None)
Bases:
object
The properties that are applied when using SAPOData as a flow destination.
- Parameters:
object_path (str) – The object path specified in the SAPOData flow destination.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
id_field_names (Optional[Sequence[str]]) – A list of field names that can be used as an ID field when performing a write operation.
success_response_handling_config (Union[IResolvable, SuccessResponseHandlingConfigProperty, Dict[str, Any], None]) – Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data. For example, this setting would determine where to write the response from a destination connector upon a successful insert operation.
write_operation_type (Optional[str]) – The possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow s_aPOData_destination_properties_property = appflow.CfnFlow.SAPODataDestinationPropertiesProperty( object_path="objectPath", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix" ), write_operation_type="writeOperationType" )
Attributes
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig
is a part of the destination connector details.
- id_field_names
A list of field names that can be used as an ID field when performing a write operation.
- object_path
The object path specified in the SAPOData flow destination.
- success_response_handling_config
Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data.
For example, this setting would determine where to write the response from a destination connector upon a successful insert operation.
- write_operation_type
The possible write operations in the destination connector.
When this value is not provided, this defaults to the
INSERT
operation.
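The sketch below writes success responses and errors to separate S3 prefixes, as described above. The OData object path and the bucket names are hypothetical placeholders; INSERT is the documented default write operation.
import aws_cdk.aws_appflow as appflow

# A sketch of an SAP OData destination that records success responses and errors
# in separate S3 prefixes. The object path and bucket names are placeholders.
sapo_data_destination = appflow.CfnFlow.SAPODataDestinationPropertiesProperty(
    object_path="/sap/opu/odata/sap/API_BUSINESS_PARTNER/A_BusinessPartner",
    write_operation_type="INSERT",
    success_response_handling_config=appflow.CfnFlow.SuccessResponseHandlingConfigProperty(
        bucket_name="my-appflow-response-bucket",
        bucket_prefix="sap-success"
    ),
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="my-appflow-response-bucket",
        bucket_prefix="sap-errors",
        fail_on_first_error=False
    )
)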
SAPODataSourcePropertiesProperty
- class CfnFlow.SAPODataSourcePropertiesProperty(*, object_path)
Bases:
object
The properties that are applied when using SAPOData as a flow source.
- Parameters:
object_path (str) – The object path specified in the SAPOData flow source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow s_aPOData_source_properties_property = appflow.CfnFlow.SAPODataSourcePropertiesProperty( object_path="objectPath" )
Attributes
- object_path
The object path specified in the SAPOData flow source.
SalesforceDestinationPropertiesProperty
- class CfnFlow.SalesforceDestinationPropertiesProperty(*, object, data_transfer_api=None, error_handling_config=None, id_field_names=None, write_operation_type=None)
Bases:
object
The properties that are applied when Salesforce is being used as a destination.
- Parameters:
object (str) – The object specified in the Salesforce flow destination.
data_transfer_api (Optional[str]) – Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data to Salesforce. - AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers to Salesforce. If your flow transfers fewer than 1,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0. Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900 records, and it might use Bulk API 2.0 on the next day to transfer 1,100 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields. By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output. - BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers. Note that Bulk API 2.0 does not transfer Salesforce compound fields. - REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Salesforce destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
id_field_names (Optional[Sequence[str]]) – The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.
write_operation_type (Optional[str]) – This specifies the type of write operation to be performed in Salesforce. When the value is UPSERT, then idFieldNames is required.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow salesforce_destination_properties_property = appflow.CfnFlow.SalesforceDestinationPropertiesProperty( object="object", # the properties below are optional data_transfer_api="dataTransferApi", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" )
Attributes
- data_transfer_api
Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data to Salesforce.
AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers to Salesforce. If your flow transfers fewer than 1,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0.
Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900 records, and it might use Bulk API 2.0 on the next day to transfer 1,100 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields.
By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output.
BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers.
Note that Bulk API 2.0 does not transfer Salesforce compound fields.
REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the Salesforce destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig
is a part of the destination connector details.
- id_field_names
The name of the field that Amazon AppFlow uses as an ID when performing a write operation such as update or delete.
- object
The object specified in the Salesforce flow destination.
- write_operation_type
This specifies the type of write operation to be performed in Salesforce.
When the value is UPSERT, then idFieldNames is required.
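Tying the pieces above together, the sketch below upserts records over Bulk API 2.0 and therefore supplies idFieldNames. The object name and external ID field are hypothetical placeholders; BULKV2 and UPSERT are values documented above.
import aws_cdk.aws_appflow as appflow

# A sketch of a Salesforce destination that upserts Account records over
# Bulk API 2.0. The object and field names are hypothetical placeholders.
salesforce_destination = appflow.CfnFlow.SalesforceDestinationPropertiesProperty(
    object="Account",
    data_transfer_api="BULKV2",
    write_operation_type="UPSERT",
    id_field_names=["External_Id__c"],
    error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty(
        bucket_name="my-appflow-error-bucket",
        bucket_prefix="salesforce-errors",
        fail_on_first_error=False
    )
)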
SalesforceSourcePropertiesProperty
- class CfnFlow.SalesforceSourcePropertiesProperty(*, object, data_transfer_api=None, enable_dynamic_field_update=None, include_deleted_records=None)
Bases:
object
The properties that are applied when Salesforce is being used as a source.
- Parameters:
object (str) – The object specified in the Salesforce flow source.
data_transfer_api (Optional[str]) – Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data from Salesforce. - AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers from Salesforce. If your flow transfers fewer than 1,000,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0. Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900,000 records, and it might use Bulk API 2.0 on the next day to transfer 1,100,000 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields. By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output. - BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers. Note that Bulk API 2.0 does not transfer Salesforce compound fields. - REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.
enable_dynamic_field_update (Union[bool, IResolvable, None]) – The flag that enables dynamic fetching of new (recently added) fields in the Salesforce objects while running a flow.
include_deleted_records (Union[bool, IResolvable, None]) – Indicates whether Amazon AppFlow includes deleted files in the flow run.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow salesforce_source_properties_property = appflow.CfnFlow.SalesforceSourcePropertiesProperty( object="object", # the properties below are optional data_transfer_api="dataTransferApi", enable_dynamic_field_update=False, include_deleted_records=False )
Attributes
- data_transfer_api
Specifies which Salesforce API is used by Amazon AppFlow when your flow transfers data from Salesforce.
AUTOMATIC - The default. Amazon AppFlow selects which API to use based on the number of records that your flow transfers from Salesforce. If your flow transfers fewer than 1,000,000 records, Amazon AppFlow uses Salesforce REST API. If your flow transfers 1,000,000 records or more, Amazon AppFlow uses Salesforce Bulk API 2.0.
Each of these Salesforce APIs structures data differently. If Amazon AppFlow selects the API automatically, be aware that, for recurring flows, the data output might vary from one flow run to the next. For example, if a flow runs daily, it might use REST API on one day to transfer 900,000 records, and it might use Bulk API 2.0 on the next day to transfer 1,100,000 records. For each of these flow runs, the respective Salesforce API formats the data differently. Some of the differences include how dates are formatted and null values are represented. Also, Bulk API 2.0 doesn’t transfer Salesforce compound fields.
By choosing this option, you optimize flow performance for both small and large data transfers, but the tradeoff is inconsistent formatting in the output.
BULKV2 - Amazon AppFlow uses only Salesforce Bulk API 2.0. This API runs asynchronous data transfers, and it’s optimal for large sets of data. By choosing this option, you ensure that your flow writes consistent output, but you optimize performance only for large data transfers.
Note that Bulk API 2.0 does not transfer Salesforce compound fields.
REST_SYNC - Amazon AppFlow uses only Salesforce REST API. By choosing this option, you ensure that your flow writes consistent output, but you decrease performance for large data transfers that are better suited for Bulk API 2.0. In some cases, if your flow attempts to transfer a very large set of data, it might fail with a timeout error.
- enable_dynamic_field_update
The flag that enables dynamic fetching of new (recently added) fields in the Salesforce objects while running a flow.
- include_deleted_records
Indicates whether Amazon AppFlow includes deleted files in the flow run.
- object
The object specified in the Salesforce flow source.
ScheduledTriggerPropertiesProperty
- class CfnFlow.ScheduledTriggerPropertiesProperty(*, schedule_expression, data_pull_mode=None, first_execution_from=None, flow_error_deactivation_threshold=None, schedule_end_time=None, schedule_offset=None, schedule_start_time=None, time_zone=None)
Bases:
object
Specifies the configuration details of a schedule-triggered flow as defined by the user.
Currently, these settings only apply to the Scheduled trigger type.
- Parameters:
schedule_expression (str) – The scheduling expression that determines the rate at which the schedule will run, for example rate(5minutes).
data_pull_mode (Optional[str]) – Specifies whether a scheduled flow has an incremental data transfer or a complete data transfer for each flow run.
first_execution_from (Union[int, float, None]) – Specifies the date range for the records to import from the connector in the first flow run.
flow_error_deactivation_threshold (Union[int, float, None]) – CfnFlow.ScheduledTriggerPropertiesProperty.FlowErrorDeactivationThreshold.
schedule_end_time (Union[int, float, None]) – The time at which the scheduled flow ends. The time is formatted as a timestamp that follows the ISO 8601 standard, such as 2022-04-27T13:00:00-07:00.
schedule_offset (Union[int, float, None]) – Specifies the optional offset that is added to the time interval for a schedule-triggered flow.
schedule_start_time (Union[int, float, None]) – The time at which the scheduled flow starts. The time is formatted as a timestamp that follows the ISO 8601 standard, such as 2022-04-26T13:00:00-07:00.
time_zone (Optional[str]) – Specifies the time zone used when referring to the dates and times of a scheduled flow, such as America/New_York. This time zone is only a descriptive label. It doesn’t affect how Amazon AppFlow interprets the timestamps that you specify to schedule the flow. If you want to schedule a flow by using times in a particular time zone, indicate the time zone as a UTC offset in your timestamps. For example, the UTC offsets for the America/New_York timezone are -04:00 EDT and -05:00 EST.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow scheduled_trigger_properties_property = appflow.CfnFlow.ScheduledTriggerPropertiesProperty( schedule_expression="scheduleExpression", # the properties below are optional data_pull_mode="dataPullMode", first_execution_from=123, flow_error_deactivation_threshold=123, schedule_end_time=123, schedule_offset=123, schedule_start_time=123, time_zone="timeZone" )
Attributes
- data_pull_mode
Specifies whether a scheduled flow has an incremental data transfer or a complete data transfer for each flow run.
- first_execution_from
Specifies the date range for the records to import from the connector in the first flow run.
- flow_error_deactivation_threshold
CfnFlow.ScheduledTriggerPropertiesProperty.FlowErrorDeactivationThreshold.
- schedule_end_time
The time at which the scheduled flow ends.
The time is formatted as a timestamp that follows the ISO 8601 standard, such as
2022-04-27T13:00:00-07:00
.
- schedule_expression
The scheduling expression that determines the rate at which the schedule will run, for example
rate(5minutes)
.
- schedule_offset
Specifies the optional offset that is added to the time interval for a schedule-triggered flow.
- schedule_start_time
The time at which the scheduled flow starts.
The time is formatted as a timestamp that follows the ISO 8601 standard, such as
2022-04-26T13:00:00-07:00
.
- time_zone
Specifies the time zone used when referring to the dates and times of a scheduled flow, such as America/New_York.
This time zone is only a descriptive label. It doesn’t affect how Amazon AppFlow interprets the timestamps that you specify to schedule the flow.
If you want to schedule a flow by using times in a particular time zone, indicate the time zone as a UTC offset in your timestamps. For example, the UTC offsets for the America/New_York timezone are -04:00 EDT and -05:00 EST.
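A sketch of an incremental schedule follows. Note that the signature takes numbers for the start and end times, while the descriptions above cite ISO 8601 timestamps; the epoch-seconds value below is an assumption about how that number is encoded, and "Incremental" is an assumed value for data_pull_mode.
import aws_cdk.aws_appflow as appflow

# A sketch of a scheduled, incremental pull. The schedule expression reuses the
# documented rate(5minutes) example; the start time is an assumed epoch-seconds
# encoding and "Incremental" is an assumed data_pull_mode value.
scheduled_trigger = appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
    schedule_expression="rate(5minutes)",
    data_pull_mode="Incremental",
    schedule_start_time=1651089600,  # assumed epoch-seconds start time
    time_zone="America/New_York"
)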
ServiceNowSourcePropertiesProperty
- class CfnFlow.ServiceNowSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when ServiceNow is being used as a source.
- Parameters:
object (str) – The object specified in the ServiceNow flow source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow service_now_source_properties_property = appflow.CfnFlow.ServiceNowSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the ServiceNow flow source.
SingularSourcePropertiesProperty
- class CfnFlow.SingularSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when Singular is being used as a source.
- Parameters:
object (str) – The object specified in the Singular flow source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow singular_source_properties_property = appflow.CfnFlow.SingularSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Singular flow source.
SlackSourcePropertiesProperty
- class CfnFlow.SlackSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when Slack is being used as a source.
- Parameters:
object (str) – The object specified in the Slack flow source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow slack_source_properties_property = appflow.CfnFlow.SlackSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Slack flow source.
SnowflakeDestinationPropertiesProperty
- class CfnFlow.SnowflakeDestinationPropertiesProperty(*, intermediate_bucket_name, object, bucket_prefix=None, error_handling_config=None)
Bases:
object
The properties that are applied when Snowflake is being used as a destination.
- Parameters:
intermediate_bucket_name (str) – The intermediate bucket that Amazon AppFlow uses when moving data into Snowflake.
object (str) – The object specified in the Snowflake flow destination.
bucket_prefix (Optional[str]) – The object key for the destination bucket in which Amazon AppFlow places the files.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the Snowflake destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow snowflake_destination_properties_property = appflow.CfnFlow.SnowflakeDestinationPropertiesProperty( intermediate_bucket_name="intermediateBucketName", object="object", # the properties below are optional bucket_prefix="bucketPrefix", error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ) )
Attributes
- bucket_prefix
The object key for the destination bucket in which Amazon AppFlow places the files.
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the Snowflake destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig
is a part of the destination connector details.
- intermediate_bucket_name
The intermediate bucket that Amazon AppFlow uses when moving data into Snowflake.
- object
The object specified in the Snowflake flow destination.
SourceConnectorPropertiesProperty
- class CfnFlow.SourceConnectorPropertiesProperty(*, amplitude=None, custom_connector=None, datadog=None, dynatrace=None, google_analytics=None, infor_nexus=None, marketo=None, pardot=None, s3=None, salesforce=None, sapo_data=None, service_now=None, singular=None, slack=None, trendmicro=None, veeva=None, zendesk=None)
Bases:
object
Specifies the information that is required to query a particular connector.
- Parameters:
amplitude (Union[IResolvable, AmplitudeSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Amplitude.
custom_connector (Union[IResolvable, CustomConnectorSourcePropertiesProperty, Dict[str, Any], None]) – The properties that are applied when the custom connector is being used as a source.
datadog (Union[IResolvable, DatadogSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Datadog.
dynatrace (Union[IResolvable, DynatraceSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Dynatrace.
google_analytics (Union[IResolvable, GoogleAnalyticsSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Google Analytics.
infor_nexus (Union[IResolvable, InforNexusSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Infor Nexus.
marketo (Union[IResolvable, MarketoSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Marketo.
pardot (Union[IResolvable, PardotSourcePropertiesProperty, Dict[str, Any], None]) – CfnFlow.SourceConnectorPropertiesProperty.Pardot.
s3 (Union[IResolvable, S3SourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Amazon S3.
salesforce (Union[IResolvable, SalesforceSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Salesforce.
sapo_data (Union[IResolvable, SAPODataSourcePropertiesProperty, Dict[str, Any], None]) – The properties that are applied when using SAPOData as a flow source.
service_now (Union[IResolvable, ServiceNowSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying ServiceNow.
singular (Union[IResolvable, SingularSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Singular.
slack (Union[IResolvable, SlackSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Slack.
trendmicro (Union[IResolvable, TrendmicroSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Trend Micro.
veeva (Union[IResolvable, VeevaSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Veeva.
zendesk (Union[IResolvable, ZendeskSourcePropertiesProperty, Dict[str, Any], None]) – Specifies the information that is required for querying Zendesk.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow source_connector_properties_property = appflow.CfnFlow.SourceConnectorPropertiesProperty( amplitude=appflow.CfnFlow.AmplitudeSourcePropertiesProperty( object="object" ), custom_connector=appflow.CfnFlow.CustomConnectorSourcePropertiesProperty( entity_name="entityName", # the properties below are optional custom_properties={ "custom_properties_key": "customProperties" } ), datadog=appflow.CfnFlow.DatadogSourcePropertiesProperty( object="object" ), dynatrace=appflow.CfnFlow.DynatraceSourcePropertiesProperty( object="object" ), google_analytics=appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty( object="object" ), infor_nexus=appflow.CfnFlow.InforNexusSourcePropertiesProperty( object="object" ), marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty( object="object" ), pardot=appflow.CfnFlow.PardotSourcePropertiesProperty( object="object" ), s3=appflow.CfnFlow.S3SourcePropertiesProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", # the properties below are optional s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty( s3_input_file_type="s3InputFileType" ) ), salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty( object="object", # the properties below are optional data_transfer_api="dataTransferApi", enable_dynamic_field_update=False, include_deleted_records=False ), sapo_data=appflow.CfnFlow.SAPODataSourcePropertiesProperty( object_path="objectPath" ), service_now=appflow.CfnFlow.ServiceNowSourcePropertiesProperty( object="object" ), singular=appflow.CfnFlow.SingularSourcePropertiesProperty( object="object" ), slack=appflow.CfnFlow.SlackSourcePropertiesProperty( object="object" ), trendmicro=appflow.CfnFlow.TrendmicroSourcePropertiesProperty( object="object" ), veeva=appflow.CfnFlow.VeevaSourcePropertiesProperty( object="object", # the properties below are optional document_type="documentType", include_all_versions=False, include_renditions=False, include_source_files=False ), zendesk=appflow.CfnFlow.ZendeskSourcePropertiesProperty( object="object" ) )
Attributes
- amplitude
Specifies the information that is required for querying Amplitude.
- custom_connector
The properties that are applied when the custom connector is being used as a source.
- datadog
Specifies the information that is required for querying Datadog.
- dynatrace
Specifies the information that is required for querying Dynatrace.
- google_analytics
Specifies the information that is required for querying Google Analytics.
- infor_nexus
Specifies the information that is required for querying Infor Nexus.
- marketo
Specifies the information that is required for querying Marketo.
- pardot
CfnFlow.SourceConnectorPropertiesProperty.Pardot.
- s3
Specifies the information that is required for querying Amazon S3.
- salesforce
Specifies the information that is required for querying Salesforce.
- sapo_data
The properties that are applied when using SAPOData as a flow source.
- service_now
Specifies the information that is required for querying ServiceNow.
- singular
Specifies the information that is required for querying Singular.
- slack
Specifies the information that is required for querying Slack.
- trendmicro
Specifies the information that is required for querying Trend Micro.
- veeva
Specifies the information that is required for querying Veeva.
- zendesk
Specifies the information that is required for querying Zendesk.
SourceFlowConfigProperty
- class CfnFlow.SourceFlowConfigProperty(*, connector_type, source_connector_properties, api_version=None, connector_profile_name=None, incremental_pull_config=None)
Bases:
object
Contains information about the configuration of the source connector used in the flow.
- Parameters:
connector_type (str) – The type of connector, such as Salesforce, Amplitude, and so on.
source_connector_properties (Union[IResolvable, SourceConnectorPropertiesProperty, Dict[str, Any]]) – Specifies the information that is required to query a particular source connector.
api_version (Optional[str]) – The API version of the connector when it’s used as a source in the flow.
connector_profile_name (Optional[str]) – The name of the connector profile. This name must be unique for each connector profile in the AWS account.
incremental_pull_config (Union[IResolvable, IncrementalPullConfigProperty, Dict[str, Any], None]) – Defines the configuration for a scheduled incremental data pull. If a valid configuration is provided, the fields specified in the configuration are used when querying for the incremental data pull.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow source_flow_config_property = appflow.CfnFlow.SourceFlowConfigProperty( connector_type="connectorType", source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty( amplitude=appflow.CfnFlow.AmplitudeSourcePropertiesProperty( object="object" ), custom_connector=appflow.CfnFlow.CustomConnectorSourcePropertiesProperty( entity_name="entityName", # the properties below are optional custom_properties={ "custom_properties_key": "customProperties" } ), datadog=appflow.CfnFlow.DatadogSourcePropertiesProperty( object="object" ), dynatrace=appflow.CfnFlow.DynatraceSourcePropertiesProperty( object="object" ), google_analytics=appflow.CfnFlow.GoogleAnalyticsSourcePropertiesProperty( object="object" ), infor_nexus=appflow.CfnFlow.InforNexusSourcePropertiesProperty( object="object" ), marketo=appflow.CfnFlow.MarketoSourcePropertiesProperty( object="object" ), pardot=appflow.CfnFlow.PardotSourcePropertiesProperty( object="object" ), s3=appflow.CfnFlow.S3SourcePropertiesProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", # the properties below are optional s3_input_format_config=appflow.CfnFlow.S3InputFormatConfigProperty( s3_input_file_type="s3InputFileType" ) ), salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty( object="object", # the properties below are optional data_transfer_api="dataTransferApi", enable_dynamic_field_update=False, include_deleted_records=False ), sapo_data=appflow.CfnFlow.SAPODataSourcePropertiesProperty( object_path="objectPath" ), service_now=appflow.CfnFlow.ServiceNowSourcePropertiesProperty( object="object" ), singular=appflow.CfnFlow.SingularSourcePropertiesProperty( object="object" ), slack=appflow.CfnFlow.SlackSourcePropertiesProperty( object="object" ), trendmicro=appflow.CfnFlow.TrendmicroSourcePropertiesProperty( object="object" ), veeva=appflow.CfnFlow.VeevaSourcePropertiesProperty( object="object", # the properties below are optional document_type="documentType", include_all_versions=False, include_renditions=False, include_source_files=False ), zendesk=appflow.CfnFlow.ZendeskSourcePropertiesProperty( object="object" ) ), # the properties below are optional api_version="apiVersion", connector_profile_name="connectorProfileName", incremental_pull_config=appflow.CfnFlow.IncrementalPullConfigProperty( datetime_type_field_name="datetimeTypeFieldName" ) )
Attributes
- api_version
The API version of the connector when it’s used as a source in the flow.
- connector_profile_name
The name of the connector profile.
This name must be unique for each connector profile in the AWS account.
- connector_type
The type of connector, such as Salesforce, Amplitude, and so on.
- incremental_pull_config
Defines the configuration for a scheduled incremental data pull.
If a valid configuration is provided, the fields specified in the configuration are used when querying for the incremental data pull.
- source_connector_properties
Specifies the information that is required to query a particular source connector.
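Putting the parameters above together, the sketch below configures a Salesforce source with an incremental pull keyed on a timestamp field. The profile name, object, and field name are hypothetical placeholders; the connector profile itself must already exist in the account.
import aws_cdk.aws_appflow as appflow

# A sketch of a Salesforce source flow configuration with an incremental pull.
# The profile, object, and field names are hypothetical placeholders.
source_flow_config = appflow.CfnFlow.SourceFlowConfigProperty(
    connector_type="Salesforce",
    connector_profile_name="my-salesforce-profile",
    source_connector_properties=appflow.CfnFlow.SourceConnectorPropertiesProperty(
        salesforce=appflow.CfnFlow.SalesforceSourcePropertiesProperty(
            object="Account",
            include_deleted_records=True
        )
    ),
    incremental_pull_config=appflow.CfnFlow.IncrementalPullConfigProperty(
        datetime_type_field_name="LastModifiedDate"
    )
)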
SuccessResponseHandlingConfigProperty
- class CfnFlow.SuccessResponseHandlingConfigProperty(*, bucket_name=None, bucket_prefix=None)
Bases:
object
Determines how Amazon AppFlow handles the success response that it gets from the connector after placing data.
For example, this setting would determine where to write the response from the destination connector upon a successful insert operation.
- Parameters:
bucket_name (Optional[str]) – The name of the Amazon S3 bucket.
bucket_prefix (Optional[str]) – The Amazon S3 bucket prefix.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow success_response_handling_config_property = appflow.CfnFlow.SuccessResponseHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix" )
Attributes
- bucket_name
The name of the Amazon S3 bucket.
- bucket_prefix
The Amazon S3 bucket prefix.
TaskPropertiesObjectProperty
- class CfnFlow.TaskPropertiesObjectProperty(*, key, value)
Bases:
object
A map used to store task-related information.
The execution service looks for particular information based on the TaskType.
- Parameters:
key (str) – The task property key. Allowed Values : VALUE | VALUES | DATA_TYPE | UPPER_BOUND | LOWER_BOUND | SOURCE_DATA_TYPE | DESTINATION_DATA_TYPE | VALIDATION_ACTION | MASK_VALUE | MASK_LENGTH | TRUNCATE_LENGTH | MATH_OPERATION_FIELDS_ORDER | CONCAT_FORMAT | SUBFIELD_CATEGORY_MAP | EXCLUDE_SOURCE_FIELDS_LIST
value (str) – The task property value.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow task_properties_object_property = appflow.CfnFlow.TaskPropertiesObjectProperty( key="key", value="value" )
Attributes
- key
The task property key.
Allowed Values : VALUE | VALUES | DATA_TYPE | UPPER_BOUND | LOWER_BOUND | SOURCE_DATA_TYPE | DESTINATION_DATA_TYPE | VALIDATION_ACTION | MASK_VALUE | MASK_LENGTH | TRUNCATE_LENGTH | MATH_OPERATION_FIELDS_ORDER | CONCAT_FORMAT | SUBFIELD_CATEGORY_MAP | EXCLUDE_SOURCE_FIELDS_LIST
TaskProperty
- class CfnFlow.TaskProperty(*, source_fields, task_type, connector_operator=None, destination_field=None, task_properties=None)
Bases:
object
A class for modeling different types of tasks.
Task implementation varies based on the TaskType.
- Parameters:
source_fields (Sequence[str]) – The source fields to which a particular task is applied.
task_type (str) – Specifies the particular task implementation that Amazon AppFlow performs. Allowed values : Arithmetic | Filter | Map | Map_all | Mask | Merge | Truncate | Validate
connector_operator (Union[IResolvable, ConnectorOperatorProperty, Dict[str, Any], None]) – The operation to be performed on the provided source fields.
destination_field (Optional[str]) – A field in a destination connector, or a field value against which Amazon AppFlow validates a source field.
task_properties (Union[IResolvable, Sequence[Union[IResolvable, TaskPropertiesObjectProperty, Dict[str, Any]]], None]) – A map used to store task-related information. The execution service looks for particular information based on the TaskType.
- Link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-appflow-flow-task.html
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow task_property = appflow.CfnFlow.TaskProperty( source_fields=["sourceFields"], task_type="taskType", # the properties below are optional connector_operator=appflow.CfnFlow.ConnectorOperatorProperty( amplitude="amplitude", custom_connector="customConnector", datadog="datadog", dynatrace="dynatrace", google_analytics="googleAnalytics", infor_nexus="inforNexus", marketo="marketo", pardot="pardot", s3="s3", salesforce="salesforce", sapo_data="sapoData", service_now="serviceNow", singular="singular", slack="slack", trendmicro="trendmicro", veeva="veeva", zendesk="zendesk" ), destination_field="destinationField", task_properties=[appflow.CfnFlow.TaskPropertiesObjectProperty( key="key", value="value" )] )
Attributes
- connector_operator
The operation to be performed on the provided source fields.
- destination_field
A field in a destination connector, or a field value against which Amazon AppFlow validates a source field.
- source_fields
The source fields to which a particular task is applied.
- task_properties
A map used to store task-related information.
The execution service looks for particular information based on the
TaskType
.
- task_type
Specifies the particular task implementation that Amazon AppFlow performs.
Allowed values : Arithmetic | Filter | Map | Map_all | Mask | Merge | Truncate | Validate
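As a sketch of how two of the documented task types might be combined for a Salesforce source, the code below defines a Filter task that projects the fields to transfer and a Map task that copies one field to a destination field. The connector operator values "PROJECTION" and "NO_OP" are assumptions and are not taken from the property descriptions above.
import aws_cdk.aws_appflow as appflow

# A sketch of a field-projection task followed by a field-mapping task.
# The operator values are assumed; field names are hypothetical placeholders.
projection_task = appflow.CfnFlow.TaskProperty(
    task_type="Filter",
    source_fields=["Id", "Name"],
    connector_operator=appflow.CfnFlow.ConnectorOperatorProperty(
        salesforce="PROJECTION"
    )
)

map_name_task = appflow.CfnFlow.TaskProperty(
    task_type="Map",
    source_fields=["Name"],
    destination_field="Name",
    connector_operator=appflow.CfnFlow.ConnectorOperatorProperty(
        salesforce="NO_OP"
    )
)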
TrendmicroSourcePropertiesProperty
- class CfnFlow.TrendmicroSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when using Trend Micro as a flow source.
- Parameters:
object (str) – The object specified in the Trend Micro flow source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow trendmicro_source_properties_property = appflow.CfnFlow.TrendmicroSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Trend Micro flow source.
TriggerConfigProperty
- class CfnFlow.TriggerConfigProperty(*, trigger_type, trigger_properties=None)
Bases:
object
The trigger settings that determine how and when Amazon AppFlow runs the specified flow.
- Parameters:
trigger_type (str) – Specifies the type of flow trigger. This can be OnDemand, Scheduled, or Event.
trigger_properties (Union[IResolvable, ScheduledTriggerPropertiesProperty, Dict[str, Any], None]) – Specifies the configuration details of a schedule-triggered flow as defined by the user. Currently, these settings only apply to the Scheduled trigger type.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow trigger_config_property = appflow.CfnFlow.TriggerConfigProperty( trigger_type="triggerType", # the properties below are optional trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty( schedule_expression="scheduleExpression", # the properties below are optional data_pull_mode="dataPullMode", first_execution_from=123, flow_error_deactivation_threshold=123, schedule_end_time=123, schedule_offset=123, schedule_start_time=123, time_zone="timeZone" ) )
Attributes
- trigger_properties
Specifies the configuration details of a schedule-triggered flow as defined by the user.
Currently, these settings only apply to the
Scheduled
trigger type.
- trigger_type
Specifies the type of flow trigger.
This can be OnDemand, Scheduled, or Event.
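The sketch below shows a Scheduled trigger reusing the documented rate(5minutes) expression; an on-demand flow would instead set trigger_type="OnDemand" and omit trigger_properties.
import aws_cdk.aws_appflow as appflow

# A sketch of a schedule-triggered flow configuration.
trigger_config = appflow.CfnFlow.TriggerConfigProperty(
    trigger_type="Scheduled",
    trigger_properties=appflow.CfnFlow.ScheduledTriggerPropertiesProperty(
        schedule_expression="rate(5minutes)"
    )
)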
UpsolverDestinationPropertiesProperty
- class CfnFlow.UpsolverDestinationPropertiesProperty(*, bucket_name, s3_output_format_config, bucket_prefix=None)
Bases:
object
The properties that are applied when Upsolver is used as a destination.
- Parameters:
bucket_name (str) – The Upsolver Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
s3_output_format_config (Union[IResolvable, UpsolverS3OutputFormatConfigProperty, Dict[str, Any]]) – The configuration that determines how data is formatted when Upsolver is used as the flow destination.
bucket_prefix (Optional[str]) – The object key for the destination Upsolver Amazon S3 bucket in which Amazon AppFlow places the files.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow upsolver_destination_properties_property = appflow.CfnFlow.UpsolverDestinationPropertiesProperty( bucket_name="bucketName", s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty( prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), # the properties below are optional aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType" ), # the properties below are optional bucket_prefix="bucketPrefix" )
Attributes
- bucket_name
The Upsolver Amazon S3 bucket name in which Amazon AppFlow places the transferred data.
- bucket_prefix
The object key for the destination Upsolver Amazon S3 bucket in which Amazon AppFlow places the files.
- s3_output_format_config
The configuration that determines how data is formatted when Upsolver is used as the flow destination.
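A brief sketch follows. The bucket name and prefix are hypothetical placeholders (the bucket must be one provisioned for your Upsolver integration), and "CSV", "PATH", and "DAY" are assumed allowed values for the output format fields.
import aws_cdk.aws_appflow as appflow

# A sketch of an Upsolver destination; names and format values are assumptions.
upsolver_destination = appflow.CfnFlow.UpsolverDestinationPropertiesProperty(
    bucket_name="upsolver-appflow-my-integration",
    bucket_prefix="flows/orders",
    s3_output_format_config=appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty(
        prefix_config=appflow.CfnFlow.PrefixConfigProperty(
            prefix_type="PATH",
            prefix_format="DAY"
        ),
        file_type="CSV"
    )
)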
UpsolverS3OutputFormatConfigProperty
- class CfnFlow.UpsolverS3OutputFormatConfigProperty(*, prefix_config, aggregation_config=None, file_type=None)
Bases:
object
The configuration that determines how Amazon AppFlow formats the flow output data when Upsolver is used as the destination.
- Parameters:
prefix_config (Union[IResolvable, PrefixConfigProperty, Dict[str, Any]]) – Specifies elements that Amazon AppFlow includes in the file and folder names in the flow destination.
aggregation_config (Union[IResolvable, AggregationConfigProperty, Dict[str, Any], None]) – The aggregation settings that you can use to customize the output format of your flow data.
file_type (Optional[str]) – Indicates the file type that Amazon AppFlow places in the Upsolver Amazon S3 bucket.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow upsolver_s3_output_format_config_property = appflow.CfnFlow.UpsolverS3OutputFormatConfigProperty( prefix_config=appflow.CfnFlow.PrefixConfigProperty( path_prefix_hierarchy=["pathPrefixHierarchy"], prefix_format="prefixFormat", prefix_type="prefixType" ), # the properties below are optional aggregation_config=appflow.CfnFlow.AggregationConfigProperty( aggregation_type="aggregationType", target_file_size=123 ), file_type="fileType" )
Attributes
- aggregation_config
The aggregation settings that you can use to customize the output format of your flow data.
- file_type
Indicates the file type that Amazon AppFlow places in the Upsolver Amazon S3 bucket.
- prefix_config
Specifies elements that Amazon AppFlow includes in the file and folder names in the flow destination.
VeevaSourcePropertiesProperty
- class CfnFlow.VeevaSourcePropertiesProperty(*, object, document_type=None, include_all_versions=None, include_renditions=None, include_source_files=None)
Bases:
object
The properties that are applied when using Veeva as a flow source.
- Parameters:
object (str) – The object specified in the Veeva flow source.
document_type (Optional[str]) – The document type specified in the Veeva document extract flow.
include_all_versions (Union[bool, IResolvable, None]) – Boolean value to include All Versions of files in Veeva document extract flow.
include_renditions (Union[bool, IResolvable, None]) – Boolean value to include file renditions in Veeva document extract flow.
include_source_files (Union[bool, IResolvable, None]) – Boolean value to include source files in Veeva document extract flow.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow veeva_source_properties_property = appflow.CfnFlow.VeevaSourcePropertiesProperty( object="object", # the properties below are optional document_type="documentType", include_all_versions=False, include_renditions=False, include_source_files=False )
Attributes
- document_type
The document type specified in the Veeva document extract flow.
- include_all_versions
Boolean value to include All Versions of files in Veeva document extract flow.
- include_renditions
Boolean value to include file renditions in Veeva document extract flow.
- include_source_files
Boolean value to include source files in Veeva document extract flow.
- object
The object specified in the Veeva flow source.
ZendeskDestinationPropertiesProperty
- class CfnFlow.ZendeskDestinationPropertiesProperty(*, object, error_handling_config=None, id_field_names=None, write_operation_type=None)
Bases:
object
The properties that are applied when Zendesk is used as a destination.
- Parameters:
object (str) – The object specified in the Zendesk flow destination.
error_handling_config (Union[IResolvable, ErrorHandlingConfigProperty, Dict[str, Any], None]) – The settings that determine how Amazon AppFlow handles an error when placing data in the destination. For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure. ErrorHandlingConfig is a part of the destination connector details.
id_field_names (Optional[Sequence[str]]) – A list of field names that can be used as an ID field when performing a write operation.
write_operation_type (Optional[str]) – The possible write operations in the destination connector. When this value is not provided, this defaults to the INSERT operation.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow zendesk_destination_properties_property = appflow.CfnFlow.ZendeskDestinationPropertiesProperty( object="object", # the properties below are optional error_handling_config=appflow.CfnFlow.ErrorHandlingConfigProperty( bucket_name="bucketName", bucket_prefix="bucketPrefix", fail_on_first_error=False ), id_field_names=["idFieldNames"], write_operation_type="writeOperationType" )
Attributes
- error_handling_config
The settings that determine how Amazon AppFlow handles an error when placing data in the destination.
For example, this setting would determine if the flow should fail after one insertion error, or continue and attempt to insert every record regardless of the initial failure.
ErrorHandlingConfig
is a part of the destination connector details.
- id_field_names
A list of field names that can be used as an ID field when performing a write operation.
- object
The object specified in the Zendesk flow destination.
- write_operation_type
The possible write operations in the destination connector.
When this value is not provided, this defaults to the
INSERT
operation.
ZendeskSourcePropertiesProperty
- class CfnFlow.ZendeskSourcePropertiesProperty(*, object)
Bases:
object
The properties that are applied when using Zendesk as a flow source.
- Parameters:
object (str) – The object specified in the Zendesk flow source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_appflow as appflow zendesk_source_properties_property = appflow.CfnFlow.ZendeskSourcePropertiesProperty( object="object" )
Attributes
- object
The object specified in the Zendesk flow source.