CfnApplicationV2
- class aws_cdk.aws_kinesisanalytics.CfnApplicationV2(scope, id, *, runtime_environment, service_execution_role, application_configuration=None, application_description=None, application_maintenance_configuration=None, application_mode=None, application_name=None, run_configuration=None, tags=None)
Bases:
CfnResource
A CloudFormation
AWS::KinesisAnalyticsV2::Application
. Creates an Amazon Kinesis Data Analytics application. For information about creating a Kinesis Data Analytics application, see Creating an Application.
- CloudformationResource:
AWS::KinesisAnalyticsV2::Application
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics cfn_application_v2 = kinesisanalytics.CfnApplicationV2(self, "MyCfnApplicationV2", runtime_environment="runtimeEnvironment", service_execution_role="serviceExecutionRole", # the properties below are optional application_configuration=kinesisanalytics.CfnApplicationV2.ApplicationConfigurationProperty( application_code_configuration=kinesisanalytics.CfnApplicationV2.ApplicationCodeConfigurationProperty( code_content=kinesisanalytics.CfnApplicationV2.CodeContentProperty( s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty( bucket_arn="bucketArn", file_key="fileKey", # the properties below are optional object_version="objectVersion" ), text_content="textContent", zip_file_content="zipFileContent" ), code_content_type="codeContentType" ), application_snapshot_configuration=kinesisanalytics.CfnApplicationV2.ApplicationSnapshotConfigurationProperty( snapshots_enabled=False ), environment_properties=kinesisanalytics.CfnApplicationV2.EnvironmentPropertiesProperty( property_groups=[kinesisanalytics.CfnApplicationV2.PropertyGroupProperty( property_group_id="propertyGroupId", property_map={ "property_map_key": "propertyMap" } )] ), flink_application_configuration=kinesisanalytics.CfnApplicationV2.FlinkApplicationConfigurationProperty( checkpoint_configuration=kinesisanalytics.CfnApplicationV2.CheckpointConfigurationProperty( configuration_type="configurationType", # the properties below are optional checkpointing_enabled=False, checkpoint_interval=123, min_pause_between_checkpoints=123 ), monitoring_configuration=kinesisanalytics.CfnApplicationV2.MonitoringConfigurationProperty( configuration_type="configurationType", # the properties below are optional log_level="logLevel", metrics_level="metricsLevel" ), parallelism_configuration=kinesisanalytics.CfnApplicationV2.ParallelismConfigurationProperty( configuration_type="configurationType", # the properties below are optional auto_scaling_enabled=False, parallelism=123, parallelism_per_kpu=123 ) ), sql_application_configuration=kinesisanalytics.CfnApplicationV2.SqlApplicationConfigurationProperty( inputs=[kinesisanalytics.CfnApplicationV2.InputProperty( input_schema=kinesisanalytics.CfnApplicationV2.InputSchemaProperty( record_columns=[kinesisanalytics.CfnApplicationV2.RecordColumnProperty( name="name", sql_type="sqlType", # the properties below are optional mapping="mapping" )], record_format=kinesisanalytics.CfnApplicationV2.RecordFormatProperty( record_format_type="recordFormatType", # the properties below are optional mapping_parameters=kinesisanalytics.CfnApplicationV2.MappingParametersProperty( csv_mapping_parameters=kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty( record_column_delimiter="recordColumnDelimiter", record_row_delimiter="recordRowDelimiter" ), json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty( record_row_path="recordRowPath" ) ) ), # the properties below are optional record_encoding="recordEncoding" ), name_prefix="namePrefix", # the properties below are optional input_parallelism=kinesisanalytics.CfnApplicationV2.InputParallelismProperty( count=123 ), input_processing_configuration=kinesisanalytics.CfnApplicationV2.InputProcessingConfigurationProperty( input_lambda_processor=kinesisanalytics.CfnApplicationV2.InputLambdaProcessorProperty( resource_arn="resourceArn" ) ), 
kinesis_firehose_input=kinesisanalytics.CfnApplicationV2.KinesisFirehoseInputProperty( resource_arn="resourceArn" ), kinesis_streams_input=kinesisanalytics.CfnApplicationV2.KinesisStreamsInputProperty( resource_arn="resourceArn" ) )] ), vpc_configurations=[kinesisanalytics.CfnApplicationV2.VpcConfigurationProperty( security_group_ids=["securityGroupIds"], subnet_ids=["subnetIds"] )], zeppelin_application_configuration=kinesisanalytics.CfnApplicationV2.ZeppelinApplicationConfigurationProperty( catalog_configuration=kinesisanalytics.CfnApplicationV2.CatalogConfigurationProperty( glue_data_catalog_configuration=kinesisanalytics.CfnApplicationV2.GlueDataCatalogConfigurationProperty( database_arn="databaseArn" ) ), custom_artifacts_configuration=[kinesisanalytics.CfnApplicationV2.CustomArtifactConfigurationProperty( artifact_type="artifactType", # the properties below are optional maven_reference=kinesisanalytics.CfnApplicationV2.MavenReferenceProperty( artifact_id="artifactId", group_id="groupId", version="version" ), s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty( bucket_arn="bucketArn", file_key="fileKey", # the properties below are optional object_version="objectVersion" ) )], deploy_as_application_configuration=kinesisanalytics.CfnApplicationV2.DeployAsApplicationConfigurationProperty( s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentBaseLocationProperty( bucket_arn="bucketArn", # the properties below are optional base_path="basePath" ) ), monitoring_configuration=kinesisanalytics.CfnApplicationV2.ZeppelinMonitoringConfigurationProperty( log_level="logLevel" ) ) ), application_description="applicationDescription", application_maintenance_configuration=kinesisanalytics.CfnApplicationV2.ApplicationMaintenanceConfigurationProperty( application_maintenance_window_start_time="applicationMaintenanceWindowStartTime" ), application_mode="applicationMode", application_name="applicationName", run_configuration=kinesisanalytics.CfnApplicationV2.RunConfigurationProperty( application_restore_configuration=kinesisanalytics.CfnApplicationV2.ApplicationRestoreConfigurationProperty( application_restore_type="applicationRestoreType", # the properties below are optional snapshot_name="snapshotName" ), flink_run_configuration=kinesisanalytics.CfnApplicationV2.FlinkRunConfigurationProperty( allow_non_restored_state=False ) ), tags=[CfnTag( key="key", value="value" )] )
Create a new
AWS::KinesisAnalyticsV2::Application
.
- Parameters:
scope (Construct) – scope in which this resource is defined.
id (str) – scoped id of the resource.
runtime_environment (str) – The runtime environment for the application.
service_execution_role (str) – Specifies the IAM role that the application uses to access external resources.
application_configuration (Union[IResolvable, ApplicationConfigurationProperty, Dict[str, Any], None]) – Use this parameter to configure the application.
application_description (Optional[str]) – The description of the application.
application_maintenance_configuration (Union[IResolvable, ApplicationMaintenanceConfigurationProperty, Dict[str, Any], None]) – AWS::KinesisAnalyticsV2::Application.ApplicationMaintenanceConfiguration.
application_mode (Optional[str]) – To create a Kinesis Data Analytics Studio notebook, you must set the mode to INTERACTIVE. However, for a Kinesis Data Analytics for Apache Flink application, the mode is optional.
application_name (Optional[str]) – The name of the application.
run_configuration (Union[IResolvable, RunConfigurationProperty, Dict[str, Any], None]) – AWS::KinesisAnalyticsV2::Application.RunConfiguration.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – A list of one or more tags to assign to the application. A tag is a key-value pair that identifies an application. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50.
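For orientation, here is a smaller hand-written sketch than the generated example above. It assumes a Flink runtime value such as FLINK-1_13, an existing service execution role, application code already packaged in S3, and a ZIPFILE code content type; all names and ARNs are placeholders, not values taken from this reference.
import aws_cdk.core as cdk
import aws_cdk.aws_kinesisanalytics as kinesisanalytics


class FlinkApplicationStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # All ARNs, names, and the FLINK-1_13 / ZIPFILE values below are placeholders.
        kinesisanalytics.CfnApplicationV2(
            self, "MyFlinkApp",
            runtime_environment="FLINK-1_13",
            service_execution_role="arn:aws:iam::111122223333:role/flink-app-role",
            application_name="my-flink-app",
            application_configuration=kinesisanalytics.CfnApplicationV2.ApplicationConfigurationProperty(
                application_code_configuration=kinesisanalytics.CfnApplicationV2.ApplicationCodeConfigurationProperty(
                    code_content=kinesisanalytics.CfnApplicationV2.CodeContentProperty(
                        s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty(
                            bucket_arn="arn:aws:s3:::my-code-bucket",
                            file_key="flink-app.jar"
                        )
                    ),
                    code_content_type="ZIPFILE"
                )
            )
        )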
Methods
- add_deletion_override(path)
Syntactic sugar for
addOverride(path, undefined)
.- Parameters:
path (
str
) – The path of the value to delete.- Return type:
None
- add_depends_on(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (
CfnResource
) –- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (
str
) –value (
Any
) –
- See:
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use
addPropertyOverride
or prefixpath
with “Properties.” (i.e.Properties.TopicName
).If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal
.
in the property name, prefix with a\
. In most programming languages you will need to write this as"\\."
because the\
itself will need to be escaped.For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides
Example:
"Properties": { "GlobalSecondaryIndexes": [ { "Projection": { "NonKeyAttributes": [ "myattribute" ] ... } ... }, { "ProjectionType": "INCLUDE" ... }, ] ... }
The
value
argument toaddOverride
will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.- Parameters:
path (
str
The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (
Any
) –The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (
str
) – The path to the property.- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for
addOverride("Properties.<...>", value)
.- Parameters:
property_path (
str
) – The path of the property.value (
Any
) – The value.
- Return type:
None
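As a short sketch of these override methods applied to the construct from the class-level example (the property names below are the resource's CloudFormation property names; the values are illustrative):
# Assumes cfn_application_v2 was created as in the class-level example above.
cfn_application_v2.add_property_override("ApplicationDescription", "description set via raw override")

# Remove a property from the synthesized template entirely.
cfn_application_v2.add_property_deletion_override("RunConfiguration")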
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (
RemovalPolicy.DESTROY
), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN
).
- Parameters:
policy (Optional[RemovalPolicy]) –
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- Return type:
None
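For example, to keep the application in the account when it is removed from the stack (a sketch, reusing the construct from the class-level example):
import aws_cdk.core as cdk

# Retain the application (and its state) instead of deleting it with the stack.
cfn_application_v2.apply_removal_policy(cdk.RemovalPolicy.RETAIN)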
- get_att(attribute_name)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g.
resource.arn
), but this can be used for future compatibility in case there is no generated attribute.- Parameters:
attribute_name (
str
) – The name of the attribute.- Return type:
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (
str
) –- See:
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (
TreeInspector
) –tree inspector to collect and process attributes.
- Return type:
None
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (
str
) – The new logical ID to use for this stack element.- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::KinesisAnalyticsV2::Application'
- application_configuration
Use this parameter to configure the application.
- application_description
The description of the application.
- application_maintenance_configuration
AWS::KinesisAnalyticsV2::Application.ApplicationMaintenanceConfiguration
.
- application_mode
To create a Kinesis Data Analytics Studio notebook, you must set the mode to
INTERACTIVE
.However, for a Kinesis Data Analytics for Apache Flink application, the mode is optional.
- application_name
The name of the application.
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the metadata entry typed aws:cdk:logicalId, and with the bottom-most node internal entries filtered.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use
overrideLogicalId(newLogicalId)
.- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- node
The construct tree node associated with this construct.
- ref
Return a string that will be resolved to a CloudFormation
{ Ref }
for this element.If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through
Lazy.any({ produce: resource.ref })
.
- run_configuration
AWS::KinesisAnalyticsV2::Application.RunConfiguration
.
- runtime_environment
The runtime environment for the application.
- service_execution_role
Specifies the IAM role that the application uses to access external resources.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
A list of one or more tags to assign to the application.
A tag is a key-value pair that identifies an application. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50.
Static Methods
- classmethod is_cfn_element(x)
Returns
true
if a construct is a stack element (i.e. part of the synthesized cloudformation template).Uses duck-typing instead of
instanceof
to allow stack elements from different versions of this library to be included in the same stack.- Parameters:
x (
Any
) –- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(construct)
Check whether the given construct is a CfnResource.
- Parameters:
construct (
IConstruct
) –- Return type:
bool
- classmethod is_construct(x)
Return whether the given object is a Construct.
- Parameters:
x (
Any
) –- Return type:
bool
ApplicationCodeConfigurationProperty
- class CfnApplicationV2.ApplicationCodeConfigurationProperty(*, code_content, code_content_type)
Bases:
object
Describes code configuration for an application.
- Parameters:
code_content (
Union
[IResolvable
,CodeContentProperty
,Dict
[str
,Any
]]) – The location and type of the application code.code_content_type (
str
) – Specifies whether the code content is in text or zip format.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

application_code_configuration_property = kinesisanalytics.CfnApplicationV2.ApplicationCodeConfigurationProperty(
    code_content=kinesisanalytics.CfnApplicationV2.CodeContentProperty(
        s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty(
            bucket_arn="bucketArn",
            file_key="fileKey",

            # the properties below are optional
            object_version="objectVersion"
        ),
        text_content="textContent",
        zip_file_content="zipFileContent"
    ),
    code_content_type="codeContentType"
)
Attributes
- code_content
The location and type of the application code.
- code_content_type
Specifies whether the code content is in text or zip format.
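A sketch that supplies the code inline rather than from S3; the PLAINTEXT content type value is an assumption here, not taken from this reference:
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Inline (text) application code; the SQL snippet is purely illustrative.
code_config = kinesisanalytics.CfnApplicationV2.ApplicationCodeConfigurationProperty(
    code_content=kinesisanalytics.CfnApplicationV2.CodeContentProperty(
        text_content="CREATE OR REPLACE STREAM example_stream (ticker VARCHAR(4));"
    ),
    code_content_type="PLAINTEXT"  # assumed value for text-format code
)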
ApplicationConfigurationProperty
- class CfnApplicationV2.ApplicationConfigurationProperty(*, application_code_configuration=None, application_snapshot_configuration=None, environment_properties=None, flink_application_configuration=None, sql_application_configuration=None, vpc_configurations=None, zeppelin_application_configuration=None)
Bases:
object
Specifies the creation parameters for a Kinesis Data Analytics application.
- Parameters:
application_code_configuration (
Union
[IResolvable
,ApplicationCodeConfigurationProperty
,Dict
[str
,Any
],None
]) – The code location and type parameters for a Flink-based Kinesis Data Analytics application.application_snapshot_configuration (
Union
[IResolvable
,ApplicationSnapshotConfigurationProperty
,Dict
[str
,Any
],None
]) – Describes whether snapshots are enabled for a Flink-based Kinesis Data Analytics application.environment_properties (
Union
[IResolvable
,EnvironmentPropertiesProperty
,Dict
[str
,Any
],None
]) – Describes execution properties for a Flink-based Kinesis Data Analytics application.flink_application_configuration (
Union
[IResolvable
,FlinkApplicationConfigurationProperty
,Dict
[str
,Any
],None
]) – The creation and update parameters for a Flink-based Kinesis Data Analytics application.sql_application_configuration (
Union
[IResolvable
,SqlApplicationConfigurationProperty
,Dict
[str
,Any
],None
]) – The creation and update parameters for a SQL-based Kinesis Data Analytics application.vpc_configurations (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,VpcConfigurationProperty
,Dict
[str
,Any
]]],None
]) – The array of descriptions of VPC configurations available to the application.zeppelin_application_configuration (
Union
[IResolvable
,ZeppelinApplicationConfigurationProperty
,Dict
[str
,Any
],None
]) – The configuration parameters for a Kinesis Data Analytics Studio notebook.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics application_configuration_property = kinesisanalytics.CfnApplicationV2.ApplicationConfigurationProperty( application_code_configuration=kinesisanalytics.CfnApplicationV2.ApplicationCodeConfigurationProperty( code_content=kinesisanalytics.CfnApplicationV2.CodeContentProperty( s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty( bucket_arn="bucketArn", file_key="fileKey", # the properties below are optional object_version="objectVersion" ), text_content="textContent", zip_file_content="zipFileContent" ), code_content_type="codeContentType" ), application_snapshot_configuration=kinesisanalytics.CfnApplicationV2.ApplicationSnapshotConfigurationProperty( snapshots_enabled=False ), environment_properties=kinesisanalytics.CfnApplicationV2.EnvironmentPropertiesProperty( property_groups=[kinesisanalytics.CfnApplicationV2.PropertyGroupProperty( property_group_id="propertyGroupId", property_map={ "property_map_key": "propertyMap" } )] ), flink_application_configuration=kinesisanalytics.CfnApplicationV2.FlinkApplicationConfigurationProperty( checkpoint_configuration=kinesisanalytics.CfnApplicationV2.CheckpointConfigurationProperty( configuration_type="configurationType", # the properties below are optional checkpointing_enabled=False, checkpoint_interval=123, min_pause_between_checkpoints=123 ), monitoring_configuration=kinesisanalytics.CfnApplicationV2.MonitoringConfigurationProperty( configuration_type="configurationType", # the properties below are optional log_level="logLevel", metrics_level="metricsLevel" ), parallelism_configuration=kinesisanalytics.CfnApplicationV2.ParallelismConfigurationProperty( configuration_type="configurationType", # the properties below are optional auto_scaling_enabled=False, parallelism=123, parallelism_per_kpu=123 ) ), sql_application_configuration=kinesisanalytics.CfnApplicationV2.SqlApplicationConfigurationProperty( inputs=[kinesisanalytics.CfnApplicationV2.InputProperty( input_schema=kinesisanalytics.CfnApplicationV2.InputSchemaProperty( record_columns=[kinesisanalytics.CfnApplicationV2.RecordColumnProperty( name="name", sql_type="sqlType", # the properties below are optional mapping="mapping" )], record_format=kinesisanalytics.CfnApplicationV2.RecordFormatProperty( record_format_type="recordFormatType", # the properties below are optional mapping_parameters=kinesisanalytics.CfnApplicationV2.MappingParametersProperty( csv_mapping_parameters=kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty( record_column_delimiter="recordColumnDelimiter", record_row_delimiter="recordRowDelimiter" ), json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty( record_row_path="recordRowPath" ) ) ), # the properties below are optional record_encoding="recordEncoding" ), name_prefix="namePrefix", # the properties below are optional input_parallelism=kinesisanalytics.CfnApplicationV2.InputParallelismProperty( count=123 ), input_processing_configuration=kinesisanalytics.CfnApplicationV2.InputProcessingConfigurationProperty( input_lambda_processor=kinesisanalytics.CfnApplicationV2.InputLambdaProcessorProperty( resource_arn="resourceArn" ) ), kinesis_firehose_input=kinesisanalytics.CfnApplicationV2.KinesisFirehoseInputProperty( resource_arn="resourceArn" ), kinesis_streams_input=kinesisanalytics.CfnApplicationV2.KinesisStreamsInputProperty( 
resource_arn="resourceArn" ) )] ), vpc_configurations=[kinesisanalytics.CfnApplicationV2.VpcConfigurationProperty( security_group_ids=["securityGroupIds"], subnet_ids=["subnetIds"] )], zeppelin_application_configuration=kinesisanalytics.CfnApplicationV2.ZeppelinApplicationConfigurationProperty( catalog_configuration=kinesisanalytics.CfnApplicationV2.CatalogConfigurationProperty( glue_data_catalog_configuration=kinesisanalytics.CfnApplicationV2.GlueDataCatalogConfigurationProperty( database_arn="databaseArn" ) ), custom_artifacts_configuration=[kinesisanalytics.CfnApplicationV2.CustomArtifactConfigurationProperty( artifact_type="artifactType", # the properties below are optional maven_reference=kinesisanalytics.CfnApplicationV2.MavenReferenceProperty( artifact_id="artifactId", group_id="groupId", version="version" ), s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty( bucket_arn="bucketArn", file_key="fileKey", # the properties below are optional object_version="objectVersion" ) )], deploy_as_application_configuration=kinesisanalytics.CfnApplicationV2.DeployAsApplicationConfigurationProperty( s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentBaseLocationProperty( bucket_arn="bucketArn", # the properties below are optional base_path="basePath" ) ), monitoring_configuration=kinesisanalytics.CfnApplicationV2.ZeppelinMonitoringConfigurationProperty( log_level="logLevel" ) ) )
Attributes
- application_code_configuration
The code location and type parameters for a Flink-based Kinesis Data Analytics application.
- application_snapshot_configuration
Describes whether snapshots are enabled for a Flink-based Kinesis Data Analytics application.
- environment_properties
Describes execution properties for a Flink-based Kinesis Data Analytics application.
- flink_application_configuration
The creation and update parameters for a Flink-based Kinesis Data Analytics application.
- sql_application_configuration
The creation and update parameters for a SQL-based Kinesis Data Analytics application.
- vpc_configurations
The array of descriptions of VPC configurations available to the application.
- zeppelin_application_configuration
The configuration parameters for a Kinesis Data Analytics Studio notebook.
ApplicationMaintenanceConfigurationProperty
- class CfnApplicationV2.ApplicationMaintenanceConfigurationProperty(*, application_maintenance_window_start_time)
Bases:
object
Specifies the maintenance window parameters for a Kinesis Data Analytics application.
- Parameters:
application_maintenance_window_start_time (str) – Specifies the start time of the maintenance window.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

application_maintenance_configuration_property = kinesisanalytics.CfnApplicationV2.ApplicationMaintenanceConfigurationProperty(
    application_maintenance_window_start_time="applicationMaintenanceWindowStartTime"
)
Attributes
- application_maintenance_window_start_time
Specifies the start time of the maintenance window.
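A sketch with a concrete start time; the hh:mm (UTC) format is an assumption here:
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

maintenance_config = kinesisanalytics.CfnApplicationV2.ApplicationMaintenanceConfigurationProperty(
    application_maintenance_window_start_time="02:00"  # assumed hh:mm UTC format
)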
ApplicationRestoreConfigurationProperty
- class CfnApplicationV2.ApplicationRestoreConfigurationProperty(*, application_restore_type, snapshot_name=None)
Bases:
object
Specifies the method and snapshot to use when restarting an application using previously saved application state.
- Parameters:
application_restore_type (
str
) – Specifies how the application should be restored.snapshot_name (
Optional
[str
]) – The identifier of an existing snapshot of application state to use to restart an application. The application uses this value ifRESTORE_FROM_CUSTOM_SNAPSHOT
is specified for theApplicationRestoreType
.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

application_restore_configuration_property = kinesisanalytics.CfnApplicationV2.ApplicationRestoreConfigurationProperty(
    application_restore_type="applicationRestoreType",

    # the properties below are optional
    snapshot_name="snapshotName"
)
Attributes
- application_restore_type
Specifies how the application should be restored.
- snapshot_name
The identifier of an existing snapshot of application state to use to restart an application.
The application uses this value if
RESTORE_FROM_CUSTOM_SNAPSHOT
is specified for theApplicationRestoreType
.
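For example, restoring from a specific named snapshot, which is the case that requires snapshot_name (the snapshot name is hypothetical):
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

restore_config = kinesisanalytics.CfnApplicationV2.ApplicationRestoreConfigurationProperty(
    application_restore_type="RESTORE_FROM_CUSTOM_SNAPSHOT",
    snapshot_name="my-app-snapshot-001"  # hypothetical snapshot name
)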
ApplicationSnapshotConfigurationProperty
- class CfnApplicationV2.ApplicationSnapshotConfigurationProperty(*, snapshots_enabled)
Bases:
object
Describes whether snapshots are enabled for a Flink-based Kinesis Data Analytics application.
- Parameters:
snapshots_enabled (
Union
[bool
,IResolvable
]) – Describes whether snapshots are enabled for a Flink-based Kinesis Data Analytics application.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

application_snapshot_configuration_property = kinesisanalytics.CfnApplicationV2.ApplicationSnapshotConfigurationProperty(
    snapshots_enabled=False
)
Attributes
- snapshots_enabled
Describes whether snapshots are enabled for a Flink-based Kinesis Data Analytics application.
CSVMappingParametersProperty
- class CfnApplicationV2.CSVMappingParametersProperty(*, record_column_delimiter, record_row_delimiter)
Bases:
object
For a SQL-based Kinesis Data Analytics application, provides additional mapping information when the record format uses delimiters, such as CSV.
For example, the following sample records use CSV format, where the records use the '\n' as the row delimiter and a comma (“,”) as the column delimiter:
"name1", "address1"
"name2", "address2"
- Parameters:
record_column_delimiter (
str
) – The column delimiter. For example, in a CSV format, a comma (“,”) is the typical column delimiter.record_row_delimiter (
str
) – The row delimiter. For example, in a CSV format, '\n' is the typical row delimiter.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

c_sVMapping_parameters_property = kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty(
    record_column_delimiter="recordColumnDelimiter",
    record_row_delimiter="recordRowDelimiter"
)
Attributes
- record_column_delimiter
The column delimiter.
For example, in a CSV format, a comma (“,”) is the typical column delimiter.
- record_row_delimiter
The row delimiter.
For example, in a CSV format, '\n' is the typical row delimiter.
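A sketch using the delimiters described above, a comma between columns and a newline between rows:
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

csv_mapping = kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty(
    record_column_delimiter=",",  # comma-separated columns
    record_row_delimiter="\n"     # one record per line
)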
CatalogConfigurationProperty
- class CfnApplicationV2.CatalogConfigurationProperty(*, glue_data_catalog_configuration=None)
Bases:
object
The configuration parameters for the default Amazon Glue database.
You use this database for SQL queries that you write in a Kinesis Data Analytics Studio notebook.
- Parameters:
glue_data_catalog_configuration (
Union
[IResolvable
,GlueDataCatalogConfigurationProperty
,Dict
[str
,Any
],None
]) – The configuration parameters for the default Amazon Glue database. You use this database for Apache Flink SQL queries and table API transforms that you write in a Kinesis Data Analytics Studio notebook.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

catalog_configuration_property = kinesisanalytics.CfnApplicationV2.CatalogConfigurationProperty(
    glue_data_catalog_configuration=kinesisanalytics.CfnApplicationV2.GlueDataCatalogConfigurationProperty(
        database_arn="databaseArn"
    )
)
Attributes
- glue_data_catalog_configuration
The configuration parameters for the default Amazon Glue database.
You use this database for Apache Flink SQL queries and table API transforms that you write in a Kinesis Data Analytics Studio notebook.
CheckpointConfigurationProperty
- class CfnApplicationV2.CheckpointConfigurationProperty(*, configuration_type, checkpointing_enabled=None, checkpoint_interval=None, min_pause_between_checkpoints=None)
Bases:
object
Describes an application’s checkpointing configuration.
Checkpointing is the process of persisting application state for fault tolerance. For more information, see Checkpoints for Fault Tolerance in the Apache Flink Documentation .
- Parameters:
configuration_type (
str
) – Describes whether the application uses Kinesis Data Analytics’ default checkpointing behavior. You must set this property toCUSTOM
in order to set theCheckpointingEnabled
,CheckpointInterval
, orMinPauseBetweenCheckpoints
parameters. .. epigraph:: If this value is set toDEFAULT
, the application will use the following values, even if they are set to other values using APIs or application code: - CheckpointingEnabled: true - CheckpointInterval: 60000 - MinPauseBetweenCheckpoints: 5000checkpointing_enabled (
Union
[bool
,IResolvable
,None
]) – Describes whether checkpointing is enabled for a Flink-based Kinesis Data Analytics application. .. epigraph:: IfCheckpointConfiguration.ConfigurationType
isDEFAULT
, the application will use aCheckpointingEnabled
value oftrue
, even if this value is set to another value using this API or in application code.checkpoint_interval (
Union
[int
,float
,None
]) – Describes the interval in milliseconds between checkpoint operations. .. epigraph:: IfCheckpointConfiguration.ConfigurationType
isDEFAULT
, the application will use aCheckpointInterval
value of 60000, even if this value is set to another value using this API or in application code.min_pause_between_checkpoints (
Union
[int
,float
,None
]) –Describes the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start. If a checkpoint operation takes longer than the
CheckpointInterval
, the application otherwise performs continual checkpoint operations. For more information, see Tuning Checkpointing in the Apache Flink Documentation . .. epigraph:: IfCheckpointConfiguration.ConfigurationType
isDEFAULT
, the application will use aMinPauseBetweenCheckpoints
value of 5000, even if this value is set using this API or in application code.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

checkpoint_configuration_property = kinesisanalytics.CfnApplicationV2.CheckpointConfigurationProperty(
    configuration_type="configurationType",

    # the properties below are optional
    checkpointing_enabled=False,
    checkpoint_interval=123,
    min_pause_between_checkpoints=123
)
Attributes
- checkpoint_interval
Describes the interval in milliseconds between checkpoint operations.
If
CheckpointConfiguration.ConfigurationType
isDEFAULT
, the application will use aCheckpointInterval
value of 60000, even if this value is set to another value using this API or in application code.
- checkpointing_enabled
Describes whether checkpointing is enabled for a Flink-based Kinesis Data Analytics application.
If
CheckpointConfiguration.ConfigurationType
isDEFAULT
, the application will use aCheckpointingEnabled
value oftrue
, even if this value is set to another value using this API or in application code.
- configuration_type
Describes whether the application uses Kinesis Data Analytics’ default checkpointing behavior.
You must set this property to CUSTOM in order to set the CheckpointingEnabled, CheckpointInterval, or MinPauseBetweenCheckpoints parameters.
If this value is set to DEFAULT, the application will use the following values, even if they are set to other values using APIs or application code:
- CheckpointingEnabled: true
- CheckpointInterval: 60000
- MinPauseBetweenCheckpoints: 5000
- min_pause_between_checkpoints
Describes the minimum time in milliseconds after a checkpoint operation completes that a new checkpoint operation can start.
If a checkpoint operation takes longer than the CheckpointInterval, the application otherwise performs continual checkpoint operations. For more information, see Tuning Checkpointing in the Apache Flink Documentation.
If CheckpointConfiguration.ConfigurationType is DEFAULT, the application will use a MinPauseBetweenCheckpoints value of 5000, even if this value is set using this API or in application code.
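As a sketch, a CUSTOM checkpoint configuration that spells out the same values the DEFAULT behavior would use (60000 ms interval, 5000 ms minimum pause), so they can later be tuned:
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

checkpoint_config = kinesisanalytics.CfnApplicationV2.CheckpointConfigurationProperty(
    configuration_type="CUSTOM",        # required for the values below to take effect
    checkpointing_enabled=True,
    checkpoint_interval=60000,          # milliseconds between checkpoint operations
    min_pause_between_checkpoints=5000  # milliseconds after a checkpoint completes
)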
CodeContentProperty
- class CfnApplicationV2.CodeContentProperty(*, s3_content_location=None, text_content=None, zip_file_content=None)
Bases:
object
Specifies either the application code, or the location of the application code, for a Flink-based Kinesis Data Analytics application.
- Parameters:
s3_content_location (
Union
[IResolvable
,S3ContentLocationProperty
,Dict
[str
,Any
],None
]) – Information about the Amazon S3 bucket that contains the application code.text_content (
Optional
[str
]) – The text-format code for a Flink-based Kinesis Data Analytics application.zip_file_content (
Optional
[str
]) – The zip-format code for a Flink-based Kinesis Data Analytics application.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

code_content_property = kinesisanalytics.CfnApplicationV2.CodeContentProperty(
    s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty(
        bucket_arn="bucketArn",
        file_key="fileKey",

        # the properties below are optional
        object_version="objectVersion"
    ),
    text_content="textContent",
    zip_file_content="zipFileContent"
)
Attributes
- s3_content_location
Information about the Amazon S3 bucket that contains the application code.
- text_content
The text-format code for a Flink-based Kinesis Data Analytics application.
- zip_file_content
The zip-format code for a Flink-based Kinesis Data Analytics application.
CustomArtifactConfigurationProperty
- class CfnApplicationV2.CustomArtifactConfigurationProperty(*, artifact_type, maven_reference=None, s3_content_location=None)
Bases:
object
The configuration of connectors and user-defined functions.
- Parameters:
artifact_type (
str
) – Set this to eitherUDF
orDEPENDENCY_JAR
.UDF
stands for user-defined functions. This type of artifact must be in an S3 bucket. ADEPENDENCY_JAR
can be in either Maven or an S3 bucket.maven_reference (
Union
[IResolvable
,MavenReferenceProperty
,Dict
[str
,Any
],None
]) – The parameters required to fully specify a Maven reference.s3_content_location (
Union
[IResolvable
,S3ContentLocationProperty
,Dict
[str
,Any
],None
]) – The location of the custom artifacts.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

custom_artifact_configuration_property = kinesisanalytics.CfnApplicationV2.CustomArtifactConfigurationProperty(
    artifact_type="artifactType",

    # the properties below are optional
    maven_reference=kinesisanalytics.CfnApplicationV2.MavenReferenceProperty(
        artifact_id="artifactId",
        group_id="groupId",
        version="version"
    ),
    s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty(
        bucket_arn="bucketArn",
        file_key="fileKey",

        # the properties below are optional
        object_version="objectVersion"
    )
)
Attributes
- artifact_type
Set this to either
UDF
orDEPENDENCY_JAR
.UDF
stands for user-defined functions. This type of artifact must be in an S3 bucket. ADEPENDENCY_JAR
can be in either Maven or an S3 bucket.
- maven_reference
The parameters required to fully specify a Maven reference.
- s3_content_location
The location of the custom artifacts.
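For example, a DEPENDENCY_JAR pulled in by Maven reference; the coordinates are illustrative only:
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

connector = kinesisanalytics.CfnApplicationV2.CustomArtifactConfigurationProperty(
    artifact_type="DEPENDENCY_JAR",
    maven_reference=kinesisanalytics.CfnApplicationV2.MavenReferenceProperty(
        group_id="org.apache.flink",                       # illustrative Maven coordinates
        artifact_id="flink-sql-connector-kinesis_2.12",
        version="1.13.2"
    )
)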
DeployAsApplicationConfigurationProperty
- class CfnApplicationV2.DeployAsApplicationConfigurationProperty(*, s3_content_location)
Bases:
object
The information required to deploy a Kinesis Data Analytics Studio notebook as an application with durable state.
- Parameters:
s3_content_location (
Union
[IResolvable
,S3ContentBaseLocationProperty
,Dict
[str
,Any
]]) – The description of an Amazon S3 object that contains the Amazon Data Analytics application, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

deploy_as_application_configuration_property = kinesisanalytics.CfnApplicationV2.DeployAsApplicationConfigurationProperty(
    s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentBaseLocationProperty(
        bucket_arn="bucketArn",

        # the properties below are optional
        base_path="basePath"
    )
)
Attributes
- s3_content_location
The description of an Amazon S3 object that contains the Amazon Data Analytics application, including the Amazon Resource Name (ARN) of the S3 bucket, the name of the Amazon S3 object that contains the data, and the version number of the Amazon S3 object that contains the data.
EnvironmentPropertiesProperty
- class CfnApplicationV2.EnvironmentPropertiesProperty(*, property_groups=None)
Bases:
object
Describes execution properties for a Flink-based Kinesis Data Analytics application.
- Parameters:
property_groups (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,PropertyGroupProperty
,Dict
[str
,Any
]]],None
]) – Describes the execution property groups.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

environment_properties_property = kinesisanalytics.CfnApplicationV2.EnvironmentPropertiesProperty(
    property_groups=[kinesisanalytics.CfnApplicationV2.PropertyGroupProperty(
        property_group_id="propertyGroupId",
        property_map={
            "property_map_key": "propertyMap"
        }
    )]
)
Attributes
- property_groups
Describes the execution property groups.
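A sketch of a property group whose id and keys are arbitrary names that your application code would read at runtime:
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

env_props = kinesisanalytics.CfnApplicationV2.EnvironmentPropertiesProperty(
    property_groups=[kinesisanalytics.CfnApplicationV2.PropertyGroupProperty(
        property_group_id="ConsumerConfig",          # arbitrary group name
        property_map={
            "input.stream.name": "my-input-stream",  # keys/values are read by the application
            "aws.region": "us-east-1"
        }
    )]
)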
FlinkApplicationConfigurationProperty
- class CfnApplicationV2.FlinkApplicationConfigurationProperty(*, checkpoint_configuration=None, monitoring_configuration=None, parallelism_configuration=None)
Bases:
object
Describes configuration parameters for a Flink-based Kinesis Data Analytics application or a Studio notebook.
- Parameters:
checkpoint_configuration (
Union
[IResolvable
,CheckpointConfigurationProperty
,Dict
[str
,Any
],None
]) –Describes an application’s checkpointing configuration. Checkpointing is the process of persisting application state for fault tolerance. For more information, see Checkpoints for Fault Tolerance in the Apache Flink Documentation .
monitoring_configuration (
Union
[IResolvable
,MonitoringConfigurationProperty
,Dict
[str
,Any
],None
]) – Describes configuration parameters for Amazon CloudWatch logging for an application.parallelism_configuration (
Union
[IResolvable
,ParallelismConfigurationProperty
,Dict
[str
,Any
],None
]) – Describes parameters for how an application executes multiple tasks simultaneously.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

flink_application_configuration_property = kinesisanalytics.CfnApplicationV2.FlinkApplicationConfigurationProperty(
    checkpoint_configuration=kinesisanalytics.CfnApplicationV2.CheckpointConfigurationProperty(
        configuration_type="configurationType",

        # the properties below are optional
        checkpointing_enabled=False,
        checkpoint_interval=123,
        min_pause_between_checkpoints=123
    ),
    monitoring_configuration=kinesisanalytics.CfnApplicationV2.MonitoringConfigurationProperty(
        configuration_type="configurationType",

        # the properties below are optional
        log_level="logLevel",
        metrics_level="metricsLevel"
    ),
    parallelism_configuration=kinesisanalytics.CfnApplicationV2.ParallelismConfigurationProperty(
        configuration_type="configurationType",

        # the properties below are optional
        auto_scaling_enabled=False,
        parallelism=123,
        parallelism_per_kpu=123
    )
)
Attributes
- checkpoint_configuration
Describes an application’s checkpointing configuration.
Checkpointing is the process of persisting application state for fault tolerance. For more information, see Checkpoints for Fault Tolerance in the Apache Flink Documentation .
- monitoring_configuration
Describes configuration parameters for Amazon CloudWatch logging for an application.
- parallelism_configuration
Describes parameters for how an application executes multiple tasks simultaneously.
FlinkRunConfigurationProperty
- class CfnApplicationV2.FlinkRunConfigurationProperty(*, allow_non_restored_state=None)
Bases:
object
Describes the starting parameters for a Flink-based Kinesis Data Analytics application.
- Parameters:
allow_non_restored_state (
Union
[bool
,IResolvable
,None
]) –When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program. This will happen if the program is updated between snapshots to remove stateful parameters, and state data in the snapshot no longer corresponds to valid application data. For more information, see Allowing Non-Restored State in the Apache Flink documentation . .. epigraph:: This value defaults to
false
. If you update your application without specifying this parameter,AllowNonRestoredState
will be set tofalse
, even if it was previously set totrue
.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

flink_run_configuration_property = kinesisanalytics.CfnApplicationV2.FlinkRunConfigurationProperty(
    allow_non_restored_state=False
)
Attributes
- allow_non_restored_state
When restoring from a snapshot, specifies whether the runtime is allowed to skip a state that cannot be mapped to the new program.
This will happen if the program is updated between snapshots to remove stateful parameters, and state data in the snapshot no longer corresponds to valid application data. For more information, see Allowing Non-Restored State in the Apache Flink documentation.
This value defaults to false. If you update your application without specifying this parameter, AllowNonRestoredState will be set to false, even if it was previously set to true.
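A sketch of a run configuration that opts in to skipping non-restorable state; the RESTORE_FROM_LATEST_SNAPSHOT restore type value is assumed here:
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

run_config = kinesisanalytics.CfnApplicationV2.RunConfigurationProperty(
    application_restore_configuration=kinesisanalytics.CfnApplicationV2.ApplicationRestoreConfigurationProperty(
        application_restore_type="RESTORE_FROM_LATEST_SNAPSHOT"  # assumed restore type value
    ),
    flink_run_configuration=kinesisanalytics.CfnApplicationV2.FlinkRunConfigurationProperty(
        allow_non_restored_state=True  # skip snapshot state that no longer maps to the program
    )
)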
GlueDataCatalogConfigurationProperty
- class CfnApplicationV2.GlueDataCatalogConfigurationProperty(*, database_arn=None)
Bases:
object
The configuration of the Glue Data Catalog that you use for Apache Flink SQL queries and table API transforms that you write in an application.
- Parameters:
database_arn (
Optional
[str
]) – The Amazon Resource Name (ARN) of the database.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

glue_data_catalog_configuration_property = kinesisanalytics.CfnApplicationV2.GlueDataCatalogConfigurationProperty(
    database_arn="databaseArn"
)
Attributes
- database_arn
The Amazon Resource Name (ARN) of the database.
InputLambdaProcessorProperty
- class CfnApplicationV2.InputLambdaProcessorProperty(*, resource_arn)
Bases:
object
An object that contains the Amazon Resource Name (ARN) of the Amazon Lambda function that is used to preprocess records in the stream in a SQL-based Kinesis Data Analytics application.
- Parameters:
resource_arn (
str
) – The ARN of the Amazon Lambda function that operates on records in the stream. .. epigraph:: To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

input_lambda_processor_property = kinesisanalytics.CfnApplicationV2.InputLambdaProcessorProperty(
    resource_arn="resourceArn"
)
Attributes
- resource_arn
The ARN of the Amazon Lambda function that operates on records in the stream.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: Amazon Lambda
InputParallelismProperty
- class CfnApplicationV2.InputParallelismProperty(*, count=None)
Bases:
object
For a SQL-based Kinesis Data Analytics application, describes the number of in-application streams to create for a given streaming source.
- Parameters:
count (
Union
[int
,float
,None
]) – The number of in-application streams to create.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

input_parallelism_property = kinesisanalytics.CfnApplicationV2.InputParallelismProperty(
    count=123
)
Attributes
- count
The number of in-application streams to create.
InputProcessingConfigurationProperty
- class CfnApplicationV2.InputProcessingConfigurationProperty(*, input_lambda_processor=None)
Bases:
object
For an SQL-based Amazon Kinesis Data Analytics application, describes a processor that is used to preprocess the records in the stream before being processed by your application code.
Currently, the only input processor available is Amazon Lambda .
- Parameters:
input_lambda_processor (
Union
[IResolvable
,InputLambdaProcessorProperty
,Dict
[str
,Any
],None
]) – The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code.- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

input_processing_configuration_property = kinesisanalytics.CfnApplicationV2.InputProcessingConfigurationProperty(
    input_lambda_processor=kinesisanalytics.CfnApplicationV2.InputLambdaProcessorProperty(
        resource_arn="resourceArn"
    )
)
Attributes
- input_lambda_processor
The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code.
InputProperty
- class CfnApplicationV2.InputProperty(*, input_schema, name_prefix, input_parallelism=None, input_processing_configuration=None, kinesis_firehose_input=None, kinesis_streams_input=None)
Bases:
object
When you configure the application input for a SQL-based Kinesis Data Analytics application, you specify the streaming source, the in-application stream name that is created, and the mapping between the two.
- Parameters:
input_schema (
Union
[IResolvable
,InputSchemaProperty
,Dict
[str
,Any
]]) – Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created. Also used to describe the format of the reference data source.name_prefix (
str
) – The name prefix to use when creating an in-application stream. Suppose that you specify a prefix ”MyInApplicationStream
.” Kinesis Data Analytics then creates one or more (as per theInputParallelism
count you specified) in-application streams with the names ”MyInApplicationStream_001
,” ”MyInApplicationStream_002
,” and so on.input_parallelism (
Union
[IResolvable
,InputParallelismProperty
,Dict
[str
,Any
],None
]) – Describes the number of in-application streams to create.input_processing_configuration (
Union
[IResolvable
,InputProcessingConfigurationProperty
,Dict
[str
,Any
],None
]) –The InputProcessingConfiguration for the input. An input processor transforms records as they are received from the stream, before the application’s SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor .
kinesis_firehose_input (
Union
[IResolvable
,KinesisFirehoseInputProperty
,Dict
[str
,Any
],None
]) – If the streaming source is an Amazon Kinesis Data Firehose delivery stream, identifies the delivery stream’s ARN.kinesis_streams_input (
Union
[IResolvable
,KinesisStreamsInputProperty
,Dict
[str
,Any
],None
]) – If the streaming source is an Amazon Kinesis data stream, identifies the stream’s Amazon Resource Name (ARN).
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics input_property = kinesisanalytics.CfnApplicationV2.InputProperty( input_schema=kinesisanalytics.CfnApplicationV2.InputSchemaProperty( record_columns=[kinesisanalytics.CfnApplicationV2.RecordColumnProperty( name="name", sql_type="sqlType", # the properties below are optional mapping="mapping" )], record_format=kinesisanalytics.CfnApplicationV2.RecordFormatProperty( record_format_type="recordFormatType", # the properties below are optional mapping_parameters=kinesisanalytics.CfnApplicationV2.MappingParametersProperty( csv_mapping_parameters=kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty( record_column_delimiter="recordColumnDelimiter", record_row_delimiter="recordRowDelimiter" ), json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty( record_row_path="recordRowPath" ) ) ), # the properties below are optional record_encoding="recordEncoding" ), name_prefix="namePrefix", # the properties below are optional input_parallelism=kinesisanalytics.CfnApplicationV2.InputParallelismProperty( count=123 ), input_processing_configuration=kinesisanalytics.CfnApplicationV2.InputProcessingConfigurationProperty( input_lambda_processor=kinesisanalytics.CfnApplicationV2.InputLambdaProcessorProperty( resource_arn="resourceArn" ) ), kinesis_firehose_input=kinesisanalytics.CfnApplicationV2.KinesisFirehoseInputProperty( resource_arn="resourceArn" ), kinesis_streams_input=kinesisanalytics.CfnApplicationV2.KinesisStreamsInputProperty( resource_arn="resourceArn" ) )
Attributes
- input_parallelism
Describes the number of in-application streams to create.
- input_processing_configuration
The InputProcessingConfiguration for the input.
An input processor transforms records as they are received from the stream, before the application’s SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor.
- input_schema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.
Also used to describe the format of the reference data source.
- kinesis_firehose_input
If the streaming source is an Amazon Kinesis Data Firehose delivery stream, identifies the delivery stream’s ARN.
- kinesis_streams_input
If the streaming source is an Amazon Kinesis data stream, identifies the stream’s Amazon Resource Name (ARN).
- name_prefix
The name prefix to use when creating an in-application stream.
Suppose that you specify a prefix ”
MyInApplicationStream
.” Kinesis Data Analytics then creates one or more (as per theInputParallelism
count you specified) in-application streams with the names ”MyInApplicationStream_001
,” ”MyInApplicationStream_002
,” and so on.
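Putting these pieces together, a sketch of a SQL input that reads JSON records from a Kinesis data stream; the stream ARN, column, and JSONPath values are placeholders, not taken from this reference:
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

sql_input = kinesisanalytics.CfnApplicationV2.InputProperty(
    name_prefix="MyInApplicationStream",
    kinesis_streams_input=kinesisanalytics.CfnApplicationV2.KinesisStreamsInputProperty(
        resource_arn="arn:aws:kinesis:us-east-1:111122223333:stream/example-stream"  # hypothetical ARN
    ),
    input_schema=kinesisanalytics.CfnApplicationV2.InputSchemaProperty(
        record_format=kinesisanalytics.CfnApplicationV2.RecordFormatProperty(
            record_format_type="JSON",
            mapping_parameters=kinesisanalytics.CfnApplicationV2.MappingParametersProperty(
                json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty(
                    record_row_path="$"
                )
            )
        ),
        record_columns=[kinesisanalytics.CfnApplicationV2.RecordColumnProperty(
            name="ticker", sql_type="VARCHAR(4)", mapping="$.ticker"  # illustrative column
        )]
    )
)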
InputSchemaProperty
- class CfnApplicationV2.InputSchemaProperty(*, record_columns, record_format, record_encoding=None)
Bases:
object
For a SQL-based Kinesis Data Analytics application, describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream.
- Parameters:
record_columns (
Union
[IResolvable
,Sequence
[Union
[IResolvable
,RecordColumnProperty
,Dict
[str
,Any
]]]]) – A list ofRecordColumn
objects.record_format (
Union
[IResolvable
,RecordFormatProperty
,Dict
[str
,Any
]]) – Specifies the format of the records on the streaming source.record_encoding (
Optional
[str
]) – Specifies the encoding of the records in the streaming source. For example, UTF-8.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

input_schema_property = kinesisanalytics.CfnApplicationV2.InputSchemaProperty(
    record_columns=[kinesisanalytics.CfnApplicationV2.RecordColumnProperty(
        name="name",
        sql_type="sqlType",

        # the properties below are optional
        mapping="mapping"
    )],
    record_format=kinesisanalytics.CfnApplicationV2.RecordFormatProperty(
        record_format_type="recordFormatType",

        # the properties below are optional
        mapping_parameters=kinesisanalytics.CfnApplicationV2.MappingParametersProperty(
            csv_mapping_parameters=kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty(
                record_column_delimiter="recordColumnDelimiter",
                record_row_delimiter="recordRowDelimiter"
            ),
            json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty(
                record_row_path="recordRowPath"
            )
        )
    ),

    # the properties below are optional
    record_encoding="recordEncoding"
)
Attributes
- record_columns
A list of RecordColumn objects.
- record_encoding
Specifies the encoding of the records in the streaming source.
For example, UTF-8.
- record_format
Specifies the format of the records on the streaming source.
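As a hedged illustration, the sketch below builds a schema for JSON records in which each column is mapped with a JSONPath-style expression; the field names, paths, and SQL types are assumed examples only.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: schema for JSON records; field names and paths are hypothetical.
schema = kinesisanalytics.CfnApplicationV2.InputSchemaProperty(
    record_format=kinesisanalytics.CfnApplicationV2.RecordFormatProperty(
        record_format_type="JSON",
        mapping_parameters=kinesisanalytics.CfnApplicationV2.MappingParametersProperty(
            json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty(
                record_row_path="$"  # each record is a top-level JSON object
            )
        )
    ),
    record_columns=[
        kinesisanalytics.CfnApplicationV2.RecordColumnProperty(
            name="device_id", sql_type="VARCHAR(32)", mapping="$.deviceId"
        ),
        kinesisanalytics.CfnApplicationV2.RecordColumnProperty(
            name="temperature", sql_type="DOUBLE", mapping="$.reading.temperature"
        )
    ],
    record_encoding="UTF-8"
)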
JSONMappingParametersProperty
- class CfnApplicationV2.JSONMappingParametersProperty(*, record_row_path)
Bases:
object
For a SQL-based Kinesis Data Analytics application, provides additional mapping information when JSON is the record format on the streaming source.
- Parameters:
record_row_path (str) – The path to the top-level parent that contains the records.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics j_sONMapping_parameters_property = kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty( record_row_path="recordRowPath" )
Attributes
- record_row_path
The path to the top-level parent that contains the records.
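A minimal sketch follows; using "$" as the row path assumes each streamed record is a top-level JSON object.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: treat each top-level JSON object on the stream as one record.
json_mapping = kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty(
    record_row_path="$"
)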
KinesisFirehoseInputProperty
- class CfnApplicationV2.KinesisFirehoseInputProperty(*, resource_arn)
Bases:
object
For a SQL-based Kinesis Data Analytics application, identifies a Kinesis Data Firehose delivery stream as the streaming source.
You provide the delivery stream’s Amazon Resource Name (ARN).
- Parameters:
resource_arn (str) – The Amazon Resource Name (ARN) of the delivery stream.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics kinesis_firehose_input_property = kinesisanalytics.CfnApplicationV2.KinesisFirehoseInputProperty( resource_arn="resourceArn" )
Attributes
- resource_arn
The Amazon Resource Name (ARN) of the delivery stream.
KinesisStreamsInputProperty
- class CfnApplicationV2.KinesisStreamsInputProperty(*, resource_arn)
Bases:
object
Identifies a Kinesis data stream as the streaming source.
You provide the stream’s Amazon Resource Name (ARN).
- Parameters:
resource_arn (str) – The ARN of the input Kinesis data stream to read.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics kinesis_streams_input_property = kinesisanalytics.CfnApplicationV2.KinesisStreamsInputProperty( resource_arn="resourceArn" )
Attributes
- resource_arn
The ARN of the input Kinesis data stream to read.
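A minimal sketch with an assumed stream ARN; substitute the ARN of your own data stream.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: identify an assumed Kinesis data stream as the streaming source.
streams_input = kinesisanalytics.CfnApplicationV2.KinesisStreamsInputProperty(
    resource_arn="arn:aws:kinesis:us-east-1:111122223333:stream/example-input-stream"  # hypothetical ARN
)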
MappingParametersProperty
- class CfnApplicationV2.MappingParametersProperty(*, csv_mapping_parameters=None, json_mapping_parameters=None)
Bases:
object
When you configure a SQL-based Kinesis Data Analytics application’s input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source.
- Parameters:
csv_mapping_parameters (Union[IResolvable, CSVMappingParametersProperty, Dict[str, Any], None]) – Provides additional mapping information when the record format uses delimiters (for example, CSV).
json_mapping_parameters (Union[IResolvable, JSONMappingParametersProperty, Dict[str, Any], None]) – Provides additional mapping information when JSON is the record format on the streaming source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics mapping_parameters_property = kinesisanalytics.CfnApplicationV2.MappingParametersProperty( csv_mapping_parameters=kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty( record_column_delimiter="recordColumnDelimiter", record_row_delimiter="recordRowDelimiter" ), json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty( record_row_path="recordRowPath" ) )
Attributes
- csv_mapping_parameters
Provides additional mapping information when the record format uses delimiters (for example, CSV).
- json_mapping_parameters
Provides additional mapping information when JSON is the record format on the streaming source.
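In practice you would normally set only the branch that matches the record format. The sketch below assumes a tab-delimited source, so only the CSV branch is populated; the delimiters are placeholder choices.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch for a tab-delimited source; only the CSV branch is set because the
# record format is delimited text rather than JSON.
mapping = kinesisanalytics.CfnApplicationV2.MappingParametersProperty(
    csv_mapping_parameters=kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty(
        record_column_delimiter="\t",
        record_row_delimiter="\n"
    )
)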
MavenReferenceProperty
- class CfnApplicationV2.MavenReferenceProperty(*, artifact_id, group_id, version)
Bases:
object
The information required to specify a Maven reference.
You can use Maven references to specify dependency JAR files.
- Parameters:
artifact_id (str) – The artifact ID of the Maven reference.
group_id (str) – The group ID of the Maven reference.
version (str) – The version of the Maven reference.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics maven_reference_property = kinesisanalytics.CfnApplicationV2.MavenReferenceProperty( artifact_id="artifactId", group_id="groupId", version="version" )
Attributes
- artifact_id
The artifact ID of the Maven reference.
- group_id
The group ID of the Maven reference.
- version
The version of the Maven reference.
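A hedged sketch follows; the Maven coordinates are assumed example values for a Flink connector dependency and should be replaced with the coordinates your application actually needs.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: Maven coordinates for an assumed Flink SQL connector dependency.
kafka_connector = kinesisanalytics.CfnApplicationV2.MavenReferenceProperty(
    group_id="org.apache.flink",                    # hypothetical coordinates
    artifact_id="flink-sql-connector-kafka_2.12",
    version="1.13.2"
)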
MonitoringConfigurationProperty
- class CfnApplicationV2.MonitoringConfigurationProperty(*, configuration_type, log_level=None, metrics_level=None)
Bases:
object
Describes configuration parameters for Amazon CloudWatch logging for a Java-based Kinesis Data Analytics application.
For more information about CloudWatch logging, see Monitoring .
- Parameters:
configuration_type (str) – Describes whether to use the default CloudWatch logging configuration for an application. You must set this property to CUSTOM in order to set the LogLevel or MetricsLevel parameters.
log_level (Optional[str]) – Describes the verbosity of the CloudWatch Logs for an application.
metrics_level (Optional[str]) – Describes the granularity of the CloudWatch Logs for an application. The Parallelism level is not recommended for applications with a Parallelism over 64 due to excessive costs.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics monitoring_configuration_property = kinesisanalytics.CfnApplicationV2.MonitoringConfigurationProperty( configuration_type="configurationType", # the properties below are optional log_level="logLevel", metrics_level="metricsLevel" )
Attributes
- configuration_type
Describes whether to use the default CloudWatch logging configuration for an application.
You must set this property to CUSTOM in order to set the LogLevel or MetricsLevel parameters.
- log_level
Describes the verbosity of the CloudWatch Logs for an application.
- metrics_level
Describes the granularity of the CloudWatch Logs for an application.
The Parallelism level is not recommended for applications with a Parallelism over 64 due to excessive costs.
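A hedged sketch follows; it assumes CUSTOM, INFO, and APPLICATION are valid values for the corresponding CloudFormation enums, so verify them against the linked reference before relying on them.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: override the default logging configuration. CUSTOM is required
# before log_level/metrics_level take effect; INFO and APPLICATION are assumed
# valid enum values per the service documentation.
monitoring = kinesisanalytics.CfnApplicationV2.MonitoringConfigurationProperty(
    configuration_type="CUSTOM",
    log_level="INFO",
    metrics_level="APPLICATION"
)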
ParallelismConfigurationProperty
- class CfnApplicationV2.ParallelismConfigurationProperty(*, configuration_type, auto_scaling_enabled=None, parallelism=None, parallelism_per_kpu=None)
Bases:
object
Describes parameters for how a Flink-based Kinesis Data Analytics application executes multiple tasks simultaneously.
For more information about parallelism, see Parallel Execution in the Apache Flink Documentation .
- Parameters:
configuration_type (str) – Describes whether the application uses the default parallelism for the Kinesis Data Analytics service. You must set this property to CUSTOM in order to change your application’s AutoScalingEnabled, Parallelism, or ParallelismPerKPU properties.
auto_scaling_enabled (Union[bool, IResolvable, None]) – Describes whether the Kinesis Data Analytics service can increase the parallelism of the application in response to increased throughput.
parallelism (Union[int, float, None]) – Describes the initial number of parallel tasks that a Java-based Kinesis Data Analytics application can perform. The Kinesis Data Analytics service can increase this number automatically if ParallelismConfiguration:AutoScalingEnabled is set to true.
parallelism_per_kpu (Union[int, float, None]) – Describes the number of parallel tasks that a Java-based Kinesis Data Analytics application can perform per Kinesis Processing Unit (KPU) used by the application. For more information about KPUs, see Amazon Kinesis Data Analytics Pricing.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics parallelism_configuration_property = kinesisanalytics.CfnApplicationV2.ParallelismConfigurationProperty( configuration_type="configurationType", # the properties below are optional auto_scaling_enabled=False, parallelism=123, parallelism_per_kpu=123 )
Attributes
- auto_scaling_enabled
Describes whether the Kinesis Data Analytics service can increase the parallelism of the application in response to increased throughput.
- configuration_type
Describes whether the application uses the default parallelism for the Kinesis Data Analytics service.
You must set this property to CUSTOM in order to change your application’s AutoScalingEnabled, Parallelism, or ParallelismPerKPU properties.
- parallelism
Describes the initial number of parallel tasks that a Java-based Kinesis Data Analytics application can perform.
The Kinesis Data Analytics service can increase this number automatically if ParallelismConfiguration:AutoScalingEnabled is set to true.
- parallelism_per_kpu
Describes the number of parallel tasks that a Java-based Kinesis Data Analytics application can perform per Kinesis Processing Unit (KPU) used by the application.
For more information about KPUs, see Amazon Kinesis Data Analytics Pricing .
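A hedged sketch follows; the task counts are arbitrary starting values, and CUSTOM is assumed to be the configuration type that enables the other properties, as described above.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: start with 4 parallel tasks, one task per KPU, and let the
# service scale up. CUSTOM is required before the other properties take effect.
parallelism = kinesisanalytics.CfnApplicationV2.ParallelismConfigurationProperty(
    configuration_type="CUSTOM",
    parallelism=4,
    parallelism_per_kpu=1,
    auto_scaling_enabled=True
)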
PropertyGroupProperty
- class CfnApplicationV2.PropertyGroupProperty(*, property_group_id=None, property_map=None)
Bases:
object
Property key-value pairs passed into an application.
- Parameters:
property_group_id (Optional[str]) – Describes the key of an application execution property key-value pair.
property_map (Union[IResolvable, Mapping[str, str], None]) – Describes the value of an application execution property key-value pair.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics property_group_property = kinesisanalytics.CfnApplicationV2.PropertyGroupProperty( property_group_id="propertyGroupId", property_map={ "property_map_key": "propertyMap" } )
Attributes
- property_group_id
Describes the key of an application execution property key-value pair.
- property_map
Describes the value of an application execution property key-value pair.
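A hedged sketch follows; the group ID and keys are hypothetical and only work if they match what your application code looks up at runtime.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: runtime properties the application code can look up by group ID.
consumer_props = kinesisanalytics.CfnApplicationV2.PropertyGroupProperty(
    property_group_id="ConsumerConfigProperties",   # hypothetical group ID
    property_map={
        "input.stream.name": "example-input-stream",  # hypothetical keys/values
        "aws.region": "us-east-1"
    }
)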
RecordColumnProperty
- class CfnApplicationV2.RecordColumnProperty(*, name, sql_type, mapping=None)
Bases:
object
For a SQL-based Kinesis Data Analytics application, describes the mapping of each data element in the streaming source to the corresponding column in the in-application stream.
Also used to describe the format of the reference data source.
- Parameters:
name (str) – The name of the column that is created in the in-application input stream or reference table.
sql_type (str) – The type of column created in the in-application input stream or reference table.
mapping (Optional[str]) – A reference to the data element in the streaming input or the reference data source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics record_column_property = kinesisanalytics.CfnApplicationV2.RecordColumnProperty( name="name", sql_type="sqlType", # the properties below are optional mapping="mapping" )
Attributes
- mapping
A reference to the data element in the streaming input or the reference data source.
- name
The name of the column that is created in the in-application input stream or reference table.
- sql_type
The type of column created in the in-application input stream or reference table.
RecordFormatProperty
- class CfnApplicationV2.RecordFormatProperty(*, record_format_type, mapping_parameters=None)
Bases:
object
For a SQL-based Kinesis Data Analytics application, describes the record format and relevant mapping information that should be applied to schematize the records on the stream.
- Parameters:
record_format_type (str) – The type of record format.
mapping_parameters (Union[IResolvable, MappingParametersProperty, Dict[str, Any], None]) – When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics record_format_property = kinesisanalytics.CfnApplicationV2.RecordFormatProperty( record_format_type="recordFormatType", # the properties below are optional mapping_parameters=kinesisanalytics.CfnApplicationV2.MappingParametersProperty( csv_mapping_parameters=kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty( record_column_delimiter="recordColumnDelimiter", record_row_delimiter="recordRowDelimiter" ), json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty( record_row_path="recordRowPath" ) ) )
Attributes
- mapping_parameters
When you configure application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source.
- record_format_type
The type of record format.
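A hedged sketch follows; it assumes "JSON" is an accepted record format type and uses an example row path.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: JSON record format with its matching mapping parameters.
record_format = kinesisanalytics.CfnApplicationV2.RecordFormatProperty(
    record_format_type="JSON",
    mapping_parameters=kinesisanalytics.CfnApplicationV2.MappingParametersProperty(
        json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty(
            record_row_path="$"
        )
    )
)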
RunConfigurationProperty
- class CfnApplicationV2.RunConfigurationProperty(*, application_restore_configuration=None, flink_run_configuration=None)
Bases:
object
Describes the starting parameters for a Kinesis Data Analytics application.
- Parameters:
application_restore_configuration (Union[IResolvable, ApplicationRestoreConfigurationProperty, Dict[str, Any], None]) – Describes the restore behavior of a restarting application.
flink_run_configuration (Union[IResolvable, FlinkRunConfigurationProperty, Dict[str, Any], None]) – Describes the starting parameters for a Flink-based Kinesis Data Analytics application.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics run_configuration_property = kinesisanalytics.CfnApplicationV2.RunConfigurationProperty( application_restore_configuration=kinesisanalytics.CfnApplicationV2.ApplicationRestoreConfigurationProperty( application_restore_type="applicationRestoreType", # the properties below are optional snapshot_name="snapshotName" ), flink_run_configuration=kinesisanalytics.CfnApplicationV2.FlinkRunConfigurationProperty( allow_non_restored_state=False ) )
Attributes
- application_restore_configuration
Describes the restore behavior of a restarting application.
- flink_run_configuration
Describes the starting parameters for a Flink-based Kinesis Data Analytics application.
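A hedged sketch follows; it assumes RESTORE_FROM_LATEST_SNAPSHOT is a valid ApplicationRestoreType value, which you should confirm against the service documentation.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: restart from the latest snapshot and tolerate state that cannot
# be mapped back into the restored job graph.
run_config = kinesisanalytics.CfnApplicationV2.RunConfigurationProperty(
    application_restore_configuration=kinesisanalytics.CfnApplicationV2.ApplicationRestoreConfigurationProperty(
        application_restore_type="RESTORE_FROM_LATEST_SNAPSHOT"  # assumed enum value
    ),
    flink_run_configuration=kinesisanalytics.CfnApplicationV2.FlinkRunConfigurationProperty(
        allow_non_restored_state=True
    )
)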
S3ContentBaseLocationProperty
- class CfnApplicationV2.S3ContentBaseLocationProperty(*, bucket_arn, base_path=None)
Bases:
object
The base location of the Amazon Kinesis Data Analytics application.
- Parameters:
bucket_arn (str) – The Amazon Resource Name (ARN) of the S3 bucket.
base_path (Optional[str]) – The base path for the S3 bucket.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics s3_content_base_location_property = kinesisanalytics.CfnApplicationV2.S3ContentBaseLocationProperty( bucket_arn="bucketArn", # the properties below are optional base_path="basePath" )
Attributes
- base_path
The base path for the S3 bucket.
- bucket_arn
The Amazon Resource Name (ARN) of the S3 bucket.
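A hedged sketch follows; the bucket ARN and base path are placeholder assumptions.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: base S3 location for a deployed Studio notebook application.
base_location = kinesisanalytics.CfnApplicationV2.S3ContentBaseLocationProperty(
    bucket_arn="arn:aws:s3:::example-deploy-bucket",  # hypothetical bucket
    base_path="zeppelin-apps"                          # hypothetical prefix
)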
S3ContentLocationProperty
- class CfnApplicationV2.S3ContentLocationProperty(*, bucket_arn, file_key, object_version=None)
Bases:
object
The location of an application or a custom artifact.
- Parameters:
bucket_arn (str) – The Amazon Resource Name (ARN) for the S3 bucket containing the application code.
file_key (str) – The file key for the object containing the application code.
object_version (Optional[str]) – The version of the object containing the application code.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics s3_content_location_property = kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty( bucket_arn="bucketArn", file_key="fileKey", # the properties below are optional object_version="objectVersion" )
Attributes
- bucket_arn
The Amazon Resource Name (ARN) for the S3 bucket containing the application code.
- file_key
The file key for the object containing the application code.
- object_version
The version of the object containing the application code.
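A hedged sketch follows; the bucket ARN, key, and version ID are placeholders. Pinning object_version is only meaningful when the bucket is versioned and you want deterministic deployments.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: location of an assumed application JAR in S3.
code_location = kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty(
    bucket_arn="arn:aws:s3:::example-code-bucket",   # hypothetical bucket
    file_key="flink/my-application-1.0.jar",         # hypothetical key
    object_version="example-version-id"              # hypothetical version ID
)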
SqlApplicationConfigurationProperty
- class CfnApplicationV2.SqlApplicationConfigurationProperty(*, inputs=None)
Bases:
object
Describes the inputs, outputs, and reference data sources for a SQL-based Kinesis Data Analytics application.
- Parameters:
inputs (Union[IResolvable, Sequence[Union[IResolvable, InputProperty, Dict[str, Any]]], None]) – The array of Input objects describing the input streams used by the application.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics sql_application_configuration_property = kinesisanalytics.CfnApplicationV2.SqlApplicationConfigurationProperty( inputs=[kinesisanalytics.CfnApplicationV2.InputProperty( input_schema=kinesisanalytics.CfnApplicationV2.InputSchemaProperty( record_columns=[kinesisanalytics.CfnApplicationV2.RecordColumnProperty( name="name", sql_type="sqlType", # the properties below are optional mapping="mapping" )], record_format=kinesisanalytics.CfnApplicationV2.RecordFormatProperty( record_format_type="recordFormatType", # the properties below are optional mapping_parameters=kinesisanalytics.CfnApplicationV2.MappingParametersProperty( csv_mapping_parameters=kinesisanalytics.CfnApplicationV2.CSVMappingParametersProperty( record_column_delimiter="recordColumnDelimiter", record_row_delimiter="recordRowDelimiter" ), json_mapping_parameters=kinesisanalytics.CfnApplicationV2.JSONMappingParametersProperty( record_row_path="recordRowPath" ) ) ), # the properties below are optional record_encoding="recordEncoding" ), name_prefix="namePrefix", # the properties below are optional input_parallelism=kinesisanalytics.CfnApplicationV2.InputParallelismProperty( count=123 ), input_processing_configuration=kinesisanalytics.CfnApplicationV2.InputProcessingConfigurationProperty( input_lambda_processor=kinesisanalytics.CfnApplicationV2.InputLambdaProcessorProperty( resource_arn="resourceArn" ) ), kinesis_firehose_input=kinesisanalytics.CfnApplicationV2.KinesisFirehoseInputProperty( resource_arn="resourceArn" ), kinesis_streams_input=kinesisanalytics.CfnApplicationV2.KinesisStreamsInputProperty( resource_arn="resourceArn" ) )] )
Attributes
- inputs
The array of `Input <https://docs.aws.amazon.com/kinesisanalytics/latest/apiv2/API_Input.html>`_ objects describing the input streams used by the application.
VpcConfigurationProperty
- class CfnApplicationV2.VpcConfigurationProperty(*, security_group_ids, subnet_ids)
Bases:
object
Describes the parameters of a VPC used by the application.
- Parameters:
security_group_ids (Sequence[str]) – The array of SecurityGroup IDs used by the VPC configuration.
subnet_ids (Sequence[str]) – The array of Subnet IDs used by the VPC configuration.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics vpc_configuration_property = kinesisanalytics.CfnApplicationV2.VpcConfigurationProperty( security_group_ids=["securityGroupIds"], subnet_ids=["subnetIds"] )
Attributes
- security_group_ids
The array of `SecurityGroup <https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_SecurityGroup.html>`_ IDs used by the VPC configuration.
- subnet_ids
The array of `Subnet <https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Subnet.html>`_ IDs used by the VPC configuration.
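A hedged sketch follows; the subnet and security group IDs are placeholders. In a real stack these values usually come from an ec2.Vpc construct or stack parameters rather than string literals.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: run the application inside an assumed VPC.
vpc_config = kinesisanalytics.CfnApplicationV2.VpcConfigurationProperty(
    subnet_ids=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],  # hypothetical IDs
    security_group_ids=["sg-0123456789abcdef0"]                           # hypothetical ID
)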
ZeppelinApplicationConfigurationProperty
- class CfnApplicationV2.ZeppelinApplicationConfigurationProperty(*, catalog_configuration=None, custom_artifacts_configuration=None, deploy_as_application_configuration=None, monitoring_configuration=None)
Bases:
object
The configuration of a Kinesis Data Analytics Studio notebook.
- Parameters:
catalog_configuration (Union[IResolvable, CatalogConfigurationProperty, Dict[str, Any], None]) – The Amazon Glue Data Catalog that you use in queries in a Kinesis Data Analytics Studio notebook.
custom_artifacts_configuration (Union[IResolvable, Sequence[Union[IResolvable, CustomArtifactConfigurationProperty, Dict[str, Any]]], None]) – A list of CustomArtifactConfiguration objects.
deploy_as_application_configuration (Union[IResolvable, DeployAsApplicationConfigurationProperty, Dict[str, Any], None]) – The information required to deploy a Kinesis Data Analytics Studio notebook as an application with durable state.
monitoring_configuration (Union[IResolvable, ZeppelinMonitoringConfigurationProperty, Dict[str, Any], None]) – The monitoring configuration of a Kinesis Data Analytics Studio notebook.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics zeppelin_application_configuration_property = kinesisanalytics.CfnApplicationV2.ZeppelinApplicationConfigurationProperty( catalog_configuration=kinesisanalytics.CfnApplicationV2.CatalogConfigurationProperty( glue_data_catalog_configuration=kinesisanalytics.CfnApplicationV2.GlueDataCatalogConfigurationProperty( database_arn="databaseArn" ) ), custom_artifacts_configuration=[kinesisanalytics.CfnApplicationV2.CustomArtifactConfigurationProperty( artifact_type="artifactType", # the properties below are optional maven_reference=kinesisanalytics.CfnApplicationV2.MavenReferenceProperty( artifact_id="artifactId", group_id="groupId", version="version" ), s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentLocationProperty( bucket_arn="bucketArn", file_key="fileKey", # the properties below are optional object_version="objectVersion" ) )], deploy_as_application_configuration=kinesisanalytics.CfnApplicationV2.DeployAsApplicationConfigurationProperty( s3_content_location=kinesisanalytics.CfnApplicationV2.S3ContentBaseLocationProperty( bucket_arn="bucketArn", # the properties below are optional base_path="basePath" ) ), monitoring_configuration=kinesisanalytics.CfnApplicationV2.ZeppelinMonitoringConfigurationProperty( log_level="logLevel" ) )
Attributes
- catalog_configuration
The Amazon Glue Data Catalog that you use in queries in a Kinesis Data Analytics Studio notebook.
- custom_artifacts_configuration
A list of CustomArtifactConfiguration objects.
- deploy_as_application_configuration
The information required to deploy a Kinesis Data Analytics Studio notebook as an application with durable state.
- monitoring_configuration
The monitoring configuration of a Kinesis Data Analytics Studio notebook.
ZeppelinMonitoringConfigurationProperty
- class CfnApplicationV2.ZeppelinMonitoringConfigurationProperty(*, log_level=None)
Bases:
object
Describes configuration parameters for Amazon CloudWatch logging for a Kinesis Data Analytics Studio notebook.
For more information about CloudWatch logging, see Monitoring .
- Parameters:
log_level (Optional[str]) – The verbosity of the CloudWatch Logs for an application. You can set it to INFO, WARN, ERROR, or DEBUG.
- Link:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. import aws_cdk.aws_kinesisanalytics as kinesisanalytics zeppelin_monitoring_configuration_property = kinesisanalytics.CfnApplicationV2.ZeppelinMonitoringConfigurationProperty( log_level="logLevel" )
Attributes
- log_level
The verbosity of the CloudWatch Logs for an application.
You can set it to INFO, WARN, ERROR, or DEBUG.
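A minimal sketch, using one of the log levels listed above.
import aws_cdk.aws_kinesisanalytics as kinesisanalytics

# Minimal sketch: INFO-level CloudWatch logging for a Studio notebook.
zeppelin_monitoring = kinesisanalytics.CfnApplicationV2.ZeppelinMonitoringConfigurationProperty(
    log_level="INFO"
)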