CfnDataset
- class aws_cdk.aws_databrew.CfnDataset(scope, id, *, input, name, format=None, format_options=None, path_options=None, tags=None)
Bases:
CfnResource
Specifies a new DataBrew dataset.
- See:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-databrew-dataset.html
- CloudformationResource:
AWS::DataBrew::Dataset
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import CfnTag
from aws_cdk import aws_databrew as databrew

cfn_dataset = databrew.CfnDataset(self, "MyCfnDataset",
    input=databrew.CfnDataset.InputProperty(
        database_input_definition=databrew.CfnDataset.DatabaseInputDefinitionProperty(
            glue_connection_name="glueConnectionName",
            # the properties below are optional
            database_table_name="databaseTableName",
            query_string="queryString",
            temp_directory=databrew.CfnDataset.S3LocationProperty(
                bucket="bucket",
                # the properties below are optional
                key="key"
            )
        ),
        data_catalog_input_definition=databrew.CfnDataset.DataCatalogInputDefinitionProperty(
            catalog_id="catalogId",
            database_name="databaseName",
            table_name="tableName",
            temp_directory=databrew.CfnDataset.S3LocationProperty(
                bucket="bucket",
                # the properties below are optional
                key="key"
            )
        ),
        metadata=databrew.CfnDataset.MetadataProperty(
            source_arn="sourceArn"
        ),
        s3_input_definition=databrew.CfnDataset.S3LocationProperty(
            bucket="bucket",
            # the properties below are optional
            key="key"
        )
    ),
    name="name",
    # the properties below are optional
    format="format",
    format_options=databrew.CfnDataset.FormatOptionsProperty(
        csv=databrew.CfnDataset.CsvOptionsProperty(
            delimiter="delimiter",
            header_row=False
        ),
        excel=databrew.CfnDataset.ExcelOptionsProperty(
            header_row=False,
            sheet_indexes=[123],
            sheet_names=["sheetNames"]
        ),
        json=databrew.CfnDataset.JsonOptionsProperty(
            multi_line=False
        )
    ),
    path_options=databrew.CfnDataset.PathOptionsProperty(
        files_limit=databrew.CfnDataset.FilesLimitProperty(
            max_files=123,
            # the properties below are optional
            order="order",
            ordered_by="orderedBy"
        ),
        last_modified_date_condition=databrew.CfnDataset.FilterExpressionProperty(
            expression="expression",
            values_map=[databrew.CfnDataset.FilterValueProperty(
                value="value",
                value_reference="valueReference"
            )]
        ),
        parameters=[databrew.CfnDataset.PathParameterProperty(
            dataset_parameter=databrew.CfnDataset.DatasetParameterProperty(
                name="name",
                type="type",
                # the properties below are optional
                create_column=False,
                datetime_options=databrew.CfnDataset.DatetimeOptionsProperty(
                    format="format",
                    # the properties below are optional
                    locale_code="localeCode",
                    timezone_offset="timezoneOffset"
                ),
                filter=databrew.CfnDataset.FilterExpressionProperty(
                    expression="expression",
                    values_map=[databrew.CfnDataset.FilterValueProperty(
                        value="value",
                        value_reference="valueReference"
                    )]
                )
            ),
            path_parameter_name="pathParameterName"
        )]
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )]
)
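For a more concrete picture than the generated placeholders above, here is a minimal sketch of a dataset backed by a single CSV file in Amazon S3; the bucket name, object key, and dataset name are illustrative assumptions:

from aws_cdk import aws_databrew as databrew

# A minimal sketch: a DataBrew dataset over a CSV file in S3.
# "my-data-bucket", "raw/orders.csv", and "orders-dataset" are hypothetical values.
csv_dataset = databrew.CfnDataset(self, "OrdersDataset",
    name="orders-dataset",
    input=databrew.CfnDataset.InputProperty(
        s3_input_definition=databrew.CfnDataset.S3LocationProperty(
            bucket="my-data-bucket",
            key="raw/orders.csv"
        )
    ),
    format="CSV",
    format_options=databrew.CfnDataset.FormatOptionsProperty(
        csv=databrew.CfnDataset.CsvOptionsProperty(
            delimiter=",",
            header_row=True  # first row holds column names
        )
    )
)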
- Parameters:
  - scope (Construct) – Scope in which this resource is defined.
  - id (str) – Construct identifier for this resource (unique in its scope).
  - input (Union[IResolvable, InputProperty, Dict[str, Any]]) – Information on how DataBrew can find the dataset, in either the AWS Glue Data Catalog or Amazon S3.
  - name (str) – The unique name of the dataset.
  - format (Optional[str]) – The file format of a dataset that is created from an Amazon S3 file or folder.
  - format_options (Union[IResolvable, FormatOptionsProperty, Dict[str, Any], None]) – A set of options that define how DataBrew interprets the data in the dataset.
  - path_options (Union[IResolvable, PathOptionsProperty, Dict[str, Any], None]) – A set of options that defines how DataBrew interprets an Amazon S3 path of the dataset.
  - tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – Metadata tags that have been applied to the dataset.
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
  - path (str) – The path of the value to delete.
- Return type:
None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
  - target (CfnResource) – The resource to depend on.
- Return type:
None
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
  - target (CfnResource) – The resource to depend on.
- Deprecated:
use addDependency
- Stability:
deprecated
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
  - key (str) – The metadata key.
  - value (Any) – The metadata value.
- See:
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with “Properties.” (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides:
"Properties": { "GlobalSecondaryIndexes": [ { "Projection": { "NonKeyAttributes": [ "myattribute" ] ... } ... }, { "ProjectionType": "INCLUDE" ... }, ] ... }
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
  - path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
  - value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
  - property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
  - property_path (str) – The path of the property.
  - value (Any) – The value.
- Return type:
None
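For instance, a minimal usage sketch that forces the raw Format property of the dataset above; at this level, property names follow the CloudFormation schema rather than the Python keyword arguments:

# Equivalent to add_override("Properties.Format", "CSV").
cfn_dataset.add_property_override("Format", "CSV")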
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
  - policy (Optional[RemovalPolicy]) – The removal policy to apply.
  - apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true
  - default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- See:
- Return type:
None
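A typical call, as a sketch, retains the dataset definition when the stack is destroyed; RemovalPolicy is imported from aws_cdk:

from aws_cdk import RemovalPolicy

# Keep the dataset definition in the account even after stack deletion.
cfn_dataset.apply_removal_policy(RemovalPolicy.RETAIN)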
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
  - attribute_name (str) – The name of the attribute.
  - type_hint (Optional[ResolutionTypeHint]) – The type hint for resolving the attribute.
- Return type:
Reference
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
  - key (str) – The metadata key.
- See:
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
  - inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
  - new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
  - target (CfnResource) – The dependency to remove.
- Return type:
None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
  - target (CfnResource) – The dependency to replace.
  - new_target (CfnResource) – The new dependency to add.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::DataBrew::Dataset'
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- format
The file format of a dataset that is created from an Amazon S3 file or folder.
- format_options
A set of options that define how DataBrew interprets the data in the dataset.
- input
Information on how DataBrew can find the dataset, in either the AWS Glue Data Catalog or Amazon S3.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- name
The unique name of the dataset.
- node
The tree node.
- path_options
A set of options that defines how DataBrew interprets an Amazon S3 path of the dataset.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
Tag Manager which manages the tags for this resource.
- tags_raw
Metadata tags that have been applied to the dataset.
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
  - x (Any) – The construct to check.
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
  - x (Any) – The object to check.
- Return type:
bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and to use this type-testing method instead.
- Parameters:
  - x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Construct.
CsvOptionsProperty
- class CfnDataset.CsvOptionsProperty(*, delimiter=None, header_row=None)
Bases:
object
Represents a set of options that define how DataBrew will read a comma-separated value (CSV) file when creating a dataset from that file.
- Parameters:
  - delimiter (Optional[str]) – A single character that specifies the delimiter being used in the CSV file.
  - header_row (Union[bool, IResolvable, None]) – A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

csv_options_property = databrew.CfnDataset.CsvOptionsProperty(
    delimiter="delimiter",
    header_row=False
)
Attributes
- delimiter
A single character that specifies the delimiter being used in the CSV file.
- header_row
A variable that specifies whether the first row in the file is parsed as the header.
If this value is false, column names are auto-generated.
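As a concrete sketch of these options, a tab-delimited file whose first row carries column names could be described as follows; the tab delimiter is an illustrative choice:

from aws_cdk import aws_databrew as databrew

# A tab-delimited file whose first row is parsed as the header.
tsv_options = databrew.CfnDataset.CsvOptionsProperty(
    delimiter="\t",   # single-character delimiter; here, a tab
    header_row=True
)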
DataCatalogInputDefinitionProperty
- class CfnDataset.DataCatalogInputDefinitionProperty(*, catalog_id=None, database_name=None, table_name=None, temp_directory=None)
Bases:
object
Represents how metadata stored in the AWS Glue Data Catalog is defined in a DataBrew dataset.
- Parameters:
  - catalog_id (Optional[str]) – The unique identifier of the AWS account that holds the Data Catalog that stores the data.
  - database_name (Optional[str]) – The name of a database in the Data Catalog.
  - table_name (Optional[str]) – The name of a database table in the Data Catalog. This table corresponds to a DataBrew dataset.
  - temp_directory (Union[IResolvable, S3LocationProperty, Dict[str, Any], None]) – An Amazon location that AWS Glue Data Catalog can use as a temporary directory.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

data_catalog_input_definition_property = databrew.CfnDataset.DataCatalogInputDefinitionProperty(
    catalog_id="catalogId",
    database_name="databaseName",
    table_name="tableName",
    temp_directory=databrew.CfnDataset.S3LocationProperty(
        bucket="bucket",
        # the properties below are optional
        key="key"
    )
)
Attributes
- catalog_id
The unique identifier of the AWS account that holds the Data Catalog that stores the data.
- database_name
The name of a database in the Data Catalog.
- table_name
The name of a database table in the Data Catalog.
This table corresponds to a DataBrew dataset.
- temp_directory
An Amazon location that AWS Glue Data Catalog can use as a temporary directory.
DatabaseInputDefinitionProperty
- class CfnDataset.DatabaseInputDefinitionProperty(*, glue_connection_name, database_table_name=None, query_string=None, temp_directory=None)
Bases:
object
Connection information for dataset input files stored in a database.
- Parameters:
  - glue_connection_name (str) – The AWS Glue Connection that stores the connection information for the target database.
  - database_table_name (Optional[str]) – The table within the target database.
  - query_string (Optional[str]) – Custom SQL to run against the provided AWS Glue connection. This SQL will be used as the input for DataBrew projects and jobs.
  - temp_directory (Union[IResolvable, S3LocationProperty, Dict[str, Any], None]) – An Amazon location that AWS Glue Data Catalog can use as a temporary directory.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

database_input_definition_property = databrew.CfnDataset.DatabaseInputDefinitionProperty(
    glue_connection_name="glueConnectionName",
    # the properties below are optional
    database_table_name="databaseTableName",
    query_string="queryString",
    temp_directory=databrew.CfnDataset.S3LocationProperty(
        bucket="bucket",
        # the properties below are optional
        key="key"
    )
)
Attributes
- database_table_name
The table within the target database.
- glue_connection_name
The AWS Glue Connection that stores the connection information for the target database.
- query_string
Custom SQL to run against the provided AWS Glue connection.
This SQL will be used as the input for DataBrew projects and jobs.
- temp_directory
An Amazon location that AWS Glue Data Catalog can use as a temporary directory.
DatasetParameterProperty
- class CfnDataset.DatasetParameterProperty(*, name, type, create_column=None, datetime_options=None, filter=None)
Bases:
object
Represents a dataset parameter that defines the type and conditions for a parameter in the Amazon S3 path of the dataset.
- Parameters:
  - name (str) – The name of the parameter that is used in the dataset’s Amazon S3 path.
  - type (str) – The type of the dataset parameter; can be one of ‘String’, ‘Number’ or ‘Datetime’.
  - create_column (Union[bool, IResolvable, None]) – Optional boolean value that defines whether the captured value of this parameter should be loaded as an additional column in the dataset.
  - datetime_options (Union[IResolvable, DatetimeOptionsProperty, Dict[str, Any], None]) – Additional parameter options such as a format and a timezone. Required for datetime parameters.
  - filter (Union[IResolvable, FilterExpressionProperty, Dict[str, Any], None]) – The optional filter expression structure to apply additional matching criteria to the parameter.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

dataset_parameter_property = databrew.CfnDataset.DatasetParameterProperty(
    name="name",
    type="type",
    # the properties below are optional
    create_column=False,
    datetime_options=databrew.CfnDataset.DatetimeOptionsProperty(
        format="format",
        # the properties below are optional
        locale_code="localeCode",
        timezone_offset="timezoneOffset"
    ),
    filter=databrew.CfnDataset.FilterExpressionProperty(
        expression="expression",
        values_map=[databrew.CfnDataset.FilterValueProperty(
            value="value",
            value_reference="valueReference"
        )]
    )
)
Attributes
- create_column
Optional boolean value that defines whether the captured value of this parameter should be loaded as an additional column in the dataset.
- datetime_options
Additional parameter options such as a format and a timezone.
Required for datetime parameters.
- filter
The optional filter expression structure to apply additional matching criteria to the parameter.
- name
The name of the parameter that is used in the dataset’s Amazon S3 path.
- type
The type of the dataset parameter; can be one of ‘String’, ‘Number’ or ‘Datetime’.
DatetimeOptionsProperty
- class CfnDataset.DatetimeOptionsProperty(*, format, locale_code=None, timezone_offset=None)
Bases:
object
Represents additional options for correct interpretation of datetime parameters used in the Amazon S3 path of a dataset.
- Parameters:
  - format (str) – Required option that defines the datetime format used for a date parameter in the Amazon S3 path. Should use only supported datetime specifiers and separation characters; all literal a-z or A-Z characters should be escaped with single quotes, e.g. “MM.dd.yyyy-‘at’-HH:mm”.
  - locale_code (Optional[str]) – Optional value for a non-US locale code, needed for correct interpretation of some date formats.
  - timezone_offset (Optional[str]) – Optional value for a timezone offset of the datetime parameter value in the Amazon S3 path. Shouldn’t be used if the Format for this parameter includes timezone fields. If no offset is specified, UTC is assumed.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

datetime_options_property = databrew.CfnDataset.DatetimeOptionsProperty(
    format="format",
    # the properties below are optional
    locale_code="localeCode",
    timezone_offset="timezoneOffset"
)
Attributes
- format
Required option that defines the datetime format used for a date parameter in the Amazon S3 path.
Should use only supported datetime specifiers and separation characters; all literal a-z or A-Z characters should be escaped with single quotes, e.g. “MM.dd.yyyy-‘at’-HH:mm”.
- locale_code
Optional value for a non-US locale code, needed for correct interpretation of some date formats.
- timezone_offset
Optional value for a timezone offset of the datetime parameter value in the Amazon S3 path.
Shouldn’t be used if the Format for this parameter includes timezone fields. If no offset is specified, UTC is assumed.
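For example, a sketch of options for path dates written like “2023-07-01-at-13:45” and interpreted at UTC+01:00; the format string and offset are illustrative:

from aws_cdk import aws_databrew as databrew

# The literal word "at" is escaped with single quotes, per the format rules above.
datetime_options = databrew.CfnDataset.DatetimeOptionsProperty(
    format="yyyy-MM-dd-'at'-HH:mm",
    timezone_offset="+01:00"  # illustrative; omit if the format itself carries a timezone
)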
ExcelOptionsProperty
- class CfnDataset.ExcelOptionsProperty(*, header_row=None, sheet_indexes=None, sheet_names=None)
Bases:
object
Represents a set of options that define how DataBrew will interpret a Microsoft Excel file when creating a dataset from that file.
- Parameters:
  - header_row (Union[bool, IResolvable, None]) – A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.
  - sheet_indexes (Union[IResolvable, Sequence[Union[int, float]], None]) – One or more sheet numbers in the Excel file that will be included in the dataset.
  - sheet_names (Optional[Sequence[str]]) – One or more named sheets in the Excel file that will be included in the dataset.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

excel_options_property = databrew.CfnDataset.ExcelOptionsProperty(
    header_row=False,
    sheet_indexes=[123],
    sheet_names=["sheetNames"]
)
Attributes
- header_row
A variable that specifies whether the first row in the file is parsed as the header.
If this value is false, column names are auto-generated.
- sheet_indexes
One or more sheet numbers in the Excel file that will be included in the dataset.
- sheet_names
One or more named sheets in the Excel file that will be included in the dataset.
FilesLimitProperty
- class CfnDataset.FilesLimitProperty(*, max_files, order=None, ordered_by=None)
Bases:
object
Represents a limit imposed on the number of Amazon S3 files that should be selected for a dataset from a connected Amazon S3 path.
- Parameters:
  - max_files (Union[int, float]) – The number of Amazon S3 files to select.
  - order (Optional[str]) – A criterion to use for sorting Amazon S3 files before their selection. By default uses DESCENDING order, i.e. most recent files are selected first. Another possible value is ASCENDING.
  - ordered_by (Optional[str]) – A criterion to use for sorting Amazon S3 files before their selection. By default uses LAST_MODIFIED_DATE as a sorting criterion. Currently it’s the only allowed value.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

files_limit_property = databrew.CfnDataset.FilesLimitProperty(
    max_files=123,
    # the properties below are optional
    order="order",
    ordered_by="orderedBy"
)
Attributes
- max_files
The number of Amazon S3 files to select.
- order
A criterion to use for sorting Amazon S3 files before their selection.
By default uses DESCENDING order, i.e. most recent files are selected first. Another possible value is ASCENDING.
- ordered_by
A criterion to use for sorting Amazon S3 files before their selection.
By default uses LAST_MODIFIED_DATE as a sorting criterion. Currently it’s the only allowed value.
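For example, a sketch that selects only the ten most recently modified files; the defaults are spelled out explicitly here for clarity:

from aws_cdk import aws_databrew as databrew

# DESCENDING and LAST_MODIFIED_DATE are the documented defaults.
files_limit = databrew.CfnDataset.FilesLimitProperty(
    max_files=10,
    order="DESCENDING",
    ordered_by="LAST_MODIFIED_DATE"
)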
FilterExpressionProperty
- class CfnDataset.FilterExpressionProperty(*, expression, values_map)
Bases:
object
Represents a structure for defining parameter conditions.
- Parameters:
  - expression (str) – The expression which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, “(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)”. Substitution variables should start with the ‘:’ symbol.
  - values_map (Union[IResolvable, Sequence[Union[IResolvable, FilterValueProperty, Dict[str, Any]]]]) – The map of substitution variable names to their values used in this filter expression.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

filter_expression_property = databrew.CfnDataset.FilterExpressionProperty(
    expression="expression",
    values_map=[databrew.CfnDataset.FilterValueProperty(
        value="value",
        value_reference="valueReference"
    )]
)
Attributes
- expression
The expression which includes condition names followed by substitution variables, possibly grouped and combined with other conditions.
For example, “(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)”. Substitution variables should start with the ‘:’ symbol.
- values_map
The map of substitution variable names to their values used in this filter expression.
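To make the pairing between the expression and its values_map concrete, here is a sketch that matches Amazon S3 keys beginning with a given prefix; the prefix value, and the assumption that the value_reference carries the leading ‘:’, are illustrative:

from aws_cdk import aws_databrew as databrew

# ":prefix1" in the expression is bound by the values_map entry below.
filter_expression = databrew.CfnDataset.FilterExpressionProperty(
    expression="starts_with :prefix1",
    values_map=[databrew.CfnDataset.FilterValueProperty(
        value_reference=":prefix1",  # assumed to include the ':' prefix
        value="raw/2023/"            # hypothetical key prefix
    )]
)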
FilterValueProperty
- class CfnDataset.FilterValueProperty(*, value, value_reference)
Bases:
object
Represents a single entry in the ValuesMap of a FilterExpression.
A FilterValue associates the name of a substitution variable in an expression to its value.
- Parameters:
  - value (str) – The value to be associated with the substitution variable.
  - value_reference (str) – The substitution variable reference.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

filter_value_property = databrew.CfnDataset.FilterValueProperty(
    value="value",
    value_reference="valueReference"
)
Attributes
- value
The value to be associated with the substitution variable.
- value_reference
The substitution variable reference.
FormatOptionsProperty
- class CfnDataset.FormatOptionsProperty(*, csv=None, excel=None, json=None)
Bases:
object
Represents a set of options that define the structure of either comma-separated value (CSV), Excel, or JSON input.
- Parameters:
  - csv (Union[IResolvable, CsvOptionsProperty, Dict[str, Any], None]) – Options that define how CSV input is to be interpreted by DataBrew.
  - excel (Union[IResolvable, ExcelOptionsProperty, Dict[str, Any], None]) – Options that define how Excel input is to be interpreted by DataBrew.
  - json (Union[IResolvable, JsonOptionsProperty, Dict[str, Any], None]) – Options that define how JSON input is to be interpreted by DataBrew.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

format_options_property = databrew.CfnDataset.FormatOptionsProperty(
    csv=databrew.CfnDataset.CsvOptionsProperty(
        delimiter="delimiter",
        header_row=False
    ),
    excel=databrew.CfnDataset.ExcelOptionsProperty(
        header_row=False,
        sheet_indexes=[123],
        sheet_names=["sheetNames"]
    ),
    json=databrew.CfnDataset.JsonOptionsProperty(
        multi_line=False
    )
)
Attributes
- csv
Options that define how CSV input is to be interpreted by DataBrew.
- excel
Options that define how Excel input is to be interpreted by DataBrew.
- json
Options that define how JSON input is to be interpreted by DataBrew.
InputProperty
- class CfnDataset.InputProperty(*, database_input_definition=None, data_catalog_input_definition=None, metadata=None, s3_input_definition=None)
Bases:
object
Represents information on how DataBrew can find data, in either the AWS Glue Data Catalog or Amazon S3.
- Parameters:
  - database_input_definition (Union[IResolvable, DatabaseInputDefinitionProperty, Dict[str, Any], None]) – Connection information for dataset input files stored in a database.
  - data_catalog_input_definition (Union[IResolvable, DataCatalogInputDefinitionProperty, Dict[str, Any], None]) – The AWS Glue Data Catalog parameters for the data.
  - metadata (Union[IResolvable, MetadataProperty, Dict[str, Any], None]) – Contains additional resource information needed for specific datasets.
  - s3_input_definition (Union[IResolvable, S3LocationProperty, Dict[str, Any], None]) – The Amazon S3 location where the data is stored.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

input_property = databrew.CfnDataset.InputProperty(
    database_input_definition=databrew.CfnDataset.DatabaseInputDefinitionProperty(
        glue_connection_name="glueConnectionName",
        # the properties below are optional
        database_table_name="databaseTableName",
        query_string="queryString",
        temp_directory=databrew.CfnDataset.S3LocationProperty(
            bucket="bucket",
            # the properties below are optional
            key="key"
        )
    ),
    data_catalog_input_definition=databrew.CfnDataset.DataCatalogInputDefinitionProperty(
        catalog_id="catalogId",
        database_name="databaseName",
        table_name="tableName",
        temp_directory=databrew.CfnDataset.S3LocationProperty(
            bucket="bucket",
            # the properties below are optional
            key="key"
        )
    ),
    metadata=databrew.CfnDataset.MetadataProperty(
        source_arn="sourceArn"
    ),
    s3_input_definition=databrew.CfnDataset.S3LocationProperty(
        bucket="bucket",
        # the properties below are optional
        key="key"
    )
)
Attributes
- data_catalog_input_definition
The AWS Glue Data Catalog parameters for the data.
- database_input_definition
Connection information for dataset input files stored in a database.
- metadata
Contains additional resource information needed for specific datasets.
- s3_input_definition
The Amazon S3 location where the data is stored.
JsonOptionsProperty
- class CfnDataset.JsonOptionsProperty(*, multi_line=None)
Bases:
object
Represents the JSON-specific options that define how input is to be interpreted by AWS Glue DataBrew.
- Parameters:
  - multi_line (Union[bool, IResolvable, None]) – A value that specifies whether JSON input contains embedded new line characters.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

json_options_property = databrew.CfnDataset.JsonOptionsProperty(
    multi_line=False
)
Attributes
- multi_line
A value that specifies whether JSON input contains embedded new line characters.
MetadataProperty
- class CfnDataset.MetadataProperty(*, source_arn=None)
Bases:
object
Contains additional resource information needed for specific datasets.
- Parameters:
  - source_arn (Optional[str]) – The Amazon Resource Name (ARN) associated with the dataset. Currently, DataBrew only supports ARNs from Amazon AppFlow.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

metadata_property = databrew.CfnDataset.MetadataProperty(
    source_arn="sourceArn"
)
Attributes
- source_arn
The Amazon Resource Name (ARN) associated with the dataset.
Currently, DataBrew only supports ARNs from Amazon AppFlow.
PathOptionsProperty
- class CfnDataset.PathOptionsProperty(*, files_limit=None, last_modified_date_condition=None, parameters=None)
Bases:
object
Represents a set of options that define how DataBrew selects files for a given Amazon S3 path in a dataset.
- Parameters:
  - files_limit (Union[IResolvable, FilesLimitProperty, Dict[str, Any], None]) – If provided, this structure imposes a limit on the number of files that should be selected.
  - last_modified_date_condition (Union[IResolvable, FilterExpressionProperty, Dict[str, Any], None]) – If provided, this structure defines a date range for matching Amazon S3 objects based on their LastModifiedDate attribute in Amazon S3.
  - parameters (Union[IResolvable, Sequence[Union[IResolvable, PathParameterProperty, Dict[str, Any]]], None]) – A structure that maps names of parameters used in the Amazon S3 path of a dataset to their definitions.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

path_options_property = databrew.CfnDataset.PathOptionsProperty(
    files_limit=databrew.CfnDataset.FilesLimitProperty(
        max_files=123,
        # the properties below are optional
        order="order",
        ordered_by="orderedBy"
    ),
    last_modified_date_condition=databrew.CfnDataset.FilterExpressionProperty(
        expression="expression",
        values_map=[databrew.CfnDataset.FilterValueProperty(
            value="value",
            value_reference="valueReference"
        )]
    ),
    parameters=[databrew.CfnDataset.PathParameterProperty(
        dataset_parameter=databrew.CfnDataset.DatasetParameterProperty(
            name="name",
            type="type",
            # the properties below are optional
            create_column=False,
            datetime_options=databrew.CfnDataset.DatetimeOptionsProperty(
                format="format",
                # the properties below are optional
                locale_code="localeCode",
                timezone_offset="timezoneOffset"
            ),
            filter=databrew.CfnDataset.FilterExpressionProperty(
                expression="expression",
                values_map=[databrew.CfnDataset.FilterValueProperty(
                    value="value",
                    value_reference="valueReference"
                )]
            )
        ),
        path_parameter_name="pathParameterName"
    )]
)
Attributes
- files_limit
If provided, this structure imposes a limit on the number of files that should be selected.
- last_modified_date_condition
If provided, this structure defines a date range for matching Amazon S3 objects based on their LastModifiedDate attribute in Amazon S3.
- parameters
A structure that maps names of parameters used in the Amazon S3 path of a dataset to their definitions.
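Putting these pieces together, a sketch of path options for a dataset whose Amazon S3 path contains a {date} placeholder bound to a Datetime parameter, capped at the five most recent files; all concrete values are illustrative:

from aws_cdk import aws_databrew as databrew

# Binds a "{date}" placeholder in the dataset's S3 path to a Datetime
# parameter and limits selection to the 5 most recent files.
path_options = databrew.CfnDataset.PathOptionsProperty(
    files_limit=databrew.CfnDataset.FilesLimitProperty(max_files=5),
    parameters=[databrew.CfnDataset.PathParameterProperty(
        path_parameter_name="date",
        dataset_parameter=databrew.CfnDataset.DatasetParameterProperty(
            name="date",
            type="Datetime",
            datetime_options=databrew.CfnDataset.DatetimeOptionsProperty(
                format="yyyy-MM-dd"  # illustrative date format
            )
        )
    )]
)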
PathParameterProperty
- class CfnDataset.PathParameterProperty(*, dataset_parameter, path_parameter_name)
Bases:
object
Represents a single entry in the path parameters of a dataset.
Each PathParameter consists of a name and a parameter definition.
- Parameters:
  - dataset_parameter (Union[IResolvable, DatasetParameterProperty, Dict[str, Any]]) – The path parameter definition.
  - path_parameter_name (str) – The name of the path parameter.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

path_parameter_property = databrew.CfnDataset.PathParameterProperty(
    dataset_parameter=databrew.CfnDataset.DatasetParameterProperty(
        name="name",
        type="type",
        # the properties below are optional
        create_column=False,
        datetime_options=databrew.CfnDataset.DatetimeOptionsProperty(
            format="format",
            # the properties below are optional
            locale_code="localeCode",
            timezone_offset="timezoneOffset"
        ),
        filter=databrew.CfnDataset.FilterExpressionProperty(
            expression="expression",
            values_map=[databrew.CfnDataset.FilterValueProperty(
                value="value",
                value_reference="valueReference"
            )]
        )
    ),
    path_parameter_name="pathParameterName"
)
Attributes
- dataset_parameter
The path parameter definition.
- path_parameter_name
The name of the path parameter.
S3LocationProperty
- class CfnDataset.S3LocationProperty(*, bucket, key=None)
Bases:
object
Represents an Amazon S3 location (bucket name, bucket owner, and object key) where DataBrew can read input data, or write output from a job.
- Parameters:
  - bucket (str) – The Amazon S3 bucket name.
  - key (Optional[str]) – The unique name of the object in the bucket.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_databrew as databrew

s3_location_property = databrew.CfnDataset.S3LocationProperty(
    bucket="bucket",
    # the properties below are optional
    key="key"
)
Attributes
- bucket
The Amazon S3 bucket name.
- key
The unique name of the object in the bucket.