CfnDataRepositoryAssociation
- class aws_cdk.aws_fsx.CfnDataRepositoryAssociation(scope, id, *, data_repository_path, file_system_id, file_system_path, batch_import_meta_data_on_create=None, imported_file_chunk_size=None, s3=None, tags=None)
Bases:
CfnResource
Creates an Amazon FSx for Lustre data repository association (DRA).
A data repository association is a link between a directory on the file system and an Amazon S3 bucket or prefix. You can have a maximum of 8 data repository associations on a file system. Data repository associations are supported on all FSx for Lustre 2.12 and newer file systems, excluding the scratch_1 deployment type.
Each data repository association must have a unique Amazon FSx file system directory and a unique S3 bucket or prefix associated with it. You can configure a data repository association for automatic import only, for automatic export only, or for both. To learn more about linking a data repository to your file system, see Linking your file system to an S3 bucket.
- See:
- CloudformationResource:
AWS::FSx::DataRepositoryAssociation
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_fsx as fsx

cfn_data_repository_association = fsx.CfnDataRepositoryAssociation(self, "MyCfnDataRepositoryAssociation",
    data_repository_path="dataRepositoryPath",
    file_system_id="fileSystemId",
    file_system_path="fileSystemPath",

    # the properties below are optional
    batch_import_meta_data_on_create=False,
    imported_file_chunk_size=123,
    s3=fsx.CfnDataRepositoryAssociation.S3Property(
        auto_export_policy=fsx.CfnDataRepositoryAssociation.AutoExportPolicyProperty(
            events=["events"]
        ),
        auto_import_policy=fsx.CfnDataRepositoryAssociation.AutoImportPolicyProperty(
            events=["events"]
        )
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )]
)
- Parameters:
scope (Construct) – Scope in which this resource is defined.
id (str) – Construct identifier for this resource (unique in its scope).
data_repository_path (str) – The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://bucket-name/prefix/. This path specifies where in the S3 data repository files will be imported from or exported to.
file_system_id (str) – The ID of the file system on which the data repository association is configured.
file_system_path (str) – A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2. This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory. Note: If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify “/” as the file system path for the first data repository associated with a file system.
batch_import_meta_data_on_create (Union[bool, IResolvable, None]) – A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to true.
imported_file_chunk_size (Union[int, float, None]) – For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
s3 (Union[IResolvable, S3Property, Dict[str, Any], None]) – The configuration for an Amazon S3 data repository linked to an Amazon FSx Lustre file system with a data repository association. The configuration defines which file events (new, changed, or deleted files or directories) are automatically imported from the linked data repository to the file system or automatically exported from the file system to the data repository.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – A list of Tag values, with a maximum of 50 elements.
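For orientation, a more concrete sketch than the generated snippet above: the construct ID, file system ID, and bucket name are invented placeholders, and the association enables both automatic import and export.
# Illustrative sketch only - replace the placeholder file system ID and bucket.
from aws_cdk import aws_fsx as fsx

dra = fsx.CfnDataRepositoryAssociation(self, "ExampleAssociation",
    file_system_id="fs-0123456789abcdef0",                     # placeholder FSx for Lustre file system ID
    file_system_path="/ns1/",                                  # directory on the file system
    data_repository_path="s3://amzn-s3-demo-bucket/prefix/",   # placeholder S3 prefix
    batch_import_meta_data_on_create=True,                     # run an initial metadata import task
    imported_file_chunk_size=1024,                             # chunk size in MiB
    s3=fsx.CfnDataRepositoryAssociation.S3Property(
        auto_import_policy=fsx.CfnDataRepositoryAssociation.AutoImportPolicyProperty(
            events=["NEW", "CHANGED", "DELETED"]
        ),
        auto_export_policy=fsx.CfnDataRepositoryAssociation.AutoExportPolicyProperty(
            events=["NEW", "CHANGED", "DELETED"]
        )
    )
)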
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
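A minimal usage sketch; "other_cfn_resource" is an assumed name for another L1 (Cfn*) resource defined in the same app.
# Sketch: ensure the other resource is provisioned before this association.
cfn_data_repository_association.add_dependency(other_cfn_resource)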
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
target (CfnResource) –
- Deprecated:
use addDependency
- Stability:
deprecated
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str) –
value (Any) –
- See:
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
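A small sketch of attaching resource metadata; the key and value are arbitrary examples.
# Sketch: add a metadata entry to the synthesized resource and read it back
# later with get_metadata (documented below).
cfn_data_repository_association.add_metadata("Purpose", "lustre-s3-link")
purpose = cfn_data_repository_association.get_metadata("Purpose")  # "lustre-s3-link"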
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides shown in the following example:
"Properties": { "GlobalSecondaryIndexes": [ { "Projection": { "NonKeyAttributes": [ "myattribute" ] ... } ... }, { "ProjectionType": "INCLUDE" ... }, ] ... }
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (str) – The path of the property; you can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
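A sketch of overriding one of this resource's own properties; ImportedFileChunkSize is the CloudFormation-cased name of imported_file_chunk_size.
# Sketch: force a specific chunk size in the synthesized template.
cfn_data_repository_association.add_property_override("ImportedFileChunkSize", 2048)
# Equivalent long form:
cfn_data_repository_association.add_override("Properties.ImportedFileChunkSize", 2048)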
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
policy (Optional[RemovalPolicy]) –
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource’s “UpdateReplacePolicy”. Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource’s documentation.
- See:
- Return type:
None
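A usage sketch: retain the association's CloudFormation definition when it is removed from the stack.
from aws_cdk import RemovalPolicy

# Sketch: set the deletion policy (and, by default, the update replace policy) to Retain.
cfn_data_repository_association.apply_removal_policy(RemovalPolicy.RETAIN)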
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
attribute_name (str) – The name of the attribute.
type_hint (Optional[ResolutionTypeHint]) –
- Return type:
Reference
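A sketch comparing the generated accessor with get_att; Token.as_string converts the returned reference to a string token.
from aws_cdk import Token

# Sketch: both lines resolve to the same AssociationId attribute at deploy time.
assoc_id = cfn_data_repository_association.attr_association_id
same_id = Token.as_string(cfn_data_repository_association.get_att("AssociationId"))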
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (str) –
- See:
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
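A usage sketch; the logical ID shown is an arbitrary example.
# Sketch: pin the logical ID so refactoring the construct tree does not replace the resource.
cfn_data_repository_association.override_logical_id("LustreDataRepositoryAssociation")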
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
target (CfnResource) – The dependency to replace.
new_target (CfnResource) – The new dependency to add.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::FSx::DataRepositoryAssociation'
- attr_association_id
Returns the data repository association’s system-generated Association ID.
Example:
dra-abcdef0123456789d
- CloudformationAttribute:
AssociationId
- attr_resource_arn
Returns the data repository association’s Amazon Resource Name (ARN).
Example:
arn:aws:fsx:us-east-1:111122223333:association/fs-abc012345def6789a/dra-abcdef0123456789b
- CloudformationAttribute:
ResourceARN
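A sketch of surfacing these attributes as stack outputs; the output IDs are arbitrary.
from aws_cdk import CfnOutput

# Sketch: export the association ID and ARN from the stack.
CfnOutput(self, "DraAssociationId", value=cfn_data_repository_association.attr_association_id)
CfnOutput(self, "DraResourceArn", value=cfn_data_repository_association.attr_resource_arn)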
- batch_import_meta_data_on_create
A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created.
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- creation_stack
- Returns:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- data_repository_path
The path to the Amazon S3 data repository that will be linked to the file system.
- file_system_id
The ID of the file system on which the data repository association is configured.
- file_system_path
A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath.
- imported_file_chunk_size
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- node
The tree node.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- s3
The configuration for an Amazon S3 data repository linked to an Amazon FSx Lustre file system with a data repository association.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
Tag Manager which manages the tags for this resource.
- tags_raw
A list of Tag values, with a maximum of 50 elements.
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
x (Any) –
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
x (Any) –
- Return type:
bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof and to use this type-testing method instead.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Construct.
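A sketch of the three duck-typed checks applied to an instance of this class; the fsx import and the instance name are assumed from the examples above.
from aws_cdk import aws_fsx as fsx

# Sketch: all three checks are inherited classmethods and accept any object.
fsx.CfnDataRepositoryAssociation.is_construct(cfn_data_repository_association)     # True
fsx.CfnDataRepositoryAssociation.is_cfn_resource(cfn_data_repository_association)  # True
fsx.CfnDataRepositoryAssociation.is_cfn_element("not a construct")                  # False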
AutoExportPolicyProperty
- class CfnDataRepositoryAssociation.AutoExportPolicyProperty(*, events)
Bases:
object
Describes a data repository association’s automatic export policy.
The AutoExportPolicy defines the types of updated objects on the file system that will be automatically exported to the data repository. As you create, modify, or delete files, Amazon FSx for Lustre automatically exports the defined changes asynchronously once your application finishes modifying the file.
The AutoExportPolicy is only supported on Amazon FSx for Lustre file systems with a data repository association.
- Parameters:
events (Sequence[str]) – The AutoExportPolicy can have the following event values: NEW - New files and directories are automatically exported to the data repository as they are added to the file system. CHANGED - Changes to files and directories on the file system are automatically exported to the data repository. DELETED - Files and directories are automatically deleted on the data repository when they are deleted on the file system. You can define any combination of event types for your AutoExportPolicy.
.- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_fsx as fsx

auto_export_policy_property = fsx.CfnDataRepositoryAssociation.AutoExportPolicyProperty(
    events=["events"]
)
Attributes
- events
The AutoExportPolicy can have the following event values:
NEW - New files and directories are automatically exported to the data repository as they are added to the file system.
CHANGED - Changes to files and directories on the file system are automatically exported to the data repository.
DELETED - Files and directories are automatically deleted on the data repository when they are deleted on the file system.
You can define any combination of event types for your AutoExportPolicy.
- See:
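A concrete sketch using the documented event names instead of the generated placeholder value.
from aws_cdk import aws_fsx as fsx

# Sketch: export new, changed, and deleted objects back to the linked S3 prefix.
export_policy = fsx.CfnDataRepositoryAssociation.AutoExportPolicyProperty(
    events=["NEW", "CHANGED", "DELETED"]
)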
AutoImportPolicyProperty
- class CfnDataRepositoryAssociation.AutoImportPolicyProperty(*, events)
Bases:
object
Describes the data repository association’s automatic import policy.
The AutoImportPolicy defines how Amazon FSx keeps your file metadata and directory listings up to date by importing changes to your Amazon FSx for Lustre file system as you modify objects in a linked S3 bucket.
The AutoImportPolicy is only supported on Amazon FSx for Lustre file systems with a data repository association.
- Parameters:
events (Sequence[str]) – The AutoImportPolicy can have the following event values: NEW - Amazon FSx automatically imports metadata of files added to the linked S3 bucket that do not currently exist in the FSx file system. CHANGED - Amazon FSx automatically updates file metadata and invalidates existing file content on the file system as files change in the data repository. DELETED - Amazon FSx automatically deletes files on the file system as corresponding files are deleted in the data repository. You can define any combination of event types for your AutoImportPolicy.
.- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_fsx as fsx

auto_import_policy_property = fsx.CfnDataRepositoryAssociation.AutoImportPolicyProperty(
    events=["events"]
)
Attributes
- events
The AutoImportPolicy can have the following event values:
NEW - Amazon FSx automatically imports metadata of files added to the linked S3 bucket that do not currently exist in the FSx file system.
CHANGED - Amazon FSx automatically updates file metadata and invalidates existing file content on the file system as files change in the data repository.
DELETED - Amazon FSx automatically deletes files on the file system as corresponding files are deleted in the data repository.
You can define any combination of event types for your AutoImportPolicy.
- See:
S3Property
- class CfnDataRepositoryAssociation.S3Property(*, auto_export_policy=None, auto_import_policy=None)
Bases:
object
The configuration for an Amazon S3 data repository linked to an Amazon FSx Lustre file system with a data repository association.
The configuration defines which file events (new, changed, or deleted files or directories) are automatically imported from the linked data repository to the file system or automatically exported from the file system to the data repository.
- Parameters:
auto_export_policy (Union[IResolvable, AutoExportPolicyProperty, Dict[str, Any], None]) – Describes a data repository association’s automatic export policy. The AutoExportPolicy defines the types of updated objects on the file system that will be automatically exported to the data repository. As you create, modify, or delete files, Amazon FSx for Lustre automatically exports the defined changes asynchronously once your application finishes modifying the file. The AutoExportPolicy is only supported on Amazon FSx for Lustre file systems with a data repository association.
auto_import_policy (Union[IResolvable, AutoImportPolicyProperty, Dict[str, Any], None]) – Describes the data repository association’s automatic import policy. The AutoImportPolicy defines how Amazon FSx keeps your file metadata and directory listings up to date by importing changes to your Amazon FSx for Lustre file system as you modify objects in a linked S3 bucket. The AutoImportPolicy is only supported on Amazon FSx for Lustre file systems with a data repository association.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_fsx as fsx

s3_property = fsx.CfnDataRepositoryAssociation.S3Property(
    auto_export_policy=fsx.CfnDataRepositoryAssociation.AutoExportPolicyProperty(
        events=["events"]
    ),
    auto_import_policy=fsx.CfnDataRepositoryAssociation.AutoImportPolicyProperty(
        events=["events"]
    )
)
Attributes
- auto_export_policy
Describes a data repository association’s automatic export policy.
The AutoExportPolicy defines the types of updated objects on the file system that will be automatically exported to the data repository. As you create, modify, or delete files, Amazon FSx for Lustre automatically exports the defined changes asynchronously once your application finishes modifying the file.
The AutoExportPolicy is only supported on Amazon FSx for Lustre file systems with a data repository association.
- auto_import_policy
Describes the data repository association’s automatic import policy.
The AutoImportPolicy defines how Amazon FSx keeps your file metadata and directory listings up to date by importing changes to your Amazon FSx for Lustre file system as you modify objects in a linked S3 bucket.
The AutoImportPolicy is only supported on Amazon FSx for Lustre file systems with a data repository association.
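A combined sketch using the documented event names; a value like this would be passed as the s3 parameter of CfnDataRepositoryAssociation.
from aws_cdk import aws_fsx as fsx

# Sketch: import every kind of S3 change, but only export new and changed files.
s3_config = fsx.CfnDataRepositoryAssociation.S3Property(
    auto_import_policy=fsx.CfnDataRepositoryAssociation.AutoImportPolicyProperty(
        events=["NEW", "CHANGED", "DELETED"]
    ),
    auto_export_policy=fsx.CfnDataRepositoryAssociation.AutoExportPolicyProperty(
        events=["NEW", "CHANGED"]
    )
)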