

# Automate backups with Amazon Data Lifecycle Manager
<a name="snapshot-lifecycle"></a>

You can use Amazon Data Lifecycle Manager to automate the creation, retention, and deletion of EBS snapshots and EBS-backed AMIs. Automating snapshot and AMI management helps you:
+ Protect valuable data by enforcing a regular backup schedule.
+ Create standardized AMIs that can be refreshed at regular intervals.
+ Retain backups as required by auditors or internal compliance.
+ Reduce storage costs by deleting outdated backups.
+ Create disaster recovery backup policies that back up data to isolated Regions or accounts.

When combined with the monitoring features of Amazon EventBridge and AWS CloudTrail, Amazon Data Lifecycle Manager provides a complete backup solution for Amazon EC2 instances and individual EBS volumes at no additional cost.

**Important**  
Amazon Data Lifecycle Manager can't manage snapshots or AMIs created by any other means.
Amazon Data Lifecycle Manager can't automate the creation, retention, and deletion of instance store-backed AMIs.

Amazon Data Lifecycle Manager is assessed as a service capability of Amazon Elastic Block Store (Amazon EBS). Any [AWS Services in Scope by Compliance Program](https://aws.amazon.com/compliance/services-in-scope/) (such as FedRAMP, HIPAA BAA, or SOC) that lists Amazon EBS also applies to Amazon Data Lifecycle Manager.

**Topics**
+ [Quotas](#dlm-quotas)
+ [How it works](dlm-elements.md)
+ [Default vs custom policies](policy-differences.md)
+ [Create default policies](default-policies.md)
+ [Create custom policy for snapshots](snapshot-ami-policy.md)
+ [Create custom policy for AMIs](ami-policy.md)
+ [Automate cross-account snapshot copies](event-policy.md)
+ [Modify policies](modify.md)
+ [Delete policies](delete.md)
+ [Control access](dlm-prerequisites.md)
+ [Monitor policies](dlm-monitor-lifecycle.md)
+ [Service endpoints](dlm-service-endpoints.md)
+ [Interface VPC endpoints](dlm-vpc-endpoints.md)
+ [Troubleshoot](dlm-troubleshooting.md)

## Quotas
<a name="dlm-quotas"></a>

Your AWS account has the following quotas related to Amazon Data Lifecycle Manager:


| Description | Quota | 
| --- | --- | 
| Custom lifecycle policies per Region | 100 | 
| Default policies for EBS snapshots per Region | 1 | 
| Default policies for EBS-backed AMIs per Region | 1 | 
| Tags per resource | 45 | 

# How Amazon Data Lifecycle Manager works
<a name="dlm-elements"></a>

The following are the key elements of Amazon Data Lifecycle Manager.

**Topics**
+ [Policies](#dlm-policies)
+ [Policy schedules](#dlm-lifecycle-schedule)
+ [Target resource tags](#dlm-tagging-volumes)
+ [Snapshots](#dlm-ebs-snapshots)
+ [EBS-backed AMIs](#dlm-ebs-amis)
+ [Amazon Data Lifecycle Manager tags](#dlm-tagging-snapshots)

## Policies
<a name="dlm-policies"></a>

With Amazon Data Lifecycle Manager, you create policies to define your backup creation and retention requirements. These policies typically specify the following:
+ **Policy type** — Defines the type of backup resources that the policy manages (snapshots or EBS-backed AMIs).
+ **Target resources** — Defines the type of resources that are targeted by the policy (instances or EBS volumes).
+ **Creation frequency** — Defines how often the policy runs and creates snapshots or AMIs.
+ **Retention threshold** — Defines how long the policy retains snapshots or AMIs after creation.
+ **Additional actions** — Defines additional actions that the policy should perform, such as cross-Region copying, archiving, or resource tagging.

Amazon Data Lifecycle Manager offers default policies and custom policies.

**Default policies**  
Default policies back up all volumes and instances in a Region that do not have recent backups. You can optionally exclude volumes and instances by specifying exclusion parameters.

Amazon Data Lifecycle Manager supports the following default policies:
+ Default policy for EBS snapshots — Targets volumes and automates the creation, retention, and deletion of snapshots.
+ Default policy for EBS-backed AMIs — Targets instances and automates the creation, retention, and deregistration of EBS-backed AMIs.

You can have only one default policy per resource type in each account and AWS Region.

**Custom policies**  
Custom policies target specific resources based on their assigned tags and support advanced features, such as fast snapshot restore, snapshot archiving, cross-account copying, and pre and post scripts. A custom policy can include up to 4 schedules, where each schedule can have its own creation frequency, retention threshold, and advanced feature configuration.

Amazon Data Lifecycle Manager supports the following custom policies:
+ EBS snapshot policy — Targets volumes or instances and automates the creation, retention, and deletion of EBS snapshots.
+ EBS-backed AMI policy — Targets instances and automates the creation, retention, and deregistration of EBS-backed AMIs.
+ Cross-account copy event policy — Automates cross-Region copy actions for snapshots that are shared with you.

For more information, see [Amazon Data Lifecycle Manager default policies vs custom policies](policy-differences.md).

## Policy schedules (*custom policies only*)
<a name="dlm-lifecycle-schedule"></a>

Policy schedules define when the policy creates snapshots or AMIs. Policies can have up to four schedules: one mandatory schedule and up to three optional schedules.

Adding multiple schedules to a single policy lets you create snapshots or AMIs at different frequencies using the same policy. For example, you can create a single policy that creates daily, weekly, monthly, and yearly snapshots. This eliminates the need to manage multiple policies.

For each schedule, you can define the frequency, fast snapshot restore settings (snapshot lifecycle policies only), cross-Region copy rules, and tags. The tags that are assigned to a schedule are automatically assigned to the snapshots or AMIs that are created when the schedule is initiated. In addition, Amazon Data Lifecycle Manager automatically assigns a system-generated tag based on the schedule's frequency to each snapshot or AMI.

Each schedule is initiated individually based on its frequency. If multiple schedules are initiated at the same time, Amazon Data Lifecycle Manager creates only one snapshot or AMI and applies the retention settings of the schedule that has the highest retention period. The tags of all of the initiated schedules are applied to the snapshot or AMI.
+ (Snapshot lifecycle policies only) If more than one of the initiated schedules is enabled for fast snapshot restore, then the snapshot is enabled for fast snapshot restore in all of the Availability Zones specified across all of the initiated schedules. The highest retention setting among the initiated schedules is used for each Availability Zone.
+ If more than one of the initiated schedules is enabled for cross-Region copy, the snapshot or AMI is copied to all Regions specified across all of the initiated schedules. The highest retention period of the initiated schedules is applied.
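The collision rules above can be sketched as a small Python model. This is an illustration of the documented behavior, not the service implementation: one backup is created, the longest retention wins, and tags and copy Regions are combined across the initiated schedules.

```python
# Hypothetical model of how colliding schedules resolve (illustration only).

def resolve_collision(schedules):
    """Return the settings applied to the single snapshot or AMI that is
    created when several schedules are initiated at the same time."""
    return {
        # The schedule with the highest retention period wins.
        "retention_days": max(s["retention_days"] for s in schedules),
        # Tags from every initiated schedule are applied.
        "tags": {k: v for s in schedules for k, v in s.get("tags", {}).items()},
        # Cross-Region copy targets are combined across schedules.
        "copy_regions": sorted({r for s in schedules for r in s.get("copy_regions", [])}),
    }

daily = {"retention_days": 7, "tags": {"interval-daily": "true"}, "copy_regions": ["us-west-2"]}
weekly = {"retention_days": 30, "tags": {"interval-weekly": "true"}, "copy_regions": ["eu-west-1"]}

result = resolve_collision([daily, weekly])
# One backup, 30-day retention, both schedules' tags, both copy Regions.
```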

## Target resource tags (*custom policies only*)
<a name="dlm-tagging-volumes"></a>

Amazon Data Lifecycle Manager custom policies use resource tags to identify the resources to back up. When you create a snapshot or EBS-backed AMI policy, you can specify multiple target resource tags. All resources of the specified type (instance or volume) that have at least one of the specified target resource tags will be targeted by the policy. For example, if you create a snapshot policy that targets volumes and you specify `purpose=prod`, `costcenter=prod`, and `environment=live` as target resource tags, then the policy will target all volumes that have any of those tag-key value pairs.

If you want to run multiple policies on a resource, you can assign multiple tags to the target resource, and then create separate policies that each target a specific resource tag.

You can't use the `\` or `=` characters in a tag key. Target resource tags are case sensitive. For more information, see [Tag your resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html).
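The any-of matching rule can be illustrated with a short Python sketch (a hypothetical helper, not part of any AWS SDK):

```python
# Illustration of the documented rule: a resource is targeted if it has at
# least one of the policy's target resource tags. Matching is case sensitive.

TARGET_TAGS = {("purpose", "prod"), ("costcenter", "prod"), ("environment", "live")}

def is_targeted(resource_tags):
    """Return True if the resource carries any of the target resource tags."""
    return any((k, v) in TARGET_TAGS for k, v in resource_tags.items())

assert is_targeted({"purpose": "prod", "team": "web"})  # one match is enough
assert not is_targeted({"purpose": "Prod"})             # case sensitive
assert not is_targeted({"owner": "alice"})              # no matching tag
```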

## Snapshots
<a name="dlm-ebs-snapshots"></a>

Snapshots are the primary means to back up data from your EBS volumes. To save storage costs, successive snapshots are incremental, containing only the volume data that changed since the previous snapshot. When you delete one snapshot in a series of snapshots for a volume, only the data that's unique to that snapshot is removed. The rest of the captured history of the volume is preserved. For more information, see [Amazon EBS snapshots](ebs-snapshots.md).
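The incremental behavior can be modeled with a toy example. This is an illustration only (EBS tracks block references internally): deleting a snapshot removes only the data that no remaining snapshot references.

```python
# Toy model of incremental snapshots: each snapshot references the block
# versions that were current when it was taken.

snapshots = {
    "snap-1": {"block-a.v1", "block-b.v1"},  # initial state of the volume
    "snap-2": {"block-a.v2", "block-b.v1"},  # block a changed since snap-1
    "snap-3": {"block-a.v2", "block-b.v2"},  # block b changed since snap-2
}

def stored_blocks(snaps):
    """Block versions that must be kept: everything any snapshot references."""
    return set().union(*snaps.values()) if snaps else set()

before = stored_blocks(snapshots)
del snapshots["snap-2"]  # delete the middle snapshot in the series
after = stored_blocks(snapshots)

# Every block that snap-2 referenced is also referenced by snap-1 or snap-3,
# so no data is removed and the volume's history is preserved.
```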

## EBS-backed AMIs
<a name="dlm-ebs-amis"></a>

An Amazon Machine Image (AMI) provides the information that's required to launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. Amazon Data Lifecycle Manager supports EBS-backed AMIs only. EBS-backed AMIs include a snapshot for each EBS volume that's attached to the source instance. For more information, see [Amazon Machine Images (AMI)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html).

## Amazon Data Lifecycle Manager tags
<a name="dlm-tagging-snapshots"></a>

Amazon Data Lifecycle Manager applies the following system tags to all snapshots and AMIs created by a policy, to distinguish them from snapshots and AMIs created by any other means:
+ `aws:dlm:lifecycle-policy-id`
+ `aws:dlm:lifecycle-schedule-name`
+ `aws:dlm:expirationTime` — For snapshots created by an age-based schedule. Indicates when the snapshot is to be deleted from the standard tier. 
+ `dlm:managed`
+ `aws:dlm:archived` — For snapshots that were archived by a schedule.
+ `aws:dlm:pre-script` — For snapshots created with pre scripts.
+ `aws:dlm:post-script` — For snapshots created with post scripts.
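Because these system tags distinguish policy-created backups, you can use them to find the snapshots a given policy created. A small Python sketch of the filtering logic (the tag key is the documented one; the snapshot records here are fabricated for illustration, and real data would come from `ec2:DescribeSnapshots`):

```python
# Find snapshots managed by a specific policy using the documented system tag
# "aws:dlm:lifecycle-policy-id". Sample records are made up for illustration.

snapshots = [
    {"SnapshotId": "snap-0aaa",
     "Tags": [{"Key": "aws:dlm:lifecycle-policy-id", "Value": "policy-0123"}]},
    {"SnapshotId": "snap-0bbb",
     "Tags": [{"Key": "Name", "Value": "manual-backup"}]},
]

def managed_by(policy_id, snaps):
    """Return the IDs of snapshots created by the given lifecycle policy."""
    return [
        s["SnapshotId"]
        for s in snaps
        if any(t["Key"] == "aws:dlm:lifecycle-policy-id" and t["Value"] == policy_id
               for t in s.get("Tags", []))
    ]

managed = managed_by("policy-0123", snapshots)
```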

You can also specify custom tags to be applied to snapshots and AMIs on creation. You can't use the `\` or `=` characters in a tag key.

The target tags that Amazon Data Lifecycle Manager uses to associate volumes with a snapshot policy can optionally be applied to snapshots created by the policy. Similarly, the target tags that are used to associate instances with an AMI policy can optionally be applied to AMIs created by the policy.

# Amazon Data Lifecycle Manager default policies vs custom policies
<a name="policy-differences"></a>

This section compares default policies and custom policies and highlights their similarities and differences.

**Topics**
+ [EBS snapshot policy comparison](#snapshot-policy-diffs)
+ [EBS-backed AMI policy comparison](#ami-policy-diffs)

## EBS snapshot policy comparison
<a name="snapshot-policy-diffs"></a>

The following table highlights the differences between the default policy for EBS snapshots and custom EBS snapshot policies. 


| Feature | Default policy for EBS snapshots | Custom EBS snapshot policy | 
| --- | --- | --- | 
| Managed backup resource | EBS snapshot | EBS snapshot | 
| Target resource types | Volumes | Volumes or instances | 
| Resource targeting | Targets all volumes in the Region that do not have recent snapshots. You can specify exclusion parameters to exclude specific volumes. | Targets only volumes or instances that have specific tags. | 
| Exclusion parameters | Yes, can exclude boot volumes, specific volume types, and volumes with specific tags. | Yes, can exclude boot volumes and volumes with specific tags when targeting instances. | 
| Support AWS Outposts | No | Yes | 
| Support multiple schedules | No | Yes, up to 4 schedules per policy | 
| Supported retention types | Age-based retention only | Age-based and count-based retention | 
| Snapshot creation frequency | Every 1 to 7 days. | Daily, weekly, monthly, yearly, or custom frequency using a cron expression. | 
| Snapshot retention | 2 to 14 days. | Up to 1000 snapshots (count-based) or up to 100 years (age-based). | 
| Support application-consistent snapshots | No | Yes, using pre and post scripts | 
| Support snapshot archiving | No | Yes | 
| Support fast snapshot restore | No | Yes | 
| Support cross-Region copying  | Yes, with default settings 1 | Yes, with custom settings | 
| Support cross-account sharing | No | Yes | 
| Support extended deletion 2 | Yes | No | 

1 For default policies:
+ You can't copy tags to cross-Region copies.
+ Copies use the same retention period as the source snapshot.
+ Copies get the same encryption state as the source snapshot. If the destination Region is enabled for encryption by default, copies are always encrypted, even if the source snapshots are unencrypted. Copies are always encrypted with the default KMS key for the destination Region.

2 For default and custom policies:
+ If a target instance or volume is deleted, Amazon Data Lifecycle Manager continues deleting snapshots up to, but not including, the last one based on the retention period. For default policies, you can extend deletion to include the last snapshot.
+ If a policy is deleted or enters the error or disabled state, Amazon Data Lifecycle Manager stops deleting snapshots. For default policies, you can extend deletion to continue deleting snapshots, including the last one.
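The encryption rule for default-policy cross-Region copies (note 1 above) reduces to a simple OR of two inputs, sketched here as an illustration:

```python
# Illustration of the documented encryption rule for default-policy
# cross-Region copies: a copy is encrypted if the source snapshot is
# encrypted OR the destination Region has encryption by default enabled.

def copy_is_encrypted(source_encrypted, destination_encryption_by_default):
    return source_encrypted or destination_encryption_by_default

assert copy_is_encrypted(True, False)       # encrypted source stays encrypted
assert copy_is_encrypted(False, True)       # destination default forces encryption
assert not copy_is_encrypted(False, False)  # only this case stays unencrypted
```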

## EBS-backed AMI policy comparison
<a name="ami-policy-diffs"></a>

The following table highlights the differences between the default policy for EBS-backed AMIs and custom EBS-backed AMI policies. 


| Feature | Default policy for EBS-backed AMIs | Custom EBS-backed AMI policy | 
| --- | --- | --- | 
| Managed backup resource | EBS-backed AMIs | EBS-backed AMIs | 
| Target resource types | Instances | Instances | 
| Resource targeting | Targets all instances in the Region that do not have recent AMIs. You can specify exclusion parameters to exclude specific instances. | Targets only instances that have specific tags. | 
| Reboot instances before AMI creation | No | Yes | 
| Exclusion parameters | Yes, can exclude instances with specific tags. | No | 
| Support multiple schedules | No | Yes, up to 4 schedules per policy. | 
| AMI creation frequency | Every 1 to 7 days. | Daily, weekly, monthly, yearly, or custom frequency using a cron expression. | 
| Supported retention types | Age-based retention only. | Age-based and count-based retention. | 
| AMIs retention | 2 to 14 days. | Up to 1000 AMIs (count-based) or up to 100 years (age-based). | 
| Support AMI deprecation | No | Yes | 
| Support cross-Region copying | Yes, with default settings 1 | Yes, with custom settings | 
| Support extended deletion 2 | Yes | No | 

1 For default policies:
+ You can't copy tags to cross-Region copies.
+ Copies use the same retention period as the source AMI.
+ Copies get the same encryption state as the source AMI. If the destination Region is enabled for encryption by default, copies are always encrypted, even if the source AMIs are unencrypted. Copies are always encrypted with the default KMS key for the destination Region.

2 For default and custom policies:
+ If a targeted instance is terminated, Amazon Data Lifecycle Manager continues deregistering AMIs up to, but not including, the last one based on the retention period. For default policies, you can extend deregistration to include the last AMI.
+ If a policy is deleted or enters the error or disabled state, Amazon Data Lifecycle Manager stops deregistering AMIs. For default policies, you can extend deletion to continue deregistering AMIs, including the last one.

# Create Amazon Data Lifecycle Manager default policies
<a name="default-policies"></a>

Use the default policy for EBS-backed AMIs to create periodic EBS-backed AMIs from your instances. Use the default policy for EBS snapshots to create snapshots of all volumes regardless of their attachment state, or to exclude specific volumes from backup.

This section explains how to create default policies.

**Topics**
+ [Considerations for default policies](#default-policy-considerations)
+ [Create default policy for Amazon EBS snapshots](#default-snapshot-policy)
+ [Create default policy for EBS-backed AMIs](#default-ami-policy)
+ [Enable default policies across accounts and Regions](dlm-stacksets.md)

## Considerations for default policies
<a name="default-policy-considerations"></a>

Keep the following in mind when working with default policies:
+ Default policies do not back up target resources (instances or volumes) that have recent backups (snapshots or AMIs). The creation frequency determines which resources are backed up. A volume or instance is backed up only if its last snapshot or AMI is older than the policy's creation frequency. For example, if you specify a creation frequency of 3 days, the default policy for EBS snapshots will create a snapshot of a volume only if its last snapshot is older than 3 days.
+ Default policies target all instances or volumes in the Region unless you specify exclusion parameters.
+ Default policies will create a minimum set of unique snapshots. For example, if you enable the EBS-backed AMI policy and the EBS snapshot policy, the snapshot policy will not duplicate snapshots of volumes that were already backed up by the EBS-backed AMI policy.
+ Default policies will only start targeting resources that are at least 24 hours old.
+ If you delete a volume or terminate an instance targeted by a default policy, Amazon Data Lifecycle Manager will continue to delete the previously created backups (snapshots or AMIs) according to the retention period up to, but not including, the last backup. You must manually delete this backup if it is not required.

  If you want Amazon Data Lifecycle Manager to delete the last backup, you can enable *extend deletion*.
+ If a default policy is deleted or enters the error or disabled state, Amazon Data Lifecycle Manager stops deleting the previously created backups (snapshots or AMIs). If you want Amazon Data Lifecycle Manager to continue deleting backups, including the last one, you must enable *extend deletion* before deleting the policy or before the policy's state changes to disabled or deleted.
+ When you create and enable a default policy, Amazon Data Lifecycle Manager randomly assigns targeted resources to a four-hour time window. Targeted resources are backed up during their assigned window at the specified creation frequency. For example, if a policy has a creation frequency of 3 days, and a target resource is assigned to the 12:00 - 16:00 window, that resource will be backed up between 12:00 - 16:00 every 3 days.
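The backup-timing behavior in the considerations above can be sketched as follows. This is a hypothetical model of the documented behavior, not the service implementation:

```python
import random
from datetime import datetime, timedelta

# Hypothetical model: each targeted resource is randomly assigned one of six
# four-hour windows, and is backed up only when its most recent backup is
# older than the policy's creation frequency.

WINDOWS = [(h, h + 4) for h in range(0, 24, 4)]  # 00-04, 04-08, ... 20-24

def assign_window(rng=random):
    """Randomly assign a resource to a four-hour backup window."""
    return rng.choice(WINDOWS)

def is_due(last_backup, now, create_interval_days):
    """Back up only if the last backup is older than the creation frequency."""
    return last_backup is None or now - last_backup > timedelta(days=create_interval_days)

now = datetime(2024, 6, 10, 13, 0)
assert is_due(datetime(2024, 6, 6), now, 3)      # last backup 4 days old: due
assert not is_due(datetime(2024, 6, 9), now, 3)  # last backup 1 day old: skipped
assert is_due(None, now, 3)                      # never backed up: due
start, end = assign_window()
assert 0 <= start < end <= 24
```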

## Create default policy for Amazon EBS snapshots
<a name="default-snapshot-policy"></a>

The following procedure shows you how to create a default policy for EBS snapshots.

------
#### [ Console ]

**To create a default policy for EBS snapshots**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation panel, choose **Lifecycle Manager** and then choose **Create lifecycle policy**.

1. For **Policy type**, choose **Default policy** and then choose **EBS snapshot policy**.

1. For **Description**, enter a brief description for the policy.

1. For **IAM role**, choose the IAM role that has permissions to manage snapshots.

   We recommend that you choose **Default** to use the default IAM role provided by Amazon Data Lifecycle Manager. However, you can also use a custom IAM role that you previously created.

1. For **Creation frequency**, specify how often you want the policy to run and create snapshots of your volumes.

   The frequency that you specify also determines which volumes are backed up. The policy will only back up volumes that have not been backed up by any other means within the specified frequency. For example, if you specify a creation frequency of 3 days, the policy will only create snapshots of volumes that have not been backed up within the last 3 days.

1. For **Retention period**, specify how long you want the policy to retain the snapshots that it creates. When a snapshot reaches the retention threshold, it is automatically deleted. The retention period must be greater than or equal to the creation frequency.

1. (*Optional*) Configure the **Exclusion parameters** to exclude specific volumes from the scheduled backups. Excluded volumes will not be backed up when the policy runs.

   1. To exclude boot volumes, select **Exclude boot volumes**. If you exclude boot volumes, only data (non-boot) volumes will be backed up by the policy. In other words, the policy will not create snapshots of volumes that are attached to instances as boot volumes.

   1. To exclude specific volume types, choose **Exclude specific volume types**, and then select the volume types to exclude. Only volumes of the remaining types will be backed up by the policy. 

   1. To exclude volumes that have specific tags, choose **Add tag**, and then specify the tag keys and values. The policy will not create snapshots of volumes that have any of the specified tags.

1. (*Optional*) In the **Advanced settings**, specify additional actions that the policy should perform.

   1. To copy assigned tags from the source volumes to their snapshots, select **Copy tags from volumes**.

   1. With **Extend deletion** disabled:
      + If a source volume is deleted, Amazon Data Lifecycle Manager continues to delete previously created snapshots up to, but not including, the last one based on the retention period. If you want Amazon Data Lifecycle Manager to delete all snapshots, including the last one, select **Extend deletion**.
      + If a policy is deleted or enters the `error` or `disabled` state, Amazon Data Lifecycle Manager stops deleting snapshots. If you want Amazon Data Lifecycle Manager to continue deleting snapshots, including the last one, select **Extend deletion**.
**Note**  
If you enable extend deletion, you override both behaviors described above simultaneously.

   1. To copy snapshots created by the policy to other Regions, select **Create cross-Region copy** and then select up to 3 destination Regions.
      + If the source snapshot is encrypted, or if encryption by default is enabled for the destination Region, the copied snapshots are encrypted using the default KMS key for EBS encryption in the destination Region.
      + If the source snapshot is unencrypted and encryption by default is disabled for the destination Region, the copied snapshots are unencrypted.

1. (*Optional*) To add a tag to the policy, choose **Add tag** and then specify the tag key and value pair.

1. Choose **Create default policy**.
**Note**  
If you get the `Role with name AWSDataLifecycleManagerDefaultRole already exists` error, see [Troubleshoot Amazon Data Lifecycle Manager issues](dlm-troubleshooting.md) for more information.

------
#### [ AWS CLI ]

**To create a default policy for EBS snapshots**  
Use the [create-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html) command. You can specify the request parameters in one of two ways, depending on your use case or preferences:
+ **Method 1**

  ```
  $ aws dlm create-lifecycle-policy \
  --state ENABLED | DISABLED \
  --description "policy_description" \
  --execution-role-arn role_arn \
  --default-policy VOLUME \
  --create-interval creation_frequency_in_days (1-7) \
  --retain-interval retention_period_in_days (2-14) \
  --copy-tags | --no-copy-tags \
  --extend-deletion | --no-extend-deletion \
  --cross-region-copy-targets TargetRegion=destination_region_code \
  --exclusions ExcludeBootVolumes=true | false, ExcludeTags=[{Key=tag_key,Value=tag_value}], ExcludeVolumeTypes="standard | gp2 | gp3 | io1 | io2 | st1 | sc1"
  ```

  For example, to create a default policy for EBS snapshots that targets all volumes in the Region, uses the default IAM role, runs daily (default), and retains snapshots for 7 days (default), you need to specify the following parameters:

  ```
  $ aws dlm create-lifecycle-policy \
  --state ENABLED \
  --description "Daily default snapshot policy" \
  --execution-role-arn arn:aws:iam::account_id:role/AWSDataLifecycleManagerDefaultRole \
  --default-policy VOLUME
  ```
+ **Method 2**

  ```
  $ aws dlm create-lifecycle-policy \
  --state ENABLED | DISABLED \
  --description "policy_description" \
  --execution-role-arn role_arn \
  --default-policy VOLUME \
  --policy-details file://policyDetails.json
  ```

  Where `policyDetails.json` includes the following:

  ```
  {
      "PolicyLanguage": "SIMPLIFIED",
      "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
      "ResourceType": "VOLUME",
      "CopyTags": true | false,
      "CreateInterval": creation_frequency_in_days (1-7),
      "RetainInterval": retention_period_in_days (2-14),
      "ExtendDeletion": true | false, 
      "CrossRegionCopyTargets": [{"TargetRegion":"destination_region_code"}],
      "Exclusions": {
          "ExcludeBootVolume": true | false,
  		"ExcludeVolumeTypes": ["standard | gp2 | gp3 | io1 | io2 | st1 | sc1"],
          "ExcludeTags": [{ 
              "Key": "exclusion_tag_key",
              "Value": "exclusion_tag_value"
          }]
      }
  }
  ```
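If you build the Method 2 `policyDetails` document programmatically, a small validity check mirrors the documented ranges for default policies. This is a hypothetical helper for illustration, not an AWS API, and the parameter values are examples:

```python
# Hypothetical validation of a Method 2 policyDetails document against the
# documented default-policy ranges (illustrative helper, not an AWS API).

policy_details = {
    "PolicyLanguage": "SIMPLIFIED",
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceType": "VOLUME",
    "CopyTags": True,
    "CreateInterval": 1,   # creation frequency in days (1-7)
    "RetainInterval": 7,   # retention period in days (2-14)
    "ExtendDeletion": False,
}

def validate(details):
    assert 1 <= details["CreateInterval"] <= 7, "creation frequency is 1-7 days"
    assert 2 <= details["RetainInterval"] <= 14, "retention period is 2-14 days"
    assert details["RetainInterval"] >= details["CreateInterval"], \
        "retention period must be >= creation frequency"
    return True

assert validate(policy_details)
```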

------

## Create default policy for EBS-backed AMIs
<a name="default-ami-policy"></a>

The following procedure shows you how to create a default policy for EBS-backed AMIs.

------
#### [ Console ]

**To create a default policy for EBS-backed AMIs**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation panel, choose **Lifecycle Manager** and then choose **Create lifecycle policy**.

1. For **Policy type**, choose **Default policy** and then choose **EBS-backed AMI policy**.

1. For **Description**, enter a brief description for the policy.

1. For **IAM role**, choose the IAM role that has permissions to manage AMIs.

   We recommend that you choose **Default** to use the default IAM role provided by Amazon Data Lifecycle Manager. However, you can also use a custom IAM role that you previously created.

1. For **Creation frequency**, specify how often you want the policy to run and create AMIs from your instances.

   The frequency that you specify also determines which instances are backed up. The policy will only back up instances that have not been backed up by any other means within the specified frequency. For example, if you specify a creation frequency of 3 days, the policy will only create AMIs from instances that have not been backed up within the last 3 days.

1. For **Retention period**, specify how long you want the policy to retain the AMIs that it creates. When an AMI reaches the retention threshold, it is automatically deregistered and its associated snapshots are deleted. The retention period must be greater than or equal to the creation frequency.

1. (*Optional*) Configure the **Exclusion parameters** to exclude specific instances from the scheduled backups. Excluded instances will not be backed up when the policy runs.

   1. To exclude instances that have specific tags, choose **Add tag**, and then specify the tag keys and values. The policy will not create AMIs from instances that have any of the specified tags.

1. (*Optional*) In the **Advanced settings**, specify additional actions that the policy should perform.

   1. To copy assigned tags from the source instances to their AMIs, select **Copy tags from instances**.

   1. With **Extend deletion** disabled:
      + If a source instance is terminated, Amazon Data Lifecycle Manager continues to deregister previously created AMIs up to, but not including, the last one based on the retention period. If you want Amazon Data Lifecycle Manager to deregister all AMIs, including the last one, select **Extend deletion**.
      + If a policy is deleted or enters the `error` or `disabled` state, Amazon Data Lifecycle Manager stops deregistering AMIs. If you want Amazon Data Lifecycle Manager to continue deregistering AMIs, including the last one, select **Extend deletion**.
**Note**  
If you enable extend deletion, you override both behaviors described above simultaneously.

   1. To copy AMIs created by the policy to other Regions, select **Create cross-Region copy** and then select up to 3 destination Regions.
      + If the source AMI is encrypted, or if encryption by default is enabled for the destination Region, the copied AMIs are encrypted using the default KMS key for EBS encryption in the destination Region.
      + If the source AMI is unencrypted and encryption by default is disabled for the destination Region, the copied AMIs are unencrypted.

1. (*Optional*) To add a tag to the policy, choose **Add tag** and then specify the tag key and value pair.

1. Choose **Create default policy**.
**Note**  
If you get the `Role with name AWSDataLifecycleManagerDefaultRoleForAMIManagement already exists` error, see [Troubleshoot Amazon Data Lifecycle Manager issues](dlm-troubleshooting.md) for more information.

------
#### [ AWS CLI ]

**To create a default policy for EBS-backed AMIs**  
Use the [create-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html) command. You can specify the request parameters in one of two ways, depending on your use case or preferences:
+ **Method 1**

  ```
  $ aws dlm create-lifecycle-policy \
  --state ENABLED | DISABLED \
  --description "policy_description" \
  --execution-role-arn role_arn \
  --default-policy INSTANCE \
  --create-interval creation_frequency_in_days (1-7) \
  --retain-interval retention_period_in_days (2-14) \
  --copy-tags | --no-copy-tags \
  --extend-deletion | --no-extend-deletion \
  --cross-region-copy-targets TargetRegion=destination_region_code \
  --exclusions ExcludeTags=[{Key=tag_key,Value=tag_value}]
  ```

  For example, to create a default policy for EBS-backed AMIs that targets all instances in the Region, uses the default IAM role, runs daily (default), and retains AMIs for 7 days (default), you need to specify the following parameters:

  ```
  $ aws dlm create-lifecycle-policy \
  --state ENABLED \
  --description "Daily default AMI policy" \
  --execution-role-arn arn:aws:iam::account_id:role/AWSDataLifecycleManagerDefaultRoleForAMIManagement \
  --default-policy INSTANCE
  ```
+ **Method 2**

  ```
  $ aws dlm create-lifecycle-policy \
  --state ENABLED | DISABLED \
  --description "policy_description" \
  --execution-role-arn role_arn \
  --default-policy INSTANCE \
  --policy-details file://policyDetails.json
  ```

  Where `policyDetails.json` includes the following:

  ```
  {
      "PolicyLanguage": "SIMPLIFIED",
      "PolicyType": "IMAGE_MANAGEMENT",
      "ResourceType": "INSTANCE",
      "CopyTags": true | false,
      "CreateInterval": creation_frequency_in_days (1-7),
      "RetainInterval": retention_period_in_days (2-14),
      "ExtendDeletion": true | false, 
  	"CrossRegionCopyTargets": [{"TargetRegion":"destination_region_code"}],
      "Exclusions": {
          "ExcludeTags": [{ 
              "Key": "exclusion_tag_key",
              "Value": "exclusion_tag_value"
          }]
      }
  }
  ```

------

# Enable Data Lifecycle Manager default policies across accounts and Regions
<a name="dlm-stacksets"></a>

Using CloudFormation StackSets, you can enable Amazon Data Lifecycle Manager default policies across multiple accounts and AWS Regions with a single operation.

You can use stack sets to enable default policies in one of the following ways:
+ **Across an AWS organization** — Ensures that default policies are enabled and configured consistently across an entire AWS organization or specific organizational units in an organization. This is done using *service-managed permissions*. CloudFormation StackSets creates the required IAM roles on your behalf.
+ **Across specific AWS accounts** — Ensures that default policies are enabled and configured consistently across specific target accounts. This requires *self-managed permissions*. You create the IAM roles required to establish the trust relationship between the stack set administrator account and the target accounts.

For more information, see [ Permission models for stack sets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html#stacksets-concepts-stackset-permission-models) in the *AWS CloudFormation User Guide*.

Use the following procedures to enable Amazon Data Lifecycle Manager default policies across an entire AWS organization, across specific OUs, or across specific target accounts.

**Prerequisites**

Do one of the following, depending on how you are enabling the default policies:
+ (Across AWS organizations) You must [ enable all features in your organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html) and [ activate trusted access with AWS Organizations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-activate-trusted-access.html). You must also use the organization's management account or a [ delegated administrator account](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-delegated-admin.html).
+ (Across specific target accounts) You must [ grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) by creating the roles required to establish a trusted relationship between stack set administrator account and target accounts.

------
#### [ Console ]

**To enable default policies across an AWS organization or across specific target accounts**

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. In the navigation pane, choose **StackSets**, then choose **Create StackSet**.

1. For **Permissions**, do one of the following, depending on how you are enabling the default policies:
   + (Across an AWS organization) Choose **Service-managed permissions**.
   + (Across specific target accounts) Choose **Self-service permissions**. Then, for **IAM admin role ARN**, select the IAM service role that you created for the administrator account, and for **IAM execution role name**, enter the name of the IAM service role that you created in the target accounts.

1. For **Prepare template**, choose **Use a sample template**.

1. For **Sample templates**, do one of the following:
   + (Default policy for EBS snapshots) Select **Create Amazon Data Lifecycle Manager default policies for EBS Snapshots.**
   + (Default policy for EBS-backed AMIs) Select **Create Amazon Data Lifecycle Manager default policies for EBS-backed AMIs**.

1. Choose **Next**.

1. For **StackSet name** and **StackSet description**, enter a descriptive name and brief description.

1. In the **Parameters** section, configure the default policy settings as needed.
**Note**  
For critical workloads, we recommend **CreateInterval = 1 day** and **RetainInterval = 7 days**.

1. Choose **Next**.

1. (Optional) For **Tags**, specify tags to help you identify the StackSet and stack resources.

1. For **Managed execution**, choose **Active**.

1. Choose **Next**.

1. For **Add stacks to stack set**, choose **Deploy new stacks**.

1. Do one of the following, depending on how you are enabling the default policies:
   + (Across AWS organization) For **Deployment targets** choose one of the following options:
     + To deploy across an entire AWS organization, choose **Deploy to organization**.
     + To deploy to specific organizational units (OU), choose **Deploy to organizational units**, and then for **OU ID**, enter the OU ID. To add additional OUs, choose **Add another OU**.
   + (Across specific target accounts) For **Accounts**, do one of the following:
     + To deploy to specific target accounts, choose **Deploy stacks in accounts**, and then for **Account numbers**, enter the IDs of the target accounts.
     + To deploy to all accounts in a specific OU, choose **Deploy stack to all accounts in an organizational unit**, and then for **Organization numbers**, enter the ID of the target OU.

1. For **Automatic deployment**, choose **Activated**.

1. For **Account removal behavior**, choose **Retain stacks**.

1. For **Specify regions**, select specific Regions in which to enable default policies, or choose **Add all Regions** to enable default policies in all Regions.

1. Choose **Next**.

1. Review the stack set settings, select **I acknowledge that CloudFormation might create IAM resources**, and then choose **Submit**.

------
#### [ AWS CLI ]

**To enable default policies across an AWS organization**

1. Create the stack set. Use the [ create-stack-set](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/create-stack-set.html) command.

   For `--permission-model`, specify `SERVICE_MANAGED`. 

   For `--template-url`, specify one of the following template URLs:
   + (Default policies for EBS-backed AMIs) `https://s3.amazonaws.com/cloudformation-stackset-sample-templates-us-east-1/DataLifecycleManagerAMIDefaultPolicy.yaml`
   + (Default policies for EBS snapshots) `https://s3.amazonaws.com/cloudformation-stackset-sample-templates-us-east-1/DataLifecycleManagerEBSSnapshotDefaultPolicy.yaml`

   For `--parameters`, specify the settings for the default policies. For supported parameters, parameter descriptions, and valid values, download the template using the URL and then view the template using a text editor.

   For `--auto-deployment`, specify `Enabled=true, RetainStacksOnAccountRemoval=true`.

   ```
   $ aws cloudformation create-stack-set \
   --stack-set-name stackset_name \
   --permission-model SERVICE_MANAGED \
   --template-url template_url \
   --parameters "ParameterKey=param_name_1,ParameterValue=param_value_1" "ParameterKey=param_name_2,ParameterValue=param_value_2" \
   --auto-deployment "Enabled=true, RetainStacksOnAccountRemoval=true"
   ```

1. Deploy the stack set. Use the [ create-stack-instances](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/create-stack-instances.html) command.

   For `--stack-set-name`, specify the name of the stack set you created in the previous step.

   For `--deployment-targets OrganizationalUnitIds`, specify the ID of the root OU to deploy to an entire organization, or OU IDs to deploy to specific OUs in the organization.

   For `--regions`, specify the AWS Regions in which to enable the default policies.

   ```
   $ aws cloudformation create-stack-instances \
   --stack-set-name stackset_name \
   --deployment-targets OrganizationalUnitIds='["root_ou_id"]' | '["ou_id_1", "ou_id_2"]' \
   --regions '["region_1", "region_2"]'
   ```

**To enable default policies across specific target accounts**

1. Create the stack set. Use the [ create-stack-set](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/create-stack-set.html) command.

   For `--template-url`, specify one of the following template URLs:
   + (Default policies for EBS-backed AMIs) `https://s3.amazonaws.com/cloudformation-stackset-sample-templates-us-east-1/DataLifecycleManagerAMIDefaultPolicy.yaml`
   + (Default policies for EBS snapshots) `https://s3.amazonaws.com/cloudformation-stackset-sample-templates-us-east-1/DataLifecycleManagerEBSSnapshotDefaultPolicy.yaml`

   For `--administration-role-arn`, specify the ARN of the IAM service role that you previously created for the stack set administrator. 

   For `--execution-role-name`, specify the name of IAM service role that you created in the target accounts.

   For `--parameters`, specify the settings for the default policies. For supported parameters, parameter descriptions, and valid values, download the template using the URL and then view the template using a text editor.

   For `--auto-deployment`, specify `Enabled=true, RetainStacksOnAccountRemoval=true`.

   ```
   $ aws cloudformation create-stack-set \
   --stack-set-name stackset_name \
   --template-url template_url \
   --parameters "ParameterKey=param_name_1,ParameterValue=param_value_1" "ParameterKey=param_name_2,ParameterValue=param_value_2" \
   --administration-role-arn administrator_role_arn \
   --execution-role-name target_account_role \
   --auto-deployment "Enabled=true, RetainStacksOnAccountRemoval=true"
   ```

1. Deploy the stack set. Use the [ create-stack-instances](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/create-stack-instances.html) command.

   For `--stack-set-name`, specify the name of the stack set you created in the previous step.

   For `--accounts`, specify the IDs of the target AWS accounts.

   For `--regions`, specify the AWS Regions in which to enable the default policies.

   ```
   $ aws cloudformation create-stack-instances \
   --stack-set-name stackset_name \
   --accounts '["account_ID_1","account_ID_2"]' \
   --regions '["region_1", "region_2"]'
   ```
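
When a default policy takes several parameters, assembling the `--parameters` arguments by hand is error-prone. The following Python sketch formats a dict of settings into the `ParameterKey=...,ParameterValue=...` strings that `create-stack-set` expects; the helper name is an illustrative assumption, and the valid parameter names themselves come from the sample template, so download and review the template first.

```python
# Hypothetical helper: format default-policy settings as CloudFormation
# CLI parameter strings. Each entry is passed as a separate quoted
# argument to the --parameters option of create-stack-set.
def to_cli_parameters(settings):
    return ["ParameterKey={},ParameterValue={}".format(k, v)
            for k, v in sorted(settings.items())]

params = to_cli_parameters({"CreateInterval": "1", "RetainInterval": "7"})
```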

------

# Create Amazon Data Lifecycle Manager custom policy for EBS snapshots
<a name="snapshot-ami-policy"></a>

The following procedure shows you how to use Amazon Data Lifecycle Manager to automate Amazon EBS snapshot lifecycles.

**Topics**
+ [

## Create a snapshot lifecycle policy
](#create-snap-policy)
+ [

## Considerations for snapshot lifecycle policies
](#snapshot-considerations)
+ [

## Additional resources
](#snapshot-additional-resources)
+ [Automate application-consistent snapshots](automate-app-consistent-backups.md)
+ [Other use cases for pre and post scripts](script-other-use-cases.md)
+ [How pre and post scripts work](script-flow.md)
+ [Identify snapshots created with pre and post scripts](dlm-script-tags.md)
+ [Monitor pre and post scripts](dlm-script-monitoring.md)

## Create a snapshot lifecycle policy
<a name="create-snap-policy"></a>

Use one of the following procedures to create a snapshot lifecycle policy.

------
#### [ Console ]

**To create a snapshot policy**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Elastic Block Store**, **Lifecycle Manager**, and then choose **Create lifecycle policy**.

1. On the **Select policy type** screen, choose **EBS snapshot policy** and then choose **Next**.

1. In the **Target resources** section, do the following:

   1. For **Target resource types**, choose the type of resource to back up. Choose `Volume` to create snapshots of individual volumes, or choose `Instance` to create multi-volume snapshots from the volumes attached to an instance.

   1. (*Outpost and Local Zone customers only*) Specify where the target resources are located.

      For **Target resource location**, choose one of the following:
      + To target resources in a Region, choose **AWS Region**. Amazon Data Lifecycle Manager will back up all resources of the specified type that have matching target tags in the current Region only. Snapshots are created in the same Region.
      + To target resources in Local Zones, choose **AWS Local Zones**. Amazon Data Lifecycle Manager will back up all resources of the specified type that have matching target tags across all Local Zones in the current Region only. Snapshots can be created in the same Local Zone as the source resource, or in its parent Region.
      + To target resources on an Outpost, choose **AWS Outpost**. Amazon Data Lifecycle Manager will back up all resources of the specified type that have matching target tags across all Outposts in your account. Snapshots can be created on the same Outpost as the source resource, or in its parent Region.

   1. For **Target resource tags**, choose the resource tags that identify the volumes or instances to back up. Only resources that have the specified tag key and value pairs are backed up by the policy.

1. For **Description**, enter a brief description for the policy.

1. For **IAM role**, choose the IAM role that has permissions to manage snapshots and to describe volumes and instances. To use the default role provided by Amazon Data Lifecycle Manager, choose **Default role**. Alternatively, to use a custom IAM role that you previously created, choose **Choose another role** and then select the role to use.

1. For **Policy tags**, add the tags to apply to the lifecycle policy. You can use these tags to identify and categorize your policies.

1. For **Policy status**, choose **Enable** to start the policy runs at the next scheduled time, or **Disable policy** to prevent the policy from running. If you do not enable the policy now, it will not start creating snapshots until you manually enable it after creation.

1. (*Policies that target instances only*) Exclude volumes from multi-volume snapshot sets.

   By default, Amazon Data Lifecycle Manager will create snapshots of all the volumes attached to targeted instances. However, you can choose to create snapshots of a subset of the attached volumes. In the **Parameters** section, do the following:
   + If you do not want to create snapshots of the root volumes attached to the targeted instances, select **Exclude root volume**. If you select this option, only the data (non-root) volumes that are attached to targeted instances will be included in the multi-volume snapshot sets.
   + If you want to create snapshots of a subset of the data (non-root) volumes attached to the targeted instances, select **Exclude specific data volumes**, and then specify the tags that identify the data volumes to exclude. Amazon Data Lifecycle Manager creates snapshots only of data volumes that have none of the specified tags.

1. Choose **Next**.

1. On the **Configure schedule** screen, configure the policy schedules. A policy can have up to 4 schedules. Schedule 1 is mandatory. Schedules 2, 3, and 4 are optional. For each policy schedule that you add, do the following:

   1. In the **Schedule details** section do the following:

      1. For **Schedule name**, specify a descriptive name for the schedule.

      1. For **Frequency** and the related fields, configure the interval between policy runs.

         You can configure policy runs on a daily, weekly, monthly, or yearly schedule. Alternatively, choose **Custom cron expression** to specify an interval of up to one year. For more information, see [Cron and rate expressions](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-scheduled-rule-pattern.html) in the *Amazon EventBridge User Guide*.
**Note**  
If you need to enable **snapshot archiving** for the schedule, then you must select either the **monthly** or **yearly** frequency, or you must specify a cron expression with a creation frequency of at least 28 days.  
If you specify a monthly frequency that creates snapshots on a specific day in a specific week (for example, the second Thursday of the month), then for count-based schedules, the retention count for the archive tier must be 4 or more.
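
The archiving eligibility rule above can be sketched as a simple check. This is an illustrative Python function, not part of any AWS API; the argument names are assumptions.

```python
# Illustrative check: a schedule qualifies for snapshot archiving if it
# uses a monthly or yearly frequency, or a cron expression whose
# creation interval is at least 28 days.
def archiving_allowed(frequency=None, cron_interval_days=None):
    if frequency in ("monthly", "yearly"):
        return True
    return cron_interval_days is not None and cron_interval_days >= 28
```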

      1. For **Starting at**, specify the time at which the policy runs are scheduled to start. The first policy run starts within an hour after the scheduled time. The time must be entered in the `hh:mm` UTC format.

      1. For **Retention type**, specify the retention policy for snapshots created by the schedule.

         You can retain snapshots based on either their total count or their age.
         + Count-based retention
           + With snapshot archiving disabled, the range is `1` to `1000`. When the retention threshold is reached, the oldest snapshot is permanently deleted.
           + With snapshot archiving enabled, the range is `0` (archive immediately after creation) to `1000`. When the retention threshold is reached, the oldest snapshot is converted to a full snapshot and it is moved to the archive tier.
         + Age-based retention
           + With snapshot archiving disabled, the range is `1` day to `100` years. When the retention threshold is reached, the oldest snapshot is permanently deleted.
           + With snapshot archiving enabled, the range is `0` days (archive immediately after creation) to `100` years. When the retention threshold is reached, the oldest snapshot is converted to a full snapshot and it is moved to the archive tier.
**Note**  
All schedules must have the same retention type (age-based or count-based). You can specify the retention type for Schedule 1 only. Schedules 2, 3, and 4 inherit the retention type from Schedule 1. Each schedule can have its own retention count or period.
If you enable fast snapshot restore, cross-Region copy, or snapshot sharing, then you must specify a retention count of `1` or more, or a retention period of `1` day or longer.
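
The retention ranges described above can be summarized in code. The following Python sketch is an illustrative validator, assuming the count and age limits listed in this step; the function and its arguments are not a DLM API.

```python
# Illustrative validator for a single schedule's retention setting.
# With archiving enabled, 0 means "archive immediately after creation".
def validate_retention(retention_type, value, archiving_enabled=False):
    low = 0 if archiving_enabled else 1
    if retention_type == "count":
        return low <= value <= 1000          # total snapshot count
    if retention_type == "age_days":
        return low <= value <= 100 * 365     # 1 day up to 100 years
    raise ValueError("retention_type must be 'count' or 'age_days'")
```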

      1. (*AWS Outposts and Local Zone customers only*) Specify the snapshot destination.

         For **Snapshot destination**, specify the destination for snapshots created by the policy.
         + If the policy targets resources in a Region, snapshots must be created in the same Region. AWS Region is selected for you.
         + If the policy targets resources in a Local Zone, you can create snapshots in the same Local Zone as the source resource, or in its parent Region.
         + If the policy targets resources on an Outpost, you can create snapshots on the same Outpost as the source resource, or in its parent Region.

   1. Configure tagging for snapshots.

      In the **Tagging** section, do the following:

      1. To copy all of the user-defined tags from the source volume to the snapshots created by the schedule, select **Copy tags from source**.

      1. To specify additional tags to assign to snapshots created by this schedule, choose **Add tags**.

   1. Configure pre and post scripts for application-consistent snapshots.

      For more information, see [Automate application-consistent snapshots with Data Lifecycle Manager](automate-app-consistent-backups.md).

   1. (*Policies that target volumes only*) Configure snapshot archiving.

      In the **Snapshot archiving** section, do the following:
**Note**  
You can enable snapshot archiving for only one schedule in a policy.

      1. To enable snapshot archiving for the schedule, select **Archive snapshots created by this schedule**.
**Note**  
You can enable snapshot archiving only if the snapshot creation frequency is monthly or yearly, or if you specify a cron expression with a creation frequency of at least 28 days.

      1. Specify the retention rule for snapshots in the archive tier.
         + For **count-based schedules**, specify the number of snapshots to retain in the archive tier. When the retention threshold is reached, the oldest snapshot is permanently deleted from the archive tier. For example, if you specify 3, the schedule will retain a maximum of 3 snapshots in the archive tier. When the fourth snapshot is archived, the oldest of the three existing snapshots in the archive tier is deleted.
         + For **age-based schedules**, specify the time period for which to retain snapshots in the archive tier. When the retention threshold is reached, the oldest snapshot is permanently deleted from the archive tier. For example, if you specify 120 days, the schedule will automatically delete snapshots from the archive tier when they reach that age.
**Important**  
The minimum retention period for archived snapshots is 90 days. You must specify a retention rule that retains the snapshot for at least 90 days.
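
The 90-day archive minimum applies regardless of the unit you choose in the console. As an illustration (not an AWS API, and using approximate day counts for months and years), a rule can be checked like this:

```python
# Approximate day counts per unit, for illustration only.
_DAYS = {"DAYS": 1, "WEEKS": 7, "MONTHS": 30, "YEARS": 365}

# Illustrative check for the 90-day archive-tier minimum.
def archive_retention_ok(interval, unit):
    return interval * _DAYS[unit] >= 90
```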

   1. Enable fast snapshot restore.

      To enable fast snapshot restore for snapshots created by the schedule, in the **Fast snapshot restore** section, select **Enable fast snapshot restore**. If you enable fast snapshot restore, you must choose the Availability Zones in which to enable it. If the schedule uses an age-based retention schedule, you must specify the period for which to enable fast snapshot restore for each snapshot. If the schedule uses count-based retention, you must specify the maximum number of snapshots to enable for fast snapshot restore.

      If the schedule creates snapshots on an Outpost, you can't enable fast snapshot restore. Fast snapshot restore is not supported with local snapshots that are stored on an Outpost.
**Note**  
You are billed for each minute that fast snapshot restore is enabled for a snapshot in a particular Availability Zone. Charges are pro-rated with a minimum of one hour.

   1. Configure cross-Region copy.

      To copy snapshots created by the schedule to an Outpost or to a different Region, in the **Cross-Region copy** section, select **Enable cross-Region copy**.

      If the schedule creates snapshots in a Region, you can copy the snapshots to up to three additional Regions or Outposts in your account. You must specify a separate cross-Region copy rule for each destination Region or Outpost.

      For each Region or Outpost, you can choose different retention policies and you can choose whether to copy all tags or no tags. If the source snapshot is encrypted, or if encryption by default is enabled, the copied snapshots are encrypted. If the source snapshot is unencrypted, you can enable encryption. If you do not specify a KMS key, the snapshots are encrypted using the default KMS key for EBS encryption in each destination Region. If you specify a KMS key for the destination Region, then the selected IAM role must have access to the KMS key.
**Note**  
You must ensure that you do not exceed the number of concurrent snapshot copies per Region.

      If the policy creates snapshots on an Outpost, then you can't copy the snapshots to a Region or to another Outpost, and the cross-Region copy settings are not available.

   1. Configure cross-account sharing.

      In the **Cross-account sharing** section, configure the policy to automatically share the snapshots created by the schedule with other AWS accounts. Do the following:

      1. To enable sharing with other AWS accounts, select **Enable cross-account sharing**.

      1. To add the accounts with which to share the snapshots, choose **Add account**, enter the 12-digit AWS account ID, and choose **Add**.

      1. To automatically unshare shared snapshots after a specific period, select **Unshare automatically**. If you choose to automatically unshare shared snapshots, the period after which to automatically unshare the snapshots cannot be longer than the period for which the policy retains its snapshots. For example, if the policy's retention configuration retains snapshots for a period of 5 days, you can configure the policy to automatically unshare shared snapshots after periods up to 4 days. This applies to policies with age-based and count-based snapshot retention configurations.

         If you do not enable automatic unsharing, the snapshot is shared until it is deleted.
**Note**  
You can only share snapshots that are unencrypted or that are encrypted using a customer managed key. You can't share snapshots that are encrypted with the default EBS encryption KMS key. If you share encrypted snapshots, then you must also share the KMS key that was used to encrypt the source volume with the target accounts. For more information, see [ Allowing users in other accounts to use a KMS key](https://docs.aws.amazon.com//kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the *AWS Key Management Service Developer Guide*.
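
The unshare-period constraint is easy to express as a check. This Python sketch is illustrative only; the function name and arguments are assumptions, not part of any AWS SDK.

```python
# Illustrative rule: the automatic-unshare period must be a positive
# number of days strictly shorter than the policy's retention period.
def unshare_period_valid(unshare_days, retention_days):
    return 0 < unshare_days < retention_days
```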

   1. To add additional schedules, choose **Add another schedule**, which is located at the top of the screen. For each additional schedule, complete the fields as described previously in this topic.

   1. After you have added the required schedules, choose **Review policy**.

1. Review the policy summary, and then choose **Create policy**.
**Note**  
If you get the `Role with name AWSDataLifecycleManagerDefaultRole already exists` error, see [Troubleshoot Amazon Data Lifecycle Manager issues](dlm-troubleshooting.md) for more information.

------
#### [ Command line ]

Use the [create-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html) command to create a snapshot lifecycle policy. For `PolicyType`, specify `EBS_SNAPSHOT_MANAGEMENT`.

**Note**  
To simplify the syntax, the following examples use a JSON file, `policyDetails.json`, that includes the policy details.

**Example 1—Snapshot lifecycle policy with two schedules**  
This example creates a snapshot lifecycle policy that creates snapshots of all volumes that have a tag key of `costcenter` with a value of `115`. The policy includes two schedules. The first schedule creates a snapshot every day at 03:00 UTC. The second schedule creates a weekly snapshot every Friday at 17:00 UTC.

```
aws dlm create-lifecycle-policy \
    --description "My volume policy" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::12345678910:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details file://policyDetails.json
```

The following is an example of the `policyDetails.json` file.

```
{
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": [
        "VOLUME"
    ],
    "TargetTags": [{
        "Key": "costcenter",
        "Value": "115"
    }],
    "Schedules": [{
        "Name": "DailySnapshots",
        "TagsToAdd": [{
            "Key": "type",
            "Value": "myDailySnapshot"
        }],
        "CreateRule": {
            "Interval": 24,
            "IntervalUnit": "HOURS",
            "Times": [
                "03:00"
            ]
        },
        "RetainRule": {
            "Count": 5
        },
        "CopyTags": false
    },
    {
        "Name": "WeeklySnapshots",
        "TagsToAdd": [{
            "Key": "type",
            "Value": "myWeeklySnapshot"
        }],
        "CreateRule": {
            "CronExpression": "cron(0 17 ? * FRI *)"
        },
        "RetainRule": {
            "Count": 5
        },
        "CopyTags": false
    }
]}
```

If the request succeeds, the command returns the ID of the newly created policy. The following is example output.

```
{
   "PolicyId": "policy-0123456789abcdef0"
}
```
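
Because all schedules in a policy must use the same retention type, it can be worth checking a multi-schedule `policyDetails.json` like the one above before submitting it. The following Python sketch is an illustrative consistency check, not an AWS validation API; it keys off whether each `RetainRule` uses `Count` (count-based) or `Interval`/`IntervalUnit` (age-based).

```python
import json

# Illustrative check: every schedule in the policy must use the same
# retention type (count-based vs age-based).
def retention_types_consistent(policy_details):
    kinds = {"Count" if "Count" in s["RetainRule"] else "Interval"
             for s in policy_details["Schedules"]}
    return len(kinds) <= 1

# Minimal stand-in for Example 1's two count-based schedules.
doc = json.loads("""{"Schedules": [
  {"RetainRule": {"Count": 5}},
  {"RetainRule": {"Count": 5}}]}""")
```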

**Example 2—Snapshot lifecycle policy that targets instances and creates snapshots of a subset of data (non-root) volumes**  
This example creates a snapshot lifecycle policy that creates multi-volume snapshot sets from instances tagged with `code=production`. The policy includes only one schedule. The schedule does not create snapshots of the data volumes that are tagged with `code=temp`.

```
aws dlm create-lifecycle-policy \
    --description "My volume policy" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::12345678910:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details file://policyDetails.json
```

The following is an example of the `policyDetails.json` file.

```
{
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": [
        "INSTANCE"
    ],
    "TargetTags": [{
        "Key": "code",
        "Value": "production"
    }],
    "Parameters": {
        "ExcludeDataVolumeTags": [{
            "Key": "code",
            "Value": "temp"
        }]
    },
    "Schedules": [{
        "Name": "DailySnapshots",
        "TagsToAdd": [{
            "Key": "type",
            "Value": "myDailySnapshot"
        }],
        "CreateRule": {
            "Interval": 24,
            "IntervalUnit": "HOURS",
            "Times": [
                "03:00"
            ]
        },
        "RetainRule": {
            "Count": 5
        },
        "CopyTags": false
    }
]}
```

If the request succeeds, the command returns the ID of the newly created policy. The following is example output.

```
{
   "PolicyId": "policy-0123456789abcdef0"
}
```

**Example 3—Snapshot lifecycle policy that automates local snapshots of Outpost resources**  
This example creates a snapshot lifecycle policy that creates snapshots of volumes tagged with `team=dev` across all of your Outposts. The policy creates the snapshots on the same Outposts as the source volumes. The policy creates snapshots every `12` hours starting at `00:00` UTC.

```
aws dlm create-lifecycle-policy \
    --description "My local snapshot policy" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::12345678910:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details file://policyDetails.json
```

The following is an example of the `policyDetails.json` file.

```
{
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": "VOLUME",
	"ResourceLocations": "OUTPOST",
    "TargetTags": [{
        "Key": "team",
        "Value": "dev"
    }],
    "Schedules": [{
        "Name": "on-site backup",
        "CreateRule": {
            "Interval": 12,
            "IntervalUnit": "HOURS",
            "Times": [
                "00:00"
            ],
	"Location": [
		"OUTPOST_LOCAL"
	]
        },
        "RetainRule": {
            "Count": 1
        },
        "CopyTags": false
    }
]}
```

**Example 4—Snapshot lifecycle policy that creates snapshots in a Region and copies them to an Outpost**  
The following example policy creates snapshots of volumes that are tagged with `team=dev`. Snapshots are created in the same Region as the source volume, every `12` hours starting at `00:00` UTC, and the policy retains a maximum of `1` snapshot. The policy also copies the snapshots to Outpost `arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0`, encrypts the copied snapshots using the default encryption KMS key, and retains the copies for `1` month.

```
aws dlm create-lifecycle-policy \
    --description "Copy snapshots to Outpost" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::12345678910:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details file://policyDetails.json
```

The following is an example of the `policyDetails.json` file.

```
{
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": "VOLUME",
    "ResourceLocations": "CLOUD",
    "TargetTags": [{
        "Key": "team",
        "Value": "dev"
    }],
    "Schedules": [{
        "Name": "on-site backup",
        "CopyTags": false,
        "CreateRule": {
            "Interval": 12,
            "IntervalUnit": "HOURS",
            "Times": [
                "00:00"
            ],
            "Location": "CLOUD"
        },
        "RetainRule": {
            "Count": 1
        },
        "CrossRegionCopyRules" : [
        {
            "Target": "arn:aws:outposts:us-east-1:123456789012:outpost/op-1234567890abcdef0",
            "Encrypted": true,
            "CopyTags": true,
            "RetainRule": {
                "Interval": 1,
                "IntervalUnit": "MONTHS"
            }
        }]
    }
]}
```

**Example 5—Snapshot lifecycle policy with an archive-enabled, age-based schedule**  
This example creates a snapshot lifecycle policy that targets volumes tagged with `Name=Prod`. The policy has one age-based schedule that creates snapshots on the first day of each month at 09:00 UTC. The schedule retains each snapshot in the standard tier for one day, after which it is moved to the archive tier. Snapshots are stored in the archive tier for 90 days before being deleted.

```
aws dlm create-lifecycle-policy \
    --description "Snapshot policy with an archive-enabled, age-based schedule" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details file://policyDetails.json
```

The following is an example of the `policyDetails.json` file.

```
{
    "ResourceTypes": [ "VOLUME"],
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "Schedules" : [
      {
        "Name": "sched1",
        "TagsToAdd": [
          {"Key":"createdby","Value":"dlm"}
        ],
        "CreateRule": {
          "CronExpression": "cron(0 9 1 * ? *)"
        },
        "CopyTags": true,
        "RetainRule":{
          "Interval": 1,
          "IntervalUnit": "DAYS"
        },
        "ArchiveRule": {
            "RetainRule":{
              "RetentionArchiveTier": {
                 "Interval": 90,
                 "IntervalUnit": "DAYS"
              }
            }
        }
      }
    ],
    "TargetTags": [
      {
        "Key": "Name",
        "Value": "Prod"
      }
    ]
}
```

**Example 6—Snapshot lifecycle policy with an archive-enabled, count-based schedule**  
This example creates a snapshot lifecycle policy that targets volumes tagged with `Purpose=Test`. The policy has one count-based schedule that creates snapshots on the first day of each month at 09:00. The schedule archives snapshots immediately after creation and retains a maximum of three snapshots in the archive tier.

```
aws dlm create-lifecycle-policy \
    --description "Snapshot policy with an archive-enabled, count-based schedule" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details file://policyDetails.json
```

The following is an example of the `policyDetails.json` file.

```
{
    "ResourceTypes": [ "VOLUME"],
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "Schedules" : [
      {
        "Name": "sched1",
        "TagsToAdd": [
          {"Key":"createdby","Value":"dlm"}
        ],
        "CreateRule": {
          "CronExpression": "cron(0 9 1 * ? *)"
        },
        "CopyTags": true,
        "RetainRule":{
          "Count": 0
        },
        "ArchiveRule": {
            "RetainRule":{
              "RetentionArchiveTier": {
                 "Count": 3
              }
            }
        }
      }
    ],
    "TargetTags": [
      {
        "Key": "Purpose",
        "Value": "Test"
      }
    ]
}
```

------

## Considerations for snapshot lifecycle policies
<a name="snapshot-considerations"></a>

The following **general considerations** apply to snapshot lifecycle policies:
+ Snapshot lifecycle policies target only instances or volumes that are in the same Region as the policy.
+ The first snapshot creation operation starts within one hour after the specified start time. Subsequent snapshot creation operations start within one hour of their scheduled time.
+ You can create multiple policies to back up a volume or instance. For example, if a volume has two tags, where tag *A* is the target for policy *A* to create a snapshot every 12 hours, and tag *B* is the target for policy *B* to create a snapshot every 24 hours, Amazon Data Lifecycle Manager creates snapshots according to the schedules for both policies. Alternatively, you can achieve the same result by creating a single policy that has multiple schedules. For example, you can create a single policy that targets only tag *A*, and specify two schedules — one for every 12 hours and one for every 24 hours.
+ Target resource tags are case sensitive.
+ If you remove the target tags from a resource that is targeted by a policy, Amazon Data Lifecycle Manager no longer manages existing snapshots in the standard tier and archive tier; you must manually delete them if they are no longer needed.
+ If you create a policy that targets instances, and new volumes are attached to a target instance after the policy has been created, the newly-added volumes are included in the backup at the next policy run. All volumes attached to the instance at the time of the policy run are included.
+ If you create a policy with a custom cron-based schedule that is configured to create only one snapshot, the policy will not automatically delete that snapshot when the retention threshold is reached. You must manually delete the snapshot if it is no longer needed.
+ If you create an age-based policy where the retention period is shorter than the creation frequency, Amazon Data Lifecycle Manager will always retain the last snapshot until the next one is created. For example, if an age-based policy creates one snapshot every month with a retention period of seven days, Amazon Data Lifecycle Manager will retain each snapshot for one month, even though the retention period is seven days.
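The single-policy alternative described above can be sketched as a `policyDetails.json` with two schedules. The tag key, schedule names, and retention counts below are illustrative, not prescriptive:

```
{
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{
        "Key": "A",
        "Value": "backup"
    }],
    "Schedules": [{
        "Name": "every-12-hours",
        "CopyTags": false,
        "CreateRule": {
            "Interval": 12,
            "IntervalUnit": "HOURS",
            "Times": ["00:00"]
        },
        "RetainRule": {
            "Count": 2
        }
    },
    {
        "Name": "every-24-hours",
        "CopyTags": false,
        "CreateRule": {
            "Interval": 24,
            "IntervalUnit": "HOURS",
            "Times": ["00:00"]
        },
        "RetainRule": {
            "Count": 1
        }
    }]
}
```

Each schedule runs independently, so the volume is backed up on both cadences by the one policy.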

The following considerations apply to **[snapshot archiving](snapshot-archive.md)**:
+ You can enable snapshot archiving only for snapshot policies that target volumes.
+ You can specify an archiving rule for only one schedule for each policy.
+ If you are using the console, you can enable snapshot archiving only if the schedule has a monthly or yearly creation frequency, or if the schedule has a cron expression with a creation frequency of at least 28 days.

  If you are using the AWS CLI, AWS API, or AWS SDK, you can enable snapshot archiving only if the schedule has a cron expression with a creation frequency of at least 28 days.
+ The minimum retention period in the archive tier is 90 days.
+ When a snapshot is archived, it is converted to a full snapshot when it is moved to the archive tier. This could result in higher snapshot storage costs. For more information, see [Pricing and billing for archiving Amazon EBS snapshots](snapshot-archive-pricing.md).
+ Fast snapshot restore and snapshot sharing are disabled for snapshots when they are archived.
+ If, in the case of a leap year, your retention rule results in an archive retention period of less than 90 days, Amazon Data Lifecycle Manager ensures that snapshots are retained for the minimum 90-day period.
+ If you manually archive a snapshot created by Amazon Data Lifecycle Manager, and the snapshot is still archived when the schedule's retention threshold is reached, Amazon Data Lifecycle Manager no longer manages that snapshot. However, if you restore the snapshot to the standard tier before the schedule's retention threshold is reached, the schedule will continue to manage the snapshot as per the retention rules.
+ If you permanently or temporarily restore a snapshot archived by Amazon Data Lifecycle Manager to the standard tier, and the snapshot is still in the standard tier when the schedule's retention threshold is reached, Amazon Data Lifecycle Manager no longer manages the snapshot. However, if you re-archive the snapshot before the schedule's retention threshold is reached, the schedule will delete the snapshot when the retention threshold is met.
+ Snapshots archived by Amazon Data Lifecycle Manager count towards your `Archived snapshots per volume` and `In-progress snapshot archives per account` quotas.
+ If a schedule is unable to archive a snapshot after retrying for 24 hours, the snapshot remains in the standard tier and is scheduled for deletion based on the time at which it would have been deleted from the archive tier. For example, if the schedule archives snapshots for 120 days, the snapshot remains in the standard tier for 120 days after the failed archiving before being permanently deleted. For count-based schedules, the snapshot does not count towards the schedule's retention count.
+ Snapshots must be archived in the same Region in which they were created. If you enabled cross-Region copy and snapshot archiving, Amazon Data Lifecycle Manager does not archive the snapshot copy.
+ Snapshots archived by Amazon Data Lifecycle Manager are tagged with the `aws:dlm:archived=true` system tag. Additionally, snapshots created by an archive-enabled, age-based schedule are tagged with the `aws:dlm:expirationTime` system tag, which indicates the date and time at which the snapshot is scheduled to be archived.
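For example, you can use the `aws:dlm:archived` system tag to find the snapshots that Amazon Data Lifecycle Manager has archived in your account. The following is a sketch; adjust the Region and credentials for your environment:

```
aws ec2 describe-snapshots \
    --owner-ids self \
    --filters Name=tag:aws:dlm:archived,Values=true
```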

The following considerations apply to **excluding root volumes and data (non-root) volumes**:
+ If you choose to exclude boot volumes and you specify tags that consequently exclude all of the additional data volumes attached to an instance, then Amazon Data Lifecycle Manager will not create any snapshots for the affected instance, and it will emit a `SnapshotsCreateFailed` CloudWatch metric. For more information, see [Monitor policies using CloudWatch](monitor-dlm-cw-metrics.md).

The following considerations apply to **deleting volumes or terminating instances targeted by snapshot lifecycle policies**:
+ If you delete a volume or terminate an instance targeted by a policy with a count-based retention schedule, Amazon Data Lifecycle Manager no longer manages snapshots in the standard tier and archive tier that were created from the deleted volume or instance. You must manually delete those earlier snapshots if they are no longer needed.
+ If you delete a volume or terminate an instance targeted by a policy with an age-based retention schedule, the policy continues to delete snapshots from the standard tier and archive tier that were created from the deleted volume or instance on the defined schedule, up to, but not including, the last snapshot. You must manually delete the last snapshot if it is no longer needed.

The following considerations apply to snapshot lifecycle policies and **[fast snapshot restore](ebs-fast-snapshot-restore.md)**:
+ Amazon Data Lifecycle Manager can enable fast snapshot restore only for snapshots with a size of 16 TiB or less. For more information, see [Amazon EBS fast snapshot restore](ebs-fast-snapshot-restore.md).
+ A snapshot that is enabled for fast snapshot restore remains enabled even if you delete or disable the policy, disable fast snapshot restore for the policy, or disable fast snapshot restore for the Availability Zone. You must disable fast snapshot restore for these snapshots manually.
+ If you enable fast snapshot restore for a policy and you exceed the maximum number of snapshots that can be enabled for fast snapshot restore, Amazon Data Lifecycle Manager creates snapshots as scheduled but does not enable them for fast snapshot restore. After a snapshot that is enabled for fast snapshot restore is deleted, the next snapshot that Amazon Data Lifecycle Manager creates is enabled for fast snapshot restore.
+ When fast snapshot restore is enabled for a snapshot, it takes 60 minutes per TiB to optimize the snapshot. We recommend that you configure your schedules so that each snapshot is fully optimized before Amazon Data Lifecycle Manager creates the next snapshot.
+ If you enable fast snapshot restore for a policy that targets instances, Amazon Data Lifecycle Manager enables fast snapshot restore for each snapshot in the multi-volume snapshot set individually. If Amazon Data Lifecycle Manager fails to enable fast snapshot restore for one of the snapshots in the multi-volume snapshot set, it will still attempt to enable fast snapshot restore for the remaining snapshots in the snapshot set.
+ You are billed for each minute that fast snapshot restore is enabled for a snapshot in a particular Availability Zone. Charges are pro-rated with a minimum of one hour. For more information, see [Pricing and Billing](ebs-fast-snapshot-restore.md#fsr-pricing).
**Note**  
Depending on the configuration of your lifecycle policies, you could have multiple snapshots enabled for fast snapshot restore in multiple Availability Zones simultaneously.
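A schedule that enables fast snapshot restore can be sketched with a `FastRestoreRule` in the policy details. The Availability Zones, retention count, and timing below are illustrative:

```
"Schedules": [{
    "Name": "daily with fast snapshot restore",
    "CopyTags": false,
    "CreateRule": {
        "Interval": 24,
        "IntervalUnit": "HOURS",
        "Times": ["03:00"]
    },
    "RetainRule": {
        "Count": 5
    },
    "FastRestoreRule": {
        "AvailabilityZones": ["us-east-1a", "us-east-1b"],
        "Count": 2
    }
}]
```

In this sketch, the two most recent snapshots are enabled for fast snapshot restore in each of the two listed Availability Zones, which is a configuration where you could have multiple snapshots enabled simultaneously.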

The following considerations apply to snapshot lifecycle policies and **[Multi-Attach](ebs-volumes-multi.md) enabled volumes**:
+ When creating a lifecycle policy that targets instances that have the same Multi-Attach enabled volume, Amazon Data Lifecycle Manager initiates a snapshot of the volume for each attached instance. Use the *timestamp* tag to identify the set of time-consistent snapshots that are created from the attached instances.
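To make the timestamp available on each snapshot, a schedule can tag its snapshots with the `$(timestamp)` variable tag. The following fragment is a sketch, assuming the `VariableTags` parameter for policies that target instances:

```
"Schedules": [{
    "Name": "multi-attach backup",
    "CopyTags": false,
    "VariableTags": [{
        "Key": "timestamp",
        "Value": "$(timestamp)"
    }],
    "CreateRule": {
        "Interval": 12,
        "IntervalUnit": "HOURS",
        "Times": ["00:00"]
    },
    "RetainRule": {
        "Count": 7
    }
}]
```

Snapshots initiated by the same policy run carry the same timestamp value, which lets you group the time-consistent set created from the attached instances.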

The following considerations apply to **sharing snapshots across accounts**:
+ You can only share snapshots that are unencrypted or that are encrypted using a customer managed key.
+ You can't share snapshots that are encrypted with the default EBS encryption KMS key.
+ If you share encrypted snapshots, you must also share the KMS key that was used to encrypt the source volume with the target accounts. For more information, see [Allowing users in other accounts to use a KMS key](https://docs.aws.amazon.com//kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the *AWS Key Management Service Developer Guide*.
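Cross-account sharing is configured per schedule with a share rule. The following fragment is a sketch; the target account ID and unshare period are illustrative:

```
"Schedules": [{
    "Name": "shared backup",
    "CopyTags": false,
    "CreateRule": {
        "Interval": 24,
        "IntervalUnit": "HOURS",
        "Times": ["01:00"]
    },
    "RetainRule": {
        "Count": 7
    },
    "ShareRules": [{
        "TargetAccounts": ["111122223333"],
        "UnshareInterval": 30,
        "UnshareIntervalUnit": "DAYS"
    }]
}]
```

If the target volumes are encrypted, remember that the schedule can share the snapshots only if they use a customer managed key, and that key must also be shared with account `111122223333`.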

The following considerations apply to snapshot policies and **[snapshot archiving](snapshot-archive.md)**:
+ If you manually archive a snapshot that was created by a policy, and that snapshot is in the archive tier when the policy’s retention threshold is reached, Amazon Data Lifecycle Manager will not delete the snapshot. Amazon Data Lifecycle Manager does not manage snapshots while they are stored in the archive tier. If you no longer need snapshots that are stored in the archive tier, you must manually delete them.

The following considerations apply to snapshot policies and [Recycle Bin](recycle-bin.md):
+ If Amazon Data Lifecycle Manager deletes a snapshot and sends it to the Recycle Bin when the policy's retention threshold is reached, and you manually restore the snapshot from the Recycle Bin, you must manually delete that snapshot when it is no longer needed. Amazon Data Lifecycle Manager will no longer manage the snapshot.
+ If you manually delete a snapshot that was created by a policy, and that snapshot is in the Recycle Bin when the policy’s retention threshold is reached, Amazon Data Lifecycle Manager will not delete the snapshot. Amazon Data Lifecycle Manager does not manage the snapshots while they are stored in the Recycle Bin.

  If the snapshot is restored from the Recycle Bin before the policy's retention threshold is reached, Amazon Data Lifecycle Manager will delete the snapshot when the policy's retention threshold is reached.

  If the snapshot is restored from the Recycle Bin after the policy's retention threshold is reached, Amazon Data Lifecycle Manager will no longer delete the snapshot. You must manually delete the snapshot when it is no longer needed.

The following considerations apply to snapshot lifecycle policies that are in the **error** state:
+ For policies with age-based retention schedules, snapshots that are set to expire while the policy is in the `error` state are retained indefinitely. You must delete the snapshots manually. When you re-enable the policy, Amazon Data Lifecycle Manager resumes deleting snapshots as their retention periods expire.
+ For policies with count-based retention schedules, the policy stops creating and deleting snapshots while it is in the `error` state. When you re-enable the policy, Amazon Data Lifecycle Manager resumes creating snapshots, and it resumes deleting snapshots as the retention threshold is met.

The following considerations apply to snapshot policies and **[snapshot lock](ebs-snapshot-lock.md)**:
+ If you manually lock a snapshot created by Amazon Data Lifecycle Manager, and that snapshot is still locked when its retention threshold is reached, Amazon Data Lifecycle Manager no longer manages that snapshot. You must manually delete the snapshot if it is no longer needed.
+ If you manually lock a snapshot that was created and enabled for fast snapshot restore by Amazon Data Lifecycle Manager, and the snapshot is still locked when its retention threshold is reached, Amazon Data Lifecycle Manager will not disable fast snapshot restore or delete the snapshot. You must manually disable fast snapshot restore and delete the snapshot if it is no longer needed.
+ If you manually register a snapshot that was created by Amazon Data Lifecycle Manager with an AMI and then lock that snapshot, and that snapshot is still locked and associated with the AMI when its retention threshold is reached, Amazon Data Lifecycle Manager will continue to attempt to delete that snapshot. When the AMI is deregistered and the snapshot is unlocked, Amazon Data Lifecycle Manager will automatically delete the snapshot.

## Additional resources
<a name="snapshot-additional-resources"></a>

For more information, see [Automating Amazon EBS snapshot and AMI management using Amazon Data Lifecycle Manager](https://aws.amazon.com/blogs/storage/automating-amazon-ebs-snapshot-and-ami-management-using-amazon-dlm/) in the AWS Storage Blog.

# Automate application-consistent snapshots with Data Lifecycle Manager
<a name="automate-app-consistent-backups"></a>

You can automate application-consistent snapshots with Amazon Data Lifecycle Manager by enabling pre and post scripts in your snapshot lifecycle policies that target instances.

Amazon Data Lifecycle Manager integrates with AWS Systems Manager (Systems Manager) to support application-consistent snapshots. Amazon Data Lifecycle Manager uses Systems Manager (SSM) command documents that include pre and post scripts to automate the actions needed to complete application-consistent snapshots. Before Amazon Data Lifecycle Manager initiates snapshot creation, it runs the commands in the pre script to freeze and flush I/O. After Amazon Data Lifecycle Manager initiates snapshot creation, it runs the commands in the post script to thaw I/O.
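
As an illustrative sketch, pre and post scripts are configured through the schedule's creation rule. The fragment below assumes the `Scripts` parameter and the AWS managed VSS execution handler; the timing and retry values are examples only:

```
"CreateRule": {
    "Interval": 24,
    "IntervalUnit": "HOURS",
    "Times": ["22:00"],
    "Scripts": [{
        "ExecutionHandler": "AWS_VSS_BACKUP",
        "ExecuteOperationOnScriptFailure": false,
        "MaximumRetryCount": 2
    }]
}
```

For a custom SSM document, the execution handler would instead reference your document, as described later in this section.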

Using Amazon Data Lifecycle Manager, you can automate application-consistent snapshots of the following:
+ Windows applications using Volume Shadow Copy Service (VSS)
+ SAP HANA using an AWS managed SSM document. For more information, see [Amazon EBS snapshots for SAP HANA](https://docs.aws.amazon.com/sap/latest/sap-hana/ebs-sap-hana.html).
+ Self-managed databases, such as MySQL, PostgreSQL or InterSystems IRIS, using SSM document templates

**Topics**
+ [

## Requirements for using pre and post scripts
](#app-consistent-prereqs)
+ [

## Getting started with application-consistent snapshots
](#app-consistent-get-started)
+ [

## Considerations for VSS Backups with Amazon Data Lifecycle Manager
](#app-consistent-vss)
+ [

## Shared responsibility for application-consistent snapshots
](#shared-responsibility)

## Requirements for using pre and post scripts
<a name="app-consistent-prereqs"></a>

The following table outlines the requirements for using pre and post scripts with Amazon Data Lifecycle Manager.


| Requirement | VSS Backup | Custom SSM document | Other use cases | 
| --- | --- | --- | --- | 
| SSM Agent installed and running on target instances | ✓ | ✓ | ✓ | 
| VSS system requirements met on target instances | ✓ |  |  | 
| VSS enabled instance profile associated with target instances | ✓ |  |  | 
| VSS components installed on target instances | ✓ |  |  | 
| Prepare SSM document with pre and post script commands |  | ✓ | ✓ | 
| Prepare Amazon Data Lifecycle Manager IAM role to run pre and post scripts | ✓ | ✓ | ✓ | 
| Create snapshot policy that targets instances and is configured for pre and post scripts | ✓ | ✓ | ✓ | 

## Getting started with application-consistent snapshots
<a name="app-consistent-get-started"></a>

This section explains the steps you need to follow to automate application-consistent snapshots using Amazon Data Lifecycle Manager.

### Step 1: Prepare target instances
<a name="prep-instances"></a>

You need to prepare the targeted instances for application-consistent snapshots using Amazon Data Lifecycle Manager. Do one of the following, depending on your use case.

------
#### [ Prepare for VSS Backups ]

**To prepare your target instances for VSS backups**

1. Install the SSM Agent on your target instances, if it is not already installed.

   For more information, see [ Working with SSM Agent on EC2 instances for Windows server](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-windows.html).

1. Ensure that the SSM Agent is running. For more information, see [ Checking SSM Agent status and starting the agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html).

1. Set up Systems Manager for Amazon EC2 instances. For more information, see [Setting up Systems Manager for Amazon EC2 instances](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up-ec2.html) in the *AWS Systems Manager User Guide*.

1. [ Ensure the system requirements for VSS backups are met](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/application-consistent-snapshots-prereqs.html).

1. [ Attach a VSS-enabled instance profile to the target instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vss-iam-reqs.html).

1. [ Install the VSS components](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/application-consistent-snapshots-getting-started.html).

------
#### [ Prepare for SAP HANA backups ]

**To prepare your target instances for SAP HANA backups**

1. Prepare the SAP HANA environment on your target instances. 

   1. Set up your instance with SAP HANA. If you don't already have an existing SAP HANA environment, then you can refer to the [ SAP HANA Environment Setup on AWS](https://docs.aws.amazon.com/sap/latest/sap-hana/std-sap-hana-environment-setup.html).

   1. Log in to the SystemDB as a suitable administrator user.

   1. Create a database backup user to be used with Amazon Data Lifecycle Manager.

      ```
      CREATE USER username PASSWORD password NO FORCE_FIRST_PASSWORD_CHANGE;
      ```

      For example, the following command creates a user named `dlm_user` with password `password`.

      ```
      CREATE USER dlm_user PASSWORD password NO FORCE_FIRST_PASSWORD_CHANGE;
      ```

   1. Assign the `BACKUP OPERATOR` role to the database backup user that you created in the previous step.

      ```
      GRANT BACKUP OPERATOR TO username
      ```

      For example, the following command assigns the role to a user named `dlm_user`.

      ```
      GRANT BACKUP OPERATOR TO dlm_user
      ```

   1. Log in to the operating system as the administrator, for example `sidadm`.

   1. Create an `hdbuserstore` entry to store connection information so that the SAP HANA SSM document can connect to SAP HANA without users having to enter the information.

      ```
      hdbuserstore set DLM_HANADB_SNAPSHOT_USER localhost:3hana_instance_number13 username password
      ```

      For example:

      ```
      hdbuserstore set DLM_HANADB_SNAPSHOT_USER localhost:30013 dlm_user password
      ```

   1. Test the connection.

      ```
      hdbsql -U DLM_HANADB_SNAPSHOT_USER "select * from dummy"
      ```

1. Install the SSM Agent on your target instances, if it is not already installed.

   For more information, see [ Manually installing SSM Agent on EC2 instances for Linux](https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-linux.html).

1. Ensure that the SSM Agent is running. For more information, see [ Checking SSM Agent status and starting the agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html).

1. Set up Systems Manager for Amazon EC2 instances. For more information, see [Setting up Systems Manager for Amazon EC2 instances](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up-ec2.html) in the *AWS Systems Manager User Guide*.

------
#### [ Prepare for custom SSM documents ]

**To prepare your target instances for custom SSM documents**

1. Install the SSM Agent on your target instances, if it is not already installed.
   + (Linux instances) [ Manually installing SSM Agent on EC2 instances for Linux](https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-linux.html)
   + (Windows instances) [ Working with SSM Agent on EC2 instances for Windows Server](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-windows.html)

1. Ensure that the SSM Agent is running. For more information, see [ Checking SSM Agent status and starting the agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html).

1. Set up Systems Manager for Amazon EC2 instances. For more information, see [Setting up Systems Manager for Amazon EC2 instances](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up-ec2.html) in the *AWS Systems Manager User Guide*.

------

### Step 2: Prepare SSM document
<a name="prep-ssm-doc"></a>

**Note**  
This step is required only for custom SSM documents. It is not required for VSS Backup or SAP HANA. For VSS Backups and SAP HANA, Amazon Data Lifecycle Manager uses the AWS managed SSM document.

If you are automating application-consistent snapshots for a self-managed database, such as MySQL, PostgreSQL, or InterSystems IRIS, you must create an SSM command document that includes a pre script to freeze and flush I/O before snapshot creation is initiated, and a post script to thaw I/O after snapshot creation is initiated.

If your MySQL, PostgreSQL, or InterSystems IRIS database uses standard configurations, you can create an SSM command document using the sample SSM document content below. If your MySQL, PostgreSQL, or InterSystems IRIS database uses a non-standard configuration, you can use the sample content below as a starting point for your SSM command document and then customize it to meet your requirements. Alternatively, if you want to create a new SSM document from scratch, you can use the empty SSM document template below and add your pre and post commands in the appropriate document sections.

**Note the following:**  
It is your responsibility to ensure that the SSM document performs the correct and required actions for your database configuration.
Snapshots are guaranteed to be application-consistent only if the pre and post scripts in your SSM document can successfully freeze, flush, and thaw I/O.
The SSM document must include the required `allowedValues` for the `command` parameter: `pre-script`, `post-script`, and `dry-run`. Amazon Data Lifecycle Manager runs commands on your instance based on those values. If your SSM document does not include them, Amazon Data Lifecycle Manager treats the execution as failed.

------
#### [ MySQL sample document content ]

```
###===============================================================================###
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
###===============================================================================###
schemaVersion: '2.2'
description: Amazon Data Lifecycle Manager Pre/Post script for MySQL databases
parameters:
  executionId:
    type: String
    default: None
    description: (Required) Specifies the unique identifier associated with a pre and/or post execution
    allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$
  command:
  # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. 
  # 'dry-run' option is intended for validating the document execution without triggering any commands
  # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully 
  # trigger pre and post script actions.
    type: String
    default: 'dry-run'
    description: (Required) Specifies whether pre-script and/or post-script should be executed.
    allowedValues:
    - pre-script
    - post-script
    - dry-run

mainSteps:
- action: aws:runShellScript
  description: Run MySQL Database freeze/thaw commands
  name: run_pre_post_scripts
  precondition:
    StringEquals:
    - platformType
    - Linux
  inputs:
    runCommand:
    - |
      #!/bin/bash

      ###===============================================================================###
      ### Error Codes
      ###===============================================================================###
      # The following Error codes will inform Data Lifecycle Manager of the type of error 
      # and help guide handling of the error. 
      # The Error code will also be emitted via AWS Eventbridge events in the 'cause' field.
      # 1 Pre-script failed during execution - 201
      # 2 Post-script failed during execution - 202
      # 3 Auto thaw occurred before post-script was initiated - 203
      # 4 Pre-script initiated while post-script was expected - 204
      # 5 Post-script initiated while pre-script was expected - 205
      # 6 Application not ready for pre or post-script initiation - 206

      ###=================================================================###
      ### Global variables
      ###=================================================================###
      START=$(date +%s)
      # For testing this script locally, replace the below with OPERATION=$1.
      OPERATION={{ command }}
      FS_ALREADY_FROZEN_ERROR='freeze failed: Device or resource busy'
      FS_ALREADY_THAWED_ERROR='unfreeze failed: Invalid argument'
      FS_BUSY_ERROR='mount point is busy'

      # Auto thaw is a fail safe mechanism to automatically unfreeze the application after the 
      # duration specified in the global variable below. Choose the duration based on your
      # database application's tolerance to freeze.
      export AUTO_THAW_DURATION_SECS="60"

      # Add all pre-script actions to be performed within the function below
      execute_pre_script() {
          echo "INFO: Start execution of pre-script"
          # Check if filesystem is already frozen. No error code indicates that filesystem 
          # is not currently frozen and that the pre-script can proceed with freezing the filesystem.
          check_fs_freeze
          # Execute the DB commands to flush the DB in preparation for snapshot
          snap_db
          # Freeze the filesystem. No error code indicates that the filesystem was successfully frozen.
          freeze_fs

          echo "INFO: Schedule Auto Thaw to execute in ${AUTO_THAW_DURATION_SECS} seconds."
          $(nohup bash -c execute_schedule_auto_thaw  >/dev/null 2>&1 &)
      }

      # Add all post-script actions to be performed within the function below
      execute_post_script() {
          echo "INFO: Start execution of post-script"
          # Unfreeze the filesystem. No error code indicates that filesystem was successfully unfrozen.
          unfreeze_fs
          thaw_db
      }

      # Execute Auto Thaw to automatically unfreeze the application after the duration configured 
      # in the AUTO_THAW_DURATION_SECS global variable.
      execute_schedule_auto_thaw() {
          sleep ${AUTO_THAW_DURATION_SECS}
          execute_post_script
      }

      # Disable Auto Thaw if it is still enabled
      execute_disable_auto_thaw() {
          echo "INFO: Attempting to disable auto thaw if enabled"
          auto_thaw_pgid=$(pgrep -f execute_schedule_auto_thaw | xargs -i ps -hp {} -o pgid)
          if [ -n "${auto_thaw_pgid}" ]; then
              echo "INFO: execute_schedule_auto_thaw process found with pgid ${auto_thaw_pgid}"
              sudo pkill -g ${auto_thaw_pgid}
              rc=$?
              if [ ${rc} != 0 ]; then
                  echo "ERROR: Unable to kill execute_schedule_auto_thaw process. retval=${rc}"
              else
                  echo "INFO: Auto Thaw  has been disabled"
              fi
          fi
      }

      # Iterate over all the mountpoints and check if filesystem is already in freeze state.
      # Return error code 204 if any of the mount points are already frozen.
      check_fs_freeze() {
          for target in $(lsblk -nlo MOUNTPOINTS)
          do
              # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems.
              # Hence, we will skip the root and boot mountpoints while checking if filesystem is in freeze state.
              if [ $target == '/' ]; then continue; fi
              if [[ "$target" == *"/boot"* ]]; then continue; fi

              error_message=$(sudo mount -o remount,noatime $target 2>&1)
              # Remount will be a no-op without an error message if the filesystem is unfrozen.
              # However, if filesystem is already frozen, remount will fail with busy error message.
              if [ $? -ne 0 ];then
                  # If the filesystem is already frozen, return error code 204
                  if [[ "$error_message" == *"$FS_BUSY_ERROR"* ]];then
                      echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204"
                      exit 204
                  fi
                  # If the check filesystem freeze failed due to any reason other than the filesystem already frozen, return 201
                  echo "ERROR: Failed to check_fs_freeze on mountpoint $target due to error - $errormessage"
                  exit 201
              fi
          done
      } 

      # Iterate over all the mountpoints and freeze the filesystem.
      freeze_fs() {
          for target in $(lsblk -nlo MOUNTPOINTS)
          do
              # Freeze of the root and boot filesystems is dangerous. Hence, skip filesystem freeze 
              # operations for root and boot mountpoints.
              if [ $target == '/' ]; then continue; fi
              if [[ "$target" == *"/boot"* ]]; then continue; fi
              echo "INFO: Freezing $target"
              error_message=$(sudo fsfreeze -f $target 2>&1)
              if [ $? -ne 0 ];then
                  # If the filesystem is already frozen, return error code 204
                  if [[ "$error_message" == *"$FS_ALREADY_FROZEN_ERROR"* ]]; then
                      echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204"
                      sudo mysql -e 'UNLOCK TABLES;'
                      exit 204
                  fi
                  # If the filesystem freeze failed due to any reason other than the filesystem already frozen, return 201
                  echo "ERROR: Failed to freeze mountpoint $targetdue due to error - $errormessage"
                  thaw_db
                  exit 201
              fi
              echo "INFO: Freezing complete on $target"
          done
      }

      # Iterate over all the mountpoints and unfreeze the filesystem.
      unfreeze_fs() {
          for target in $(lsblk -nlo MOUNTPOINTS)
          do
              # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems.
              # Hence, will skip the root and boot mountpoints during unfreeze as well.
              if [ $target == '/' ]; then continue; fi
              if [[ "$target" == *"/boot"* ]]; then continue; fi
              echo "INFO: Thawing $target"
              error_message=$(sudo fsfreeze -u $target 2>&1)
              # Check if filesystem is already unfrozen (thawed). Return error code 205 if filesystem is already unfrozen.
              if [ $? -ne 0 ]; then
                  if [[ "$error_message" == *"$FS_ALREADY_THAWED_ERROR"* ]]; then
                      echo "ERROR: Filesystem ${target} is already in thaw state. Return Error Code: 205"
                      exit 205
                  fi
                  # If the filesystem unfreeze failed due to any reason other than the filesystem already unfrozen, return 202
                  echo "ERROR: Failed to unfreeze mountpoint $targetdue due to error - $errormessage"
                  exit 202
              fi
              echo "INFO: Thaw complete on $target"
          done    
      }

      snap_db() {
          # Run the flush command only when MySQL DB service is up and running
          sudo systemctl is-active --quiet mysqld.service
          if [ $? -eq 0 ]; then
              echo "INFO: Execute MySQL Flush and Lock command."
              sudo mysql -e 'FLUSH TABLES WITH READ LOCK;'
              # If the MySQL Flush and Lock command did not succeed, return error code 201 to indicate pre-script failure
              if [ $? -ne 0 ]; then
                  echo "ERROR: MySQL FLUSH TABLES WITH READ LOCK command failed."
                  exit 201
              fi
              sync
          else 
              echo "INFO: MySQL service is inactive. Skipping execution of MySQL Flush and Lock command."
          fi
      }

      thaw_db() {
          # Run the unlock command only when MySQL DB service is up and running
          sudo systemctl is-active --quiet mysqld.service
          if [ $? -eq 0 ]; then
              echo "INFO: Execute MySQL Unlock"
              sudo mysql -e 'UNLOCK TABLES;'
          else 
              echo "INFO: MySQL service is inactive. Skipping execution of MySQL Unlock command."
          fi
      }

      export -f execute_schedule_auto_thaw
      export -f execute_post_script
      export -f unfreeze_fs
      export -f thaw_db

      # Debug logging for parameters passed to the SSM document
      echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}"

      # Based on the command parameter value execute the function that supports 
      # pre-script/post-script operation
      case ${OPERATION} in
          pre-script)
              execute_pre_script
              ;;
          post-script)
              execute_post_script
              execute_disable_auto_thaw
              ;;
          dry-run)
              echo "INFO: dry-run option invoked - taking no action"
              ;;
          *)
              echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run."
              exit 1 # return failure
              ;;
      esac

      END=$(date +%s)
      # Debug Log for profiling the script time
      echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."
```

------
#### [ PostgreSQL sample document content ]

```
###===============================================================================###
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
###===============================================================================###
schemaVersion: '2.2'
description: Amazon Data Lifecycle Manager Pre/Post script for PostgreSQL databases
parameters:
  executionId:
    type: String
    default: None
    description: (Required) Specifies the unique identifier associated with a pre and/or post execution
    allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$
  command:
  # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. 
  # 'dry-run' option is intended for validating the document execution without triggering any commands
  # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully 
  # trigger pre and post script actions.
    type: String
    default: 'dry-run'
    description: (Required) Specifies whether pre-script and/or post-script should be executed.
    allowedValues:
    - pre-script
    - post-script
    - dry-run

mainSteps:
- action: aws:runShellScript
  description: Run PostgreSQL Database freeze/thaw commands
  name: run_pre_post_scripts
  precondition:
    StringEquals:
    - platformType
    - Linux
  inputs:
    runCommand:
    - |
      #!/bin/bash

      ###===============================================================================###
      ### Error Codes
      ###===============================================================================###
      # The following Error codes will inform Data Lifecycle Manager of the type of error 
      # and help guide handling of the error. 
      # The Error code will also be emitted via AWS Eventbridge events in the 'cause' field.
      # 1 Pre-script failed during execution - 201
      # 2 Post-script failed during execution - 202
      # 3 Auto thaw occurred before post-script was initiated - 203
      # 4 Pre-script initiated while post-script was expected - 204
      # 5 Post-script initiated while pre-script was expected - 205
      # 6 Application not ready for pre or post-script initiation - 206

      ###===============================================================================###
      ### Global variables
      ###===============================================================================###
      START=$(date +%s)
      OPERATION={{ command }}
      EXECUTION_ID={{ executionId }}
      FS_ALREADY_FROZEN_ERROR='freeze failed: Device or resource busy'
      FS_ALREADY_THAWED_ERROR='unfreeze failed: Invalid argument'
      FS_BUSY_ERROR='mount point is busy'

      # Auto thaw is a fail-safe mechanism to automatically unfreeze the application after the
      # duration specified in the global variable below. Choose the duration based on your
      # database application's tolerance to freeze.
      export AUTO_THAW_DURATION_SECS="60"

      # Add all pre-script actions to be performed within the function below
      execute_pre_script() {
          echo "INFO: Start execution of pre-script"
          # Check if filesystem is already frozen. No error code indicates that filesystem 
          # is not currently frozen and that the pre-script can proceed with freezing the filesystem.
          check_fs_freeze
          # Execute the DB commands to flush the DB in preparation for snapshot
          snap_db
          # Freeze the filesystem. No error code indicates that the filesystem was successfully frozen.
          freeze_fs

          echo "INFO: Schedule Auto Thaw to execute in ${AUTO_THAW_DURATION_SECS} seconds."
          $(nohup bash -c execute_schedule_auto_thaw  >/dev/null 2>&1 &)
      }

      # Add all post-script actions to be performed within the function below
      execute_post_script() {
          echo "INFO: Start execution of post-script"
          # Unfreeze the filesystem. No error code indicates that filesystem was successfully unfrozen
          unfreeze_fs
      }

      # Execute Auto Thaw to automatically unfreeze the application after the duration configured 
      # in the AUTO_THAW_DURATION_SECS global variable.
      execute_schedule_auto_thaw() {
          sleep ${AUTO_THAW_DURATION_SECS}
          execute_post_script
      }

      # Disable Auto Thaw if it is still enabled
      execute_disable_auto_thaw() {
          echo "INFO: Attempting to disable auto thaw if enabled"
          auto_thaw_pgid=$(pgrep -f execute_schedule_auto_thaw | xargs -i ps -hp {} -o pgid)
          if [ -n "${auto_thaw_pgid}" ]; then
              echo "INFO: execute_schedule_auto_thaw process found with pgid ${auto_thaw_pgid}"
              sudo pkill -g ${auto_thaw_pgid}
              rc=$?
              if [ ${rc} != 0 ]; then
                  echo "ERROR: Unable to kill execute_schedule_auto_thaw process. retval=${rc}"
              else
                  echo "INFO: Auto Thaw  has been disabled"
              fi
          fi
      }

      # Iterate over all the mountpoints and check if filesystem is already in freeze state.
      # Return error code 204 if any of the mount points are already frozen.
      check_fs_freeze() {
          for target in $(lsblk -nlo MOUNTPOINTS)
          do
              # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems.
              # Hence, we will skip the root and boot mountpoints while checking if filesystem is in freeze state.
              if [ $target == '/' ]; then continue; fi
              if [[ "$target" == *"/boot"* ]]; then continue; fi

              error_message=$(sudo mount -o remount,noatime $target 2>&1)
              # Remount will be a no-op without an error message if the filesystem is unfrozen.
              # However, if filesystem is already frozen, remount will fail with busy error message.
              if [ $? -ne 0 ];then
                  # If the filesystem is already frozen, return error code 204
                  if [[ "$error_message" == *"$FS_BUSY_ERROR"* ]];then
                      echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204"
                      exit 204
                  fi
                  # If the check filesystem freeze failed due to any reason other than the filesystem already frozen, return 201
                  echo "ERROR: Failed to check_fs_freeze on mountpoint $target due to error - $errormessage"
                  exit 201
              fi
          done
      } 

      # Iterate over all the mountpoints and freeze the filesystem.
      freeze_fs() {
          for target in $(lsblk -nlo MOUNTPOINTS)
          do
              # Freeze of the root and boot filesystems is dangerous. Hence, skip filesystem freeze 
              # operations for root and boot mountpoints.
              if [ $target == '/' ]; then continue; fi
              if [[ "$target" == *"/boot"* ]]; then continue; fi
              echo "INFO: Freezing $target"
              error_message=$(sudo fsfreeze -f $target 2>&1)
              if [ $? -ne 0 ];then
                  # If the filesystem is already frozen, return error code 204
                  if [[ "$error_message" == *"$FS_ALREADY_FROZEN_ERROR"* ]]; then
                      echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204"
                      exit 204
                  fi
                  # If the filesystem freeze failed due to any reason other than the filesystem already frozen, return 201
                  echo "ERROR: Failed to freeze mountpoint $targetdue due to error - $errormessage"
                  exit 201
              fi
              echo "INFO: Freezing complete on $target"
          done
      }

      # Iterate over all the mountpoints and unfreeze the filesystem.
      unfreeze_fs() {
          for target in $(lsblk -nlo MOUNTPOINTS)
          do
              # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems.
              # Hence, will skip the root and boot mountpoints during unfreeze as well.
              if [ $target == '/' ]; then continue; fi
              if [[ "$target" == *"/boot"* ]]; then continue; fi
              echo "INFO: Thawing $target"
              error_message=$(sudo fsfreeze -u $target 2>&1)
              # Check if filesystem is already unfrozen (thawed). Return error code 205 if filesystem is already unfrozen.
              if [ $? -ne 0 ]; then
                  if [[ "$error_message" == *"$FS_ALREADY_THAWED_ERROR"* ]]; then
                      echo "ERROR: Filesystem ${target} is already in thaw state. Return Error Code: 205"
                      exit 205
                  fi
                  # If the filesystem unfreeze failed due to any reason other than the filesystem already unfrozen, return 202
                  echo "ERROR: Failed to unfreeze mountpoint $targetdue due to error - $errormessage"
                  exit 202
              fi
              echo "INFO: Thaw complete on $target"
          done
      }

      snap_db() {
          # Run the flush command only when PostgreSQL DB service is up and running
          sudo systemctl is-active --quiet postgresql
          if [ $? -eq 0 ]; then
              echo "INFO: Execute Postgres CHECKPOINT"
              # PostgreSQL command to flush the transactions in memory to disk
              sudo -u postgres psql -c 'CHECKPOINT;'
              # If the PostgreSQL Command did not succeed, return error code 201 to indicate pre-script failure
              if [ $? -ne 0 ]; then
                  echo "ERROR: Postgres CHECKPOINT command failed."
                  exit 201
              fi
              sync
          else 
              echo "INFO: PostgreSQL service is inactive. Skipping execution of CHECKPOINT command."
          fi
      }

      export -f execute_schedule_auto_thaw
      export -f execute_post_script
      export -f unfreeze_fs

      # Debug logging for parameters passed to the SSM document
      echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}"

      # Based on the command parameter value execute the function that supports 
      # pre-script/post-script operation
      case ${OPERATION} in
          pre-script)
              execute_pre_script
              ;;
          post-script)
              execute_post_script
              execute_disable_auto_thaw
              ;;
          dry-run)
              echo "INFO: dry-run option invoked - taking no action"
              ;;
          *)
              echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run."
              exit 1 # return failure
              ;;
      esac

      END=$(date +%s)
      # Debug Log for profiling the script time
      echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."
```

------
#### [ InterSystems IRIS sample document content ]

```
###===============================================================================###
# MIT License
# 
# Copyright (c) 2024 InterSystems
# 
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# 
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# 
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
###===============================================================================###
schemaVersion: '2.2'
description: SSM Document Template for Amazon Data Lifecycle Manager Pre/Post script feature for InterSystems IRIS.
parameters:
  executionId:
    type: String
    default: None
    description: Specifies the unique identifier associated with a pre and/or post execution
    allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$
  command:
    type: String
    # Data Lifecycle Manager will trigger the pre-script and post-script actions. You can also use this SSM document with 'dry-run' for manual testing purposes.
    default: 'dry-run'
    description: (Required) Specifies whether pre-script and/or post-script should be executed.
    #The following allowedValues will allow Data Lifecycle Manager to successfully trigger pre and post script actions.
    allowedValues:
    - pre-script
    - post-script
    - dry-run

mainSteps:
- action: aws:runShellScript
  description: Run InterSystems IRIS Database freeze/thaw commands
  name: run_pre_post_scripts
  precondition:
    StringEquals:
    - platformType
    - Linux
  inputs:
    runCommand:
    - |
      #!/bin/bash
      ###===============================================================================###
      ### Global variables
      ###===============================================================================###
      DOCKER_NAME=iris
      LOGDIR=./
      EXIT_CODE=0
      OPERATION={{ command }}
      EXECUTION_ID={{ executionId }}
      START=$(date +%s)
      
      # Check if Docker is installed
      # By default if Docker is present, script assumes that InterSystems IRIS is running in Docker
      # Leave only the else block DOCKER_EXEC line, if you run InterSystems IRIS non-containerised (and Docker is present).
      # Script assumes irissys user has OS auth enabled, change the OS user or supply login/password depending on your configuration.
      if command -v docker &> /dev/null
      then
        DOCKER_EXEC="docker exec $DOCKER_NAME"
      else
        DOCKER_EXEC="sudo -i -u irissys"
      fi
      
                    
      # Add all pre-script actions to be performed within the function below
      execute_pre_script() {
        echo "INFO: Start execution of pre-script"
        
        # find all iris running instances
        iris_instances=$($DOCKER_EXEC iris qall 2>/dev/null | tail -n +3 | grep '^up' | cut -c5-  | awk '{print $1}')
        echo "`date`: Running iris instances $iris_instances"
      
        # Only for running instances
        for INST in $iris_instances; do
      
          echo "`date`: Attempting to freeze $INST"
      
          # Detailed instances specific log
          LOGFILE=$LOGDIR/$INST-pre_post.log
          
          # Check freeze status before starting
          $DOCKER_EXEC irissession $INST -U '%SYS' "##Class(Backup.General).IsWDSuspendedExt()"
          freeze_status=$?
          if [ $freeze_status -eq 5 ]; then
            echo "`date`:   ERROR: $INST IS already FROZEN"
            EXIT_CODE=204
          else
            echo "`date`:   $INST is not frozen"
            # Freeze
            # Docs: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=Backup.General#ExternalFreeze
            $DOCKER_EXEC irissession $INST -U '%SYS' "##Class(Backup.General).ExternalFreeze(\"$LOGFILE\",,,,,,600,,,300)"
            status=$?
      
            case $status in
              5) echo "`date`:   $INST IS FROZEN"
                ;;
              3) echo "`date`:   $INST FREEZE FAILED"
                EXIT_CODE=201
                ;;
              *) echo "`date`:   ERROR: Unknown status code: $status"
                EXIT_CODE=201
                ;;
            esac
            echo "`date`:   Completed freeze of $INST"
          fi
        done
        echo "`date`: Pre freeze script finished"
      }
                    
      # Add all post-script actions to be performed within the function below
      execute_post_script() {
        echo "INFO: Start execution of post-script"
      
        # find all iris running instances
        iris_instances=$($DOCKER_EXEC iris qall 2>/dev/null | tail -n +3 | grep '^up' | cut -c5-  | awk '{print $1}')
        echo "`date`: Running iris instances $iris_instances"
      
        # Only for running instances
        for INST in $iris_instances; do
      
          echo "`date`: Attempting to thaw $INST"
      
          # Detailed instances specific log
          LOGFILE=$LOGDIR/$INST-pre_post.log
      
          # Check freeze status before starting
          $DOCKER_EXEC irissession $INST -U '%SYS' "##Class(Backup.General).IsWDSuspendedExt()"
          freeze_status=$?
          if [ $freeze_status -eq 5 ]; then
            echo "`date`:  $INST is in frozen state"
            # Thaw
            # Docs: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=Backup.General#ExternalFreeze
            $DOCKER_EXEC irissession $INST -U%SYS "##Class(Backup.General).ExternalThaw(\"$LOGFILE\")"
            status=$?
      
            case $status in
              5) echo "`date`:   $INST IS THAWED"
                  $DOCKER_EXEC irissession $INST -U%SYS "##Class(Backup.General).ExternalSetHistory(\"$LOGFILE\")"
                ;;
              3) echo "`date`:   $INST THAW FAILED"
                  EXIT_CODE=202
                ;;
              *) echo "`date`:   ERROR: Unknown status code: $status"
                  EXIT_CODE=202
                ;;
            esac
            echo "`date`:   Completed thaw of $INST"
          else
            echo "`date`:   ERROR: $INST IS already THAWED"
            EXIT_CODE=205
          fi
        done
        echo "`date`: Post thaw script finished"
      }
      
      # Debug logging for parameters passed to the SSM document
      echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}"
                    
      # Based on the command parameter value execute the function that supports 
      # pre-script/post-script operation
      case ${OPERATION} in
        pre-script)
          execute_pre_script
          ;;
        post-script)
          execute_post_script
          ;;
        dry-run)
          echo "INFO: dry-run option invoked - taking no action"
          ;;
        *)
          echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run."
          # return failure
          EXIT_CODE=1
          ;;
      esac
                    
      END=$(date +%s)
      # Debug Log for profiling the script time
      echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."
      exit $EXIT_CODE
```

For more information, see the [InterSystems GitHub repository](https://github.com/intersystems-community/aws/blob/master/README.md).

------
#### [ Empty document template ]

```
###===============================================================================###
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
###===============================================================================###
schemaVersion: '2.2'
description: SSM Document Template for Amazon Data Lifecycle Manager Pre/Post script feature
parameters:
  executionId:
    type: String
    default: None
    description: (Required) Specifies the unique identifier associated with a pre and/or post execution
    allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$
  command:
  # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. 
  # 'dry-run' option is intended for validating the document execution without triggering any commands
  # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully 
  # trigger pre and post script actions.
    type: String
    default: 'dry-run'
    description: (Required) Specifies whether pre-script and/or post-script should be executed.
    allowedValues:
    - pre-script
    - post-script
    - dry-run

mainSteps:
- action: aws:runShellScript
  description: Run Database freeze/thaw commands
  name: run_pre_post_scripts
  precondition:
    StringEquals:
    - platformType
    - Linux
  inputs:
    runCommand:
    - |
      #!/bin/bash

      ###===============================================================================###
      ### Error Codes
      ###===============================================================================###
      # The following Error codes will inform Data Lifecycle Manager of the type of error 
      # and help guide handling of the error. 
      # The Error code will also be emitted via AWS Eventbridge events in the 'cause' field.
      # 1 Pre-script failed during execution - 201
      # 2 Post-script failed during execution - 202
      # 3 Auto thaw occurred before post-script was initiated - 203
      # 4 Pre-script initiated while post-script was expected - 204
      # 5 Post-script initiated while pre-script was expected - 205
      # 6 Application not ready for pre or post-script initiation - 206

      ###===============================================================================###
      ### Global variables
      ###===============================================================================###
      START=$(date +%s)
      # For testing this script locally, replace the below with OPERATION=$1.
      OPERATION={{ command }}
      EXECUTION_ID={{ executionId }}

      # Add all pre-script actions to be performed within the function below
      execute_pre_script() {
          echo "INFO: Start execution of pre-script"
      }

      # Add all post-script actions to be performed within the function below
      execute_post_script() {
          echo "INFO: Start execution of post-script"
      }

      # Debug logging for parameters passed to the SSM document
      echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}"

      # Based on the command parameter value execute the function that supports 
      # pre-script/post-script operation
      case ${OPERATION} in
          pre-script)
              execute_pre_script
              ;;
          post-script)
              execute_post_script
              ;;
          dry-run)
              echo "INFO: dry-run option invoked - taking no action"
              ;;
          *)
              echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run."
              exit 1 # return failure
              ;;
      esac

      END=$(date +%s)
      # Debug Log for profiling the script time
      echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."
```

------
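The comment in the template notes that you can test the script body locally by replacing `OPERATION={{ command }}` with `OPERATION=$1`. As a hedged illustration, the following standalone sketch reproduces the template's dispatch logic so you can verify each allowed `command` value before registering the document:

```shell
#!/bin/bash
# Standalone harness that mirrors the template's case dispatch, for local
# verification only. In the real document, OPERATION comes from the
# {{ command }} parameter; here it is passed as a function argument.

run_script() {
    OPERATION="$1"
    case ${OPERATION} in
        pre-script)  echo "INFO: Start execution of pre-script" ;;
        post-script) echo "INFO: Start execution of post-script" ;;
        dry-run)     echo "INFO: dry-run option invoked - taking no action" ;;
        *)           echo "ERROR: Invalid command parameter passed." >&2
                     return 1 ;;
    esac
}

run_script dry-run       # prints the dry-run message
run_script pre-script    # prints the pre-script message
run_script bogus 2>/dev/null || echo "INFO: invalid parameter correctly rejected"
```

Running it prints the message for each allowed value and rejects anything else with a nonzero exit code, mirroring the behavior that Amazon Data Lifecycle Manager relies on.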

Once you have your SSM document content, use one of the following procedures to create the custom SSM document.

------
#### [ Console ]

**To create the SSM command document**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**, then choose **Create document**, **Command or Session**.

1. For **Name**, enter a descriptive name for the document.

1. For **Target type**, select **/AWS::EC2::Instance**.

1. For **Document type**, select **Command**.

1. In the **Content** field, select **YAML** and then paste the document content.

1. In the **Document tags** section, add a tag with a tag key of `DLMScriptsAccess`, and a tag value of `true`.
**Important**  
The `DLMScriptsAccess:true` tag is required by the **AWSDataLifecycleManagerSSMFullAccess** AWS managed policy used in *Step 3: Prepare Amazon Data Lifecycle Manager IAM role*. The policy uses the `aws:ResourceTag` condition key to restrict access to SSM documents that have this tag.

1. Choose **Create document**.

------
#### [ AWS CLI ]

**To create the SSM command document**  
Use the [create-document](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-document.html) command. For `--name`, specify a descriptive name for the document. For `--document-type`, specify `Command`. For `--content`, specify the path to the .yaml file with the SSM document content. For `--tags`, specify `"Key=DLMScriptsAccess,Value=true"`.

```
$ aws ssm create-document \
--content file://path/to/file/documentContent.yaml \
--name "document_name" \
--document-type "Command" \
--document-format YAML \
--tags "Key=DLMScriptsAccess,Value=true"
```

------

### Step 3: Prepare Amazon Data Lifecycle Manager IAM role
<a name="prep-iam-role"></a>

**Note**  
This step is needed if:  
You create or update a pre/post script-enabled snapshot policy that uses a custom IAM role.
You use the command line to create or update a pre/post script-enabled snapshot policy that uses the default IAM role.
If you use the console to create or update a pre/post script-enabled snapshot policy that uses the default role for managing snapshots (**AWSDataLifecycleManagerDefaultRole**), skip this step. In this case, we automatically attach the **AWSDataLifecycleManagerSSMFullAccess** policy to that role.

You must ensure that the IAM role that you use for the policy grants Amazon Data Lifecycle Manager permission to perform the SSM actions required to run pre and post scripts on instances targeted by the policy.

Amazon Data Lifecycle Manager provides a managed policy (**AWSDataLifecycleManagerSSMFullAccess**) that includes the required permissions. You can attach this policy to your IAM role for managing snapshots to ensure that it includes the permissions.

**Important**  
The AWSDataLifecycleManagerSSMFullAccess managed policy uses the `aws:ResourceTag` condition key to restrict access to specific SSM documents when using pre and post scripts. To allow Amazon Data Lifecycle Manager to access the SSM documents, you must ensure that your SSM documents are tagged with `DLMScriptsAccess:true`.

Alternatively, you can manually create a custom policy or assign the required permissions directly to the IAM role that you use. You can use the same permissions that are defined in the AWSDataLifecycleManagerSSMFullAccess managed policy; however, the `aws:ResourceTag` condition key is optional. If you decide not to include that condition key, you do not need to tag your SSM documents with `DLMScriptsAccess:true`.
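For illustration, a custom policy along those lines might look like the following sketch. The statements are assumptions modeled on the managed policy, not its authoritative contents; view **AWSDataLifecycleManagerSSMFullAccess** in the IAM console before adapting this, and drop the `Condition` block if you do not want the tagging requirement.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SendCommandToTaggedDocuments",
            "Effect": "Allow",
            "Action": "ssm:SendCommand",
            "Resource": "arn:aws:ssm:*:*:document/*",
            "Condition": {
                "StringEquals": { "aws:ResourceTag/DLMScriptsAccess": "true" }
            }
        },
        {
            "Sid": "SendCommandToInstances",
            "Effect": "Allow",
            "Action": "ssm:SendCommand",
            "Resource": "arn:aws:ec2:*:*:instance/*"
        },
        {
            "Sid": "TrackCommandStatus",
            "Effect": "Allow",
            "Action": [
                "ssm:GetCommandInvocation",
                "ssm:DescribeInstanceInformation"
            ],
            "Resource": "*"
        }
    ]
}
```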

Use one of the following methods to add the **AWSDataLifecycleManagerSSMFullAccess** policy to your IAM role.

------
#### [ Console ]

**To attach the managed policy to your custom role**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation panel, choose **Roles**.

1. Search for and select your custom role for managing snapshots.

1. On the **Permissions** tab, choose **Add permissions**, **Attach policies**.

1. Search for and select the **AWSDataLifecycleManagerSSMFullAccess** managed policy, and then choose **Add permissions**.

------
#### [ AWS CLI ]

**To attach the managed policy to your custom role**  
Use the [attach-role-policy](https://docs.aws.amazon.com/cli/latest/reference/iam/attach-role-policy.html) command. For `--role-name`, specify the name of your custom role. For `--policy-arn`, specify `arn:aws:iam::aws:policy/AWSDataLifecycleManagerSSMFullAccess`.

```
$ aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AWSDataLifecycleManagerSSMFullAccess \
--role-name your_role_name
```

------

### Step 4: Create snapshot lifecycle policy
<a name="prep-policy"></a>

To automate application-consistent snapshots, you must create a snapshot lifecycle policy that targets instances, and configure pre and post scripts for that policy.

------
#### [ Console ]

**To create the snapshot lifecycle policy**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Elastic Block Store**, **Lifecycle Manager**, and then choose **Create lifecycle policy**.

1. On the **Select policy type** screen, choose **EBS snapshot policy** and then choose **Next**.

1. In the **Target resources** section, do the following:

   1. For **Target resource types**, choose `Instance`.

   1. For **Target resource tags**, specify the resource tags that identify the instances to back up. Only resources that have the specified tags will be backed up.

1. For **IAM role**, either choose **AWSDataLifecycleManagerDefaultRole** (the default role for managing snapshots), or choose a custom role that you created and prepared for pre and post scripts.

1. Configure the schedules and additional options as needed. We recommend that you schedule snapshot creation times for time periods that match your workload, such as during maintenance windows.

   For SAP HANA, we recommend that you enable fast snapshot restore.
**Note**  
If you enable a schedule for VSS Backups, you can't enable **Exclude specific data volumes** or **Copy tags from source**.

1. In the **Pre and post scripts** section, select **Enable pre and post scripts**, and then do the following, depending on your workload:
   + To create application-consistent snapshots of your Windows applications, select **VSS Backup**.
   + To create application-consistent snapshots of your SAP HANA workloads, select **SAP HANA**.
   + To create application-consistent snapshots of all other databases and workloads, including your self-managed MySQL, PostgreSQL, or InterSystems IRIS databases, using a custom SSM document, select **Custom SSM document**.

     1. For **Automate option**, choose **Pre and post scripts**.

     1. For **SSM document**, select the SSM document that you prepared.

1. Depending on the option you selected, configure the following additional options:
   + **Script timeout** — (*Custom SSM document only*) The timeout period after which Amazon Data Lifecycle Manager fails the script run attempt if the script has not completed. The timeout period applies to the pre and post scripts individually. The minimum and default timeout period is 10 seconds, and the maximum is 120 seconds.
   + **Retry failed scripts** — Select this option to retry scripts that do not complete within their timeout period. If the pre script fails, Amazon Data Lifecycle Manager retries the entire snapshot creation process, including running the pre and post scripts. If the post script fails, Amazon Data Lifecycle Manager retries the post script only; in this case, the pre script will have completed and the snapshot might have been created.
   + **Default to crash-consistent snapshots** — Select this option to default to crash-consistent snapshots if the pre script fails to run. This is the default snapshot creation behavior for Amazon Data Lifecycle Manager when pre and post scripts are not enabled. If you enabled retries, Amazon Data Lifecycle Manager defaults to crash-consistent snapshots only after all retry attempts have been exhausted. If the pre script fails and you do not default to crash-consistent snapshots, Amazon Data Lifecycle Manager does not create snapshots for the instance during that schedule run.
**Note**  
If you are creating snapshots for SAP HANA, then you might want to disable this option. Crash-consistent snapshots of SAP HANA workloads can't be restored in the same manner.

1. Choose **Create policy**.
**Note**  
If you get the `Role with name AWSDataLifecycleManagerDefaultRole already exists` error, see [Troubleshoot Amazon Data Lifecycle Manager issues](dlm-troubleshooting.md) for more information.

------
#### [ AWS CLI ]

**To create the snapshot lifecycle policy**  
Use the [create-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html) command, and include the `Scripts` parameters in `CreateRule`. For more information about the parameters, see [Script](https://docs.aws.amazon.com/dlm/latest/APIReference/API_Script.html) in the *Amazon Data Lifecycle Manager API Reference*.

```
$ aws dlm create-lifecycle-policy \
--description "policy_description" \
--state ENABLED \
--execution-role-arn iam_role_arn \
--policy-details file://policyDetails.json
```

Where `policyDetails.json` includes one of the following, depending on your use case:
+ **VSS Backup**

  ```
  {
      "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
      "ResourceTypes": [
          "INSTANCE"
      ],
      "TargetTags": [{
          "Key": "tag_key",
          "Value": "tag_value"
      }],
      "Schedules": [{
          "Name": "schedule_name",
          "CreateRule": {
              "CronExpression": "cron_for_creation_frequency", 
              "Scripts": [{ 
                  "ExecutionHandler":"AWS_VSS_BACKUP",
                  "ExecuteOperationOnScriptFailure":true|false,
                  "MaximumRetryCount":retries (0-3)
              }]
          },
          "RetainRule": {
              "Count": retention_count
          }
      }]
  }
  ```
+ **SAP HANA backups**

  ```
  {
      "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
      "ResourceTypes": [
          "INSTANCE"
      ],
      "TargetTags": [{
          "Key": "tag_key",
          "Value": "tag_value"
      }],
      "Schedules": [{
          "Name": "schedule_name",
          "CreateRule": {
              "CronExpression": "cron_for_creation_frequency", 
              "Scripts": [{ 
                  "Stages": ["PRE","POST"],
                  "ExecutionHandlerService":"AWS_SYSTEMS_MANAGER",
                  "ExecutionHandler":"AWSSystemsManagerSAP-CreateDLMSnapshotForSAPHANA",
                  "ExecuteOperationOnScriptFailure":true|false,
                  "ExecutionTimeout":timeout_in_seconds (10-120), 
                  "MaximumRetryCount":retries (0-3)
              }]
          },
          "RetainRule": {
              "Count": retention_count
          }
      }]
  }
  ```
+ **Custom SSM document**

  ```
  {
      "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
      "ResourceTypes": [
          "INSTANCE"
      ],
      "TargetTags": [{
          "Key": "tag_key",
          "Value": "tag_value"
      }],
      "Schedules": [{
          "Name": "schedule_name",
          "CreateRule": {
              "CronExpression": "cron_for_creation_frequency", 
              "Scripts": [{ 
                  "Stages": ["PRE","POST"],
                  "ExecutionHandlerService":"AWS_SYSTEMS_MANAGER",
                  "ExecutionHandler":"ssm_document_name|arn",
                  "ExecuteOperationOnScriptFailure":true|false,
                  "ExecutionTimeout":timeout_in_seconds (10-120), 
                  "MaximumRetryCount":retries (0-3)
              }]
          },
          "RetainRule": {
              "Count": retention_count
          }
      }]
  }
  ```
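As a hedged, concrete illustration, the custom SSM document variant might look like the following once the placeholders are filled in. The tag key/value, schedule name, and document name (`my-prepost-ssm-document`) are assumptions; validating the file locally catches malformed JSON before the API call (this assumes `python3` is available).

```shell
#!/bin/bash
# Write a concrete policyDetails.json for the custom SSM document variant.
# All values below are illustrative assumptions -- substitute your own.
cat > policyDetails.json <<'EOF'
{
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": ["INSTANCE"],
    "TargetTags": [{"Key": "Backup", "Value": "true"}],
    "Schedules": [{
        "Name": "DailyAppConsistent",
        "CreateRule": {
            "CronExpression": "cron(0 1 * * ? *)",
            "Scripts": [{
                "Stages": ["PRE", "POST"],
                "ExecutionHandlerService": "AWS_SYSTEMS_MANAGER",
                "ExecutionHandler": "my-prepost-ssm-document",
                "ExecuteOperationOnScriptFailure": false,
                "ExecutionTimeout": 60,
                "MaximumRetryCount": 2
            }]
        },
        "RetainRule": {"Count": 7}
    }]
}
EOF

# Catch malformed JSON locally instead of at create-lifecycle-policy call time.
python3 -m json.tool policyDetails.json > /dev/null && echo "policyDetails.json is valid JSON"
```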

------

## Considerations for VSS Backups with Amazon Data Lifecycle Manager
<a name="app-consistent-vss"></a>

With Amazon Data Lifecycle Manager, you can back up and restore VSS (Volume Shadow Copy Service)-enabled Windows applications running on Amazon EC2 instances. If the application has a VSS writer registered with Windows VSS, then Amazon Data Lifecycle Manager creates a snapshot that will be application-consistent for that application.

**Note**  
Amazon Data Lifecycle Manager currently supports application-consistent snapshots of resources running on Amazon EC2 only, specifically for backup scenarios where application data can be restored by replacing an existing instance with a new instance created from the backup. Not all instance types or applications are supported for VSS backups. For more information, see [ Application-consistent Windows VSS snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/application-consistent-snapshots.html) in the *Amazon EC2 User Guide*. 

**Unsupported instance types**  
The following Amazon EC2 instance types are not supported for VSS backups. If your policy targets one of these instance types, Amazon Data Lifecycle Manager might still create VSS backups, but the snapshots might not be tagged with the required system tags. Without these tags, the snapshots will not be managed by Amazon Data Lifecycle Manager after creation. You might need to manually delete these snapshots.
+ T3: `t3.nano` and `t3.micro`
+ T3a: `t3a.nano` and `t3a.micro`
+ T2: `t2.nano` and `t2.micro`
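If you want to audit a policy's targets before enabling VSS backups, a small local check like the following sketch can help. The list mirrors the instance types above and must be kept in sync manually:

```shell
#!/bin/bash
# Flag instance types listed as unsupported for VSS backups so a policy's
# targets can be audited before enabling VSS. Illustrative sketch only.

is_vss_unsupported() {
    case "$1" in
        t3.nano|t3.micro|t3a.nano|t3a.micro|t2.nano|t2.micro) return 0 ;;
        *) return 1 ;;
    esac
}

for type in t3.nano m5.large t2.micro; do
    if is_vss_unsupported "$type"; then
        echo "WARN: $type is not supported for VSS backups"
    else
        echo "OK: $type can be targeted for VSS backups"
    fi
done
```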

## Shared responsibility for application-consistent snapshots
<a name="shared-responsibility"></a>

**You must ensure that:**
+ The SSM Agent is installed, up-to-date, and running on your target instances.
+ Systems Manager has permissions to perform the required actions on the target instances.
+ Amazon Data Lifecycle Manager has permissions to perform the Systems Manager actions required to run pre and post scripts on the target instances.
+ For custom workloads, such as self-managed MySQL, PostgreSQL, or InterSystems IRIS databases, the SSM document that you use includes the correct and required actions for freezing, flushing, and thawing I/O for your database configuration.
+ Snapshot creation times align with your workload schedule. For example, try to schedule snapshot creation during scheduled maintenance windows.

**Amazon Data Lifecycle Manager ensures that:**
+ Snapshot creation is initiated within 60 minutes of the scheduled snapshot creation time.
+ Pre scripts run before the snapshot creation is initiated.
+ Post scripts run after snapshot creation has been initiated, and only if the pre script succeeds. If the pre script fails, Amazon Data Lifecycle Manager does not run the post script.
+ Snapshots are tagged with the appropriate tags on creation.
+ CloudWatch metrics and events are emitted when scripts are initiated, and when they fail or succeed.

# Other use cases for Data Lifecycle Manager pre and post scripts
<a name="script-other-use-cases"></a>

In addition to using pre and post scripts for automating application-consistent snapshots, you can use pre and post scripts together, or individually, to automate other administrative tasks before or after snapshot creation. For example:
+ Using a pre script to apply patches before creating snapshots. This can help you create snapshots after applying your regular weekly or monthly software updates.
**Note**  
If you choose to run a pre script only, **Default to crash-consistent snapshots** is enabled by default.
+ Using a post script to apply patches after creating snapshots. This can help you create snapshots before applying your regular weekly or monthly software updates.
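As a hedged sketch of the first use case, a pre-script body that applies patches might look like the following. `UPDATE_CMD` is an illustrative stand-in for your package manager's command, for example `yum update -y` or `apt-get upgrade -y`:

```shell
#!/bin/bash
# Sketch of a pre-script body that applies pending software updates before
# the snapshot is taken. The default UPDATE_CMD only simulates an update so
# the sketch is safe to run; replace it with a real command on an instance.
UPDATE_CMD="${UPDATE_CMD:-echo simulated update run}"

execute_pre_script() {
    echo "INFO: Start execution of pre-script"
    if ${UPDATE_CMD}; then
        echo "INFO: patching completed"
    else
        echo "ERROR: patching failed" >&2
        exit 201    # 201 signals a pre-script execution failure
    fi
}

execute_pre_script
```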

## Getting started for other use cases
<a name="dlm-script-other"></a>

This section explains the steps that you need to perform when using pre and/or post scripts for **use cases other than application-consistent snapshots**.

### Step 1: Prepare target instances
<a name="dlm-script-other-prep-instance"></a>

**To prepare your target instances for pre and/or post scripts**

1. Install the SSM Agent on your target instances, if it is not already installed.
   + (Linux instances) [ Manually installing SSM Agent on EC2 instances for Linux](https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-linux.html)
   + (Windows instances) [ Working with SSM Agent on EC2 instances for Windows Server](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-windows.html)

1. Ensure that the SSM Agent is running. For more information, see [ Checking SSM Agent status and starting the agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html).

1. Set up Systems Manager for Amazon EC2 instances. For more information, see [Setting up Systems Manager for Amazon EC2 instances](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-setting-up-ec2.html) in the *AWS Systems Manager User Guide*.
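As a quick sanity check on a Linux instance, a sketch like the following can confirm that the agent process is present. The process name `amazon-ssm-agent` is the common default and may differ on some distributions:

```shell
#!/bin/bash
# Check whether the SSM Agent process is running. This is a convenience
# sketch only; use the Systems Manager console or the linked documentation
# for the authoritative status check.
if pgrep -f amazon-ssm-agent > /dev/null 2>&1; then
    echo "INFO: SSM Agent appears to be running"
else
    echo "WARN: SSM Agent not found - install and start it before enabling pre/post scripts"
fi
```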

### Step 2: Prepare SSM document
<a name="dlm-script-other-prep-document"></a>

You must create an SSM command document that includes the pre and/or post scripts with the commands you want to run.

You can create an SSM document using the empty SSM document template below and add your pre and post script commands in the appropriate document sections.

**Note the following:**  
It is your responsibility to ensure that the SSM document performs the correct and required actions for your workload.
The SSM document must include the `allowedValues` of `pre-script`, `post-script`, and `dry-run` for the `command` parameter. Amazon Data Lifecycle Manager runs commands on your instance based on the value of that parameter. If your SSM document does not include those values, Amazon Data Lifecycle Manager treats the run as a failed execution.

```
###===============================================================================###
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
###===============================================================================###
schemaVersion: '2.2'
description: SSM Document Template for Amazon Data Lifecycle Manager Pre/Post script feature
parameters:
  executionId:
    type: String
    default: None
    description: (Required) Specifies the unique identifier associated with a pre and/or post execution
    allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$
  command:
  # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. 
  # 'dry-run' option is intended for validating the document execution without triggering any commands
  # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully 
  # trigger pre and post script actions.
    type: String
    default: 'dry-run'
    description: (Required) Specifies whether pre-script and/or post-script should be executed.
    allowedValues:
    - pre-script
    - post-script
    - dry-run

mainSteps:
- action: aws:runShellScript
  description: Run Database freeze/thaw commands
  name: run_pre_post_scripts
  precondition:
    StringEquals:
    - platformType
    - Linux
  inputs:
    runCommand:
    - |
      #!/bin/bash

      ###===============================================================================###
      ### Error Codes
      ###===============================================================================###
      # The following Error codes will inform Data Lifecycle Manager of the type of error 
      # and help guide handling of the error. 
      # The error code will also be emitted via Amazon EventBridge events in the 'cause' field.
      # 1 Pre-script failed during execution - 201
      # 2 Post-script failed during execution - 202
      # 3 Auto thaw occurred before post-script was initiated - 203
      # 4 Pre-script initiated while post-script was expected - 204
      # 5 Post-script initiated while pre-script was expected - 205
      # 6 Application not ready for pre or post-script initiation - 206

      ###===============================================================================###
      ### Global variables
      ###===============================================================================###
      START=$(date +%s)
      # For testing this script locally, replace the below with OPERATION=$1.
      OPERATION={{ command }}
      EXECUTION_ID={{ executionId }}

      # Add all pre-script actions to be performed within the function below
      execute_pre_script() {
          echo "INFO: Start execution of pre-script"
      }

      # Add all post-script actions to be performed within the function below
      execute_post_script() {
          echo "INFO: Start execution of post-script"
      }

      # Debug logging for parameters passed to the SSM document
      echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}"

      # Based on the command parameter value execute the function that supports 
      # pre-script/post-script operation
      case ${OPERATION} in
          pre-script)
              execute_pre_script
              ;;
          post-script)
              execute_post_script
              ;;
          dry-run)
              echo "INFO: dry-run option invoked - taking no action"
              ;;
          *)
              echo "ERROR: Invalid command parameter passed. Please use pre-script, post-script, or dry-run."
              exit 1 # return failure
              ;;
      esac

      END=$(date +%s)
      # Debug Log for profiling the script time
      echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."
```
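To make the template concrete, the following hedged sketch shows what freeze/thaw bodies for `execute_pre_script` and `execute_post_script` might look like on a generic Linux filesystem. `MOUNT_POINT` and the `DRY_RUN` guard are assumptions added for safe local testing; `fsfreeze` requires root and a mounted filesystem, so the default here only logs what would happen:

```shell
#!/bin/bash
# Freeze/thaw sketch for the pre- and post-script functions. With the
# default DRY_RUN=true, no fsfreeze command runs; the functions only log.
MOUNT_POINT="${MOUNT_POINT:-/data}"
DRY_RUN="${DRY_RUN:-true}"

execute_pre_script() {
    echo "INFO: freezing I/O on ${MOUNT_POINT}"
    if [ "${DRY_RUN}" != "true" ]; then
        fsfreeze --freeze "${MOUNT_POINT}" || return 201   # pre-script failure code
    fi
}

execute_post_script() {
    echo "INFO: thawing I/O on ${MOUNT_POINT}"
    if [ "${DRY_RUN}" != "true" ]; then
        fsfreeze --unfreeze "${MOUNT_POINT}" || return 202   # post-script failure code
    fi
}

execute_pre_script && execute_post_script
```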

### Step 3: Prepare Amazon Data Lifecycle Manager IAM role
<a name="dlm-script-other-prep-role"></a>

**Note**  
This step is needed if:  
You create or update a pre/post script-enabled snapshot policy that uses a custom IAM role.
You use the command line to create or update a pre/post script-enabled snapshot policy that uses the default IAM role.
If you use the console to create or update a pre/post script-enabled snapshot policy that uses the default role for managing snapshots (**AWSDataLifecycleManagerDefaultRole**), skip this step. In this case, we automatically attach the **AWSDataLifecycleManagerSSMFullAccess** policy to that role.

You must ensure that the IAM role that you use for the policy grants Amazon Data Lifecycle Manager permission to perform the SSM actions required to run pre and post scripts on instances targeted by the policy.

Amazon Data Lifecycle Manager provides a managed policy (**AWSDataLifecycleManagerSSMFullAccess**) that includes the required permissions. You can attach this policy to your IAM role for managing snapshots to ensure that it includes the permissions.

**Important**  
The AWSDataLifecycleManagerSSMFullAccess managed policy uses the `aws:ResourceTag` condition key to restrict access to specific SSM documents when using pre and post scripts. To allow Amazon Data Lifecycle Manager to access the SSM documents, you must ensure that your SSM documents are tagged with `DLMScriptsAccess:true`.

Alternatively, you can manually create a custom policy or assign the required permissions directly to the IAM role that you use. You can use the same permissions that are defined in the AWSDataLifecycleManagerSSMFullAccess managed policy; however, the `aws:ResourceTag` condition key is optional. If you decide not to use that condition key, you do not need to tag your SSM documents with `DLMScriptsAccess:true`.

Use one of the following methods to add the **AWSDataLifecycleManagerSSMFullAccess** policy to your IAM role.

------
#### [ Console ]

**To attach the managed policy to your custom role**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation panel, choose **Roles**.

1. Search for and select your custom role for managing snapshots.

1. On the **Permissions** tab, choose **Add permissions**, **Attach policies**.

1. Search for and select the **AWSDataLifecycleManagerSSMFullAccess** managed policy, and then choose **Add permissions**.

------
#### [ AWS CLI ]

**To attach the managed policy to your custom role**  
Use the [attach-role-policy](https://docs.aws.amazon.com/cli/latest/reference/iam/attach-role-policy.html) command. For `--role-name`, specify the name of your custom role. For `--policy-arn`, specify `arn:aws:iam::aws:policy/AWSDataLifecycleManagerSSMFullAccess`.

```
$ aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AWSDataLifecycleManagerSSMFullAccess \
--role-name your_role_name
```

------

### Create snapshot lifecycle policy
<a name="dlm-script-other-prep-policy"></a>

------
#### [ Console ]

**To create the snapshot lifecycle policy**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Elastic Block Store**, **Lifecycle Manager**, and then choose **Create lifecycle policy**.

1. On the **Select policy type** screen, choose **EBS snapshot policy** and then choose **Next**.

1. In the **Target resources** section, do the following:

   1. For **Target resource types**, choose `Instance`.

   1. For **Target resource tags**, specify the resource tags that identify the instances to back up. Only resources that have the specified tags will be backed up.

1. For **IAM role**, either choose **AWSDataLifecycleManagerDefaultRole** (the default role for managing snapshots), or choose a custom role that you created and prepared for pre and post scripts.

1. Configure the schedules and additional options as needed. We recommend that you schedule snapshot creation times for time periods that match your workload, such as during maintenance windows.

1. In the **Pre and post scripts** section, select **Enable pre and post scripts** and then do the following:

   1. Select **Custom SSM document**.

   1. For **Automate option**, choose the option that matches the scripts you want to run.

   1. For **SSM document**, select the SSM document that you prepared.

1. Configure the following additional options if needed:
   + **Script timeout** — The timeout period after which Amazon Data Lifecycle Manager fails the script run attempt if the script has not completed. The timeout period applies to the pre and post scripts individually. The minimum and default timeout period is 10 seconds, and the maximum is 120 seconds.
   + **Retry failed scripts** — Select this option to retry scripts that do not complete within their timeout period. If the pre script fails, Amazon Data Lifecycle Manager retries the entire snapshot creation process, including running the pre and post scripts. If the post script fails, Amazon Data Lifecycle Manager retries the post script only; in this case, the pre script will have completed and the snapshot might have been created.
   + **Default to crash-consistent snapshots** — Select this option to default to crash-consistent snapshots if the pre script fails to run. This is the default snapshot creation behavior for Amazon Data Lifecycle Manager when pre and post scripts are not enabled. If you enabled retries, Amazon Data Lifecycle Manager defaults to crash-consistent snapshots only after all retry attempts have been exhausted. If the pre script fails and you do not default to crash-consistent snapshots, Amazon Data Lifecycle Manager does not create snapshots for the instance during that schedule run.

1. Choose **Create policy**.
**Note**  
If you get the `Role with name AWSDataLifecycleManagerDefaultRole already exists` error, see [Troubleshoot Amazon Data Lifecycle Manager issues](dlm-troubleshooting.md) for more information.

------
#### [ AWS CLI ]

**To create the snapshot lifecycle policy**  
Use the [create-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html) command, and include the `Scripts` parameters in `CreateRule`. For more information about the parameters, see [Script](https://docs.aws.amazon.com/dlm/latest/APIReference/API_Script.html) in the *Amazon Data Lifecycle Manager API Reference*.

```
$ aws dlm create-lifecycle-policy \
--description "policy_description" \
--state ENABLED \
--execution-role-arn iam_role_arn \
--policy-details file://policyDetails.json
```

Where `policyDetails.json` includes the following.

```
{
    "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
    "ResourceTypes": [
        "INSTANCE"
    ],
    "TargetTags": [{
        "Key": "tag_key",
        "Value": "tag_value"
    }],
    "Schedules": [{
        "Name": "schedule_name",
        "CreateRule": {
            "CronExpression": "cron_for_creation_frequency", 
            "Scripts": [{ 
                "Stages": ["PRE" | "POST" | "PRE","POST"],
                "ExecutionHandlerService":"AWS_SYSTEMS_MANAGER",
                "ExecutionHandler":"ssm_document_name|arn",
                "ExecuteOperationOnScriptFailure":true|false,
                "ExecutionTimeout":timeout_in_seconds (10-120), 
                "MaximumRetryCount":retries (0-3)
            }]
        },
        "RetainRule": {
            "Count": retention_count
        }
    }]
}
```

------

# How Amazon Data Lifecycle Manager pre and post scripts work
<a name="script-flow"></a>

The following image shows the process flow for pre and post scripts when using custom SSM documents. This does not apply to VSS Backups.

![\[Amazon Data Lifecycle Manager pre and post script process flow\]](http://docs.aws.amazon.com/ebs/latest/userguide/images/dlm-scripts.png)


At the scheduled snapshot creation time, the following actions and cross-service interactions occur.

1. Amazon Data Lifecycle Manager initiates the pre script action by calling the SSM document and passing the `pre-script` parameter.
**Note**  
Steps 1 to 3 occur only if you run pre scripts. If you run post scripts only, steps 1 to 3 are skipped.

1. Systems Manager sends pre script commands to the SSM Agent running on the target instances. The SSM Agent runs the commands on the instance, and sends status information back to Systems Manager.

   For example, if the SSM document is used to create application-consistent snapshots, the pre script might freeze and flush I/O to ensure that all buffered data is written to the volume before the snapshot is taken.

1. Systems Manager sends pre script command status updates to Amazon Data Lifecycle Manager. If the pre script fails, Amazon Data Lifecycle Manager takes one of the following actions, depending on how you configure the pre and post script options:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/script-flow.html)

1. Amazon Data Lifecycle Manager initiates snapshot creation.

1. Amazon Data Lifecycle Manager initiates the post script action by calling the SSM document and passing the `post-script` parameter.
**Note**  
Steps 5 to 7 occur only if you run post scripts. If you run pre scripts only, steps 5 to 7 are skipped.

1. Systems Manager sends post script commands to the SSM Agent running on the target instances. The SSM Agent runs the commands on the instance, and sends status information back to Systems Manager.

   For example, if the SSM document enables application-consistent snapshots, this post script might thaw I/O to ensure that your databases resume normal I/O operations after the snapshot has been taken.

1. If you run a post script and Systems Manager indicates that it completed successfully, the process completes.

   If the post script fails, Amazon Data Lifecycle Manager takes one of the following actions, depending on how you configure the pre and post script options:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ebs/latest/userguide/script-flow.html)

   Keep in mind that if the post script fails, the pre script (if enabled) will have completed successfully, and the snapshots might have been created. You might need to take further action on the instance to ensure that it is operating as expected. For example, if the pre script paused and flushed I/O, but the post script failed to thaw I/O, you might need to configure your database to thaw I/O automatically, or thaw it manually.

1. The snapshot creation process might complete after the post script completes. The time needed to complete a snapshot depends on its size.
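The branching in steps 3 and 7 can be sketched as a small decision function. This is a simplified model of the flow described above, not the actual service implementation; `ExecuteOperationOnScriptFailure` is the policy option that controls whether snapshot creation proceeds after a failed pre script.

```python
# Simplified model of the pre/post script branching (not the actual
# service implementation).
def next_action(stage, script_succeeded, execute_on_failure=False):
    """Return what happens next after a pre or post script finishes."""
    if script_succeeded:
        return "create-snapshot" if stage == "PRE" else "complete"
    if stage == "PRE":
        # A failed pre script either skips the backup or proceeds anyway,
        # depending on the ExecuteOperationOnScriptFailure option.
        return "create-snapshot" if execute_on_failure else "skip-snapshot"
    # A failed post script never undoes a snapshot that was already taken.
    return "complete-with-post-script-failed"
```

For example, `next_action("PRE", False, execute_on_failure=True)` proceeds to snapshot creation even though the pre script failed.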

# Identify snapshots created with Data Lifecycle Manager pre and post scripts
<a name="dlm-script-tags"></a>

Amazon Data Lifecycle Manager automatically assigns the following system tags to snapshots created with pre and post scripts.
+ Key: `aws:dlm:pre-script`; Value: `SUCCESS` | `FAILED`

  A tag value of `SUCCESS` indicates that the pre script executed successfully. A tag value of `FAILED` indicates that the pre script did not execute successfully. 
+ Key: `aws:dlm:post-script`; Value: `SUCCESS` | `FAILED`

  A tag value of `SUCCESS` indicates that the post script executed successfully. A tag value of `FAILED` indicates that the post script did not execute successfully. 

For custom SSM documents and SAP HANA backups, you can infer successful application-consistent snapshot creation if the snapshot is tagged with both `aws:dlm:pre-script:SUCCESS` and `aws:dlm:post-script:SUCCESS`.

Additionally, application-consistent snapshots created using VSS backup are automatically tagged with:
+ Key: `AppConsistent`; Value: `true` | `false`

  A tag value of `true` indicates that the VSS backup succeeded and that the snapshots are application-consistent. A tag value of `false` indicates that the VSS backup did not succeed and that the snapshots are not application-consistent.
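Given the tag set on a snapshot (in the list-of-dicts shape that `describe-snapshots` returns), the application-consistency check described above reduces to a lookup of both system tags. The helper name below is illustrative.

```python
# Decide, from a snapshot's tags, whether both pre and post scripts
# reported success (helper name is illustrative).
def scripts_succeeded(tags):
    tag_map = {t["Key"]: t["Value"] for t in tags}
    return (tag_map.get("aws:dlm:pre-script") == "SUCCESS"
            and tag_map.get("aws:dlm:post-script") == "SUCCESS")

example_tags = [
    {"Key": "aws:dlm:pre-script", "Value": "SUCCESS"},
    {"Key": "aws:dlm:post-script", "Value": "SUCCESS"},
]
```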

# Monitor Amazon Data Lifecycle Manager pre and post scripts
<a name="dlm-script-monitoring"></a>

**Amazon CloudWatch metrics**  
Amazon Data Lifecycle Manager publishes the following CloudWatch metrics when pre and post scripts and VSS backups start, complete, or fail.
+ `PreScriptStarted`
+ `PreScriptCompleted`
+ `PreScriptFailed`
+ `PostScriptStarted`
+ `PostScriptCompleted`
+ `PostScriptFailed`
+ `VSSBackupStarted`
+ `VSSBackupCompleted`
+ `VSSBackupFailed`

For more information, see [Monitor Data Lifecycle Manager policies using CloudWatch](monitor-dlm-cw-metrics.md).
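For example, you might want an alarm on `PreScriptFailed`. The following sketch builds the parameters you could pass to CloudWatch's `put_metric_alarm`; the namespace and dimension name here are assumptions, so confirm them against the metrics your policy actually publishes before use.

```python
# Parameters for a CloudWatch alarm that fires when any pre script fails.
# The namespace and dimension name are assumptions -- verify them in the
# CloudWatch console under the metrics your policy publishes.
alarm_params = {
    "AlarmName": "dlm-pre-script-failures",
    "Namespace": "AWS/DataLifecycleManager",   # assumed namespace
    "MetricName": "PreScriptFailed",
    "Dimensions": [{"Name": "DLMPolicyId",     # assumed dimension name
                    "Value": "policy-0123456789abcdef0"}],
    "Statistic": "Sum",
    "Period": 3600,
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
}
# With credentials configured, you could then call:
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```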

**Amazon EventBridge**  
Amazon Data Lifecycle Manager emits the following Amazon EventBridge event when a pre or post script is initiated, succeeds, or fails.
+ `DLM Pre Post Script Notification`

For more information, see [Monitor Data Lifecycle Manager policies using EventBridge](monitor-cloudwatch-events.md).

# Create Amazon Data Lifecycle Manager custom policy for EBS-backed AMIs
<a name="ami-policy"></a>

The following procedure shows you how to use Amazon Data Lifecycle Manager to automate EBS-backed AMI lifecycles.

**Topics**
+ [

## Create an AMI lifecycle policy
](#create-ami-policy)
+ [

## Considerations for AMI lifecycle policies
](#ami-considerations)
+ [

## Additional resources
](#ami-additional-resources)

## Create an AMI lifecycle policy
<a name="create-ami-policy"></a>

Use one of the following procedures to create an AMI lifecycle policy.

------
#### [ Console ]

**To create an AMI policy**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Elastic Block Store**, **Lifecycle Manager**, and then choose **Create lifecycle policy**.

1. On the **Select policy type** screen, choose **EBS-backed AMI policy**, and then choose **Next**.

1. In the **Target resources** section, for **Target resource tags**, choose the resource tags that identify the instances to back up. The policy creates AMIs only of the instances that have the specified tag key and value pairs.

1. For **Description**, enter a brief description for the policy.

1. For **IAM role**, choose the IAM role that has permissions to manage AMIs and snapshots, and to describe instances. To use the default role provided by Amazon Data Lifecycle Manager, choose **Default role**. Alternatively, to use a custom IAM role that you previously created, choose **Choose another role**, and then select the role to use.

1. For **Policy tags**, add the tags to apply to the lifecycle policy. You can use these tags to identify and categorize your policies.

1. For **Policy status after creation**, choose **Enable policy** to start running the policy at the next scheduled time, or **Disable policy** to prevent the policy from running. If you do not enable the policy now, it will not start creating AMIs until you manually enable it after creation.

1. In the **Instance reboot** section, indicate whether instances should be rebooted before AMI creation. To prevent the targeted instances from being rebooted, choose **No**. Choosing **No** could cause data consistency issues. To reboot instances before AMI creation, choose **Yes**. Choosing this ensures data consistency, but could result in multiple targeted instances rebooting simultaneously.

1. Choose **Next**.

1. On the **Configure schedule** screen, configure the policy schedules. A policy can have up to four schedules. Schedule 1 is mandatory. Schedules 2, 3, and 4 are optional. For each policy schedule that you add, do the following:

   1. In the **Schedule details** section do the following:

      1. For **Schedule name**, specify a descriptive name for the schedule.

      1. For **Frequency** and the related fields, configure the interval between policy runs.

         You can configure policy runs on a daily, weekly, monthly, or yearly schedule. Alternatively, choose **Custom cron expression** to specify an interval of up to one year. For more information, see [Cron and rate expressions](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-scheduled-rule-pattern.html) in the *Amazon EventBridge User Guide*.

      1. For **Starting at**, specify the time to start the policy runs. The first policy run starts within an hour after the time that you schedule. You must enter the time in the `hh:mm` UTC format.

      1. For **Retention type**, specify the retention policy for AMIs created by the schedule.

         You can retain AMIs based on either their total count or their age.

         For count-based retention, the range is `1` to `1000`. After the maximum count is reached, the oldest AMI is deregistered when a new one is created.

         For age-based retention, the range is `1` day to `100` years. After the retention period of each AMI expires, it is deregistered.
**Note**  
All schedules must have the same retention type. You can specify the retention type for Schedule 1 only. Schedules 2, 3, and 4 inherit the retention type from Schedule 1. Each schedule can have its own retention count or period.

   1. Configure tagging for AMIs.

      In the **Tagging** section, do the following:

      1. To copy all of the user-defined tags from the source instance to the AMIs created by the schedule, select **Copy tags from source**.

      1. By default, AMIs created by the schedule are automatically tagged with the ID of the source instance. To prevent this automatic tagging from happening, for **Variable tags**, remove the `instance-id:$(instance-id)` tile.

      1. To specify additional tags to assign to AMIs created by this schedule, choose **Add tags**.

   1. Configure AMI deprecation.

      To deprecate AMIs when they should no longer be used, in the **AMI deprecation** section, select **Enable AMI deprecation for this schedule** and then specify the AMI deprecation rule. The AMI deprecation rule specifies when AMIs are to be deprecated.

      If the schedule uses count-based AMI retention, you must specify the number of oldest AMIs to deprecate. The deprecation count must be less than or equal to the schedule's AMI retention count, and it can't be greater than 1000. For example, if the schedule is configured to retain a maximum of 5 AMIs, then you can configure the schedule to deprecate up to the 5 oldest AMIs.

      If the schedule uses age-based AMI retention, you must specify the period after which AMIs are to be deprecated. The deprecation period must be less than or equal to the schedule's AMI retention period, and it can't be greater than 10 years (120 months, 520 weeks, or 3650 days). For example, if the schedule is configured to retain AMIs for 10 days, then you can configure the schedule to deprecate AMIs after periods of up to 10 days after creation.

   1. Configure cross-Region copying.

      To copy AMIs created by the schedule to different Regions, in the **Cross-Region copy** section, select **Enable cross-Region copy**. You can copy AMIs to up to three additional Regions in your account. You must specify a separate cross-Region copy rule for each destination Region.

      For each destination Region, you can specify the following:
      + A retention policy for the AMI copy. When the retention period expires, the copy in the destination Region is automatically deregistered.
      + Encryption status for the AMI copy. If the source AMI is encrypted, or if encryption by default is enabled, the copied AMIs are always encrypted. If the source AMI is unencrypted and encryption by default is disabled, you can optionally enable encryption. If you do not specify a KMS key, the AMIs are encrypted using the default KMS key for EBS encryption in each destination Region. If you specify a KMS key for the destination Region, then the selected IAM role must have access to the KMS key.
      + A deprecation rule for the AMI copy. When the deprecation period expires, the AMI copy is automatically deprecated. The deprecation period must be less than or equal to the copy retention period, and it can't be greater than 10 years.
      + Whether to copy all tags or no tags from the source AMI.
**Note**  
Do not exceed the number of concurrent AMI copies per Region.

   1. To add additional schedules, choose **Add another schedule**, which is located at the top of the screen. For each additional schedule, complete the fields as described previously in this topic.

   1. After you have added the required schedules, choose **Review policy**.

1. Review the policy summary, and then choose **Create policy**.
**Note**  
If you get the `Role with name AWSDataLifecycleManagerDefaultRoleForAMIManagement already exists` error, see [Troubleshoot Amazon Data Lifecycle Manager issues](dlm-troubleshooting.md) for more information.

------
#### [ Command line ]

Use the [create-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html) command to create an AMI lifecycle policy. For `PolicyType`, specify `IMAGE_MANAGEMENT`.

**Note**  
To simplify the syntax, the following examples use a JSON file, `policyDetails.json`, that includes the policy details.

**Example 1: Age-based retention and AMI deprecation**  
This example creates an AMI lifecycle policy that creates AMIs of all instances that have a tag key of `purpose` with a value of `production` without rebooting the targeted instances. The policy includes one schedule that creates an AMI every day at `01:00` UTC. The policy retains AMIs for `2` days and deprecates them after `1` day. It also copies the tags from the source instance to the AMIs that it creates.

```
aws dlm create-lifecycle-policy \
    --description "My AMI policy" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::12345678910:role/AWSDataLifecycleManagerDefaultRoleForAMIManagement \
    --policy-details file://policyDetails.json
```

The following is an example of the `policyDetails.json` file.

```
{
    "PolicyType": "IMAGE_MANAGEMENT",
    "ResourceTypes": [
        "INSTANCE"
    ],
    "TargetTags": [{
        "Key": "purpose",
        "Value": "production"
    }],
    "Schedules": [{
            "Name": "DailyAMIs",
            "TagsToAdd": [{
                "Key": "type",
                "Value": "myDailyAMI"
            }],
            "CreateRule": {
                "Interval": 24,
                "IntervalUnit": "HOURS",
                "Times": [
                    "01:00"
                ]
            },
            "RetainRule": {
                "Interval" : 2,
                "IntervalUnit" : "DAYS"
            },
            "DeprecateRule": {
                "Interval" : 1,
                "IntervalUnit" : "DAYS"
            },
            "CopyTags": true
        }
    ],
    "Parameters" : {
        "NoReboot":true
    }
}
```

If the request succeeds, the command returns the ID of the newly created policy. The following is example output.

```
{
   "PolicyId": "policy-9876543210abcdef0"
}
```

**Example 2: Count-based retention and AMI deprecation with cross-Region copy**  
This example creates an AMI lifecycle policy that creates AMIs of all instances that have a tag key of `purpose` with a value of `production` and reboots the targeted instances. The policy includes one schedule that creates an AMI every `6` hours starting at `17:30` UTC. The policy retains `3` AMIs and automatically deprecates the `2` oldest AMIs. It also has a cross-Region copy rule that copies AMIs to `us-east-1`, retains the AMI copies for `2` days, and deprecates them after `1` day.

```
aws dlm create-lifecycle-policy \
    --description "My AMI policy" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::12345678910:role/AWSDataLifecycleManagerDefaultRoleForAMIManagement \
    --policy-details file://policyDetails.json
```

The following is an example of the `policyDetails.json` file.

```
{
    "PolicyType": "IMAGE_MANAGEMENT",
    "ResourceTypes" : [
        "INSTANCE"
    ],
    "TargetTags": [{
        "Key":"purpose", 
        "Value":"production"
    }],
    "Parameters" : {
          "NoReboot": false
    },
    "Schedules" : [{
        "Name" : "Schedule1",
        "CopyTags": true,
        "CreateRule" : {
            "Interval": 6,
            "IntervalUnit": "HOURS",
            "Times" : ["17:30"]
        },
        "RetainRule":{
            "Count" : 3
        },
        "DeprecateRule":{
            "Count" : 2
        },
        "CrossRegionCopyRules": [{
            "TargetRegion": "us-east-1",
            "Encrypted": true,
            "RetainRule":{
                "IntervalUnit": "DAYS",
                "Interval": 2
            },
            "DeprecateRule":{
                "IntervalUnit": "DAYS",
                "Interval": 1
            },
            "CopyTags": true
        }]
    }]
}
```
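As described earlier, a schedule's `DeprecateRule` must not exceed its `RetainRule`. The following sketch is one way to sanity-check a schedule locally before submitting it; for age-based rules it assumes both intervals use the same unit, which is a simplification.

```python
# Sketch of a local sanity check: a schedule's DeprecateRule must stay
# within its RetainRule. Age-based comparison assumes matching units.
def deprecate_rule_is_valid(schedule):
    deprecate = schedule.get("DeprecateRule")
    if deprecate is None:
        return True  # no deprecation rule, nothing to check
    retain = schedule.get("RetainRule", {})
    if "Count" in deprecate:
        return 1 <= deprecate["Count"] <= retain.get("Count", 0)
    return deprecate.get("Interval", 0) <= retain.get("Interval", 0)
```

For the Example 2 schedule above (`RetainRule.Count` of 3, `DeprecateRule.Count` of 2), the check passes.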

------

## Considerations for AMI lifecycle policies
<a name="ami-considerations"></a>

The following **general considerations** apply to creating AMI lifecycle policies:
+ AMI lifecycle policies target only instances that are in the same Region as the policy.
+ The first AMI creation operation starts within one hour after the specified start time. Subsequent AMI creation operations start within one hour of their scheduled time.
+ When Amazon Data Lifecycle Manager deregisters an AMI, it automatically deletes its backing snapshots.
+ Target resource tags are case sensitive.
+ If you remove the target tags from an instance that is targeted by a policy, Amazon Data Lifecycle Manager no longer manages the existing AMIs that it created from that instance; you must manually deregister them if they are no longer needed.
+ You can create multiple policies to back up an instance. For example, if an instance has two tags, where tag *A* is the target for policy *A* to create an AMI every 12 hours, and tag *B* is the target for policy *B* to create an AMI every 24 hours, Amazon Data Lifecycle Manager creates AMIs according to the schedules for both policies. Alternatively, you can achieve the same result by creating a single policy that has multiple schedules. For example, you can create a single policy that targets only tag *A*, and specify two schedules: one for every 12 hours and one for every 24 hours.
+ New volumes that are attached to a target instance after the policy has been created are automatically included in the backup at the next policy run. All volumes attached to the instance at the time of the policy run are included.
+ If you create a policy with a custom cron-based schedule that is configured to create only one AMI, the policy will not automatically deregister that AMI when the retention threshold is reached. You must manually deregister the AMI if it is no longer needed.
+ If you create an age-based policy where the retention period is shorter than the creation frequency, Amazon Data Lifecycle Manager will always retain the last AMI until the next one is created. For example, if an age-based policy creates one AMI every month with a retention period of seven days, Amazon Data Lifecycle Manager will retain each AMI for one month, even though the retention period is seven days.
+ For count-based policies, Amazon Data Lifecycle Manager always creates AMIs according to the creation frequency before attempting to deregister the oldest AMI according to the retention policy.
+ It can take several hours to successfully deregister an AMI and to delete its associated backing snapshots. If Amazon Data Lifecycle Manager creates the next AMI before the previously created AMI is successfully deregistered, you could temporarily retain a number of AMIs that is greater than your retention count. 
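The last two points can be modeled simply: a count-based schedule always creates the new AMI first and deregisters the oldest afterward, and deregistration is asynchronous, so the live count can briefly exceed the retention count. The following toy simulation illustrates the ordering; it is not service behavior.

```python
from collections import deque

# Toy model of one count-based policy run: the new AMI is created first,
# then the oldest is deregistered -- so if deregistration lags, the live
# count temporarily exceeds the retention count.
def policy_run(amis, retention_count, new_ami, deregistration_delayed=False):
    amis = deque(amis)
    amis.append(new_ami)  # creation always happens before deregistration
    if len(amis) > retention_count and not deregistration_delayed:
        amis.popleft()    # deregister the oldest AMI
    return list(amis)
```

With a retention count of 3 and a delayed deregistration, a fourth AMI exists until the cleanup completes.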

The following considerations apply to **terminating instances targeted by a policy:**
+ If you terminate an instance that was targeted by a policy with a count-based retention schedule, the policy no longer manages the AMIs that it previously created from the terminated instance. You must manually deregister those earlier AMIs if they are no longer needed.
+ If you terminate an instance that was targeted by a policy with an age-based retention schedule, the policy continues to deregister AMIs that were previously created from the terminated instance on the defined schedule, up to, but not including, the last AMI. You must manually deregister the last AMI if it is no longer needed.

The following considerations apply to AMI policies and **AMI deprecation:**
+ If you increase the AMI deprecation count for a schedule with count-based retention, the change is applied to all AMIs (existing and new) created by the schedule.
+ If you increase the AMI deprecation period for a schedule with age-based retention, the change is applied to new AMIs only. Existing AMIs are not affected.
+ If you remove the AMI deprecation rule from a schedule, Amazon Data Lifecycle Manager will not cancel deprecation for AMIs that were previously deprecated by that schedule.
+ If you decrease the AMI deprecation count or period for a schedule, Amazon Data Lifecycle Manager will not cancel deprecation for AMIs that were previously deprecated by that schedule.
+ If you manually deprecate an AMI that was created by an AMI policy, Amazon Data Lifecycle Manager will not override the deprecation.
+ If you manually cancel deprecation for an AMI that was previously deprecated by an AMI policy, Amazon Data Lifecycle Manager will not override the cancellation.
+ If an AMI is created by multiple conflicting schedules, and one or more of those schedules do not have an AMI deprecation rule, Amazon Data Lifecycle Manager will not deprecate that AMI.
+ If an AMI is created by multiple conflicting schedules, and all of those schedules have an AMI deprecation rule, Amazon Data Lifecycle Manager will use the deprecation rule that results in the latest deprecation date.
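The last two points above can be sketched as a small resolution function: any schedule without a deprecation rule disables deprecation for the AMI, and otherwise the rule yielding the latest deprecation date wins. This is a simplified model, not service code.

```python
from datetime import date, timedelta

# Simplified model: one entry per schedule that created the AMI, where
# None means that schedule has no deprecation rule.
def effective_deprecation_date(created, periods_in_days):
    if not periods_in_days or any(p is None for p in periods_in_days):
        return None  # a schedule without a rule disables deprecation
    # Otherwise the rule with the latest deprecation date applies.
    return created + timedelta(days=max(periods_in_days))
```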

The following considerations apply to AMI policies and [Recycle Bin](recycle-bin.md):
+ If Amazon Data Lifecycle Manager deregisters an AMI and sends it to the Recycle Bin when the policy's retention threshold is reached, and you manually restore that AMI from the Recycle Bin, you must manually deregister the AMI when it is no longer needed. Amazon Data Lifecycle Manager will no longer manage the AMI.
+ If you manually deregister an AMI that was created by a policy, and that AMI is in the Recycle Bin when the policy’s retention threshold is reached, Amazon Data Lifecycle Manager will not deregister the AMI. Amazon Data Lifecycle Manager does not manage AMIs while they are in the Recycle Bin.

  If the AMI is restored from the Recycle Bin before the policy's retention threshold is reached, Amazon Data Lifecycle Manager will deregister the AMI when the policy's retention threshold is reached.

  If the AMI is restored from the Recycle Bin after the policy's retention threshold is reached, Amazon Data Lifecycle Manager will no longer deregister the AMI. You must manually delete it when it is no longer needed.

The following considerations apply to AMI policies that are in the **error** state:
+ For policies with age-based retention schedules, AMIs that are set to expire while the policy is in the `error` state are retained indefinitely. You must deregister the AMIs manually. When you re-enable the policy, Amazon Data Lifecycle Manager resumes deregistering AMIs as their retention periods expire.
+ For policies with count-based retention schedules, the policy stops creating and deregistering AMIs while it is in the `error` state. When you re-enable the policy, Amazon Data Lifecycle Manager resumes creating AMIs, and it resumes deregistering AMIs as the retention threshold is met.

The following considerations apply to AMI policies and **[disabling AMIs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/disable-an-ami.html)**:
+ If you disable an AMI created by Amazon Data Lifecycle Manager, and that AMI is disabled when its retention threshold is reached, Amazon Data Lifecycle Manager will deregister the AMI and delete its associated snapshots.
+ If you disable an AMI created by Amazon Data Lifecycle Manager and you manually archive its associated snapshots, and those snapshots are archived when their retention threshold is met, Amazon Data Lifecycle Manager will not delete those snapshots and it will no longer manage them.

The following consideration applies to AMI policies and **[AMI deregistration protection](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/deregister-ami.html#ami-deregistration-protection)**:
+ If you manually enable deregistration protection for an AMI that was created by Amazon Data Lifecycle Manager, and it is still enabled when the AMI retention threshold is reached, Amazon Data Lifecycle Manager no longer manages that AMI. You must manually deregister the AMI and delete its underlying snapshots if it is no longer needed.

## Additional resources
<a name="ami-additional-resources"></a>

For more information, see the [Automating Amazon EBS snapshot and AMI management using Amazon Data Lifecycle Manager](https://aws.amazon.com/blogs/storage/automating-amazon-ebs-snapshot-and-ami-management-using-amazon-dlm/) post on the AWS Storage Blog.

# Automate cross-account snapshot copies with Data Lifecycle Manager
<a name="event-policy"></a>

Automating cross-account snapshot copies enables you to copy your Amazon EBS snapshots to specific Regions in an isolated account and encrypt those snapshots with an encryption key. Doing so helps protect against data loss if your account is compromised.

Automating cross-account snapshot copies involves two accounts:
+ **Source account**—The source account is the account that creates and shares the snapshots with the target account. In this account, you must create an EBS snapshot policy that creates snapshots at set intervals and then shares them with other AWS accounts.
+ **Target account**—The target account is the destination account with which the snapshots are shared, and it is the account that creates copies of the shared snapshots. In this account, you must create a cross-account copy event policy that automatically copies snapshots shared with it by one or more specified source accounts.

**Note**  
Both the source account EBS snapshot policy and the target account cross-account copy event policy must be created in the same AWS Region. The target account can then copy snapshots to different destination Regions as needed.

**Topics**
+ [

## Create cross-account snapshot copy policies
](#create-cac-policy)
+ [

## Specify snapshot description filters
](#snapshot-descr-filters)
+ [

## Considerations for cross-account snapshot copy policies
](#event-policy-considerations)
+ [

## Additional resources
](#event-additional-resources)

## Create cross-account snapshot copy policies
<a name="create-cac-policy"></a>

To prepare the source and target accounts for cross-account snapshot copying, you need to perform the following steps:


### Step 1: Create the EBS snapshot policy (*Source account*)
<a name="create-snapshot-policy"></a>

In the source account, create an EBS snapshot policy that will create the snapshots and share them with the required target accounts.

When you create the policy, ensure that you enable cross-account sharing and that you specify the target AWS accounts with which to share the snapshots. If you share encrypted snapshots, you must also give the selected target accounts permission to use the KMS key that was used to encrypt the source volume. For more information, see [Step 2: Share the customer managed key (*Source account*)](#share-cmk).

**Note**  
Create this policy in the same AWS Region where you will create the target account's cross-account copy event policy in Step 3. Both policies must be in the same Region for cross-account snapshot sharing to work properly.
You can only share snapshots that are unencrypted or that are encrypted using a customer managed key. You can't share snapshots that are encrypted with the default EBS encryption KMS key. If you share encrypted snapshots, then you must also share the KMS key that was used to encrypt the source volume with the target accounts. For more information, see [ Allowing users in other accounts to use a KMS key](https://docs.aws.amazon.com//kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the *AWS Key Management Service Developer Guide*.

For more information about creating an EBS snapshot policy, see [Create Amazon Data Lifecycle Manager custom policy for EBS snapshots](snapshot-ami-policy.md).


### Step 2: Share the customer managed key (*Source account*)
<a name="share-cmk"></a>

If you are sharing encrypted snapshots, you must grant the IAM role and the target AWS accounts (that you selected in the previous step) permissions to use the customer managed key that was used to encrypt the source volume.

**Note**  
Perform this step only if you are sharing encrypted snapshots. If you are sharing unencrypted snapshots, skip this step.

------
#### [ Console ]

****

1. Open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the Region selector in the upper-right corner of the page.

1. In the navigation pane, choose **Customer managed keys**, and then select the KMS key that you need to share with the target accounts.

   Make note of the KMS key ARN; you'll need it later.

1. On the **Key policy** tab, scroll down to the **Key users** section. Choose **Add**, enter the name of the IAM role that you selected in the previous step, and then choose **Add**.

1. On the **Key policy** tab, scroll down to the **Other AWS accounts** section. Choose **Add other AWS accounts**, and then add all of the target AWS accounts that you chose to share the snapshots with in the previous step.

1. Choose **Save changes**.

------
#### [ Command line ]

Use the [get-key-policy](https://docs.aws.amazon.com/cli/latest/reference/kms/get-key-policy.html) command to retrieve the key policy that is currently attached to the KMS key.

For example, the following command retrieves the key policy for a KMS key with an ID of `9d5e2b3d-e410-4a27-a958-19e220d83a1e` and writes it to a file named `snapshotKey.json`.

```
$ aws kms get-key-policy \
    --policy-name default \
    --key-id 9d5e2b3d-e410-4a27-a958-19e220d83a1e \
    --query Policy \
    --output text > snapshotKey.json
```

Open the key policy using your preferred text editor. Add the ARN of the IAM role that you specified when you created the snapshot policy and the ARNs of the target accounts with which to share the KMS key.

For example, in the following policy, we added the ARN of the default IAM role and the root ARN of target account `222222222222`.

**Tip**  
To follow the principle of least privilege, do not allow full access to `kms:CreateGrant`. Instead, use the `kms:GrantIsForAWSResource` condition key to allow the user to create grants on the KMS key only when the grant is created on the user's behalf by an AWS service, as shown in the following example.

```
{
    "Sid" : "Allow use of the key",
    "Effect" : "Allow",
    "Principal" : {
        "AWS" : [
            "arn:aws:iam::111111111111:role/service-role/AWSDataLifecycleManagerDefaultRole",
            "arn:aws:iam::222222222222:root"
        ]
    },
    "Action" : [ 
        "kms:Encrypt", 
        "kms:Decrypt", 
        "kms:ReEncrypt*", 
        "kms:GenerateDataKey*", 
        "kms:DescribeKey" 
    ],
    "Resource" : "*"
}, 
{
    "Sid" : "Allow attachment of persistent resources",
    "Effect" : "Allow",
    "Principal" : {
        "AWS" : [
            "arn:aws:iam::111111111111:role/service-role/AWSDataLifecycleManagerDefaultRole",
            "arn:aws:iam::222222222222:root"
        ]
    },
    "Action" : [ 
        "kms:CreateGrant", 
        "kms:ListGrants", 
        "kms:RevokeGrant"
    ],
    "Resource" : "*",
    "Condition" : {
        "Bool" : {
          "kms:GrantIsForAWSResource" : "true"
        }
    }
}
```

Save and close the file. Then use the [put-key-policy](https://docs.aws.amazon.com/cli/latest/reference/kms/put-key-policy.html) command to attach the updated key policy to the KMS key.

```
$ aws kms put-key-policy \
    --policy-name default \
    --key-id 9d5e2b3d-e410-4a27-a958-19e220d83a1e \
    --policy file://snapshotKey.json
```
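
If you prefer to script the edit instead of using a text editor, the change can be sketched in a few lines of Python. This is a minimal illustration, not an official tool: it assumes the statements use an `AWS` principal element like the example above, and the sample policy below is a stand-in for the real document retrieved with `get-key-policy`.

```python
import json

def add_principals(policy, principals):
    """Append principal ARNs to the AWS principal list of every statement."""
    for statement in policy["Statement"]:
        existing = statement["Principal"]["AWS"]
        if isinstance(existing, str):  # a single principal is stored as a bare string
            existing = [existing]
        for arn in principals:
            if arn not in existing:
                existing.append(arn)
        statement["Principal"]["AWS"] = existing
    return policy

# Stand-in for the policy written to snapshotKey.json by get-key-policy
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Allow use of the key",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"],
        "Resource": "*",
    }],
}

updated = add_principals(policy, [
    "arn:aws:iam::111111111111:role/service-role/AWSDataLifecycleManagerDefaultRole",
    "arn:aws:iam::222222222222:root",
])
print(json.dumps(updated, indent=4))
```

In practice you would load `snapshotKey.json`, apply the same function, and write the result back before running `put-key-policy`.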

------

### Step 3: Create cross-account copy event policy (*Target account*)
<a name="cac-policy"></a>

In the target account, you must create a cross-account copy event policy that will automatically copy snapshots that are shared by the required source accounts.

This policy runs in the target account only when one of the specified source accounts shares a snapshot with the account.

**Note**  
Create this policy in the same AWS Region as the source account's EBS snapshot policy created in Step 1. Both policies must be in the same Region for cross-account snapshot sharing to work properly. You can then configure this policy to copy snapshots to different destination Regions as needed.

Use one of the following methods to create the cross-account copy event policy.

------
#### [ Console ]

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Elastic Block Store**, **Lifecycle Manager**, and then choose **Create lifecycle policy**.

1. On the **Select policy type** screen, choose **Cross-account copy event policy**, and then choose **Next**.

1. For **Policy description**, enter a brief description for the policy.

1. For **Policy tags**, add the tags to apply to the lifecycle policy. You can use these tags to identify and categorize your policies.

1. In the **Event settings** section, define the snapshot sharing event that will cause the policy to run. Do the following:

   1. For **Sharing accounts**, specify the source AWS accounts from which you want to copy the shared snapshots. Choose **Add account**, enter the 12-digit AWS account ID, and then choose **Add**.

   1. For **Filter by description**, enter the required snapshot description using a regular expression. Only snapshots that are shared by the specified source accounts and that have descriptions that match the specified filter are copied by the policy. For more information, see [Specify snapshot description filters](#snapshot-descr-filters).

1. For **IAM role**, choose the IAM role that has permissions to perform snapshot copy actions. To use the default role provided by Amazon Data Lifecycle Manager, choose **Default role**. Alternatively, to use a custom IAM role that you previously created, choose **Choose another role** and then select the role to use.

   If you are copying encrypted snapshots, you must grant the selected IAM role permissions to use the KMS key that was used to encrypt the source volume. Similarly, if you are encrypting the snapshot in the destination Region using a different KMS key, you must grant the IAM role permission to use the destination KMS key. For more information, see [Step 4: Allow IAM role to use the required KMS keys (*Target account*)](#target_iam-role).

1. In the **Copy action** section, define the snapshot copy actions that the policy should perform when it is activated. The policy can copy snapshots to up to three Regions. You must specify a separate copy rule for each destination Region. For each rule that you add, do the following:

   1. For **Name**, enter a descriptive name for the copy action.

   1. For **Target Region**, select the Region to which to copy the snapshots.

   1. For **Expire**, specify how long to retain the snapshot copies in the target Region after creation.

   1. To encrypt the snapshot copy, for **Encryption**, select **Enable encryption**. If the source snapshot is encrypted, or if encryption by default is enabled for your account, the snapshot copy is always encrypted, even if you do not enable encryption here. If the source snapshot is unencrypted and encryption by default is not enabled for your account, you can choose to enable or disable encryption. If you enable encryption, but do not specify a KMS key, the snapshots are encrypted using the default encryption KMS key in each destination Region. If you specify a KMS key for the destination Region, you must have access to the KMS key.

1. To add additional snapshot copy actions, choose **Add new Regions**.

1. For **Policy status after creation**, choose **Enable policy** to start the policy runs at the next scheduled time, or **Disable policy** to prevent the policy from running. If you do not enable the policy now, it will not start copying snapshots until you manually enable it after creation.

1. Choose **Create policy**.

------
#### [ Command line ]

Use the [create-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html) command to create a policy. To create a cross-account copy event policy, for `PolicyType`, specify `EVENT_BASED_POLICY`.

For example, the following command creates a cross-account copy event policy in target account `222222222222`. The policy copies snapshots that are shared by source account `111111111111`. The policy copies snapshots to `sa-east-1` and `eu-west-2`. Snapshots copied to `sa-east-1` are unencrypted and they are retained for 3 days. Snapshots copied to `eu-west-2` are encrypted using KMS key `8af79514-350d-4c52-bac8-8985e84171c7` and they are retained for 1 month. The policy uses the default IAM role.

```
$ aws dlm create-lifecycle-policy \
    --description "Copy policy" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::222222222222:role/service-role/AWSDataLifecycleManagerDefaultRole \
    --policy-details file://policyDetails.json
```

The following shows the contents of the `policyDetails.json` file.

```
{
    "PolicyType" : "EVENT_BASED_POLICY",
    "EventSource" : {
        "Type" : "MANAGED_CWE",
        "Parameters": {
            "EventType" : "shareSnapshot",
            "SnapshotOwner": ["111111111111"]
        }
    },
    "Actions" : [{
        "Name" :"Copy Snapshot to Sao Paulo and London",
        "CrossRegionCopy" : [{
            "Target" : "sa-east-1",
            "EncryptionConfiguration" : {
                "Encrypted" : false
            },
            "RetainRule" : {
                "Interval" : 3,
                "IntervalUnit" : "DAYS"
            }
        },
        {
            "Target" : "eu-west-2",
            "EncryptionConfiguration" : {
                 "Encrypted" : true,
                 "CmkArn" : "arn:aws:kms:eu-west-2:222222222222:key/8af79514-350d-4c52-bac8-8985e84171c7"
            },
            "RetainRule" : {
                "Interval" : 1,
                "IntervalUnit" : "MONTHS"
            }
        }]
    }]
}
```

If the request succeeds, the command returns the ID of the newly created policy. The following is example output.

```
{
    "PolicyId": "policy-9876543210abcdef0"
}
```
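
If you generate the `policyDetails.json` document from code rather than writing it by hand, a small helper can keep the required structure consistent. The following Python sketch (an illustration only, not part of any AWS SDK) rebuilds the example document above and prints it:

```python
import json

def cross_account_copy_policy(source_accounts, copy_rules):
    """Build the policy-details document for an EVENT_BASED_POLICY."""
    return {
        "PolicyType": "EVENT_BASED_POLICY",
        "EventSource": {
            "Type": "MANAGED_CWE",
            "Parameters": {
                "EventType": "shareSnapshot",
                "SnapshotOwner": source_accounts,
            },
        },
        "Actions": [{
            "Name": "Copy Snapshot to Sao Paulo and London",
            "CrossRegionCopy": copy_rules,
        }],
    }

details = cross_account_copy_policy(
    ["111111111111"],
    [
        {"Target": "sa-east-1",
         "EncryptionConfiguration": {"Encrypted": False},
         "RetainRule": {"Interval": 3, "IntervalUnit": "DAYS"}},
        {"Target": "eu-west-2",
         "EncryptionConfiguration": {
             "Encrypted": True,
             "CmkArn": "arn:aws:kms:eu-west-2:222222222222:key/8af79514-350d-4c52-bac8-8985e84171c7"},
         "RetainRule": {"Interval": 1, "IntervalUnit": "MONTHS"}},
    ],
)
print(json.dumps(details, indent=4))
```

Writing the printed document to `policyDetails.json` gives you the same file referenced by the `create-lifecycle-policy` command above.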

------

### Step 4: Allow IAM role to use the required KMS keys (*Target account*)
<a name="target_iam-role"></a>

If you are copying encrypted snapshots, you must grant the IAM role (that you selected in the previous step) permissions to use the customer managed key that was used to encrypt the source volume.

**Note**  
Only perform this step if you are copying encrypted snapshots. If you are copying unencrypted snapshots, skip this step.

Use one of the following methods to add the required policies to the IAM role.

------
#### [ Console ]

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, select **Roles**. Search for and select the IAM role that you selected when you created the cross-account copy event policy in the previous step. If you chose to use the default role, the role is named **AWSDataLifecycleManagerDefaultRole**. 

1. Choose **Add inline policy** and then select the **JSON** tab.

1. Replace the existing policy with the following, and specify the ARN of the KMS key that was used to encrypt the source volumes and that was shared with you by the source account in Step 2.
**Note**  
If you are copying from multiple source accounts, then you must specify the corresponding KMS key ARN from each source account.

   In the following example, the policy grants the IAM role permission to use KMS key `1234abcd-12ab-34cd-56ef-1234567890ab`, which was shared by source account `111111111111`, and KMS key `4567dcba-23ab-34cd-56ef-0987654321yz`, which exists in target account `222222222222`.
**Tip**  
To follow the principle of least privilege, do not allow full access to `kms:CreateGrant`. Instead, use the `kms:GrantIsForAWSResource` condition key to allow the user to create grants on the KMS key only when the grant is created on the user's behalf by an AWS service, as shown in the following example.

------
#### [ JSON ]

   ```
    {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "kms:RevokeGrant",
                   "kms:CreateGrant",
                   "kms:ListGrants"
               ],
               "Resource": [
                   "arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
                   "arn:aws:kms:us-east-1:222222222222:key/4567dcba-23ab-34cd-56ef-0987654321yz"
               ],
               "Condition": {
                   "Bool": {
                       "kms:GrantIsForAWSResource": "true"
                   }
               }
           },
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Encrypt",
                   "kms:Decrypt",
                   "kms:ReEncrypt*",
                   "kms:GenerateDataKey*",
                   "kms:DescribeKey"
               ],
               "Resource": [
                   "arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
                   "arn:aws:kms:us-east-1:222222222222:key/4567dcba-23ab-34cd-56ef-0987654321yz"
               ]
           }
       ]
   }
   ```

------

1. Choose **Review policy**.

1. For **Name**, enter a descriptive name for the policy, and then choose **Create policy**.

------
#### [ Command line ]

Using your preferred text editor, create a new JSON file named `policyDetails.json`. Add the following policy and specify the ARN of the KMS key that was used to encrypt the source volumes and that was shared with you by the source account in Step 2.

**Note**  
If you are copying from multiple source accounts, then you must specify the corresponding KMS key ARN from each source account.

In the following example, the policy grants the IAM role permission to use KMS key `1234abcd-12ab-34cd-56ef-1234567890ab`, which was shared by source account `111111111111`, and KMS key `4567dcba-23ab-34cd-56ef-0987654321yz`, which exists in target account `222222222222`.

**Tip**  
To follow the principle of least privilege, do not allow full access to `kms:CreateGrant`. Instead, use the `kms:GrantIsForAWSResource` condition key to allow the user to create grants on the KMS key only when the grant is created on the user's behalf by an AWS service, as shown in the following example.

------
#### [ JSON ]

```
 {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:RevokeGrant",
                "kms:CreateGrant",
                "kms:ListGrants"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
                "arn:aws:kms:us-east-1:222222222222:key/4567dcba-23ab-34cd-56ef-0987654321yz"
            ],
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
                "arn:aws:kms:us-east-1:222222222222:key/4567dcba-23ab-34cd-56ef-0987654321yz"
            ]
        }
    ]
}
```

------

Save and close the file. Then use the [put-role-policy](https://docs.aws.amazon.com/cli/latest/reference/iam/put-role-policy.html) command to add the policy to the IAM role.

For example:

```
$ aws iam put-role-policy \
    --role-name AWSDataLifecycleManagerDefaultRole \
    --policy-name CopyPolicy \
    --policy-document file://policyDetails.json
```

------

## Specify snapshot description filters
<a name="snapshot-descr-filters"></a>

When you create the snapshot copy policy in the target account, you must specify a snapshot description filter. The filter provides an additional level of control over which snapshots are copied by the policy. A snapshot is copied only if it is shared by one of the specified source accounts and its description matches the specified filter. If a snapshot is shared by one of the specified source accounts but its description does not match the filter, it is not copied by the policy.

The snapshot description filter must be specified as a regular expression. It is a mandatory field when you create cross-account copy event policies using the console or the command line. The following are example regular expressions:
+ `.*`—This filter matches all snapshot descriptions. If you use this expression, the policy copies all snapshots that are shared by any of the specified source accounts.
+ `Created for policy: policy-0123456789abcdef0.*`—This filter matches only snapshots that were created by the policy with ID `policy-0123456789abcdef0`. If you use an expression like this, the policy copies only snapshots that are shared with your account by one of the specified source accounts and that were created by the policy with the specified ID.
+ `.*production.*`—This filter matches any snapshot that has the word `production` anywhere in its description. If you use this expression, the policy copies all snapshots that are shared by any of the specified source accounts and that have the specified text in their description.
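
Because the filter is a regular expression, you can test candidate filters against sample descriptions before creating the policy. The following Python sketch applies the three example filters above to two hypothetical snapshot descriptions; it assumes the filter is matched from the start of the description:

```python
import re

# The three example filters from this section
filters = {
    "all": r".*",
    "one policy": r"Created for policy: policy-0123456789abcdef0.*",
    "production": r".*production.*",
}

# Hypothetical snapshot descriptions to test against
descriptions = [
    "Created for policy: policy-0123456789abcdef0 schedule: Default Schedule",
    "Manual snapshot of production web server",
]

for name, pattern in filters.items():
    matched = [d for d in descriptions if re.match(pattern, d)]
    print(f"{name!r} matches {len(matched)} of {len(descriptions)} descriptions")
```

The `.*` filter matches both descriptions, while the policy-ID filter matches only the first and the `production` filter only the second.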

## Considerations for cross-account snapshot copy policies
<a name="event-policy-considerations"></a>

The following considerations apply to cross-account copy event policies:
+ The source account EBS snapshot policy and the target account cross-account copy event policy must be created in the same AWS Region. After the snapshot is shared, the target account policy can copy the snapshot to different destination Regions as specified in the copy actions.
+ You can only copy snapshots that are unencrypted or that are encrypted using a customer managed key.
+ You can create a cross-account copy event policy to copy snapshots that are shared outside of Amazon Data Lifecycle Manager.
+ If you want to encrypt snapshots in the target account, then the IAM role selected for the cross-account copy event policy must have permission to use the required KMS key.

## Additional resources
<a name="event-additional-resources"></a>

For more information, see [Automating copying encrypted Amazon EBS snapshots across AWS accounts](https://aws.amazon.com/blogs/storage/automating-copying-encrypted-amazon-ebs-snapshots-across-aws-accounts/) on the AWS Storage Blog.

# Modify Amazon Data Lifecycle Manager policies
<a name="modify"></a>

Keep the following in mind when modifying Amazon Data Lifecycle Manager policies:
+ If you modify an AMI or snapshot policy by removing its target tags, the volumes or instances with those tags are no longer managed by the policy.
+ If you modify a schedule name, the snapshots or AMIs created under the old schedule name are no longer managed by the policy.
+ If you modify an age-based retention schedule to use a new time interval, the new interval is used only for new snapshots or AMIs created after the change. The new schedule does not affect the retention schedule of snapshots or AMIs created before the change.
+ You cannot change the retention schedule of a policy from count-based to age-based after creation. To make this change, you must create a new policy.
+ If you disable a policy with an age-based retention schedule, the snapshots or AMIs that are set to expire while the policy is disabled are retained indefinitely. You must delete the snapshots or deregister the AMIs manually. When you re-enable the policy, Amazon Data Lifecycle Manager resumes deleting snapshots or deregistering AMIs as their retention periods expire.
+ If you disable a policy with a count-based retention schedule, the policy stops creating and deleting snapshots or AMIs. When you re-enable the policy, Amazon Data Lifecycle Manager resumes creating snapshots and AMIs, and it resumes deleting snapshots or AMIs as the retention threshold is met.
+ If you disable a policy that has snapshot archiving enabled, snapshots that are in the archive tier when you disable the policy are no longer managed by Amazon Data Lifecycle Manager. You must manually delete the snapshots if they are no longer needed.
+ If you enable snapshot archiving on a count-based schedule, the archiving rule applies to all new snapshots that are created and archived by the schedule, and also applies to existing snapshots that were previously created and archived by the schedule.
+ If you enable snapshot archiving on an age-based schedule, the archiving rule applies only to new snapshots created after enabling snapshot archiving. Existing snapshots created before enabling snapshot archiving continue to be deleted from their respective storage tiers, according to the schedule set when those snapshots were originally created and archived.
+ If you disable snapshot archiving for a count-based schedule, the schedule immediately stops archiving snapshots. Snapshots that were previously archived by the schedule remain in the archive tier and they will not be deleted by Amazon Data Lifecycle Manager.
+ If you disable snapshot archiving for an age-based schedule, the snapshots created by the policy and that are scheduled to be archived are permanently deleted at the scheduled archive date and time, as indicated by the `aws:dlm:expirationTime` system tag.
+ If you modify the archive retention count for a count-based schedule, the new retention count includes existing snapshots that were previously archived by the schedule.
+ If you modify the archive retention period for an age-based schedule, the new retention period applies only to snapshots that are archived after modifying the retention rule.

Use one of the following procedures to modify a lifecycle policy.

------
#### [ Console ]

**To modify a lifecycle policy**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Elastic Block Store**, **Lifecycle Manager**.

1. Select a lifecycle policy from the list.

1. Choose **Actions**, **Modify lifecycle policy**.

1. Modify the policy settings as needed. For example, you can modify the schedule, add or remove tags, or enable or disable the policy.

1. Choose **Modify policy**.

------
#### [ Command line ]

Use the [update-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/dlm/update-lifecycle-policy.html) command to modify the information in a lifecycle policy. To simplify the syntax, this example references a JSON file, `policyDetailsUpdated.json`, that includes the policy details.

```
aws dlm update-lifecycle-policy \
    --state DISABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details file://policyDetailsUpdated.json
```

The following is an example of the `policyDetailsUpdated.json` file.

```
{
   "ResourceTypes":[
      "VOLUME"
   ],
   "TargetTags":[
      {
         "Key": "costcenter",
         "Value": "120"
      }
   ],
   "Schedules":[
      {
         "Name": "DailySnapshots",
         "TagsToAdd": [
            {
               "Key": "type",
               "Value": "myDailySnapshot"
            }
         ],
         "CreateRule": {
            "Interval": 12,
            "IntervalUnit": "HOURS",
            "Times": [
               "15:00"
            ]
         },
         "RetainRule": {
            "Count" :5
         },
         "CopyTags": false 
      }
   ]
}
```

To view the updated policy, use the `get-lifecycle-policy` command. You can see that the state, the value of the tag, the snapshot interval, and the snapshot start time were changed.

------

# Delete Amazon Data Lifecycle Manager policies
<a name="delete"></a>

Keep the following in mind when deleting Amazon Data Lifecycle Manager policies:
+ If you delete a policy, the snapshots or AMIs created by that policy are not automatically deleted. If you no longer need the snapshots or AMIs, you must delete them manually.
+ If you delete a policy that has snapshot archiving enabled, snapshots that are in the archive tier when you delete the policy are no longer managed by Amazon Data Lifecycle Manager. You must manually delete the snapshots if they are no longer needed.
+ If you delete a policy with an archive-enabled, age-based schedule, the snapshots created by the policy and that are scheduled to be archived are permanently deleted at the scheduled archive date and time, as indicated by the `aws:dlm:expirationtime` system tag.

Use one of the following procedures to delete a lifecycle policy.

------
#### [ Console ]

**To delete a lifecycle policy**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Elastic Block Store**, **Lifecycle Manager**.

1. Select a lifecycle policy from the list.

1. Choose **Actions**, **Delete lifecycle policy**.

1. When prompted for confirmation, choose **Delete policy**.

------
#### [ Command line ]

Use the [delete-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/dlm/delete-lifecycle-policy.html) command to delete a lifecycle policy and free up the target tags specified in the policy for reuse. 

**Note**  
You can delete only snapshots that were created by Amazon Data Lifecycle Manager.

```
aws dlm delete-lifecycle-policy --policy-id policy-0123456789abcdef0
```

------

The [Amazon Data Lifecycle Manager API Reference](https://docs.aws.amazon.com/dlm/latest/APIReference/) provides descriptions and syntax for each of the actions and data types for the Amazon Data Lifecycle Manager Query API.

Alternatively, you can use one of the AWS SDKs to access the API in a way that's tailored to the programming language or platform that you're using. For more information, see [AWS SDKs](https://aws.amazon.com/developer/tools/).

# Control access to Amazon Data Lifecycle Manager using IAM
<a name="dlm-prerequisites"></a>

Access to Amazon Data Lifecycle Manager requires credentials. Those credentials must have permissions to access AWS resources, such as instances, volumes, snapshots, and AMIs.

The following IAM permissions are required to use Amazon Data Lifecycle Manager.

**Note**  
The `ec2:DescribeAvailabilityZones`, `ec2:DescribeRegions`, `kms:ListAliases`, and `kms:DescribeKey` permissions are required for console users only. If console access is not required, you can remove the permissions.
The ARN format of the *AWSDataLifecycleManagerDefaultRole* role differs depending on whether it was created using the console or the AWS CLI. If the role was created using the console, the ARN format is `arn:aws:iam::account_id:role/service-role/AWSDataLifecycleManagerDefaultRole`. If the role was created using the AWS CLI, the ARN format is `arn:aws:iam::account_id:role/AWSDataLifecycleManagerDefaultRole`.
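
For illustration, the two ARN formats described in the note above can be captured in a small helper. This is a hypothetical snippet, shown only to make the path difference explicit:

```python
def default_role_arn(account_id, created_with_console):
    """The console creates AWSDataLifecycleManagerDefaultRole under the
    service-role/ path; the AWS CLI creates it directly under role/."""
    path = "role/service-role/" if created_with_console else "role/"
    return f"arn:aws:iam::{account_id}:{path}AWSDataLifecycleManagerDefaultRole"

print(default_role_arn("111122223333", created_with_console=True))
print(default_role_arn("111122223333", created_with_console=False))
```

Both forms appear in the `iam:PassRole` statement below so that the permissions work regardless of how the role was created.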

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dlm:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
                "arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRoleForAMIManagement",
                "arn:aws:iam::111122223333:role/service-role/AWSDataLifecycleManagerDefaultRole",
                "arn:aws:iam::111122223333:role/service-role/AWSDataLifecycleManagerDefaultRoleForAMIManagement"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iam:ListRoles",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeRegions",
                "kms:ListAliases",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        }
    ]
}
```

------

**Permissions for encryption**

Consider the following when working with Amazon Data Lifecycle Manager and encrypted resources.
+ If the source volume is encrypted, ensure that the Amazon Data Lifecycle Manager default roles (**AWSDataLifecycleManagerDefaultRole** and **AWSDataLifecycleManagerDefaultRoleForAMIManagement**) have permission to use the KMS keys used to encrypt the volume.
+ If you enable **Cross Region copy** for unencrypted snapshots or AMIs backed by unencrypted snapshots, and choose to enable encryption in the destination Region, ensure that the default roles have permission to use the KMS key needed to perform the encryption in the destination Region.
+ If you enable **Cross Region copy** for encrypted snapshots or AMIs backed by encrypted snapshots, ensure that the default roles have permission to use both the source and destination KMS keys. 
+ If you enable snapshot archiving for encrypted snapshots, ensure that the Amazon Data Lifecycle Manager default role (**AWSDataLifecycleManagerDefaultRole**) has permission to use the KMS key used to encrypt the snapshot.

For more information, see [Allowing users in other accounts to use a KMS key](https://docs.aws.amazon.com//kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the *AWS Key Management Service Developer Guide*.

For more information, see [Changing permissions for a user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html) in the *IAM User Guide*.

# AWS managed policies for Amazon Data Lifecycle Manager
<a name="managed-policies"></a>

An AWS managed policy is a standalone policy that is created and administered by AWS. AWS managed policies are designed to provide permissions for many common use cases. AWS managed policies make it more efficient for you to assign appropriate permissions to users, groups, and roles, than if you had to write the policies yourself.

However, you can't change the permissions defined in AWS managed policies. AWS occasionally updates the permissions defined in an AWS managed policy. When this occurs, the update affects all principal entities (users, groups, and roles) that the policy is attached to.

Amazon Data Lifecycle Manager provides AWS managed policies for common use cases. These policies make it more efficient to define the appropriate permissions and control access to your resources. The AWS managed policies provided by Amazon Data Lifecycle Manager are designed to be attached to roles that you pass to Amazon Data Lifecycle Manager.

**Topics**
+ [AWSDataLifecycleManagerServiceRole](#AWSDataLifecycleManagerServiceRole)
+ [AWSDataLifecycleManagerServiceRoleForAMIManagement](#AWSDataLifecycleManagerServiceRoleForAMIManagement)
+ [AWSDataLifecycleManagerSSMFullAccess](#AWSDataLifecycleManagerSSMFullAccess)
+ [AWS managed policy updates](#policy-update)

## AWSDataLifecycleManagerServiceRole
<a name="AWSDataLifecycleManagerServiceRole"></a>

The **AWSDataLifecycleManagerServiceRole** policy provides appropriate permissions to Amazon Data Lifecycle Manager to create and manage Amazon EBS snapshot policies and cross-account copy event policies.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:CreateSnapshots",
                "ec2:DeleteSnapshot",
                "ec2:DescribeInstances",
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:EnableFastSnapshotRestores",
                "ec2:DescribeFastSnapshotRestores",
                "ec2:DisableFastSnapshotRestores",
                "ec2:CopySnapshot",
                "ec2:ModifySnapshotAttribute",
                "ec2:DescribeSnapshotAttribute",
                "ec2:ModifySnapshotTier",
                "ec2:DescribeSnapshotTierStatus",
                "ec2:DescribeAvailabilityZones"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags"
            ],
            "Resource": "arn:aws:ec2:*::snapshot/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "events:PutRule",
                "events:DeleteRule",
                "events:DescribeRule",
                "events:EnableRule",
                "events:DisableRule",
                "events:ListTargetsByRule",
                "events:PutTargets",
                "events:RemoveTargets"
            ],
            "Resource": "arn:aws:events:*:*:rule/AwsDataLifecycleRule.managed-cwe.*"
        }
    ]
}
```

------

## AWSDataLifecycleManagerServiceRoleForAMIManagement
<a name="AWSDataLifecycleManagerServiceRoleForAMIManagement"></a>

The **AWSDataLifecycleManagerServiceRoleForAMIManagement** policy provides appropriate permissions to Amazon Data Lifecycle Manager to create and manage Amazon EBS-backed AMI policies.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:CreateTags",
            "Resource": [
                "arn:aws:ec2:*::snapshot/*",
                "arn:aws:ec2:*::image/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeImages",
                "ec2:DescribeInstances",
                "ec2:DescribeImageAttribute",
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DeleteSnapshot",
            "Resource": "arn:aws:ec2:*::snapshot/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:ResetImageAttribute",
                "ec2:DeregisterImage",
                "ec2:CreateImage",
                "ec2:CopyImage",
                "ec2:ModifyImageAttribute"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:EnableImageDeprecation",
                "ec2:DisableImageDeprecation"
            ],
            "Resource": "arn:aws:ec2:*::image/*"
        }
    ]
}
```

------

## AWSDataLifecycleManagerSSMFullAccess
<a name="AWSDataLifecycleManagerSSMFullAccess"></a>

Provides Amazon Data Lifecycle Manager permission to perform the Systems Manager actions required to run pre and post scripts on all Amazon EC2 instances.

**Important**  
The policy uses the `aws:ResourceTag` condition key to restrict access to specific SSM documents when using pre and post scripts. To allow Amazon Data Lifecycle Manager to access the SSM documents, you must ensure that your SSM documents are tagged with `DLMScriptsAccess:true`. 
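
To make the tagging requirement concrete, here is a minimal sketch (plain Python, not the actual IAM policy engine) of the string comparison that the `StringEquals` condition on `aws:ResourceTag/DLMScriptsAccess` performs:

```python
# Simplified illustration of how the policy's aws:ResourceTag condition
# gates access: only SSM documents tagged DLMScriptsAccess=true match.
# This is not the IAM policy engine, just the comparison it performs.

def dlm_can_use_document(tags: dict) -> bool:
    """Return True if an SSM document's tags satisfy the
    aws:ResourceTag/DLMScriptsAccess = "true" condition."""
    # IAM StringEquals is case-sensitive on the tag value
    return tags.get("DLMScriptsAccess") == "true"

print(dlm_can_use_document({"DLMScriptsAccess": "true"}))   # True
print(dlm_can_use_document({"DLMScriptsAccess": "True"}))   # False: value is case-sensitive
print(dlm_can_use_document({}))                             # False: untagged document
```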

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSSMReadOnlyAccess",
            "Effect": "Allow",
            "Action": [
                "ssm:GetCommandInvocation",
                "ssm:ListCommands",
                "ssm:DescribeInstanceInformation"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowTaggedSSMDocumentsOnly",
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand",
                "ssm:DescribeDocument",
                "ssm:GetDocument"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:document/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/DLMScriptsAccess": "true"
                }
            }
        },
        {
            "Sid": "AllowSpecificAWSOwnedSSMDocuments",
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand",
                "ssm:DescribeDocument",
                "ssm:GetDocument"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:document/AWSEC2-CreateVssSnapshot",
                "arn:aws:ssm:*:*:document/AWSSystemsManagerSAP-CreateDLMSnapshotForSAPHANA"
            ]
        },
        {
            "Sid": "AllowAllEC2Instances",
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:instance/*"
            ]
        }
    ]
}
```

------

## AWS managed policy updates
<a name="policy-update"></a>

AWS services maintain and update AWS managed policies. You can't change the permissions in AWS managed policies. Services occasionally add permissions to an AWS managed policy to support new features. This type of update affects all identities (users, groups, and roles) to which the policy is attached. Services are most likely to update an AWS managed policy when a new feature is launched or when new operations become available. Services do not remove permissions from an AWS managed policy, so policy updates won't break your existing permissions.

The following table provides details about updates to AWS managed policies for Amazon Data Lifecycle Manager since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the [Document history for the Amazon EBS User Guide](doc-history.md).


| Change | Description | Date | 
| --- | --- | --- | 
| AWSDataLifecycleManagerServiceRole — Updated the policy permissions. | Amazon Data Lifecycle Manager added the ec2:DescribeAvailabilityZones action to grant snapshot policies permission to get information about Local Zones. | December 16, 2024 | 
| AWSDataLifecycleManagerSSMFullAccess — Updated the policy permissions. | Updated the policy to support application-consistent snapshots for SAP HANA using the AWSSystemsManagerSAP-CreateDLMSnapshotForSAPHANA SSM document. | November 17, 2023 | 
| AWSDataLifecycleManagerSSMFullAccess — Added a new AWS managed policy. | Amazon Data Lifecycle Manager added the AWSDataLifecycleManagerSSMFullAccess AWS managed policy. | November 7, 2023 | 
| AWSDataLifecycleManagerServiceRole — Added permissions to support snapshot archiving. | Amazon Data Lifecycle Manager added the ec2:ModifySnapshotTier and ec2:DescribeSnapshotTierStatus actions to grant snapshot policies permission to archive snapshots and to check the archive status for snapshots. | September 30, 2022 | 
| AWSDataLifecycleManagerServiceRoleForAMIManagement — Added permissions to support AMI deprecation. | Amazon Data Lifecycle Manager added the ec2:EnableImageDeprecation and ec2:DisableImageDeprecation actions to grant EBS-backed AMI policies permission to enable and disable AMI deprecation. | August 23, 2021 | 
| Amazon Data Lifecycle Manager started tracking changes | Amazon Data Lifecycle Manager started tracking changes for its AWS managed policies. | August 23, 2021 | 

# IAM service roles for Amazon Data Lifecycle Manager
<a name="service-role"></a>

An AWS Identity and Access Management (IAM) role is similar to a user in that it is an AWS identity with permissions policies that determine what the identity can and can't do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. A service role is a role that an AWS service assumes to perform actions on your behalf. Because Amazon Data Lifecycle Manager performs backup operations on your behalf, you must pass it a role to assume when it performs policy operations. For more information about IAM roles, see [IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) in the *IAM User Guide*.

The role that you pass to Amazon Data Lifecycle Manager must have an IAM policy with the permissions that enable Amazon Data Lifecycle Manager to perform actions associated with policy operations, such as creating snapshots and AMIs, copying snapshots and AMIs, deleting snapshots, and deregistering AMIs. Different permissions are required for each of the Amazon Data Lifecycle Manager policy types. The role must also have Amazon Data Lifecycle Manager listed as a trusted entity, which enables Amazon Data Lifecycle Manager to assume the role.

**Topics**
+ [

## Default service roles for Amazon Data Lifecycle Manager
](#default-service-roles)
+ [

## Custom service roles for Amazon Data Lifecycle Manager
](#custom-role)

## Default service roles for Amazon Data Lifecycle Manager
<a name="default-service-roles"></a>

Amazon Data Lifecycle Manager uses the following default service roles:
+ **AWSDataLifecycleManagerDefaultRole**—default role for managing snapshots. It trusts only the `dlm.amazonaws.com` service to assume the role and allows Amazon Data Lifecycle Manager to perform the actions required by snapshot and cross-account snapshot copy policies on your behalf. This role uses the `AWSDataLifecycleManagerServiceRole` AWS managed policy.
**Note**  
The ARN format of the role differs depending on whether it was created using the console or the AWS CLI. If the role was created using the console, the ARN format is `arn:aws:iam::account_id:role/service-role/AWSDataLifecycleManagerDefaultRole`. If the role was created using the AWS CLI, the ARN format is `arn:aws:iam::account_id:role/AWSDataLifecycleManagerDefaultRole`.
+ **AWSDataLifecycleManagerDefaultRoleForAMIManagement**—default role for managing AMIs. It trusts only the `dlm.amazonaws.com` service to assume the role and allows Amazon Data Lifecycle Manager to perform the actions required by EBS-backed AMI policies on your behalf. This role uses the `AWSDataLifecycleManagerServiceRoleForAMIManagement` AWS managed policy.
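
The difference in ARN format noted above can be captured in a small helper. This is an illustrative sketch; the account ID is a placeholder:

```python
# Build the ARN for the default DLM service role. The path segment
# differs depending on whether the role was created using the console
# (role/service-role/) or the AWS CLI (role/).

def default_role_arn(account_id: str, created_via_console: bool) -> str:
    path = "role/service-role/" if created_via_console else "role/"
    return f"arn:aws:iam::{account_id}:{path}AWSDataLifecycleManagerDefaultRole"

print(default_role_arn("123456789012", created_via_console=True))
# arn:aws:iam::123456789012:role/service-role/AWSDataLifecycleManagerDefaultRole
print(default_role_arn("123456789012", created_via_console=False))
# arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole
```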

If you are using the Amazon Data Lifecycle Manager console, Amazon Data Lifecycle Manager automatically creates the **AWSDataLifecycleManagerDefaultRole** service role the first time you create a snapshot or cross-account snapshot copy policy, and it automatically creates the **AWSDataLifecycleManagerDefaultRoleForAMIManagement** service role the first time you create an EBS-backed AMI policy.

If you are not using the console, you can manually create the service roles using the [create-default-role](https://docs.aws.amazon.com/cli/latest/reference/dlm/create-default-role.html) command. For `--resource-type`, specify `snapshot` to create AWSDataLifecycleManagerDefaultRole, or `image` to create AWSDataLifecycleManagerDefaultRoleForAMIManagement.

```
$ aws dlm create-default-role --resource-type snapshot|image
```

If you delete the default service roles, and then need to create them again, you can use the same process to recreate them in your account.

## Custom service roles for Amazon Data Lifecycle Manager
<a name="custom-role"></a>

As an alternative to using the default service roles, you can create custom IAM roles with the required permissions and then select them when you create a lifecycle policy. 

**To create a custom IAM role**

1. Create roles with the following permissions.
   + Permissions required for managing snapshot lifecycle policies

------
#### [ JSON ]

****  

     ```
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Action": [
                     "ec2:CreateSnapshot",
                     "ec2:CreateSnapshots",
                     "ec2:DeleteSnapshot",
                     "ec2:DescribeInstances",
                     "ec2:DescribeVolumes",
                     "ec2:DescribeSnapshots",
                     "ec2:EnableFastSnapshotRestores",
                     "ec2:DescribeFastSnapshotRestores",
                     "ec2:DisableFastSnapshotRestores",
                     "ec2:CopySnapshot",
                     "ec2:ModifySnapshotAttribute",
                     "ec2:DescribeSnapshotAttribute",
                     "ec2:ModifySnapshotTier",
                     "ec2:DescribeSnapshotTierStatus",
                     "ec2:DescribeAvailabilityZones"
                 ],
                 "Resource": "*"
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "ec2:CreateTags"
                 ],
                 "Resource": "arn:aws:ec2:*::snapshot/*"
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "events:PutRule",
                     "events:DeleteRule",
                     "events:DescribeRule",
                     "events:EnableRule",
                     "events:DisableRule",
                     "events:ListTargetsByRule",
                     "events:PutTargets",
                     "events:RemoveTargets"
                 ],
                 "Resource": "arn:aws:events:*:*:rule/AwsDataLifecycleRule.managed-cwe.*"
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "ssm:GetCommandInvocation",
                     "ssm:ListCommands",
                     "ssm:DescribeInstanceInformation"
                 ],
                 "Resource": "*"
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "ssm:SendCommand",
                     "ssm:DescribeDocument",
                     "ssm:GetDocument"
                 ],
                 "Resource": [
                     "arn:aws:ssm:*:*:document/*"
                 ],
                 "Condition": {
                     "StringEquals": {
                         "aws:ResourceTag/DLMScriptsAccess": "true"
                     }
                 }
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "ssm:SendCommand",
                     "ssm:DescribeDocument",
                     "ssm:GetDocument"
                 ],
                 "Resource": [
                     "arn:aws:ssm:*::document/*"
                 ]
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "ssm:SendCommand"
                 ],
                 "Resource": [
                     "arn:aws:ec2:*:*:instance/*"
                 ],
                 "Condition": {
                     "StringNotLike": {
                         "aws:ResourceTag/DLMScriptsAccess": "false"
                     }
                 }
             }
         ]
     }
     ```

------
   + Permissions required for managing AMI lifecycle policies

------
#### [ JSON ]

****  

     ```
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Action": "ec2:CreateTags",
                 "Resource": [
                     "arn:aws:ec2:*::snapshot/*",
                     "arn:aws:ec2:*::image/*"
                 ]
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "ec2:DescribeImages",
                     "ec2:DescribeInstances",
                     "ec2:DescribeImageAttribute",
                     "ec2:DescribeVolumes",
                     "ec2:DescribeSnapshots"
                 ],
                 "Resource": "*"
             },
             {
                 "Effect": "Allow",
                 "Action": "ec2:DeleteSnapshot",
                 "Resource": "arn:aws:ec2:*::snapshot/*"
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "ec2:ResetImageAttribute",
                     "ec2:DeregisterImage",
                     "ec2:CreateImage",
                     "ec2:CopyImage",
                     "ec2:ModifyImageAttribute"
                 ],
                 "Resource": "*"
             },
             {
                 "Effect": "Allow",
                 "Action": [
                     "ec2:EnableImageDeprecation",
                     "ec2:DisableImageDeprecation"
                 ],
                 "Resource": "arn:aws:ec2:*::image/*"
             }
         ]
     }
     ```

------

   For more information, see [Creating a Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.

1. Add a trust relationship to the roles.

   1. In the IAM console, choose **Roles**.

   1. Select the roles that you created, and then choose **Trust relationships**.

   1. Choose **Edit Trust Relationship**, add the following policy, and then choose **Update Trust Policy**.

------
#### [ JSON ]

****  

      ```
      {
          "Version": "2012-10-17",
          "Statement": [{
              "Effect": "Allow",
              "Principal": {
                  "Service": "dlm.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
          }]
      }
      ```

------

      We recommend that you use the `aws:SourceAccount` and `aws:SourceArn` condition keys to protect against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html). For example, you could add the following condition block to the previous trust policy. The `aws:SourceAccount` value is the account that owns the lifecycle policy, and the `aws:SourceArn` value is the ARN of the lifecycle policy itself. If you don't know the lifecycle policy ID, you can replace that portion of the ARN with a wildcard (`*`) and then update the trust policy after you create the lifecycle policy.

      ```
      "Condition": {
          "StringEquals": {
              "aws:SourceAccount": "account_id"
          },
          "ArnLike": {
              "aws:SourceArn": "arn:partition:dlm:region:account_id:policy/policy_id"
          }
      }
      ```
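
As a sketch, the trust policy and condition block can be assembled programmatically. The account ID and policy ID below are placeholders, and the region portion of the source ARN is wildcarded for illustration:

```python
import json

def dlm_trust_policy(account_id: str, policy_id: str = "*") -> dict:
    """Build the DLM trust policy with confused-deputy protection.
    Use policy_id="*" before the lifecycle policy exists, then tighten
    the trust policy once you know the real policy ID."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "dlm.amazonaws.com"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:SourceAccount": account_id},
                "ArnLike": {
                    # Region is wildcarded here; narrow it if your
                    # policies live in a single Region.
                    "aws:SourceArn": f"arn:aws:dlm:*:{account_id}:policy/{policy_id}"
                },
            },
        }],
    }

print(json.dumps(dlm_trust_policy("123456789012"), indent=4))
```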

# Monitor Amazon Data Lifecycle Manager policies
<a name="dlm-monitor-lifecycle"></a>

You can use the following features to monitor the lifecycle of your snapshots and AMIs.

**Topics**
+ [

## Console and AWS CLI
](#monitor-console-cli)
+ [

## AWS CloudTrail
](#monitor-lifecycle-cloudtrail)
+ [

# Monitor Data Lifecycle Manager policies using EventBridge
](monitor-cloudwatch-events.md)
+ [

# Monitor Data Lifecycle Manager policies using CloudWatch
](monitor-dlm-cw-metrics.md)

## Console and AWS CLI
<a name="monitor-console-cli"></a>

You can view your lifecycle policies using the Amazon EC2 console or the AWS CLI. Each snapshot and AMI created by a policy has a timestamp and policy-related tags. You can filter snapshots and AMIs using these tags to verify that your backups are being created as you intend.

## AWS CloudTrail
<a name="monitor-lifecycle-cloudtrail"></a>

With AWS CloudTrail, you can track user activity and API usage to demonstrate compliance with internal policies and regulatory standards. For more information, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/).

# Monitor Data Lifecycle Manager policies using EventBridge
<a name="monitor-cloudwatch-events"></a>

Amazon EBS and Amazon Data Lifecycle Manager emit events related to lifecycle policy actions. You can use AWS Lambda and Amazon EventBridge to handle these event notifications programmatically. Events are emitted on a best-effort basis. For more information, see the [Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/).

The following events are available:

**Note**  
No events are emitted for AMI lifecycle policy actions.
+ `createSnapshot` — An Amazon EBS event emitted when a `CreateSnapshot` action succeeds or fails. For more information, see [Amazon EventBridge events for Amazon EBS](ebs-cloud-watch-events.md).
+ `DLM Policy State Change` — An Amazon Data Lifecycle Manager event emitted when a lifecycle policy enters an error state. The event contains a description of what caused the error.

  The following is an example of an event when the permissions granted by the IAM role are insufficient.

  ```
  {
      "version": "0",
      "id": "01234567-0123-0123-0123-0123456789ab",
      "detail-type": "DLM Policy State Change",
      "source": "aws.dlm",
      "account": "123456789012",
      "time": "2018-05-25T13:12:22Z",
      "region": "us-east-1",
      "resources": [
          "arn:aws:dlm:us-east-1:123456789012:policy/policy-0123456789abcdef"
      ],
      "detail": {
          "state": "ERROR",
          "cause": "Role provided does not have sufficient permissions",
          "policy_id": "arn:aws:dlm:us-east-1:123456789012:policy/policy-0123456789abcdef"
      }
  }
  ```

  The following is an example of an event when a limit is exceeded.

  ```
  {
      "version": "0",
      "id": "01234567-0123-0123-0123-0123456789ab",
      "detail-type": "DLM Policy State Change",
      "source": "aws.dlm",
      "account": "123456789012",
      "time": "2018-05-25T13:12:22Z",
      "region": "us-east-1",
      "resources": [
          "arn:aws:dlm:us-east-1:123456789012:policy/policy-0123456789abcdef"
      ],
      "detail":{
          "state": "ERROR",
          "cause": "Maximum allowed active snapshot limit exceeded",
          "policy_id": "arn:aws:dlm:us-east-1:123456789012:policy/policy-0123456789abcdef"
      }
  }
  ```
+ `DLM Pre Post Script Notification` — An event that is emitted when a pre or post script is initiated, succeeds, or fails.

  The following is an example event when a VSS backup succeeds.

  ```
  {
      "version": "0",
      "id": "12345678-1234-1234-1234-123456789012",
      "detail-type": "DLM Pre Post Script Notification",
      "source": "aws.dlm",
      "account": "123456789012",
      "time": "2023-10-27T22:04:52Z",
      "region": "us-east-1",
      "resources": ["arn:aws:dlm:us-east-1:123456789012:policy/policy-01234567890abcdef"],
      "detail": {
          "script_stage": "",
          "result": "success",
          "cause": "",
          "policy_id": "arn:aws:dlm:us-east-1:123456789012:policy/policy-01234567890abcdef",
          "execution_handler": "AWS_VSS_BACKUP",
          "source": "arn:aws:ec2:us-east-1:123456789012:instance/i-01234567890abcdef",
          "resource_type": "EBS_SNAPSHOT",
          "resources": [{
              "status": "pending",
              "resource_id": "arn:aws:ec2:us-east-1::snapshot/snap-01234567890abcdef",
              "source": "arn:aws:ec2:us-east-1:123456789012:volume/vol-01234567890abcdef"
          }],
          "request_id": "a1b2c3d4-a1b2-a1b2-a1b2-a1b2c3d4e5f6",
          "start_time": "2023-10-27T22:03:29.370Z",
          "end_time": "2023-10-27T22:04:51.370Z",
          "timeout_time": ""
      }
  }
  ```
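
To show how these event payloads might be consumed, here is a minimal Lambda handler sketch. The returned action names are illustrative, not an AWS API:

```python
# Sketch of a Lambda handler for DLM events delivered via EventBridge.
# The dispatch logic and returned action names are illustrative.

def lambda_handler(event, context):
    detail_type = event.get("detail-type", "")
    detail = event.get("detail", {})

    if detail_type == "DLM Policy State Change" and detail.get("state") == "ERROR":
        # The event's "cause" explains why the policy entered the error state.
        return {
            "action": "alert",
            "policy": detail.get("policy_id"),
            "cause": detail.get("cause"),
        }
    if detail_type == "DLM Pre Post Script Notification":
        return {"action": "record", "result": detail.get("result")}
    return {"action": "ignore"}

# Exercise the handler with a trimmed-down version of the sample
# "insufficient permissions" event shown above.
sample = {
    "detail-type": "DLM Policy State Change",
    "detail": {
        "state": "ERROR",
        "cause": "Role provided does not have sufficient permissions",
        "policy_id": "arn:aws:dlm:us-east-1:123456789012:policy/policy-0123456789abcdef",
    },
}
print(lambda_handler(sample, None))
```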

# Monitor Data Lifecycle Manager policies using CloudWatch
<a name="monitor-dlm-cw-metrics"></a>

You can monitor your Amazon Data Lifecycle Manager lifecycle policies using CloudWatch, which collects raw data and processes it into readable, near real-time metrics. You can use these metrics to see exactly how many Amazon EBS snapshots and EBS-backed AMIs are created, deleted, and copied by your policies over time. You can also set alarms that watch for certain thresholds, and send notifications or take actions when those thresholds are met.

Metrics are kept for a period of 15 months, so that you can access historical information and gain a better understanding of how your lifecycle policies perform over an extended period.

For more information about Amazon CloudWatch, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/).

**Topics**
+ [

## Supported metrics
](#metrics)
+ [

## View CloudWatch metrics for your policies
](#view-metrics)
+ [Graph metrics](#graph-metrics)
+ [

## Create a CloudWatch alarm for a policy
](#create-alarm)
+ [

## Example use cases
](#use-cases)
+ [

## Managing policies that report failed actions
](#manage)

## Supported metrics
<a name="metrics"></a>

The following Amazon Data Lifecycle Manager metrics are included in the `AWS/EBS` namespace. The metrics differ by policy type.

All metrics can be measured on the `DLMPolicyId` dimension. The most useful statistics are `sum` and `average`, and the unit of measure is `count`.
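
As a sketch of how the `sum` statistic rolls up for one policy, the following aggregates an illustrative `GetMetricStatistics`-style response; the datapoint values are hypothetical:

```python
# Hypothetical per-period datapoints for one policy's
# SnapshotsCreateCompleted metric (unit: Count).
response = {
    "Label": "SnapshotsCreateCompleted",
    "Datapoints": [
        {"Timestamp": "2024-01-01T00:00:00Z", "Sum": 3.0, "Unit": "Count"},
        {"Timestamp": "2024-01-01T01:00:00Z", "Sum": 5.0, "Unit": "Count"},
        {"Timestamp": "2024-01-01T02:00:00Z", "Sum": 0.0, "Unit": "Count"},
    ],
}

# Total snapshots created across the queried periods.
total = sum(dp["Sum"] for dp in response["Datapoints"])
print(f"{response['Label']}: {total:g} snapshots created")
```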

Choose a tab to view the metrics supported by that policy type.

------
#### [ EBS snapshot policies ]


| Metric | Description | 
| --- | --- | 
|  `ResourcesTargeted`  |  The number of resources targeted by the tags specified in a snapshot or EBS-backed AMI policy.  | 
|  `SnapshotsCreateStarted`  |  The number of snapshot create actions initiated by a snapshot policy. Each action is recorded only once, even if there are multiple subsequent retries. If a snapshot create action fails, Amazon Data Lifecycle Manager sends a `SnapshotsCreateFailed` metric.  | 
|  `SnapshotsCreateCompleted`  |  The number of snapshots created by a snapshot policy. This includes successful retries within 60 minutes of the scheduled time.  | 
|  `SnapshotsCreateFailed`  |  The number of snapshots that could not be created by a snapshot policy. This includes unsuccessful retries within 60 minutes of the scheduled time.  | 
|  `SnapshotsSharedCompleted`  |  The number of snapshots shared across accounts by a snapshot policy.  | 
|  `SnapshotsDeleteCompleted`  |  The number of snapshots deleted by a snapshot or EBS-backed AMI policy. This metric applies only to snapshots created by the policy. It does not apply to cross-Region snapshot copies created by the policy. This metric includes snapshots that are deleted when an EBS-backed AMI policy deregisters AMIs.  | 
|  `SnapshotsDeleteFailed`  |  The number of snapshots that could not be deleted by a snapshot or EBS-backed AMI policy. This metric applies only to snapshots created by the policy. It does not apply to cross-Region snapshot copies created by the policy. This metric includes snapshots that are deleted when an EBS-backed AMI policy deregisters AMIs.  | 
|  `SnapshotsCopiedRegionStarted`  |  The number of cross-Region snapshot copy actions initiated by a snapshot policy.  | 
|  `SnapshotsCopiedRegionCompleted`  |  The number of cross-Region snapshot copies created by a snapshot policy. This includes successful retries within 24 hours of the scheduled time.  | 
|  `SnapshotsCopiedRegionFailed`  |  The number of cross-Region snapshot copies that could not be created by a snapshot policy. This includes unsuccessful retries within 24 hours of the scheduled time.  | 
|  `SnapshotsCopiedRegionDeleteCompleted`  |  The number of cross-Region snapshot copies deleted, as designated by the retention rule, by a snapshot policy.  | 
|  `SnapshotsCopiedRegionDeleteFailed`  |  The number of cross-Region snapshot copies that could not be deleted, as designated by the retention rule, by a snapshot policy.  | 
|  `snapshotsArchiveDeletionFailed`  |  The number of archived snapshots that could not be deleted from the archive tier by a snapshot policy.  | 
|  `snapshotsArchiveScheduled`  |  The number of snapshots that were scheduled to be archived by a snapshot policy.  | 
|  `snapshotsArchiveCompleted`  |  The number of snapshots that were successfully archived by a snapshot policy.  | 
|  `snapshotsArchiveFailed`  |  The number of snapshots that could not be archived by a snapshot policy.  | 
|  `snapshotsArchiveDeletionCompleted`  |  The number of archived snapshots that were successfully deleted from the archive tier by a snapshot policy.  | 
|  `PreScriptStarted`  |  The number of instances for which a pre script was successfully initiated. If script retries are enabled, this metric can be emitted multiple times per policy run.  | 
|  `PreScriptCompleted`  |  The number of instances for which a pre script was successfully completed. The metric is emitted even if the pre script completes outside of the specified timeout period. If script retries are enabled, this metric can be emitted multiple times per policy run.  | 
|  `PreScriptFailed`  |  The number of instances for which a pre script failed to complete successfully. The metric is emitted even if the pre script completes outside of the specified timeout period. If script retries are enabled, this metric can be emitted multiple times per policy run.  | 
|  `PostScriptStarted`  |  The number of instances for which a post script was successfully initiated. If script retries are enabled, this metric can be emitted multiple times per policy run.  | 
|  `PostScriptCompleted`  |  The number of instances for which a post script was successfully completed. The metric is emitted even if the post script completes outside of the specified timeout period. If script retries are enabled, this metric can be emitted multiple times per policy run.  | 
|  `PostScriptFailed`  |  The number of instances for which a post script failed to complete successfully. The metric is emitted even if the post script completes outside of the specified timeout period. If script retries are enabled, this metric can be emitted multiple times per policy run.  | 
|  `VSSBackupStarted`  |  The number of instances for which a VSS backup was successfully initiated. If script retries are enabled, this metric can be emitted multiple times per policy run.  | 
|  `VSSBackupCompleted`  |  The number of instances for which a VSS backup was successfully completed. The metric is emitted even if the VSS backup completes outside of the timeout period. If script retries are enabled, this metric can be emitted multiple times per policy run.  | 
|  `VSSBackupFailed`  |  The number of instances for which a VSS backup failed to complete successfully. The metric is emitted even if the VSS backup completes outside of the timeout period. If script retries are enabled, this metric can be emitted multiple times per policy run.  | 

------
#### [ EBS-backed AMI policies ]

The following metrics can be used with EBS-backed AMI policies:


| Metric | Description | 
| --- | --- | 
|  `ResourcesTargeted`  |  The number of resources targeted by the tags specified in a snapshot or EBS-backed AMI policy.  | 
|  `SnapshotsDeleteCompleted`  |  The number of snapshots deleted by a snapshot or EBS-backed AMI policy. This metric applies only to snapshots created by the policy. It does not apply to cross-Region snapshot copies created by the policy. This metric includes snapshots that are deleted when an EBS-backed AMI policy deregisters AMIs.  | 
|  `SnapshotsDeleteFailed`  |  The number of snapshots that could not be deleted by a snapshot or EBS-backed AMI policy. This metric applies only to snapshots created by the policy. It does not apply to cross-Region snapshot copies created by the policy. This metric includes snapshots that are deleted when an EBS-backed AMI policy deregisters AMIs.  | 
|  `SnapshotsCopiedRegionDeleteCompleted`  |  The number of cross-Region snapshot copies deleted, as designated by the retention rule, by a snapshot policy.  | 
|  `SnapshotsCopiedRegionDeleteFailed`  |  The number of cross-Region snapshot copies that could not be deleted, as designated by the retention rule, by a snapshot policy.  | 
|  `ImagesCreateStarted`  |  The number of `CreateImage` actions initiated by an EBS-backed AMI policy.  | 
|  `ImagesCreateCompleted`  |  The number of AMIs created by an EBS-backed AMI policy.  | 
|  `ImagesCreateFailed`  |  The number of AMIs that could not be created by an EBS-backed AMI policy.  | 
|  `ImagesDeregisterCompleted`  |  The number of AMIs deregistered by an EBS-backed AMI policy.  | 
|  `ImagesDeregisterFailed`  |  The number of AMIs that could not be deregistered by an EBS-backed AMI policy.  | 
|  `ImagesCopiedRegionStarted`  |  The number of cross-Region copy actions initiated by an EBS-backed AMI policy.  | 
|  `ImagesCopiedRegionCompleted`  |  The number of cross-Region AMI copies created by an EBS-backed AMI policy.  | 
|  `ImagesCopiedRegionFailed`  |  The number of cross-Region AMI copies that could not be created by an EBS-backed AMI policy.  | 
|  `ImagesCopiedRegionDeregisterCompleted`  |  The number of cross-Region AMI copies deregistered, as designated by the retention rule, by an EBS-backed AMI policy.  | 
|  `ImagesCopiedRegionDeregisteredFailed`  |  The number of cross-Region AMI copies that could not be deregistered, as designated by the retention rule, by an EBS-backed AMI policy.  | 
|  `EnableImageDeprecationCompleted`  |  The number of AMIs that were marked for deprecation by an EBS-backed AMI policy.  | 
|  `EnableImageDeprecationFailed`  |  The number of AMIs that could not be marked for deprecation by an EBS-backed AMI policy.  | 
|  `EnableCopiedImageDeprecationCompleted`  |  The number of cross-Region AMI copies that were marked for deprecation by an EBS-backed AMI policy.  | 
|  `EnableCopiedImageDeprecationFailed`  |  The number of cross-Region AMI copies that could not be marked for deprecation by an EBS-backed AMI policy.  | 

------
#### [ Cross-account copy event policies ]

The following metrics can be used with cross-account copy event policies:


| Metric | Description | 
| --- | --- | 
|  `SnapshotsCopiedAccountStarted`  |  The number of cross-account snapshot copy actions initiated by a cross-account copy event policy.  | 
|  `SnapshotsCopiedAccountCompleted`  |  The number of snapshots copied from another account by a cross-account copy event policy. This includes successful retries within 24 hours of the scheduled time.  | 
|  `SnapshotsCopiedAccountFailed`  |  The number of snapshots that could not be copied from another account by a cross-account copy event policy. This includes unsuccessful retries within 24 hours of the scheduled time.  | 
|  `SnapshotsCopiedAccountDeleteCompleted`  |  The number of cross-account snapshot copies deleted, as designated by the retention rule, by a cross-account copy event policy.  | 
|  `SnapshotsCopiedAccountDeleteFailed`  |  The number of cross-account snapshot copies that could not be deleted, as designated by the retention rule, by a cross-account copy event policy.  | 

------

## View CloudWatch metrics for your policies
<a name="view-metrics"></a>

You can use the AWS Management Console or the command line tools to list the metrics that Amazon Data Lifecycle Manager sends to Amazon CloudWatch.

------
#### [ Amazon EC2 console ]

**To view metrics using the Amazon EC2 console**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Lifecycle Manager**.

1. Select a policy in the grid and then choose the **Monitoring** tab.

------
#### [ CloudWatch console ]

**To view metrics using the Amazon CloudWatch console**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Metrics**.

1. Select the **EBS** namespace and then select **Data Lifecycle Manager metrics**.

------
#### [ AWS CLI ]

**To list all the available metrics for Amazon Data Lifecycle Manager**  
Use the [list-metrics](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/list-metrics.html) command.

```
aws cloudwatch list-metrics \
    --namespace AWS/EBS
```

**To list all the metrics for a specific policy**  
Use the [list-metrics](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/list-metrics.html) command and specify the `DLMPolicyId` dimension.

```
aws cloudwatch list-metrics \
    --namespace AWS/EBS \
    --dimensions Name=DLMPolicyId,Value=policy-abcdef01234567890
```

**To list a single metric across all policies**  
Use the [list-metrics](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/list-metrics.html) command and specify the `--metric-name` option.

```
aws cloudwatch list-metrics \
    --namespace AWS/EBS \
    --metric-name SnapshotsCreateCompleted
```
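
After you identify a metric, you can retrieve its data points with the [get-metric-statistics](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-statistics.html) command. The following is a sketch that assembles the command for the `SnapshotsCreateCompleted` metric over the past 24 hours; the policy ID is a placeholder, so the command is printed rather than executed.

```shell
# Compute an ISO 8601 time range covering the past 24 hours.
# The first form is GNU date; the fallback is for BSD/macOS date.
start_time=$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u -v-24H +%Y-%m-%dT%H:%M:%SZ)
end_time=$(date -u +%Y-%m-%dT%H:%M:%SZ)

# Replace policy-abcdef01234567890 with your policy ID, then run the
# printed command. It is echoed here because it requires AWS credentials.
echo aws cloudwatch get-metric-statistics \
    --namespace AWS/EBS \
    --metric-name SnapshotsCreateCompleted \
    --dimensions Name=DLMPolicyId,Value=policy-abcdef01234567890 \
    --statistics Sum \
    --period 3600 \
    --start-time "$start_time" \
    --end-time "$end_time"
```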

------

## Graph metrics for your policies
<a name="graph-metrics"></a>

After you create a policy, you can open the Amazon EC2 console and view the monitoring graphs for the policy on the **Monitoring** tab. Each graph is based on one of the available Amazon EC2 metrics.

The following graph metrics are available:
+ Resources targeted (based on `ResourcesTargeted`)
+ Snapshot creation started (based on `SnapshotsCreateStarted`)
+ Snapshot creation completed (based on `SnapshotsCreateCompleted`)
+ Snapshot creation failed (based on `SnapshotsCreateFailed`)
+ Snapshot sharing completed (based on `SnapshotsSharedCompleted`)
+ Snapshot deletion completed (based on `SnapshotsDeleteCompleted`)
+ Snapshot deletion failed (based on `SnapshotsDeleteFailed`)
+ Snapshot cross-Region copy started (based on `SnapshotsCopiedRegionStarted`)
+ Snapshot cross-Region copy completed (based on `SnapshotsCopiedRegionCompleted`)
+ Snapshot cross-Region copy failed (based on `SnapshotsCopiedRegionFailed`)
+ Snapshot cross-Region copy deletion completed (based on `SnapshotsCopiedRegionDeleteCompleted`)
+ Snapshot cross-Region copy deletion failed (based on `SnapshotsCopiedRegionDeleteFailed`)
+ Snapshot cross-account copy started (based on `SnapshotsCopiedAccountStarted`)
+ Snapshot cross-account copy completed (based on `SnapshotsCopiedAccountCompleted`)
+ Snapshot cross-account copy failed (based on `SnapshotsCopiedAccountFailed`)
+ Snapshot cross-account copy deletion completed (based on `SnapshotsCopiedAccountDeleteCompleted`)
+ Snapshot cross-account copy deletion failed (based on `SnapshotsCopiedAccountDeleteFailed`)
+ AMI creation started (based on `ImagesCreateStarted`)
+ AMI creation completed (based on `ImagesCreateCompleted`)
+ AMI creation failed (based on `ImagesCreateFailed`)
+ AMI deregistration completed (based on `ImagesDeregisterCompleted`)
+ AMI deregistration failed (based on `ImagesDeregisterFailed`)
+ AMI cross-Region copy started (based on `ImagesCopiedRegionStarted`)
+ AMI cross-Region copy completed (based on `ImagesCopiedRegionCompleted`)
+ AMI cross-Region copy failed (based on `ImagesCopiedRegionFailed`)
+ AMI cross-Region copy deregistration completed (based on `ImagesCopiedRegionDeregisterCompleted`)
+ AMI cross-Region copy deregistration failed (based on `ImagesCopiedRegionDeregisteredFailed`)
+ AMI enable deprecation completed (based on `EnableImageDeprecationCompleted`)
+ AMI enable deprecation failed (based on `EnableImageDeprecationFailed`)
+ AMI cross-Region copy enable deprecation completed (based on `EnableCopiedImageDeprecationCompleted`)
+ AMI cross-Region copy enable deprecation failed (based on `EnableCopiedImageDeprecationFailed`)
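
The graphs plot raw counts, but CloudWatch metric math lets you graph derived values as well. As a sketch (the policy ID is a placeholder and the time range is illustrative), the following builds a [get-metric-data](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-data.html) query that computes a snapshot creation failure rate from `SnapshotsCreateFailed` and `SnapshotsCreateStarted`:

```shell
# Write a metric math query that computes the snapshot creation failure
# rate: failed creations divided by started creations, per hour.
# Replace policy-abcdef01234567890 with your policy ID before running.
cat > queries.json <<'EOF'
[
  {
    "Id": "failure_rate",
    "Expression": "failed / started",
    "Label": "Snapshot creation failure rate"
  },
  {
    "Id": "failed",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/EBS",
        "MetricName": "SnapshotsCreateFailed",
        "Dimensions": [{"Name": "DLMPolicyId", "Value": "policy-abcdef01234567890"}]
      },
      "Period": 3600,
      "Stat": "Sum"
    },
    "ReturnData": false
  },
  {
    "Id": "started",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/EBS",
        "MetricName": "SnapshotsCreateStarted",
        "Dimensions": [{"Name": "DLMPolicyId", "Value": "policy-abcdef01234567890"}]
      },
      "Period": 3600,
      "Stat": "Sum"
    },
    "ReturnData": false
  }
]
EOF

# The command below requires AWS credentials, so it is echoed for review;
# the start and end times are placeholders to replace with your own range.
echo aws cloudwatch get-metric-data \
    --metric-data-queries file://queries.json \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-02T00:00:00Z
```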

## Create a CloudWatch alarm for a policy
<a name="create-alarm"></a>

You can create a CloudWatch alarm that monitors CloudWatch metrics for your policies. CloudWatch will automatically send you a notification when the metric reaches a threshold that you specify. You can create a CloudWatch alarm using the CloudWatch console.

For more information about creating alarms using the CloudWatch console, see the following topics in the *Amazon CloudWatch User Guide*.
+ [Create a CloudWatch alarm based on a static threshold](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html)
+ [Create a CloudWatch alarm based on anomaly detection](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Anomaly_Detection_Alarm.html)

## Example use cases
<a name="use-cases"></a>

The following are example use cases.

**Topics**
+ [

### Example 1: ResourcesTargeted metric
](#case1)
+ [

### Example 2: SnapshotsDeleteFailed metric
](#case2)
+ [

### Example 3: SnapshotsCopiedRegionFailed metric
](#case3)

### Example 1: ResourcesTargeted metric
<a name="case1"></a>

You can use the `ResourcesTargeted` metric to monitor the total number of resources that are targeted by a specific policy each time it is run. This enables you to trigger an alarm when the number of targeted resources is below or above an expected threshold.

For example, if you expect your daily policy to create backups of no more than `50` volumes, you can create an alarm that sends an email notification when the `sum` for `ResourcesTargeted` is greater than `50` over a `1` hour period. In this way, you can ensure that no snapshots have been unexpectedly created from volumes that have been incorrectly tagged.

You can use the following command to create this alarm:

```
aws cloudwatch put-metric-alarm \
    --alarm-name resource-targeted-monitor \
    --alarm-description "Alarm when policy targets more than 50 resources" \
    --metric-name ResourcesTargeted \
    --namespace AWS/EBS \
    --statistic Sum \
    --period 3600 \
    --threshold 50 \
    --comparison-operator GreaterThanThreshold \
    --dimensions "Name=DLMPolicyId,Value=policy_id" \
    --evaluation-periods 1 \
    --alarm-actions sns_topic_arn
```
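
If you want to confirm that the alarm's notification action works before a real incident, you can temporarily force the alarm into the `ALARM` state with [set-alarm-state](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/set-alarm-state.html). A sketch, assuming the alarm name from the example above (the command is assembled and printed because running it requires AWS credentials):

```shell
# Temporarily force the example alarm into the ALARM state to verify
# that its notification action is delivered. CloudWatch re-evaluates
# the alarm on the next period, so the forced state is short-lived.
cmd="aws cloudwatch set-alarm-state \
    --alarm-name resource-targeted-monitor \
    --state-value ALARM \
    --state-reason 'Testing the notification action'"
echo "$cmd"
```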

### Example 2: SnapshotsDeleteFailed metric
<a name="case2"></a>

You can use the `SnapshotsDeleteFailed` metric to monitor for failures to delete snapshots according to the policy's snapshot retention rule.

For example, if you've created a policy that should automatically delete snapshots every twelve hours, you can create an alarm that notifies your engineering team when the `sum` of `SnapshotsDeleteFailed` is greater than `0` over a `1` hour period. This can help you investigate improper snapshot retention and ensure that unnecessary snapshots do not increase your storage costs.

You can use the following command to create this alarm:

```
aws cloudwatch put-metric-alarm \
    --alarm-name snapshot-deletion-failed-monitor \
    --alarm-description "Alarm when snapshot deletions fail" \
    --metric-name SnapshotsDeleteFailed \
    --namespace AWS/EBS \
    --statistic Sum \
    --period 3600 \
    --threshold 0 \
    --comparison-operator GreaterThanThreshold \
    --dimensions "Name=DLMPolicyId,Value=policy_id" \
    --evaluation-periods 1 \
    --alarm-actions sns_topic_arn
```

### Example 3: SnapshotsCopiedRegionFailed metric
<a name="case3"></a>

Use the `SnapshotsCopiedRegionFailed` metric to identify when your policies fail to copy snapshots to other Regions.

For example, if your policy copies snapshots across Regions daily, you can create an alarm that sends an SMS to your engineering team when the `sum` of `SnapshotsCopiedRegionFailed` is greater than `0` over a `1` hour period. This can be useful for verifying whether subsequent snapshots in the lineage were successfully copied by the policy.

You can use the following command to create this alarm:

```
aws cloudwatch put-metric-alarm \
    --alarm-name snapshot-copy-region-failed-monitor \
    --alarm-description "Alarm when snapshot copy fails" \
    --metric-name SnapshotsCopiedRegionFailed \
    --namespace AWS/EBS \
    --statistic Sum \
    --period 3600 \
    --threshold 0 \
    --comparison-operator GreaterThanThreshold \
    --dimensions "Name=DLMPolicyId,Value=policy_id" \
    --evaluation-periods 1 \
    --alarm-actions sns_topic_arn
```

## Managing policies that report failed actions
<a name="manage"></a>

For more information about what to do when one of your policies reports an unexpected non-zero value for a failed action metric, see the article [What should I do if Amazon Data Lifecycle Manager reports failed actions in CloudWatch metrics?](https://repost.aws/knowledge-center/cloudwatch-metrics-dlm).

# Service endpoints for Amazon Data Lifecycle Manager
<a name="dlm-service-endpoints"></a>

An *endpoint* is a URL that serves as an entry point for an AWS web service. Amazon Data Lifecycle Manager supports the following endpoint types:
+ IPv4 endpoints
+ Dual-stack endpoints that support both IPv4 and IPv6
+ FIPS endpoints

When you make a request, you can specify the endpoint and Region to use. If you do not specify an endpoint, the IPv4 endpoint is used by default. To use a different endpoint type, you must specify it in your request. For examples of how to do this, see [Specifying endpoints](#dlm-endpoint-examples).

For a list of Amazon Data Lifecycle Manager endpoints by Region, see [Amazon Data Lifecycle Manager endpoints](https://docs.aws.amazon.com/general/latest/gr/dlm.html) in the *Amazon Web Services General Reference*.

**Topics**
+ [

## IPv4 endpoints
](#dlm-ipv4)
+ [

## Dual-stack (IPv4 and IPv6) endpoints
](#dlm-ipv6)
+ [

## FIPS endpoints
](#dlm-fips)
+ [

## Specifying endpoints
](#dlm-endpoint-examples)

## IPv4 endpoints
<a name="dlm-ipv4"></a>

IPv4 endpoints support IPv4 traffic only. IPv4 endpoints are available for all Regions.

You must specify the Region as part of the endpoint name. The endpoint names use the following naming convention:
+ `dlm.region.amazonaws.com`

For example, the IPv4 endpoint for the US East (N. Virginia) Region is `dlm.us-east-1.amazonaws.com`.

## Dual-stack (IPv4 and IPv6) endpoints
<a name="dlm-ipv6"></a>

Dual-stack endpoints support both IPv4 and IPv6 traffic. Dual-stack endpoints are available for all Regions.

To use IPv6, you must use a dual-stack endpoint. When you make a request to a dual-stack endpoint, the endpoint URL resolves to an IPv6 or an IPv4 address, depending on the protocol used by your network and client.

You must specify the Region as part of the endpoint name. Dual-stack endpoint names use the following naming convention:
+ `dlm.region.api.aws`

For example, the dual-stack endpoint for the US East (N. Virginia) Region is `dlm.us-east-1.api.aws`.

## FIPS endpoints
<a name="dlm-fips"></a>

Amazon Data Lifecycle Manager provides FIPS-validated dual-stack (IPv4 and IPv6) endpoints for the following Regions:
+ `us-east-1` — US East (N. Virginia)
+ `us-east-2` — US East (Ohio)
+ `us-west-1` — US West (N. California)
+ `us-west-2` — US West (Oregon)
+ `ca-central-1` — Canada (Central)
+ `ca-west-1` — Canada West (Calgary)

FIPS dual-stack endpoints use the following naming convention: `dlm-fips.region.api.aws`. For example, the FIPS dual-stack endpoint for the US East (N. Virginia) Region is `dlm-fips.us-east-1.api.aws`.
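
The naming conventions in the sections above are mechanical, so you can derive any of the endpoint names from a Region code. A minimal sketch (the `dlm_endpoint` helper is our own, not part of the AWS CLI):

```shell
# Derive Amazon Data Lifecycle Manager endpoint names from a Region code,
# following the naming conventions described in the sections above.
dlm_endpoint() {
    region="$1"; type="$2"
    case "$type" in
        ipv4)       echo "dlm.${region}.amazonaws.com" ;;
        dual-stack) echo "dlm.${region}.api.aws" ;;
        fips)       echo "dlm-fips.${region}.api.aws" ;;
    esac
}

dlm_endpoint us-east-1 ipv4        # dlm.us-east-1.amazonaws.com
dlm_endpoint us-east-1 dual-stack  # dlm.us-east-1.api.aws
dlm_endpoint us-east-1 fips        # dlm-fips.us-east-1.api.aws
```

Remember that FIPS endpoints are available only for the Regions listed above.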

## Specifying endpoints
<a name="dlm-endpoint-examples"></a>

The following examples show how to specify an endpoint for the US East (Ohio) Region (`us-east-2`) using the AWS CLI.
+ **Dual-stack**

  ```
  aws dlm create-default-role \
  --resource-type snapshot \
  --endpoint-url https://dlm.us-east-2.api.aws
  ```
+ **IPv4**

  ```
  aws dlm create-default-role \
  --resource-type snapshot \
  --endpoint-url https://dlm.us-east-2.amazonaws.com
  ```

# Create a private connection between a VPC and Amazon Data Lifecycle Manager
<a name="dlm-vpc-endpoints"></a>

You can establish a private connection between your VPC and Amazon Data Lifecycle Manager by creating an *interface VPC endpoint*, powered by [AWS PrivateLink](https://aws.amazon.com/privatelink/). You can access Amazon Data Lifecycle Manager as if it were in your VPC, without using an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC don't need public IP addresses to communicate with Amazon Data Lifecycle Manager.

We create an endpoint network interface in each subnet that you enable for the interface endpoint.

For more information, see [Access AWS services through AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-aws-services.html) in the *AWS PrivateLink Guide*.

**Note**  
Amazon Data Lifecycle Manager supports IPv4 interface VPC endpoints for all commercial and AWS GovCloud (US) Regions, and IPv6 interface VPC endpoints for commercial Regions only.

## Considerations for Amazon Data Lifecycle Manager VPC endpoints
<a name="dlm-vpc-endpoint-considerations"></a>

Before you set up an interface VPC endpoint for Amazon Data Lifecycle Manager, review [Considerations](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#considerations-interface-endpoints) in the *AWS PrivateLink Guide*.

By default, full access to Amazon Data Lifecycle Manager is allowed through the endpoint. You can control that access by attaching an endpoint policy to your VPC endpoint. The policy specifies the following information:
+ The **principal** that can perform actions.
+ The **actions** that can be performed.
+ The **resources** on which actions can be performed.

For more information, see [ Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html) in the *Amazon VPC User Guide*.

The following is an example of an endpoint policy for Amazon Data Lifecycle Manager. When attached to an endpoint, this policy grants all users permission to get summary information about Amazon Data Lifecycle Manager lifecycle policies.

```
{
  "Statement": [{
    "Action": "dlm:GetLifecyclePolicies",
    "Effect": "Allow",
    "Principal": "*",
    "Resource": "*"
  }]
}
```
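
To apply such a policy, save it to a file and pass it to the `modify-vpc-endpoint` command. A sketch, with a placeholder endpoint ID (the final command is echoed so you can review it before running it with real values):

```shell
# Save the endpoint policy shown above to a file, then apply it to an
# existing interface VPC endpoint. The endpoint ID is a placeholder.
cat > dlm-endpoint-policy.json <<'EOF'
{
  "Statement": [{
    "Action": "dlm:GetLifecyclePolicies",
    "Effect": "Allow",
    "Principal": "*",
    "Resource": "*"
  }]
}
EOF

# Requires AWS credentials and a real endpoint ID, so the command is echoed.
echo aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-0123456789abcdef0 \
    --policy-document file://dlm-endpoint-policy.json
```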

## Create an interface VPC endpoint for Amazon Data Lifecycle Manager
<a name="dlm-vpc-endpoint-create"></a>

You can create a VPC endpoint for Amazon Data Lifecycle Manager using either the Amazon VPC console or the AWS Command Line Interface (AWS CLI). For more information, see [ Create a VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#create-interface-endpoint-aws) in the *AWS PrivateLink Guide*.

Create a VPC endpoint for Amazon Data Lifecycle Manager using the following service name:
+ `com.amazonaws.region.dlm`

If you enable private DNS for the endpoint, you can make API requests to Amazon Data Lifecycle Manager using its default DNS name for the Region, for example, `dlm.us-east-1.amazonaws.com`.
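
A sketch of creating the endpoint with the AWS CLI (all resource IDs are placeholders, so the command is printed for review rather than executed):

```shell
# Build the Amazon Data Lifecycle Manager service name for the Region
# and assemble a create-vpc-endpoint command. Replace the VPC, subnet,
# and security group IDs with your own before running the printed command.
region=us-east-1
service_name="com.amazonaws.${region}.dlm"

echo aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name "$service_name" \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
```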

# Troubleshoot Amazon Data Lifecycle Manager issues
<a name="dlm-troubleshooting"></a>

The following documentation can help you troubleshoot problems that you might encounter.

**Topics**
+ [

## Error: `Role with name already exists`
](#dlm-role-arn-issue)

## Error: `Role with name already exists`
<a name="dlm-role-arn-issue"></a>

**Description**  
You get the `Role with name AWSDataLifecycleManagerDefaultRole already exists` or `Role with name AWSDataLifecycleManagerDefaultRoleForAMIManagement already exists` error when you try to create a policy using the console.

**Cause**  
The ARN format of the default role differs depending on whether it was created using the console or the AWS CLI. While the ARNs are different, the roles use the same role name, which results in a role naming conflict between the console and the AWS CLI.

**Solution**  
To resolve this issue, do the following:

1. (*For snapshot policies enabled for pre and post scripts only*) Manually attach the **AWSDataLifecycleManagerSSMFullAccess** AWS managed policy to the **AWSDataLifecycleManagerDefaultRole** IAM role. For more information, see [ Adding IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console).

1. When creating your Amazon Data Lifecycle Manager policy, for **IAM role**, select **Choose another role**, and then select either **AWSDataLifecycleManagerDefaultRole** (for a snapshot policy), or **AWSDataLifecycleManagerDefaultRoleForAMIManagement** (for an AMI policy).

1. Continue to create the policy as usual.