

# Backup creation, maintenance, and restore
<a name="recovery-points"></a>

A backup, or *recovery point*, represents the content of a resource, such as an Amazon Elastic Block Store (Amazon EBS) volume or Amazon DynamoDB table, at a specified time. Recovery point is a term that refers generally to the different backups in AWS services, such as Amazon EBS snapshots and DynamoDB backups. The terms *recovery point* and *backup* are used interchangeably.

AWS Backup saves recovery points in backup vaults, which you can organize according to your business needs. For example, you can save a set of resources that contain financial information for fiscal year 2020. When you need to recover a resource, you can use either the AWS Backup console or the AWS Command Line Interface (AWS CLI) to find and recover the resource you need.

Each recovery point has a unique ID. The unique ID is at the end of the recovery point's Amazon Resource Name (ARN). For examples of recovery point ARNs and unique IDs, see the table in [Resources and operations](access-control.md#access-control-resources).
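To see recovery point ARNs and their unique IDs in practice, you can list the recovery points in a vault with the AWS CLI. This is a sketch; the vault name `Default` is a placeholder for your own vault:

```shell
# List recovery points in a vault; each ARN ends in the recovery point's
# unique ID. "Default" is a placeholder vault name.
aws backup list-recovery-points-by-backup-vault \
    --backup-vault-name Default \
    --query 'RecoveryPoints[].{Arn:RecoveryPointArn,Created:CreationDate,Status:Status}' \
    --output table
```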

**Important**  
To avoid additional charges, configure your retention policy with a warm storage duration of **at least one week**. For more information, see [Metering, costs, and billing for AWS Backup](metering-and-billing.md).

The following sections provide an overview of the basic backup management tasks in AWS Backup.

**Topics**
+ [Creating an on-demand backup using AWS Backup](recov-point-create-on-demand-backup.md)
+ [Continuous backups and point-in-time recovery (PITR)](point-in-time-recovery.md)
+ [Backup creation by resource type](creating-a-backup.md)
+ [Backup and tag copy](recov-point-create-a-copy.md)
+ [Backup deletion](deleting-backups.md)
+ [Backup and tag edits](editing-a-backup.md)
+ [Backup search](backup-search.md)
+ [Backup tiering](backup-tiering.md)
+ [Restore a backup by resource type](restoring-a-backup.md)
+ [Restore testing](restore-testing.md)
+ [Stop a backup job](stopping-a-backup-job.md)
+ [View existing backups](listing-backups.md)

# Creating an on-demand backup using AWS Backup
<a name="recov-point-create-on-demand-backup"></a>

On the AWS Backup console, the **Protected resources** page lists resources that have been backed up by AWS Backup at least once. If you're using AWS Backup for the first time, there aren't any resources (such as Amazon EBS volumes or Amazon RDS databases) listed on this page. This is true even if a resource is assigned to a backup plan but that backup plan has not yet run a scheduled backup job.

Note: An on-demand backup begins backing up your resource immediately. Choose an on-demand backup when you want to create a backup at a time other than the scheduled time defined in a backup plan, for example, to test backup functionality at any time.

On-demand backups cannot be used with point-in-time recovery (PITR), because an on-demand backup preserves resources in the state they are in when the backup is taken, but PITR uses [continuous backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html#point-in-time-recovery-working-with), which record changes over a period of time.

**Considerations**
+ If the AWS Backup default role is not present in your account, one is created for you with the correct permissions.
+ When backups expire and are marked for deletion as part of your lifecycle policy, AWS Backup deletes the backups at a randomly chosen point over the following 8 hours. This window helps ensure consistent performance.
+ For Amazon EC2 resources, AWS Backup automatically copies existing group and individual resource tags, in addition to any tags that you add in this step.
+ AWS Backup takes EC2 backups with "no reboot" as the default behavior. Windows VSS backups are supported only for resources running on Amazon EC2, and certain instance types are not supported. For more information, see [Create Windows VSS backups](windows-backups.md).

**To create an on-demand backup**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. On the dashboard, choose **Create an on-demand backup**. Or, in the navigation pane, choose **Protected resources** and then choose **Create an on-demand backup**.

1. On the **Resource type** page, choose the resource type that you want to back up. For example, choose **DynamoDB** for Amazon DynamoDB tables.

1. Choose the name or ID of the resource to protect. For example, for Amazon DynamoDB, choose the name of the table.

1. Ensure that **Create backup now** is selected.

1. If the resource type supports transition to cold storage, **Cold storage** is present. For more information, see the **Lifecycle to cold storage** column in the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table.

   To specify when this backup goes to cold storage, choose **Move backups from warm to cold storage** and then specify the time in warm storage.

1. For **Total retention period**, specify the number of days. If you specified time in cold storage, the retention period is divided between warm and cold storage.

1. Choose an existing **Backup vault** or create a new one. Choosing **Create new Backup vault** opens a new page to create a vault and then returns you to the **Create on-demand backup** page when you are finished.

1. For **IAM role**, choose the default role or a role that you created.

1. To assign a tag to your on-demand backup, expand **Tags added to recovery points**, choose **Add new tag**, and enter a tag key and tag value.

1. **Advanced backup settings** options vary by resource type:
   + For **EC2** resources: To take application-consistent snapshots using Windows Volume Shadow Copy Service (VSS), choose **Windows VSS**.
   + For **Amazon S3** resources: You can choose to exclude Access Control Lists (ACLs) from your backup by leaving **Backup Access Control Lists (ACLs)** unselected.

1. Choose **Create on-demand backup**. This opens the **Jobs** page, where you can see a list of jobs and view job status.
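The console steps above can also be performed with a single AWS CLI call. This is a sketch; the account ID, table name, vault name, and role ARN are placeholders:

```shell
# Start an on-demand backup of a DynamoDB table. The vault, resource ARN,
# and IAM role are placeholders -- substitute your own values.
aws backup start-backup-job \
    --backup-vault-name Default \
    --resource-arn arn:aws:dynamodb:us-east-1:123456789012:table/MyTable \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole \
    --lifecycle DeleteAfterDays=35 \
    --recovery-point-tags project=demo
```

The command returns a `BackupJobId` that you can pass to `aws backup describe-backup-job` to monitor the job, which corresponds to the **Jobs** page in the console.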

# Continuous backups and point-in-time recovery (PITR)
<a name="point-in-time-recovery"></a>

For some resources, AWS Backup supports continuous backups and point-in-time recovery (PITR) in addition to snapshot backups.

With **continuous backups**, you can restore your AWS Backup-supported resource by rewinding it back to a specific time that you choose, within 1 second of precision (going back a maximum of 35 days). Continuous backup works by first creating a full backup of your resource, and then constantly backing up your resource’s transaction logs. PITR works by accessing your full backup and replaying the transaction log to the time that you tell AWS Backup to recover.

Alternatively, **snapshot backups** can be taken as frequently as every hour. Snapshot backups can be stored for up to 100 years. Snapshots can be copied for full or incremental backups.

Because continuous and snapshot backups offer different advantages, we recommend that you protect your resources with both continuous and snapshot backup rules.

An on-demand backup begins to back up your resource immediately. You can choose an on-demand backup if you wish to create a backup at a time other than the scheduled time defined in a backup plan. An on-demand backup can be used, for example, to test backup and functionality at any time.

You can't use [on-demand backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html) with PITR, because an on-demand backup preserves resources in the state they are in when the backup is taken, while PITR uses continuous backups, which record changes over a period of time.

You can opt in to continuous backups for supported resources when you create a backup plan in AWS Backup using the AWS Backup console or the API. The continuous backup plan creates one continuous recovery point and updates that recovery point whenever the job runs.
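Opting in through the API amounts to setting `EnableContinuousBackup` to `true` on a backup plan rule. A minimal sketch, with hypothetical plan, rule, and vault names and an illustrative 35-day retention:

```shell
# Sketch: create a backup plan whose rule enables continuous backups.
# All names are placeholders; retention for continuous backups is 1-35 days.
aws backup create-backup-plan --backup-plan '{
  "BackupPlanName": "ContinuousPlan",
  "Rules": [{
    "RuleName": "ContinuousRule",
    "TargetBackupVaultName": "Default",
    "ScheduleExpression": "cron(0 5 ? * * *)",
    "EnableContinuousBackup": true,
    "Lifecycle": {"DeleteAfterDays": 35}
  }]
}'
```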

**Topics**
+ [Point-in-time recovery considerations](#point-in-time-recovery-considerations)
+ [Supported services for continuous backup and PITR](#point-in-time-recovery-supported-services)
+ [Finding a continuous backup](point-in-time-recovery-finding.md)
+ [Restoring a continuous backup](point-in-time-recovery-restoring.md)
+ [Stopping or deleting continuous backups](point-in-time-recovery-stopping.md)
+ [Copying continuous backups](point-in-time-recovery-copying.md)
+ [Changing your retention period](point-in-time-recovery-retention-period.md)
+ [Removing the only continuous backup rule from a backup plan](point-in-time-recovery-removing_rule.md)

## Point-in-time recovery considerations
<a name="point-in-time-recovery-considerations"></a>

Be aware of the following considerations for point-in-time recovery:
+ **Automatic fallback to snapshots** — If AWS Backup is unable to perform a continuous backup, it tries to perform a *snapshot* backup instead.
+ **No support for on-demand continuous backups** — AWS Backup doesn't support on-demand continuous backup because on-demand backup records a point in time, whereas continuous backup records changes over a period of time.
+ **No support for transition to cold storage** — Continuous backups don't support transition to cold storage because transition to cold requires a minimum transition period of 90 days, whereas continuous backups have a maximum retention period of 35 days.
+ **Restoring recent activity** — Amazon RDS activity allows restores up until the most recent 5 minutes of activity; Amazon S3 allows restores up until the most recent 15 minutes of activity.

**Important**  
A single resource can have only one continuous backup. See the following section for additional details and best practices.

### Overlapping continuous backups on the same resource
<a name="point-in-time-recovery-overlapping"></a>

Each resource (such as an Amazon S3 bucket or an Amazon RDS database) can only have one continuous backup (recovery point); additional continuous backups are redundant. When multiple backup policies, plans, or rules instruct AWS Backup to create multiple continuous backups for the same resource, the following process applies:
+ If multiple rules specify that more than one continuous backup should be in a single vault, AWS Backup follows the rule with the longest retention period (lifecycle) and ignores additional rules.
+ If multiple rules specify that more than one continuous backup should be in more than one vault, AWS Backup creates one continuous backup according to the first rule processed. Each subsequent rule specifying a continuous backup for a resource that already has a continuous backup will result in a snapshot (periodic) backup instead.

When duplicate continuous backup plans occur, the snapshot backups created after the continuous recovery point can show a status of `Completed with issues`. The details of such a recovery point show an error similar to `"Enabling continuous backup failed, because of the following error: PITR already configured in backup plan: [ARN]"`. This error indicates that at least one continuous backup is already configured (for a different recovery point than the one containing the error). That first continuous backup (recovery point) can be used for point-in-time restore (PITR) as long as it has a status of `COMPLETED`.

To prevent the creation of unintended snapshots with issues (and the accompanying error message), review your organization's backup strategy. If necessary, adjust backup plans and policies that create multiple continuous backups of the same resource.

After you have made adjustments that result in only one continuous backup for a resource, the snapshot backups will be retained according to the specified lifecycle of the plan that created them, then they will transition to `EXPIRED` and be deleted. The continuous backup and its point-in-time recovery ability will be maintained according to the rule that created it.
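One way to find overlapping continuous rules is to check which of your backup plans enable continuous backups. A sketch using the AWS CLI (the JMESPath query filters rules on the `EnableContinuousBackup` field):

```shell
# Sketch: list each backup plan's name and any rules that enable
# continuous backups, to spot plans whose continuous rules may overlap
# on the same resources.
for id in $(aws backup list-backup-plans \
    --query 'BackupPlansList[].BackupPlanId' --output text); do
  aws backup get-backup-plan --backup-plan-id "$id" \
    --query 'BackupPlan.{Name:BackupPlanName,ContinuousRules:Rules[?EnableContinuousBackup==`true`].RuleName}' \
    --output json
done
```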

## Supported services for continuous backup and PITR
<a name="point-in-time-recovery-supported-services"></a>

AWS Backup supports continuous backups and point-in-time recovery for the following services and applications:

### Amazon S3
<a name="point-in-time-recovery-S3"></a>

To turn on PITR for S3 backups, continuous backups need to be part of the backup plan.

While the original backup of the source bucket can have PITR active, cross-Region or cross-account destination copies will not have PITR. Restoring from these copies restores to the time they were created (the copies are snapshot copies) instead of to a specified point in time.

AWS Backup for S3 relies on receiving S3 events through Amazon EventBridge. If this setting is disabled in S3 bucket notification settings, continuous backups will stop for those buckets with the setting turned off. For more information, see [Using EventBridge](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventBridge.html).
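If EventBridge delivery was turned off for a bucket, you can turn it back on with the `s3api` CLI so continuous backups can resume. The bucket name below is a placeholder; note that this call replaces the bucket's existing notification configuration, so merge in any other notification settings you need:

```shell
# Re-enable Amazon EventBridge notifications on a bucket.
# "amzn-s3-demo-bucket" is a placeholder bucket name.
aws s3api put-bucket-notification-configuration \
    --bucket amzn-s3-demo-bucket \
    --notification-configuration '{"EventBridgeConfiguration": {}}'
```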

### RDS
<a name="point-in-time-recovery-rds"></a>

**Backup schedules:** When an AWS Backup plan creates both Amazon RDS snapshots and continuous backups, AWS Backup will intelligently schedule your backup windows to coordinate with the Amazon RDS maintenance window to prevent conflicts. To further prevent conflicts, manual configuration of the Amazon RDS automated backup window is unavailable. RDS takes snapshots once per day, even if a backup plan has a frequency for snapshot backups other than once per day.

**Settings:** After you apply an AWS Backup continuous backup rule to an Amazon RDS instance, you can't create or modify continuous backup settings in Amazon RDS. You must make modifications through the AWS Backup console or the AWS Backup CLI. When you turn on automated backups for the first time, an outage occurs if you change the backup retention period of the DB instance from 0 to a nonzero value. Plan this change during a maintenance window to minimize impact. For more information about enabling automated backups, see [Enabling automated backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.BackupRetention.html) in the *Amazon RDS User Guide*.

**Transition control of continuous backup for an Amazon RDS instance back to Amazon RDS:**

------
#### [ Console ]

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup plans**.

1. Delete all the Amazon RDS backup plans with continuous backup protecting that resource.

1. Choose **Backup vaults**. Delete the continuous backup recovery point from your backup vault. Or, wait for their retention period to elapse, causing AWS Backup to automatically delete the recovery point.

After you complete these steps, AWS Backup will transition continuous backup control of your resource back to Amazon RDS.

------
#### [ AWS CLI ]

Call the `DisassociateRecoveryPoint` API operation.

To learn more, see [DisassociateRecoveryPoint](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_DisassociateRecoveryPoint.html).
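As a sketch, the CLI form of this call takes the vault name and the continuous recovery point's ARN (both values below are placeholders):

```shell
# Hand continuous backup control for this recovery point back to Amazon RDS.
# Vault name and recovery point ARN are placeholders.
aws backup disassociate-recovery-point \
    --backup-vault-name Default \
    --recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
```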

------

**IAM permissions required for Amazon RDS continuous backups**
+ To use AWS Backup to configure continuous backups for your Amazon RDS database, verify that the API permission `rds:ModifyDBInstance` exists in the IAM role defined by your backup plan configuration. To restore Amazon RDS continuous backups, you must add the permission `rds:RestoreDBInstanceToPointInTime` to the IAM role that you submitted for the restore job. You can use the `AWS Backup default service role` to perform backups and restores.
+ To describe the range of times available for point-in-time recovery, AWS Backup calls `rds:DescribeDBInstanceAutomatedBackups`. In the AWS Backup console, you must have the `rds:DescribeDBInstanceAutomatedBackups` API permission in your AWS Identity and Access Management (IAM) managed policy. You can use the `AWSBackupFullAccess` or `AWSBackupOperatorAccess` managed policies. Both policies have all required permissions. For more information, see [Managed Policies](https://docs.aws.amazon.com/aws-backup/latest/devguide/access-control.html#managed-policies).

**Retention periods:** When you change your PITR retention period, AWS Backup calls [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) to apply that change.

When AWS Backup enables PITR for the first time on an Amazon RDS instance (changing retention from 0 to a non-zero value), the operation is scheduled to occur during your database's next maintenance window to prevent unexpected downtime.

**Scenarios:**
+ **First-time PITR enablement:** When PITR is enabled on an Amazon RDS instance for the first time (regardless of whether it's managed by AWS Backup or configured directly), the change is queued for the next maintenance window. AWS Backup automatically creates snapshot backups to maintain coverage until PITR becomes active.
+ **PITR retention changes:** Non-zero to non-zero retention changes apply immediately without restart.
+ **PITR disabling:** Changes from non-zero to zero retention are scheduled for the next maintenance window.

**Backup coverage during transition:**
+ Snapshot backups provide protection while waiting for maintenance window
+ Continuous recovery points become available when the backup job runs after PITR is enabled
+ No gap in backup protection occurs during the transition period
+ Recovery granularity may be limited to snapshot intervals until PITR is fully active

Note: [Stopping the RDS instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html) will remove pending changes. PITR configuration changes will be requeued by the next backup job and applied during a subsequent maintenance window.

**Copies of Amazon RDS continuous backups:**
+ **Creating copies of Amazon RDS continuous backups** — You can't create copies of Amazon RDS continuous backups because AWS Backup for Amazon RDS does not allow copying transaction logs. Instead, AWS Backup creates a snapshot and copies it with the frequency specified in the backup plan.

**Restores:** You can perform a point-in-time restore using either AWS Backup or Amazon RDS. For AWS Backup console instructions, see [Restoring an Amazon RDS Database](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-rds.html). For Amazon RDS instructions, see [Restoring a DB Instance to a specified time](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html) in the *Amazon RDS User Guide*.

**Tip**  
A Multi-AZ (Availability Zone) database instance set to `Always On` should not have a backup retention setting of zero. If errors occur, use the AWS CLI command `disassociate-recovery-point` instead of `delete-recovery-point`, and then change the retention setting to 1 in your Amazon RDS settings.

For general information about working with Amazon RDS, see the [Amazon RDS User Guide](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html).

#### CLI examples for RDS and Aurora PITR restore
<a name="rds-pitr-cli-examples"></a>

The following examples demonstrate how to restore RDS and Aurora databases to a point in time using the AWS Backup CLI with metadata parameters.

**Example: Restore RDS database to a point in time with metadata**  


```
aws backup start-restore-job \
    --recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45 \
    --metadata '{"DBInstanceIdentifier":"restored-db-instance","Engine":"mysql","UseLatestRestorableTime":"false","RestoreTime":"2024-01-15T10:30:00Z"}' \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole \
    --resource-type RDS \
    --copy-source-tags-to-restored-resource
```

**Example: Restore Aurora cluster to a point in time**  


```
aws backup start-restore-job \
    --recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:2FC4C6F8-0FC1-546B-B91C-209C599C1D56 \
    --metadata '{"DBClusterIdentifier":"restored-aurora-cluster","Engine":"aurora-mysql","UseLatestRestorableTime":"true"}' \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole \
    --resource-type Aurora \
    --copy-source-tags-to-restored-resource
```

**Metadata parameters for RDS PITR restore**  
The following metadata parameters are supported for RDS and Aurora PITR restores:
+ **DBInstanceIdentifier** (RDS) or **DBClusterIdentifier** (Aurora) - Required. The name for the restored database.
+ **Engine** - Required. The database engine (e.g., mysql, postgres, aurora-mysql, aurora-postgresql).
+ **UseLatestRestorableTime** - Optional. Set to "true" to restore to the latest restorable time, or "false" to specify a RestoreTime.
+ **RestoreTime** - Optional. The date and time to restore to (ISO 8601 format). Required if UseLatestRestorableTime is "false".

**Copy tags to restored resource**  
Use the `--copy-source-tags-to-restored-resource` flag to copy tags from the source database to the restored database. This ensures tag-based access controls and cost allocation tags are preserved.

For complete details on RDS PITR restore parameters, see:
+ [RestoreDBInstanceToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html) in the Amazon RDS API Reference
+ [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html) in the Amazon RDS API Reference

### Aurora
<a name="pitr-aurora"></a>

To enable continuous backup of your Aurora resources, see the steps in the first section of this page.

The procedure to restore an Aurora cluster to a point in time is a [variation of the steps to restore a snapshot of an Aurora cluster](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-aur.html).

When you conduct a point-in-time restore, the console displays a **Restore time** section. See *Restoring a continuous backup* in [Working with continuous backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html#point-in-time-recovery-working-with).

### SAP HANA on Amazon EC2 instances
<a name="point-in-time-recovery-saphana"></a>

You can make [continuous backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html), which can be used with point-in-time restore (PITR). Note that on-demand backups preserve resources in the state in which they are taken, whereas PITR uses continuous backups, which record changes over a period of time.

With continuous backups, you can restore your SAP HANA database on an EC2 instance by rewinding it back to a specific time that you choose, within 1 second of precision (going back a maximum of 35 days). Continuous backup works by first creating a full backup of your resource, and then constantly backing up your resource’s transaction logs. PITR restore works by accessing your full backup and replaying the transaction log to the time that you tell AWS Backup to recover.

You can opt in to continuous backups when you create a backup plan in AWS Backup using the AWS Backup console or the API.

**To enable continuous backups using the console**

1. Sign in to the AWS Management Console, and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup plans**, and then choose **Create Backup plan**.

1. Under **Backup rules**, choose **Add Backup rule**.

1. In the **Backup rule configuration** section, select **Enable continuous backups for supported resources**.

After you disable [PITR (point-in-time restore)](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html) for SAP HANA database backups, logs will continue to be sent to AWS Backup until the recovery point expires (status equals `EXPIRED`). You can change to an alternative log backup location in SAP HANA to stop the transmission of logs to AWS Backup.

A continuous recovery point with a status of `STOPPED` indicates that a continuous recovery point has been interrupted; that is, the logs transmitted from SAP HANA to AWS Backup that show the incremental changes to a database have a gap. The recovery points that occur within this gap have a status of `STOPPED`.

For issues you may encounter during restore jobs of continuous backups (recovery points), see the [SAP HANA restore troubleshooting](https://docs.aws.amazon.com/aws-backup/latest/devguide/saphana-restore.html#saphanarestoretroubleshooting) section of this guide.

# Finding a continuous backup
<a name="point-in-time-recovery-finding"></a>

You can use the AWS Backup console to find your continuous backup.

**To find a continuous backup using the AWS Backup console**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup vaults**, and then choose your backup vault in the list.

1. In the **Backups** section, in the **Backup type** column, sort for **Continuous** recovery points. You can also sort by **Recovery point ID** for the prefix *continuous*.
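The console steps above can be approximated in the AWS CLI by filtering recovery point ARNs for the `continuous` prefix mentioned in step 3. A sketch, assuming that prefix and a placeholder vault name:

```shell
# List only continuous recovery points in a vault by matching the
# "continuous" prefix on the recovery point ID at the end of the ARN.
# "Default" is a placeholder vault name.
aws backup list-recovery-points-by-backup-vault \
    --backup-vault-name Default \
    --query 'RecoveryPoints[?contains(RecoveryPointArn, `continuous`)].[RecoveryPointArn,Status]' \
    --output table
```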

# Restoring a continuous backup
<a name="point-in-time-recovery-restoring"></a>

**To restore a continuous backup using the AWS Backup console**
+ During the PITR restore process, the AWS Backup console displays a **Restore time** section. In this section, do one of the following:
  + Choose to restore to the **Latest restorable time**.
  + Choose **Specify date and time** to enter your own date and time within your retention period.

**To restore a continuous backup using the AWS Backup API**

1. For Amazon S3, see [Use the AWS Backup API, CLI, or SDK to restore S3 recovery points](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-s3.html).

1. For Amazon RDS, see [Use the AWS Backup API, CLI, or SDK to restore Amazon RDS recovery points](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-rds.html).

# Stopping or deleting continuous backups
<a name="point-in-time-recovery-stopping"></a>

You can stop the creation of continuous backups, or you can delete specific point-in-time recovery (PITR) recovery points.

If you want to stop continuous backups, you must delete the continuous backup rule from your backup plan. If you wish to stop continuous backups for one or more resources but not for all resources, create a new backup plan with the continuous backup rule for those resources you still want to be continuously backed up. If instead you only delete a continuous backup recovery point from your backup vault, your backup plan will still continue to execute the continuous backup rule, creating a new recovery point.

However, even after you delete your continuous backup rule, AWS Backup remembers the retention period from your now-deleted backup rule. It will automatically delete your continuous backup recovery point from your backup vault based on your specified retention period.

When deleting Amazon RDS recovery points, consider:
+ A Multi-AZ (Availability Zone) database instance set to `Always On` should not have a backup retention setting of zero. If errors occur, use the AWS CLI command `disassociate-recovery-point` instead of `delete-recovery-point`, and then change the retention setting to 1 in your Amazon RDS settings.
+ When point-in-time recovery (PITR) is disabled, the change is scheduled for your maintenance window. You may continue to incur backup storage costs until the maintenance window applies the change. This process may take up to 7 days depending on your maintenance window schedule.

When deleting Aurora recovery points, consider:

If you delete an Amazon Aurora recovery point, AWS Backup sets its retention period to 1 day. Aurora backups cannot be completely deleted until the source cluster has also been deleted.

# Copying continuous backups
<a name="point-in-time-recovery-copying"></a>

If a continuous backup rule also specifies a cross-account or cross-Region copy and AWS Backup supports the operation for the resource type, AWS Backup takes a snapshot of the resource and copies the snapshot to the destination vault. To learn more about copying your recovery points across accounts and Regions, see [Copying a backup ](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-a-copy.html).

Continuous backups create periodic backups in accordance with the frequency set in the backup plan rule in the destination account and/or Region.

AWS Backup does not support on-demand copies of continuous backups.
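Because on-demand copies apply only to snapshot recovery points, a manual copy would target a snapshot ARN rather than a continuous one. A sketch with placeholder ARNs, names, and lifecycle:

```shell
# On-demand copy of a snapshot recovery point to a vault in another Region.
# All ARNs and names are placeholders; continuous recovery points are not
# valid sources for on-demand copies.
aws backup start-copy-job \
    --recovery-point-arn arn:aws:ec2:us-east-1::snapshot/snap-0123456789abcdef0 \
    --source-backup-vault-name Default \
    --destination-backup-vault-arn arn:aws:backup:us-west-2:123456789012:backup-vault:Default \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole \
    --lifecycle DeleteAfterDays=90
```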

# Changing your retention period
<a name="point-in-time-recovery-retention-period"></a>

You must use your backup plan to increase or decrease the retention period for your existing continuous backup rule. The minimum retention period is 1 day. The maximum retention period is 35 days. The change in retention period will take effect when the next backup is completed following this change. You cannot use the `UpdateRecoveryPointLifecycle` API or CLI to update the retention period of any continuous backup.
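In CLI terms, changing the retention means updating the backup plan's rule lifecycle rather than calling `UpdateRecoveryPointLifecycle`. A sketch; the plan ID, names, and 14-day retention are placeholders, and the full rule definition must be re-supplied because the update replaces the plan document:

```shell
# Change a continuous rule's retention by updating the whole backup plan.
# Plan ID and names are placeholders; retention must stay within 1-35 days.
aws backup update-backup-plan \
    --backup-plan-id 11111111-2222-3333-4444-555555555555 \
    --backup-plan '{
      "BackupPlanName": "ContinuousPlan",
      "Rules": [{
        "RuleName": "ContinuousRule",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 ? * * *)",
        "EnableContinuousBackup": true,
        "Lifecycle": {"DeleteAfterDays": 14}
      }]
    }'
```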

## Retention period by service
<a name="point-in-time-recovery-retention-period-by-service"></a>

The retention behavior can be specific to the resource type that is backed up in the recovery point.

**Amazon S3**  
When the retention period of an Amazon S3 continuous recovery point is changed (increased or decreased), that recovery point's status becomes `STOPPED`. A new continuous recovery point with the altered retention settings is created.

**Amazon Aurora and Amazon RDS**  
For recovery points of Aurora and Amazon RDS resources, only one recovery point is possible at a time. No new recovery points are created when a retention period is changed; instead, AWS Backup updates the existing recovery point with the retention specifications within the backup plan.

**SAP HANA on Amazon EC2**  
For recovery points of SAP HANA on EC2 resources, only one recovery point is possible at a time. No new recovery points are created when a retention period is changed; instead, AWS Backup updates the existing recovery point with the retention specifications within the backup plan.

Ensure that you set the retention period for these backups to be greater than the backup frequency to avoid a scenario where the recovery point transitions to `EXPIRED` state.

**Tip**  
A backup frequency rule for a continuous backup is not the same as a periodic backup snapshot. Each backup plan, even one that doesn't create a snapshot, has a frequency you set (hourly, daily, weekly, or monthly as examples) for maintenance and syncing purposes.

# Removing the only continuous backup rule from a backup plan
<a name="point-in-time-recovery-removing_rule"></a>

When you create a backup plan with a continuous backup rule and then you remove that rule, AWS Backup remembers the retention period from your now-deleted rule. It will delete the continuous backup from your backup vault when the retention period elapses.

# Backup creation by resource type
<a name="creating-a-backup"></a>

With AWS Backup, you can create backups automatically using backup plans or manually by initiating an on-demand backup. 

## Creating automatic backups
<a name="creating-automatic-backups"></a>

When backups are created automatically by backup plans, they are configured with the lifecycle settings that are defined in the backup plan. They are organized in the backup vault that is specified in the backup plan. They are also assigned the tags that are listed in the backup plan. For more information about backup plans, see [Backup plans](about-backup-plans.md).

## Creating on-demand backups
<a name="creating-on-demand-backups"></a>

When you create an on-demand backup, you can configure these same settings (lifecycle, backup vault, and tags) for the backup that is being created. Whether a backup is created automatically or manually, a backup *job* is initiated. To learn how to create an on-demand backup, see [Creating an on-demand backup using AWS Backup](recov-point-create-on-demand-backup.md).

**Note**  
An on-demand backup creates a backup job that transitions to the `RUNNING` state within an hour (or at the time you specify). Choose an on-demand backup if you want to create a backup at a time other than the scheduled time defined in a backup plan. For example, you can use an on-demand backup to test backup functionality at any time.

[ On-demand backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html) cannot be used with [ point-in-time restore (PITR)](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html) since an on-demand backup preserves resources in the state they are in when the backup is taken, whereas PITR uses [ continuous backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html#point-in-time-recovery-working-with) which record changes over a period of time.

## Backup job statuses
<a name="backup-job-statuses"></a>

Each backup job has a unique ID. For example, `D48D8717-0C9D-72DF-1F56-14E703BF2345`.

You can view the status of a backup job on the **Jobs** page of the AWS Backup console. Backup job statuses include `CREATED`, `PENDING`, `RUNNING`, `ABORTING`, `ABORTED`, `COMPLETED`, `FAILED`, `EXPIRED`, and `PARTIAL`.
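You can also inspect jobs with the AWS CLI; the following sketch uses an optional state filter and a placeholder job ID:

```
# List backup jobs that completed
aws backup list-backup-jobs --by-state COMPLETED

# Inspect a single backup job by its ID (placeholder ID)
aws backup describe-backup-job \
    --backup-job-id D48D8717-0C9D-72DF-1F56-14E703BF2345
```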

## Incremental backups
<a name="incremental-backup-works"></a>

Many resources support incremental backup with AWS Backup. A full list is available in the incremental backup section of the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table.

Although each backup after the first (full) one is incremental (meaning it only captures changes from the previous backup), all backups made with AWS Backup retain the necessary reference data to allow a full restore. This is true even if the original (full) backup has reached the end of its lifecycle and been deleted.

For example, if your day 1 (full) backup was deleted due to a 3-day lifecycle policy, you would still be able to perform a full restore with the backups from days 2 and 3. AWS Backup maintains the necessary reference data from day 1 to do so.

**Incremental backups and Regions**

Backups of resources that are fully managed by AWS Backup can be incremental only if the vault in which the backup is created also contains an earlier backup (incremental or full) of the resource. Other resource types (those not fully managed by AWS Backup) can have incremental backups as long as there is a previous backup of the resource within the same *Region*.

**Note**  
Not all resource types support incremental backups. Some resources, such as Amazon Aurora, offer incremental backup only through continuous backups and point-in-time restore (PITR), not through snapshot-based backups. For a full list of which resources support incremental backups, see the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table.

## Access to source resources
<a name="source-resource-statuses"></a>

AWS Backup needs access to your source resources to back them up. For example:
+ To back up an Amazon EC2 instance, the instance can be in the `running` or `stopped` state, but not the `terminated` state. This is because a `running` or `stopped` instance can communicate with AWS Backup, but a `terminated` instance cannot.
+ To back up a virtual machine, its hypervisor must have the Backup gateway status `ONLINE`. For more information, see [Understanding hypervisor status](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html#understand-hypervisor-status).
+ To back up an Amazon RDS database or an Amazon Aurora or Amazon DocumentDB cluster, the resource must have the status `AVAILABLE`.
+ To back up an Amazon Elastic File System (Amazon EFS), it must have the status `AVAILABLE`.
+ To back up an Amazon FSx file system, it must have the status `AVAILABLE`. If the status is `UPDATING`, the backup request is queued until the file system becomes `AVAILABLE`.

  FSx for ONTAP doesn’t support backing up certain volume types, including DP (data-protection) volumes, LS (load-sharing) volumes, full volumes, or volumes on file systems that are full. For more information, see [FSx for ONTAP Working with backups](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/using-backups.html).

AWS Backup retains previously-created backups consistent with your lifecycle policy, regardless of the health of your source resource.

**Topics**
+ [Creating automatic backups](#creating-automatic-backups)
+ [Creating on-demand backups](#creating-on-demand-backups)
+ [Backup job statuses](#backup-job-statuses)
+ [Incremental backups](#incremental-backup-works)
+ [Access to source resources](#source-resource-statuses)
+ [CloudFormation stack backups](applicationstackbackups.md)
+ [Amazon Aurora DSQL backups](backup-aurora.md)
+ [Advanced DynamoDB backup](advanced-ddb-backup.md)
+ [Amazon EBS and AWS Backup](multi-volume-crash-consistent.md)
+ [Amazon Relational Database Service backups](rds-backup.md)
+ [Amazon Redshift backups](redshift-backups.md)
+ [Amazon Redshift Serverless backups](redshift-serverless-backups.md)
+ [Amazon EKS backups](eks-backups.md)
+ [SAP HANA backup on Amazon EC2](backup-saphana.md)
+ [Amazon S3 backups](s3-backups.md)
+ [Amazon Timestream backups](timestream-backup.md)
+ [Virtual machine backups](vm-backups.md)
+ [Create Windows VSS backups](windows-backups.md)

# CloudFormation stack backups
<a name="applicationstackbackups"></a>

A CloudFormation stack consists of multiple stateful and stateless resources that you can back up as a single unit. In other words, you can back up and restore an application containing multiple resources by backing up a stack and restoring the resources within it. All the resources in a stack are defined by the stack's CloudFormation template.

When a CloudFormation stack is backed up, recovery points are created for the CloudFormation template and for each resource in the stack that AWS Backup supports. These recovery points are grouped together within an overarching recovery point called a **composite**.

This composite recovery point cannot be restored, but nested recovery points can be restored. You can restore anywhere from one to all nested backups within a composite backup using the console or the AWS CLI.

## CloudFormation application stack terminology
<a name="appstackterminology"></a>
+ **Composite recovery point**: A recovery point used to group nested recovery points together, as well as other metadata.
+ **Nested recovery point**: A recovery point of a resource that is part of a CloudFormation stack and is backed up as part of the composite recovery point. Each nested recovery point belongs in the stack of one composite recovery point.
+ **Composite job**: A backup, copy, or restore job for a CloudFormation stack which can trigger other backup jobs for individual resources within the stack.
+ **Nested job**: A backup, copy, or restore job for a resource within a CloudFormation stack.

## CloudFormation stack backup jobs
<a name="howtobackupcfn"></a>

The process of creating a backup is called a backup job. A CloudFormation stack backup job has a [ status](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup.html#backup-job-statuses). When a backup job has finished, it has the status `Completed`, which signifies that a [CloudFormation recovery point](#cfnrecoverypoints) (a backup) has been created.

CloudFormation stacks can be backed up using the console or programmatically. To back up any resource, including a CloudFormation stack, see [ Creating a backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup.html) elsewhere in this *AWS Backup Developer Guide*.

CloudFormation stacks can be backed up using the API command `StartBackupJob`. Note that the documentation and console refer to composite and nested recovery points; the API language uses the terminology "parent and child recovery points" in the same contextual relationship.
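For example, a `StartBackupJob` call for a stack might look like the following sketch (the vault name, stack ARN, and role ARN are placeholders):

```
aws backup start-backup-job \
    --backup-vault-name my-backup-vault \
    --resource-arn arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/abc12345-1234-1234-1234-123456789012 \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole
```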

CloudFormation stacks contain all the AWS resources indicated by your [CloudFormation template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html). Note that your template may contain resources not yet supported by AWS Backup. If your template contains a combination of supported and unsupported resources, AWS Backup still backs up the template into a composite recovery point, but creates nested recovery points only for the Backup-supported resources. All resource types contained within the CloudFormation template are included in a backup, even if you have not opted in to a particular service (toggling a service to “Enabled” in console Settings).

## CloudFormation recovery point
<a name="cfnrecoverypoints"></a>

### Recovery point status
<a name="cfnrecoverypointstatus"></a>

When the backup job of a stack is finished (the job status is `Completed`), a backup of the stack has been created. This backup is also known as a composite recovery point. A composite recovery point can have one of the following statuses: `Completed`, `Failed`, or `Partial`. Note that a backup job has a status, and a recovery point (also called a backup) also has a separate status.

A completed backup job means your entire stack and the resources within it are protected by AWS Backup. A failed status indicates that the backup job was unsuccessful; create the backup again after you correct the issue that caused the failure.

A `Partial` status means that not all the resources in the stack were backed up. This may happen if the CloudFormation template contains resources that are not currently supported by AWS Backup, or it may happen if one or more of the backup jobs belonging to resources within the stack (nested resources) have statuses other than `Completed`. You can manually create an on-demand backup to rerun any resources that resulted in a status other than `Completed`. If you expected the stack to have the status of `Completed` but it is marked as `Partial` instead, check to see which of the conditions above might be true about your stack.

Each nested resource within the composite recovery point has its own individual recovery point, each with its own status (either `Completed` or `Failed`). Nested recovery points with a status of `Completed` can be restored.

### Manage recovery points
<a name="cfnmanagerecoverypoints"></a>

Composite recovery points (backups) can be copied; nested recovery points can be copied, deleted, disassociated, or restored. A composite recovery point that contains nested backups cannot be deleted. After the nested recovery points within a composite recovery point have been deleted or disassociated, you can delete the composite recovery point manually or let it remain until the backup plan lifecycle deletes it.

### Delete a recovery point
<a name="cfndeleterecoverypoint"></a>

You can delete a recovery point using the AWS Backup console or using the AWS CLI.

To delete recovery points using the AWS Backup console,

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Click on **Protected Resources** in the left-hand navigation. In the text box, type `CloudFormation` to display only your CloudFormation stacks.

1. Composite recovery points will be displayed in the **Recovery points** pane. Choose the plus sign (+) to the left of a recovery point ID to expand a composite recovery point and show all the nested recovery points it contains. Select the check box to the left of each recovery point you want to delete.

1. Click the **Delete** button.

When you use the console to delete one or more composite recovery points, a warning box will pop up. This warning box requires you to confirm your intention to delete the composite recovery points, including nested recovery points within composite stacks.

To delete recovery points using API, use the `DeleteRecoveryPoint` command.

When you use the API or the AWS Command Line Interface, you must delete all nested recovery points before deleting a composite recovery point. If you send an API request to delete a composite stack backup (recovery point) that still contains nested recovery points, the request returns an error.
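The following is a sketch of that order of operations (the vault name and ARNs are placeholders, and the `--by-parent-recovery-point-arn` filter is an assumption about the CLI version you have installed):

```
# List the nested (child) recovery points of a composite recovery point
aws backup list-recovery-points-by-backup-vault \
    --backup-vault-name my-backup-vault \
    --by-parent-recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:composite-example

# Delete each nested recovery point first, then delete the composite recovery point
aws backup delete-recovery-point \
    --backup-vault-name my-backup-vault \
    --recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:nested-example
```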

### Disassociate a nested recovery point from composite recovery point
<a name="cfndisassociaterecoverypoints"></a>

You can disassociate a nested recovery point from a composite recovery point (for example, you wish to keep the nested recovery point but delete the composite recovery point). Both recovery points will remain, but they will no longer be connected; that is, actions that occur on the composite recovery point will no longer apply to the nested recovery point once it has been disassociated.

You can disassociate the recovery point using the console, or you can call the API `DisassociateRecoveryPointFromParent`. [Note that the API calls use the term “parent” to refer to composite recovery points.]
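A CLI sketch of the disassociate call (the vault name and recovery point ARN are placeholders):

```
aws backup disassociate-recovery-point-from-parent \
    --backup-vault-name my-backup-vault \
    --recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:nested-example
```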

### Copy a recovery point
<a name="cfncopyrecoverypoint"></a>

You can copy a composite recovery point, or you can copy a nested recovery point if the resource supports [cross-account and cross-Region copy](backup-feature-availability.md#features-by-resource).

To copy recovery points using the AWS Backup console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Click on **Protected Resources** in the left-hand navigation. In the text box, type `CloudFormation` to display only your CloudFormation stacks.

1. Composite recovery points will be displayed in the **Recovery points** pane. Choose the plus sign (+) to the left of a recovery point ID to expand a composite recovery point and show all the nested recovery points it contains. Select the radio button to the left of the recovery point you want to copy.

1. Once it is selected, click the **Copy** button in the top-right corner of the pane.

When you copy a composite recovery point, nested recovery points that don’t support copy functionality won’t end up in the copied stack. The composite recovery point will have a status of `Partial`.
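You can also start a copy programmatically with `StartCopyJob`; the following sketch uses placeholder vault names and ARNs:

```
aws backup start-copy-job \
    --recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:composite-example \
    --source-backup-vault-name my-backup-vault \
    --destination-backup-vault-arn arn:aws:backup:us-west-2:123456789012:backup-vault:my-copy-vault \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole
```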

## Frequently Asked Questions
<a name="cfnfaq"></a>

1. *"What is included as part of the application backup?"*

   As part of each backup of an application defined using CloudFormation, the template, the processed value of each parameter in the template, and the nested resources supported by AWS Backup are backed up. A nested resource is backed up in the same way as an individual resource that is not part of a CloudFormation stack. Note that values of parameters marked as `NoEcho` will not be backed up.

   

1. *"Can I back up my CloudFormation stack that has nested stacks?"*

   Yes. CloudFormation stacks that contain nested stacks can be backed up.

   

1. *"Does a `Partial` status mean the creation of my backup failed?"*

   No. A partial status indicates that some of the recovery points were backed up, while some were not. There are three conditions to check if you were expecting a `Completed` backup result:

   1. Does your CloudFormation stack contain resources currently unsupported by AWS Backup? For a list of supported resources, see [ Supported AWS resources and third-party applications](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html#supported-resources) in our Developer Guide.

   1. One or more of the backup jobs belonging to resources within the stack were not successful and the job has to be rerun.

   1. A nested recovery point was deleted or disassociated from the composite recovery point.

   

1. *"How do I exclude resources in my CloudFormation stack backup?"*

   When you back up your CloudFormation stack, you can exclude resources from being part of the backup. In the console, during the [create a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) and [update a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/updating-a-backup-plan.html) processes, there is an [assign resources](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html) step. In this step, there is a **Resource selection** section. If you choose **include specific resource types** and have included CloudFormation as a resource type to back up, you can **exclude specific resource IDs from the selected resource types**. You can also use tags to exclude resources within the stack.

   Using CLI, you can use
   + `NotResources` in your backup plan to exclude a specific resource from your CloudFormation stacks.
   + `StringNotLike` to exclude items through tags.
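   For example, a resource selection document might exclude one resource by ARN and any resources tagged `backup=false` (a sketch; the ARNs, tag key, and selection name are placeholders):

   ```
   {
     "BackupSelection": {
       "SelectionName": "cfn-stack-selection",
       "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
       "Resources": ["arn:aws:cloudformation:us-east-1:123456789012:stack/my-stack/*"],
       "NotResources": ["arn:aws:dynamodb:us-east-1:123456789012:table/my-excluded-table"],
       "Conditions": {
         "StringNotLike": [
           {"ConditionKey": "aws:ResourceTag/backup", "ConditionValue": "false"}
         ]
       }
     }
   }
   ```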

   

1. *"What types of backups are supported for nested resources?"*

   Backups of nested resources may be either full or incremental backups, depending on which kind of backup is supported by AWS Backup for these resources. For more information, see [ How incremental backups work](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup.html#how-incremental-backup-works). However, note that PITR (point-in-time restore) is [not supported](backup-feature-availability.md#features-by-resource) for Amazon S3 and Amazon RDS nested resources.

   

1. *"Are change sets that are part of the CloudFormation stack backed up?"*

   No. Change sets are not backed up as part of CloudFormation stack backup.

   

1. *"How does the status of the CloudFormation stack impact the backup?"*

   The status of the CloudFormation stack may impact the backup. A stack with a status that includes `COMPLETE` can be backed up, such as statuses `CREATE_COMPLETE`, `ROLLBACK_COMPLETE`, `UPDATE_COMPLETE`, `UPDATE_ROLLBACK_COMPLETE`, `IMPORT_COMPLETE`, or `IMPORT_ROLLBACK_COMPLETE`.

   If an upload of a new template fails and the stack moves to the status `ROLLBACK_COMPLETE`, the new template will be backed up, but backups of the nested resources will be based on the rolled-back resources.

   

1. *"How do application stack lifecycles differ from other recovery point lifecycles?"*

   Nested recovery point lifecycles are determined by the backup plan to which they belong. The composite recovery point's lifecycle is determined by the longest lifecycle among its nested recovery points. When the last remaining nested recovery point within a composite recovery point is deleted or disassociated, the composite recovery point is also deleted.

   

1. *“Are tags of a CloudFormation stack copied to recovery points?”*

   Yes. Those tags will be copied to each respective nested recovery point.

1. *“Is there an order for deleting composite and nested recovery points (backups)?”*

   Yes. Some backups must be deleted before others can be deleted. A composite backup that contains nested recovery points cannot be deleted until all the recovery points within it have been deleted. Once a composite recovery point no longer contains nested recovery points, you can delete it manually. Otherwise, it will be deleted in accordance with its backup plan lifecycle.

   

## Restore applications within a stack
<a name="restore-app-stack"></a>

See [ How to restore application stack backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-application-stacks.html) for information on restoring nested recovery points.

# Amazon Aurora DSQL backups
<a name="backup-aurora"></a>

You can use AWS Backup to create backups of your Amazon Aurora DSQL single-Region and multi-Region clusters. Amazon Aurora DSQL cluster backups are always full backups.

Backup creation for Amazon Aurora DSQL clusters uses the standard backup creation process. For more information, see the following:
+ [Creating an on-demand backup using AWS Backup](recov-point-create-on-demand-backup.md)
+ [Create a backup plan](creating-a-backup-plan.md)

To use AWS Backup to create backups of your Amazon Aurora DSQL clusters, you must enable protection for Aurora DSQL. For more information, see [Service Opt-in](getting-started.md#service-opt-in).

When you back up a multi-Region cluster, consider the following:
+ A multi-Region cluster backup requires a separate backup for each Region within the cluster; a backup in one Region doesn't create a recovery point for all Regions in a multi-Region cluster.
+ As a best practice, AWS Backup recommends you create a recovery point in one Region and copy it to another related Region. For [multi-Region restore](restore-auroradsql.md#restore-auroradsql-multiregion), you need a recovery point in one supported Region, and a copy of that recovery point in another Region within the same Regional triplet.

  The following supported triplets are available. Where a grouping lists more than three Regions, choose three in the same grouping.
  + US East (N. Virginia); US East (Ohio); US West (N. California)
  + Europe (Ireland); Europe (London); Europe (Paris); Europe (Frankfurt)
  + Asia Pacific (Tokyo); Asia Pacific (Seoul); Asia Pacific (Osaka)

AWS Backup recommends that you add the backup copy rule to the backup plan. If you do not add the copy rule, you must manually copy the backup to the Region in which you want to perform the restore, which increases your recovery time objective (RTO).
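For example, a backup rule with a copy action to a vault in a related Region might look like the following sketch (the plan name, vault names, destination ARN, and schedule are placeholders):

```
{
  "BackupPlanName": "aurora-dsql-plan",
  "Rules": [
    {
      "RuleName": "daily-with-copy",
      "TargetBackupVaultName": "my-backup-vault",
      "ScheduleExpression": "cron(0 5 ? * * *)",
      "Lifecycle": {"DeleteAfterDays": 35},
      "CopyActions": [
        {
          "DestinationBackupVaultArn": "arn:aws:backup:us-east-2:123456789012:backup-vault:my-copy-vault",
          "Lifecycle": {"DeleteAfterDays": 35}
        }
      ]
    }
  ]
}
```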

For information about restoring an Aurora DSQL recovery point (backup), see [Amazon Aurora DSQL restore](restore-auroradsql.md).

# Advanced DynamoDB backup
<a name="advanced-ddb-backup"></a>

AWS Backup supports additional, advanced features for your Amazon DynamoDB data protection needs.

Customers who started using AWS Backup after November 2021 have advanced DynamoDB backup features enabled by default. Specifically, advanced DynamoDB backup features are enabled by default for customers who had not created a backup vault prior to November 21, 2021.

It's best practice for existing AWS Backup customers to enable advanced features for DynamoDB. There is no difference in warm backup storage pricing after you enable advanced features. You can potentially save money by moving backups to cold storage and optimize your costs by using cost allocation tags. You can also start taking advantage of AWS Backup's cross-Region and cross-account copy and security features.

**Topics**
+ [Benefits of advanced DDB backup](#advanced-ddb-backup-benefits)
+ [Considerations for Advanced DynamoDB backup](#advanced-ddb-considerations)
+ [Enabling advanced DynamoDB backup using the console](#advanced-ddb-backup-enable-console)
+ [Enabling advanced DynamoDB backup programmatically](#advanced-ddb-backup-enable-cli)
+ [Editing an advanced DynamoDB backup](#advanced-ddb-backup-edit)
+ [Restoring an advanced DynamoDB backup](#advanced-ddb-backup-restore)
+ [Deleting an advanced DynamoDB backup](#advanced-ddb-backup-delete)
+ [Other benefits of full AWS Backup management when you enable advanced DynamoDB backup](#advanced-ddb-backup-other-benefits)

## Benefits of advanced DDB backup
<a name="advanced-ddb-backup-benefits"></a>

After you enable AWS Backup's advanced features in your AWS Region, you unlock the following features for all new DynamoDB table backups you create:
+ Cost savings and optimization:
  + [Tiering backups to cold storage](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_Lifecycle.html) to reduce storage costs
  + [ Cost allocation tagging for use with Cost Explorer](https://docs.aws.amazon.com/aws-backup/latest/devguide/metering-and-billing.html#cost-allocation-tags)
+ Additional copy options:
  + [Cross-Region copy](https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-region-backup.html)
  + [Cross-account copy](https://docs.aws.amazon.com/aws-backup/latest/devguide/create-cross-account-backup.html#prereq-cab)
+ Security:
  + Backups inherit tags from their source DynamoDB tables, allowing you to use those tags to set permissions and [ service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html).

## Considerations for Advanced DynamoDB backup
<a name="advanced-ddb-considerations"></a>

**Opting in**

Backups, including those of advanced DynamoDB resources, can be created by a backup plan, an on-demand backup, or a backup policy. Backups created by a plan or on demand automatically opt in your account to allow backups of advanced DynamoDB resources.

If your backup job is created by a backup policy, you need to manually opt in to advanced DynamoDB backups, either through the [Backup console](assigning-resources-console.md) or through the [CLI](assigning-resources-json.md).

**Custom policies and roles**

If you use a custom role or policy instead of AWS Backup's default service role, you must add or use the following permissions policies (or add their equivalent permissions) to your custom role:
+ `AWSBackupServiceRolePolicyForBackup` to perform advanced DynamoDB backup.
+ `AWSBackupServiceRolePolicyForRestores` to restore advanced DynamoDB backups.

To learn more about AWS-managed policies and view examples of customer-managed policies, see [Managed policies for AWS Backup](security-iam-awsmanpol.md).
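Assuming a custom role named `MyBackupRole` (a placeholder), the managed policies could be attached with the AWS CLI as in this sketch:

```
aws iam attach-role-policy \
    --role-name MyBackupRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup

aws iam attach-role-policy \
    --role-name MyBackupRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForRestores
```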

## Enabling advanced DynamoDB backup using the console
<a name="advanced-ddb-backup-enable-console"></a>

You can enable AWS Backup advanced features for DynamoDB backups using either the AWS Backup or DynamoDB console.

**To enable advanced DynamoDB backup features from the AWS Backup console:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation menu, choose **Settings**.

1. Under the **Supported services** section, verify that **DynamoDB** is **Enabled**.

   If it is not, choose **Opt-in** and enable DynamoDB as an AWS Backup supported service.

1. Under the **Advanced features for DynamoDB backups** section, choose **Enable**.

1. Choose **Enable features**.

For how to enable AWS Backup advanced features using the DynamoDB console, see [ Enabling AWS Backup features](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CreateBackupAWS.html#CreateBackupAWS_enabling) in the *Amazon DynamoDB User Guide*.

## Enabling advanced DynamoDB backup programmatically
<a name="advanced-ddb-backup-enable-cli"></a>

You can also enable AWS Backup advanced features for DynamoDB backups using the AWS Command Line Interface (AWS CLI). Advanced DynamoDB backup is enabled when both `"ResourceTypeManagementPreference"` and `"ResourceTypeOptInPreference"` are set to `"DynamoDB":true`.

**To programmatically enable AWS Backup advanced features for DynamoDB backups:**

1. Check if you already enabled AWS Backup advanced features for DynamoDB using the following command:

   ```
   $ aws backup describe-region-settings
   ```

   If `"DynamoDB":true` under both `"ResourceTypeManagementPreference"` and `"ResourceTypeOptInPreference"`, you have already enabled advanced DynamoDB backup.

   If, like the following output, you have at least one instance of `"DynamoDB":false`, you have not yet enabled advanced DynamoDB backup. Proceed to the next step.

   ```
   {
     "ResourceTypeManagementPreference":{
       "DynamoDB":false,
       "EFS":true
     },
     "ResourceTypeOptInPreference":{
       "Aurora":true,
       "DocumentDB":false,
       "DynamoDB":false,
       "EBS":true,
       "EC2":true,
       "EFS":true,
       "FSx":true,
       "Neptune":false,
       "RDS":true,
       "Storage Gateway":true
     }
   }
   ```

1. Use the following [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_UpdateRegionSettings.html](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_UpdateRegionSettings.html) operation to set both `"ResourceTypeManagementPreference"` and `"ResourceTypeOptInPreference"` to `"DynamoDB":true`:

   ```
   aws backup update-region-settings \
                 --resource-type-opt-in-preference DynamoDB=true \
                 --resource-type-management-preference DynamoDB=true
   ```

## Editing an advanced DynamoDB backup
<a name="advanced-ddb-backup-edit"></a>

When you create a DynamoDB backup after you enable AWS Backup advanced features, you can use AWS Backup to:
+ Copy a backup across Regions
+ Copy a backup across accounts
+ Change when AWS Backup tiers a backup to cold storage
+ Tag the backup

To use those advanced features on an existing backup, see [ Editing a backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/editing-a-backup.html).

If you later disable AWS Backup advanced features for DynamoDB, you can continue to perform those operations to DynamoDB backups that you created during the period of time when you enabled advanced features.

## Restoring an advanced DynamoDB backup
<a name="advanced-ddb-backup-restore"></a>

You can restore DynamoDB backups taken with AWS Backup advanced features enabled in the same way you restore DynamoDB backups taken prior to enabling AWS Backup advanced features. You can perform a restore using either AWS Backup or DynamoDB.

You can specify how to encrypt your newly-restored table with the following options:
+ When you restore in the same Region as your original table, you can optionally specify an encryption key for your restored table. If you do not specify an encryption key, AWS Backup will automatically encrypt your restored table using the same key that encrypted your original table.
+ When you restore in a different Region than your original table, you must specify an encryption key.
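As a sketch, a restore via `StartRestoreJob` would pass the key in the restore metadata. The ARNs and metadata keys shown here are assumptions for illustration; check the output of `aws backup get-recovery-point-restore-metadata` for the exact keys your recovery point requires:

```
aws backup start-restore-job \
    --recovery-point-arn arn:aws:backup:us-west-2:123456789012:recovery-point:example \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole \
    --metadata '{
        "targetTableName": "my-restored-table",
        "encryptionType": "KMS",
        "kmsMasterKeyArn": "arn:aws:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }'
```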

To restore using AWS Backup, see [Restore an Amazon DynamoDB table](restoring-dynamodb.md).

To restore using DynamoDB, see [Restoring a DynamoDB table from a backup](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Restore.Tutorial.html) in the *Amazon DynamoDB User Guide*.

## Deleting an advanced DynamoDB backup
<a name="advanced-ddb-backup-delete"></a>

You cannot use DynamoDB to delete backups created with these advanced features enabled. To maintain global consistency throughout your AWS environment, you must use AWS Backup to delete those backups.

To delete a DynamoDB backup, see [Backup deletion](deleting-backups.md).

## Other benefits of full AWS Backup management when you enable advanced DynamoDB backup
<a name="advanced-ddb-backup-other-benefits"></a>

When you enable AWS Backup advanced features for DynamoDB, you give full management of your DynamoDB backups to AWS Backup. Doing so gives you the following, additional benefits:

**Encryption**

AWS Backup automatically encrypts the backups with the KMS key of your destination AWS Backup vault. Previously, they were encrypted using the same encryption method as your source DynamoDB table. This increases the number of defenses you can use to safeguard your data. See [Encryption for backups in AWS Backup](encryption.md) for more information.

**Amazon Resource Name (ARN)**

Each backup ARN’s service namespace is `awsbackup`. Previously, the service namespace was `dynamodb`. Put another way, the beginning of each ARN will change from `arn:aws:dynamodb` to `arn:aws:backup`. See [ARNs for AWS Backup](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsbackup.html#awsbackup-resources-for-iam-policies) in the *Service Authorization Reference*.

With this change, you or your backup administrator can create access policies for backups using the `awsbackup` service namespace that now apply to DynamoDB backups created after you enable advanced features. By using the `awsbackup` service namespace, you can also apply policies to other backups taken by AWS Backup. See [Access control](access-control.md) for more information.
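One way to see the namespace change from the AWS CLI (the vault name is a placeholder) is to list DynamoDB recovery points in a vault and note that backups created after enabling advanced features carry `arn:aws:backup` ARNs:

```
aws backup list-recovery-points-by-backup-vault \
    --backup-vault-name sample-vault \
    --by-resource-type DynamoDB \
    --query 'RecoveryPoints[].RecoveryPointArn'
```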

**Location of charges on billing statement**

Charges for backups (including storage, data transfers, restores, and early deletion) appear under “Backup” in your AWS bill. Previously, charges appeared under “DynamoDB” in your bill.

This change ensures that you can use AWS Backup billing to centrally monitor your backup costs. See [Metering, costs, and billing for AWS Backup](metering-and-billing.md) for more information.

# Amazon EBS and AWS Backup
<a name="multi-volume-crash-consistent"></a>

The backup process for Amazon EBS resources is similar to the steps used to back up other resource types:
+ [Create an on-demand backup](recov-point-create-on-demand-backup.md)
+ [Create a scheduled backup](creating-a-backup-plan.md)

Resource-specific information is noted in the following sections.

## Amazon EBS Archive Tier for cold storage
<a name="ebs-archive-tier"></a>

Amazon EBS is one of the resource types that supports transitioning backups to cold storage. For more information, see [Lifecycle and storage tiers](plan-options-and-configuration.md#backup-lifecycle).

## Amazon EBS multi-volume, crash-consistent backups
<a name="ebs-multi-volume"></a>

By default, AWS Backup creates crash-consistent backups of Amazon EBS volumes that are attached to an Amazon EC2 instance. Crash consistency means that the snapshots for every Amazon EBS volume attached to the same Amazon EC2 instance are taken at the exact same moment. You no longer have to stop your instances or coordinate between multiple Amazon EBS volumes to ensure crash-consistency of your application state.

Since multi-volume, crash-consistent snapshots are a default AWS Backup functionality, you don’t need to do anything different to use this feature.

The role used to create an EBS snapshot recovery point is associated with that snapshot. The same role must be used to delete those recovery points or to transition them to the archive tier.

## Amazon EBS Snapshot Lock and AWS Backup
<a name="ebs-snapshotlock"></a>

Amazon EBS snapshots managed by AWS Backup, and snapshots associated with an AWS Backup managed Amazon EC2 AMI, that have Amazon EBS Snapshot Lock applied may not be deleted as part of the recovery point lifecycle if the snapshot lock duration exceeds the backup lifecycle. Instead, these recovery points will have the status `EXPIRED`. You can [delete these recovery points manually](https://docs.aws.amazon.com/aws-backup/latest/devguide/deleting-backups.html#deleting-backups-manually) if you choose to first remove the Amazon EBS snapshot lock.
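If you decide to remove the lock so an `EXPIRED` recovery point can be deleted, the sequence might look like the following sketch. The snapshot ID and vault name are placeholders, and for an EBS backup the recovery point ARN is the snapshot ARN.

```
# Remove the Amazon EBS snapshot lock
aws ec2 unlock-snapshot --snapshot-id snap-1234567890abcdef0

# Then delete the expired recovery point
aws backup delete-recovery-point \
    --backup-vault-name sample-vault \
    --recovery-point-arn arn:aws:ec2:region::snapshot/snap-1234567890abcdef0
```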

## Restoring Amazon EBS resources
<a name="ebs-restore-link"></a>

To restore your Amazon EBS volumes, follow the steps in [Restoring an Amazon EBS volume](restoring-ebs.md).

# Amazon Relational Database Service backups
<a name="rds-backup"></a>

## Amazon RDS and AWS Backup
<a name="rds-backup-differences"></a>

When you consider the options to back up your Amazon RDS instances and clusters, it's important to clarify which kind of backup you want to create and use. Several AWS resources, including Amazon RDS, offer their own native backup solutions.

Amazon RDS gives the option of making [automated backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ManagingAutomatedBackups.html) and [manual backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ManagingManualBackups.html). Recovery points created by AWS Backup are classified differently depending on the backup type:
+ **Periodic snapshots** created by AWS Backup are considered manual backups in Amazon RDS. These are snapshot-based backups taken according to your backup plan schedule.
+ **Continuous backups** created by AWS Backup are considered automated backups in Amazon RDS. These enable point-in-time restore (PITR) by maintaining transaction logs alongside automated snapshots.

This distinction is important because manual and automated backups have different retention behaviors and lifecycle management in Amazon RDS.

When you use AWS Backup to [create a backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html#create-backup-plan-console) (recovery point) of an Amazon RDS instance, AWS Backup checks whether you have previously used Amazon RDS to create an automated backup. If an automated backup exists, AWS Backup creates an incremental snapshot copy (`copy-db-snapshot` operation). If no backup exists, AWS Backup creates a snapshot of the instance you indicate instead of a copy (`create-db-snapshot` operation).

The first snapshot made by AWS Backup, created by either operation, is a full snapshot. All subsequent *copies* of it are incremental backups, as long as the full backup exists.

When using cross-account or cross-Region copies, incremental snapshot copy jobs process faster than full snapshot copy jobs. Keeping the previous snapshot copy until the new copy job completes may reduce the copy job duration. If you copy snapshots from RDS database instances, note that deleting previous copies first causes full snapshot copies to be made instead of incremental ones. For more information on optimizing copying, see [Incremental snapshot copying](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html#USER_CopySnapshot.Incremental) in the *Amazon RDS User Guide*.
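A cross-Region copy of an RDS recovery point can be started with `start-copy-job`; with a previous copy still present in the destination, the copy can proceed incrementally. The vault names, recovery point ARN, and role below are placeholders.

```
aws backup start-copy-job \
    --recovery-point-arn arn:aws:rds:region:account:snapshot:sample-snapshot \
    --source-backup-vault-name sample-vault \
    --destination-backup-vault-arn arn:aws:backup:destination-region:account:backup-vault:destination-vault \
    --iam-role-arn arn:aws:iam::account:role/service-role/AWSBackupDefaultServiceRole
```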

**Important**  
When an AWS Backup plan is scheduled to create multiple daily snapshots of an Amazon RDS instance, and one of those scheduled [AWS Backup backup windows](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html#plan-options-and-configuration) coincides with the [Amazon RDS backup window](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ManagingAutomatedBackups.html#USER_WorkingWithAutomatedBackups.BackupWindow), the data lineage of the backups can branch into non-identical backups, creating unplanned and conflicting backups. To prevent this, ensure that your AWS Backup backup window and your Amazon RDS backup window do not coincide.

### Considerations
<a name="rds-backup-considerations"></a>

RDS Custom for SQL Server and RDS Custom for Oracle are not currently supported by AWS Backup.

AWS Backup does not support backup and restore of RDS on Outposts.

## Amazon RDS continuous backups and point-in-time restore
<a name="rds-backup-continuous"></a>

Continuous backups use AWS Backup to create a full backup of your Amazon RDS resource and then capture all changes through transaction logs. You can achieve greater granularity by rewinding to the exact point in time you want to restore to, instead of choosing a snapshot taken at fixed time intervals.

See [continuous backups and PITR supported services](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html#point-in-time-recovery-supported-services) and [managing continuous backup settings](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html#point-in-time-recovery-managing) for more information.

## Amazon RDS Multi-Availability Zone backups
<a name="rds-multiaz"></a>

AWS Backup supports backing up the Amazon RDS for MySQL and RDS for PostgreSQL Multi-AZ (Availability Zone) deployment options with one primary and two readable standby database instances.

For a list of Regions where Multi-Availability Zone backups are available, see the Amazon RDS Multi-AZ column in [Supported services by AWS Region](backup-feature-availability.md#supported-services-by-region).

The Multi-AZ deployment option optimizes write transactions and is ideal when your workloads require additional read capacity, lower write transaction latency, more resilience from network jitter (which impacts the consistency of write transaction latency), and high availability and durability.

To create a Multi-AZ cluster, you can choose either MySQL or PostgreSQL as the engine type.

In the AWS Backup console, there are three deployment options:
+ **Multi-AZ DB cluster:** Creates a DB cluster with a primary DB instance and two readable standby DB instances, with each DB instance in a different Availability Zone. This provides high availability and data redundancy, and increases capacity to serve read workloads.
+ **Multi-AZ DB instance:** Creates a primary DB instance and a standby DB instance in a different Availability Zone. This provides high availability and data redundancy, but the standby DB instance doesn’t support connections for read workloads.
+ **Single DB instance:** Creates a single DB instance with no standby DB instances.

**Backup behavior with instances and clusters**
+ [Point-in-time recovery](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html) (PITR) can support instances, but not clusters.
+ Copying a Multi-AZ DB cluster snapshot is not supported.
+ The Amazon Resource Name (ARN) for an RDS recovery point depends on whether an instance or cluster is used:

  An RDS instance ARN: `arn:aws:rds:region:account:db:name`

  An RDS Multi-AZ cluster ARN: `arn:aws:rds:region:account:cluster:name`

For more information, consult [Multi-AZ DB cluster deployments](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) in the *Amazon RDS User Guide*.

For steps to create a cluster snapshot, see [Creating a Multi-AZ DB cluster snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateMultiAZDBClusterSnapshot.html) in the *Amazon RDS User Guide*.

## Amazon Aurora Global Databases
<a name="rds-aurora-global"></a>

AWS recommends maintaining backups in every Region where your global database is deployed.

# Amazon Redshift backups
<a name="redshift-backups"></a>

Amazon Redshift is a fully managed, scalable cloud data warehouse that accelerates your time to insights with fast, easy, and secure analytics. You can use AWS Backup to protect your data warehouses with immutable backups, separate access policies, and centralized organizational governance of backup and restore jobs.

An Amazon Redshift data warehouse is a collection of computing resources called nodes, which are organized into a group called a cluster. AWS Backup can back up these clusters.

For information on [Amazon Redshift](https://docs.aws.amazon.com/redshift/index.html), see the [Amazon Redshift Getting Started Guide](https://docs.aws.amazon.com/redshift/latest/gsg/index.html), the [Amazon Redshift Database Developer Guide](https://docs.aws.amazon.com/redshift/latest/dg/index.html), and the [Amazon Redshift Cluster Management Guide](https://docs.aws.amazon.com/redshift/latest/mgmt/index.html).

## Back up Amazon Redshift provisioned clusters
<a name="backupredshift"></a>

You can protect your Amazon Redshift clusters using the AWS Backup console or programmatically using API or CLI. These clusters can be backed up on a regular schedule as part of a backup plan, or they can be backed up as needed via on-demand backup.

You can restore a single table (also known as item-level restore) or an entire cluster. Note that tables cannot be backed up by themselves; tables are backed up as part of a cluster when the cluster is backed up.

Using AWS Backup allows you to view your resources in a centralized way; however, if Amazon Redshift is the only resource you use, you can continue to use the automated snapshot scheduler in Amazon Redshift. Note that you cannot continue to manage manual snapshot settings using Amazon Redshift if you choose to manage them via AWS Backup.

You can back up Amazon Redshift clusters either through the AWS Backup console or using the AWS CLI.

There are two ways to use the AWS Backup console to back up an Amazon Redshift cluster: on demand or as part of a backup plan.

### Create on-demand Amazon Redshift backups
<a name="ondemandredshiftbackups"></a>

For more information, see [Creating an on-demand backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html).

To create a manual snapshot, leave the continuous backup checkbox unchecked when you create a backup plan that includes Amazon Redshift resources.

### Create scheduled Amazon Redshift backups in a backup plan
<a name="scheduledredshiftbackups"></a>

Your scheduled backups can include Amazon Redshift clusters if they are a protected resource. To opt into protecting Amazon Redshift clusters:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Using the navigation pane, choose **Protected resources**.

1. Toggle Amazon Redshift to **On**.

1. See [Assigning resources using the console](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html#assigning-resources-console) to include Amazon Redshift clusters in an existing or new plan.

Under **Manage Backup plans**, you can choose to [create a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) and include Amazon Redshift clusters, or you can [update an existing one](https://docs.aws.amazon.com/aws-backup/latest/devguide/updating-a-backup-plan.html) to include Amazon Redshift clusters. When adding the resource type *Amazon Redshift*, you can choose to add **All Amazon Redshift clusters**, or check the boxes next to the clusters you wish to include in your backup plan.

### Back up programmatically
<a name="redshiftbackupapi"></a>

You can also define your backup plan in a JSON document and provide it using the AWS Backup console or AWS CLI. See [Creating backup plans using a JSON document and the AWS Backup CLI](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html#create-backup-plan-cli) for information on how to create a backup plan programmatically.

You can do the following operations using the API:
+ Start a backup job
+ Describe a backup job
+ Get recovery point metadata
**Note**  
`BackupSizeInBytes` metadata is supported for the following resource types: Amazon EBS volumes, Amazon EFS file systems, Amazon RDS databases, DynamoDB tables, Amazon EC2 instances, Amazon FSx file systems, and Amazon S3 buckets. This field provides the size of the backup in bytes and is available through the `DescribeRecoveryPoint` API and AWS Backup console. For unsupported resource types, this field will not be populated.
+ List recovery points by resources
+ List tags for the recovery point
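As a sketch, the operations above map to AWS CLI commands such as the following. The vault name, cluster ARN, job ID, and recovery point ARN are placeholders.

```
# Start a backup job for a Redshift cluster
aws backup start-backup-job \
    --backup-vault-name sample-vault \
    --resource-arn arn:aws:redshift:region:account:cluster:sample-cluster \
    --iam-role-arn arn:aws:iam::account:role/service-role/AWSBackupDefaultServiceRole

# Describe the backup job
aws backup describe-backup-job --backup-job-id sample-job-id

# Get recovery point metadata
aws backup describe-recovery-point \
    --backup-vault-name sample-vault \
    --recovery-point-arn arn:aws:backup:region:account:recovery-point:ID

# List recovery points by resource
aws backup list-recovery-points-by-resource \
    --resource-arn arn:aws:redshift:region:account:cluster:sample-cluster

# List tags for the recovery point
aws backup list-tags \
    --resource-arn arn:aws:backup:region:account:recovery-point:ID
```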

### View Amazon Redshift cluster backups
<a name="viewredshiftbackups"></a>

To view and modify your Amazon Redshift cluster backups within the console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Choose **Backup vaults**. Then, choose the name of the backup vault that contains your Amazon Redshift clusters.

1. The backup vault displays a summary and a list of backups. You can choose the link in the **Recovery point ID** column.

1. To delete one or more recovery points, select the check box next to each recovery point you wish to delete. Under **Actions**, you can select **Delete**.

### Restore an Amazon Redshift cluster
<a name="w2aac17c19c31c11c11c11"></a>

See how to [Restore an Amazon Redshift cluster](https://docs.aws.amazon.com/aws-backup/latest/devguide/redshift-restores.html) for more information.

# Amazon Redshift Serverless backups
<a name="redshift-serverless-backups"></a>

## Overview
<a name="redshift-serverless-backups-overview"></a>

AWS Backup offers full backup management of your Amazon Redshift Serverless namespaces. Through AWS Backup, you can schedule and restore Redshift Serverless manual snapshots through the console or the CLI.

Redshift Serverless data protection through AWS Backup provides several options for backing up and restoring your data warehouses. You can create a scheduled or on-demand snapshot of your namespace. Then, you can choose to restore all the databases in that snapshot to an Amazon Redshift provisioned cluster or a serverless namespace. Alternatively, you can restore a single table.

Redshift Serverless offers both automated and manual snapshots. Currently, AWS Backup can be used to manage manual snapshots but not automated ones.

## Backup options for Redshift Serverless
<a name="redshift-serverless-backups-options"></a>

You can use the AWS Backup console or CLI to create backups on demand or as part of a backup plan.

### Create on-demand backup
<a name="redshift-serverless-backups-on-demand"></a>

You can create on-demand backups of Redshift Serverless namespaces through the following steps:

------
#### [ Console ]

1. Open the [AWS Backup console](https://console.aws.amazon.com//backup).

1. On the dashboard, choose **Create an on-demand backup**.

1. Choose **Redshift Serverless** in the resource type dropdown menu.

1. Select the namespace you plan to back up.

1. Ensure **Create backup now** is selected.

1. Specify the retention period for the backup.

1. Choose an existing backup vault or create a new one.

1. Select the IAM role to use for the backup.

1. Optionally, add tags to the backup. To assign a tag to your on-demand backup, expand **Tags added to recovery points**, choose **Add new tag**, and enter a tag key and tag value.

1. Select **Create on-demand backup** to begin the backup job.

1. Once the job is initiated, the console will show the Jobs screen where you can see a list of your backup jobs and their statuses.

------
#### [ AWS CLI ]

Use the **start-backup-job** command.

**Required parameters**
+ `BackupVaultName`
+ `IamRoleArn`
+ `ResourceArn`

**Optional parameters**
+ `CompleteWindowMinutes`
+ `IdempotencyToken`
+ `Lifecycle`
+ `StartWindowMinutes`

**Example**  
The following example creates an on-demand backup of a Redshift Serverless namespace.  

```
aws backup start-backup-job \
    --backup-vault-name sample-vault \
    --iam-role-arn arn:aws:iam::account:role/service-role/AWSBackupDefaultServiceRole \
    --resource-arn arn:aws:redshift-serverless:region:account:namespace/namespace-name-UUID
```

------

### Create scheduled Redshift Serverless backups in a backup plan
<a name="redshift-serverless-backups-scheduled"></a>

You can create a new backup plan for your Redshift Serverless namespaces through the AWS Backup console or the CLI, or you can add Redshift Serverless to an existing backup plan.

Your scheduled backups can include Redshift Serverless namespaces if they are a protected resource.

------
#### [ Console ]

To opt into protecting Redshift Serverless in the AWS Backup console, complete the following steps:

1. Open the [AWS Backup console](https://console.aws.amazon.com//backup).

1. Using the navigation pane, choose **Protected resources**.

1. Toggle **Amazon Redshift Serverless** to **On**.

1. See [Select AWS services to backup](assigning-resources.md) to include Redshift Serverless namespaces in an existing or new plan. When you add the resource type *Redshift Serverless*, you can choose to add **All Amazon Redshift namespaces**, or check the boxes next to the namespaces you wish to back up.

Under **Manage Backup plans**, you can:
+ [Create a backup plan](creating-a-backup-plan.md) and include Redshift Serverless;
+ [Update](updating-a-backup-plan.md) an existing backup plan to include Redshift Serverless.

------
#### [ AWS CLI ]

See [Create backup plans using the AWS CLI](creating-a-backup-plan.md#create-backup-plan-cli) for guidance to use **create-backup-plan**.

If you want to alter an existing plan to include your Serverless resources, use the command **update-backup-plan**.

The ARN (Amazon Resource Name) for Serverless resources to include in the `"Resources"` array of `"BackupSelection"` has the following format:

```
arn:aws:redshift-serverless:Region:account:snapshot/a12bc34d-567e-890f-123g-h4ijk56l78m9
```
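After creating the plan, you attach the namespace with a backup selection. The following is a sketch; the plan ID, selection name, role, and resource ARN are placeholders.

```
aws backup create-backup-selection \
    --backup-plan-id sample-plan-id \
    --backup-selection '{
        "SelectionName": "redshift-serverless-selection",
        "IamRoleArn": "arn:aws:iam::account:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:redshift-serverless:region:account:snapshot/a12bc34d-567e-890f-123g-h4ijk56l78m9"]
    }'
```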

------

See [Amazon Redshift Serverless restore](redshift-serverless-restore.md) for information to restore data from a snapshot to a Serverless namespace.

# Amazon EKS backups
<a name="eks-backups"></a>

An Amazon Elastic Kubernetes Service (Amazon EKS) cluster consists of multiple resources that you can back up as a single unit. When you back up an Amazon EKS cluster, AWS Backup creates a composite recovery point that includes both EKS cluster state and persistent volume backups.

When an Amazon EKS cluster is backed up, recovery points are created for the Amazon EKS cluster state and persistent volumes supported by AWS Backup. These recovery points are grouped together within an overarching recovery point called a **composite**.

There are two distinct components of an Amazon EKS backup:
+ *Amazon EKS Cluster State:* This is a backup of the Amazon EKS cluster state. See Amazon EKS backup terminology below for what is included.
+ *Persistent Storage:* This is a backup of persistent storage (Amazon EBS, Amazon S3, Amazon Elastic File System) attached to the Amazon EKS cluster via Persistent Volume Claims and [supported by EKS Add Ons CSI Driver](https://docs.aws.amazon.com/eks/latest/userguide/storage.html).

## Amazon EKS backup terminology
<a name="eks-backup-overview"></a>

The following terms are used throughout the Amazon EKS backup documentation. For Amazon EKS-specific terminology, refer to the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html).

## EKS Backup Terminology
<a name="eks-backup-terminology"></a>
+ **Composite recovery point** – A recovery point used to group nested recovery points together for an Amazon EKS cluster backup.
+ **Nested recovery point** – A recovery point of a resource that is part of an Amazon EKS cluster and is backed up as part of the composite recovery point.
+ **EKS Cluster State** – The Kubernetes manifests (YAML or JSON files) that define the desired state of Kubernetes resources in your cluster. This includes Kubernetes resources and deployments such as: secrets, config maps, stateful sets, DaemonSets, storage classes, storage maps, replica sets, persistent volume claims, custom resource definitions, roles, and role bindings.
+ **Amazon EKS Cluster Configuration Child Recovery Point** – Contains Amazon EKS cluster state.
+ **Persistent Volume Child Recovery Points** – Contains persistent volume backups for supported storage types (EBS, S3, EFS) [supported by EKS Add Ons CSI Driver](https://docs.aws.amazon.com/eks/latest/userguide/storage.html).

## Amazon EKS backup structure
<a name="eks-backup-creation"></a>

**Amazon EKS backups include the following components:**
+ Amazon EKS Cluster State
+ Persistent Storage: Backups of supported storage types including Amazon EBS, Amazon EFS, and Amazon S3

**Amazon EKS Backups will not include the following components:**
+ Container images from external repositories (ECR, Docker)
+ EKS cluster infrastructure components (e.g. VPCs, Subnets)
+ Auto-generated EKS resources like nodes, auto-generated pods, events, leases, and jobs.

**EKS backup setup and prerequisites ("Before you backup")**
+ **EKS Cluster Settings:**
  + The EKS cluster [authorization mode](https://docs.aws.amazon.com/eks/latest/userguide/setting-up-access-entries.html) must be set to `API` or `API_AND_CONFIG_MAP` so that AWS Backup can create [Access Entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) to access the EKS cluster.
+ **Permissions:**
  + AWS Backup's managed policy [AWSBackupServiceRolePolicyForBackup](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#AWSBackupServiceRolePolicyForBackup) contains the required permissions to back up your Amazon EKS cluster and your EBS and EFS persistent storage
  + If your EKS Cluster contains an S3 bucket you will need to ensure the following policies and prerequisites for your S3 bucket are added and enabled as documented:
    + [AWSBackupServiceRolePolicyForS3Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#AWSBackupServiceRolePolicyForS3Backup)
    + [Prerequisites for S3 Backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html#s3-backup-prerequisites)
+ **Encryption:**
  + Amazon EKS child recovery points will be encrypted with the AWS KMS key of the target backup vault
  + Persistent storage recovery points will be encrypted per the current support for each storage class (EBS snapshots, S3 backups, EFS backups). See [Encryption for backups in AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/encryption.html)

## Create an Amazon EKS backup
<a name="eks-backups-options"></a>

The process of creating a backup is called a backup job. An Amazon EKS cluster backup job has a status. When a backup job has finished, it has the status `Completed`, which signifies that a recovery point (a backup) has been created.

### Creating an on-demand Amazon EKS backup
<a name="eks-backups-on-demand"></a>

------
#### [ Console ]

To create an on-demand backup of your Amazon EKS cluster:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**.

1. Under **Resource type**, select **Amazon EKS**.

1. Select the checkbox next to the Amazon EKS cluster you want to back up.

1. Choose **Create on-demand backup**.

1. Configure your backup settings, including backup window, transition to cold storage, and retention period.

1. Choose **Create on-demand backup**.

------
#### [ AWS CLI ]

To create an on-demand backup of your Amazon EKS cluster using the AWS CLI:

Use the **start-backup-job** command:

```
aws backup start-backup-job \
    --backup-vault-name my-backup-vault \
    --resource-arn arn:aws:eks:us-west-2:123456789012:cluster/my-cluster \
    --iam-role-arn arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole \
    --region us-west-2
```

Optionally, specify additional parameters such as lifecycle settings:

```
aws backup start-backup-job \
    --backup-vault-name my-backup-vault \
    --resource-arn arn:aws:eks:us-west-2:123456789012:cluster/my-cluster \
    --iam-role-arn arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole \
    --lifecycle MoveToColdStorageAfterDays=30,DeleteAfterDays=365 \
    --region us-west-2
```

Monitor the backup job status:

```
aws backup describe-backup-job \
    --backup-job-id backup-job-id \
    --region us-west-2
```

------

## Amazon EKS backup ARN format
<a name="eks-recovery-points"></a>

Composite recovery point: arn:*partition*:backup:*region*:*accountId*:recovery-point:composite:eks/*cluster-name*-*timestamp*

Child recovery point: arn:*partition*:backup:*region*:*accountId*:recovery-point:eks/*cluster-name*-*timestamp*

### Amazon EKS recovery points
<a name="eks-recovery-point-status"></a>

#### Recovery point status
<a name="eks-recovery-point-status-details"></a>

When the backup job of an Amazon EKS cluster is finished (the job status is `Completed`), a backup of the cluster has been created. This backup is also known as a composite recovery point. A composite recovery point can have one of the following statuses: `Completed`, `Failed`, or `Partial`.

Each Amazon EKS backup creates a parent backup job for the composite recovery point and child backup jobs for each child recovery point (cluster configuration and persistent volumes).
+ A completed backup job means your entire Amazon EKS cluster and the resources within it are protected by AWS Backup.
+ A failed status indicates that the backup job was unsuccessful; you should create the backup again once the issue that caused the failure is corrected.
+ A `Partial` status means that not all the resources in the cluster were backed up. This may happen if one or more of the backup jobs belonging to resources within the cluster (nested resources) have statuses other than `Completed`. You can manually create an on-demand backup to rerun any resources that resulted in a status other than `Completed`.
+ A `Completed with issues` status means that not all the resources in the cluster were backed up. This can happen when AWS Backup fails to back up some Kubernetes objects in the cluster. You can subscribe to **Notification Events** for objects that failed to back up. For more information, see [Notification options with AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-notifications.html).

Each nested resource within the composite recovery point has its own individual recovery point, each with its own status (either `Completed` or `Failed`). Nested recovery points with a status of `Completed` can be restored.

AWS Backup supports lifecycle transitions to cold storage for persistent volume recovery points. You can subscribe to notifications to receive alerts on backup job status.
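To check the statuses described above from the AWS CLI, you can list the recovery points for a cluster; the composite and nested recovery points each report their own status. The cluster ARN below is a placeholder.

```
aws backup list-recovery-points-by-resource \
    --resource-arn arn:aws:eks:region:account:cluster/sample-cluster
```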

## Manage recovery points
<a name="eks-manage-recovery-points"></a>

Composite recovery points (backups) can be copied; persistent volume child recovery points can be copied, deleted, disassociated, or restored. The Amazon EKS cluster state child recovery point cannot be copied, deleted, or disassociated as it maintains a 1:1 relationship with its parent composite recovery point.

A composite recovery point that contains nested backups cannot be deleted. After the nested recovery points within a composite recovery point have been deleted or disassociated, you can delete the composite recovery point manually or let it remain until the backup plan lifecycle deletes it.

### Delete a recovery point
<a name="eks-delete-recovery-point"></a>

You can delete a recovery point using the console or using the AWS CLI.

To delete recovery points using the console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Choose **Protected resources** in the navigation pane. In the text box, enter **EKS** to display only your Amazon EKS clusters.

1. Composite recovery points are displayed in the **Recovery points** pane. You can choose the plus sign (+) to the left of a recovery point ID to expand the composite recovery point and show all nested recovery points it contains. Select the check box to the left of any recovery point you wish to include in your deletion.

1. Choose **Delete**.

When you use the console to delete one or more composite recovery points, a warning box will pop up. This warning box requires you to confirm your intention to delete the composite recovery points, including nested recovery points within composite stacks.

To delete recovery points using the API, use the `DeleteRecoveryPoint` command.

When you use the API or the AWS Command Line Interface, you must delete all nested recovery points before deleting the composite recovery point.
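As a sketch of that deletion sequence with the AWS CLI (the vault name and recovery point ARNs below are placeholders, and the commands require valid AWS credentials):

```shell
# List recovery points in the vault (placeholder name) to find the
# nested recovery point ARNs that belong to the composite backup.
aws backup list-recovery-points-by-backup-vault \
    --backup-vault-name my-eks-vault

# Delete each nested recovery point first (placeholder ARN).
aws backup delete-recovery-point \
    --backup-vault-name my-eks-vault \
    --recovery-point-arn arn:aws:backup:us-east-1:111122223333:recovery-point:EXAMPLE-NESTED

# Only after all nested recovery points are deleted or disassociated
# can the composite recovery point itself be deleted.
aws backup delete-recovery-point \
    --backup-vault-name my-eks-vault \
    --recovery-point-arn arn:aws:backup:us-east-1:111122223333:recovery-point:EXAMPLE-COMPOSITE
```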

### Disassociate a nested recovery point from composite recovery point
<a name="eks-disassociate-recovery-point"></a>

You can disassociate a nested recovery point from a composite recovery point (for example, you wish to keep the nested recovery point but delete the composite recovery point). Both recovery points will remain, but they will no longer be connected; that is, actions that occur on the composite recovery point will no longer apply to the nested recovery point once it has been disassociated. The Amazon EKS cluster state child recovery point cannot be disassociated as it maintains a 1:1 relationship with its parent composite recovery point.

You can disassociate the recovery point using the console, or you can call the API DisassociateRecoveryPointFromParent.
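A minimal CLI sketch of the disassociate call (vault name and ARN are placeholders):

```shell
# Disassociate a nested recovery point from its parent composite
# recovery point. Both recovery points remain afterward, but actions
# on the composite no longer apply to the disassociated nested point.
aws backup disassociate-recovery-point-from-parent \
    --backup-vault-name my-eks-vault \
    --recovery-point-arn arn:aws:backup:us-east-1:111122223333:recovery-point:EXAMPLE-NESTED
```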

## Copy a recovery point
<a name="eks-copy-recovery-point"></a>

You can copy a composite recovery point, or you can copy a nested recovery point if the resource supports [cross-account and cross-Region copy](backup-feature-availability.md#features-by-resource).

To copy recovery points using the AWS Backup console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Click on **Vaults** in the left-hand navigation, and go to the vault that contains the recovery point you want to copy. In the text box, type `EKS` to display only your recovery points for Amazon EKS clusters.

1. Both composite and nested recovery points are displayed in the **Recovery point ID** pane. Note that you cannot select and copy the nested Amazon EKS cluster state recovery point.

1. Choose the arrow to the left of a composite recovery point ID to expand it and show all the nested recovery points it contains. Select the check box to the left of any recovery point to copy it.

1. After you make your selection, choose the **Actions** dropdown in the top-right corner of the pane, and then choose **Copy**.

Amazon EKS backups support all copy types:
+ Same account and Region
+ Cross-account
+ Cross-Region
+ Opt-in Regions

## Limitations
<a name="eks-limitations"></a>
+ Persistent volumes using a CSI driver via CSI migration, in-tree storage plugins, or ACK controllers are not supported. Note that the annotation `volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com` is metadata indicating which provisioner could manage the volume, not that the volume uses CSI. The actual provisioner is determined by the storageClass.
+ Amazon S3 buckets with specific prefixes attached to CSI Driver MountPoints cannot be backed up. Only Amazon S3 buckets as targets are supported, not specific prefixes.
+ Amazon S3 bucket backups taken as part of an EKS cluster backup support snapshot backups only.
+ Cross-account backups of Amazon EFS file systems are not supported through EKS backups.
+ Amazon FSx via CSI driver is not supported via EKS Backups.
+ AWS Backup does not support Amazon EKS on AWS Outposts.
+ Subject to [backup and restore quotas](aws-backup-limits.html).

## Backup jobs completed with issues
<a name="eks-backup-jobs-completed-with-issues"></a>

When backing up an Amazon EKS cluster, some Kubernetes objects may fail to be retrieved. In this case, the backup job will complete with a `Completed with issues` status rather than failing entirely, with the following status message:
+ Some Kubernetes Objects failed to be backed up. To get notified of these failures, [enable SNS event notifications](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-notifications.html).

The following Kubernetes object types may be skipped during a backup job when the [Amazon EKS Metrics Server add-on](https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html) is unavailable, resulting in a 503 Service Unavailable error. See the [troubleshooting guidance](https://repost.aws/knowledge-center/eks-resolve-http-503-errors-kubernetes) for help resolving these errors.
+ `metrics.k8s.io`
+ `custom.metrics.k8s.io`
+ `external.metrics.k8s.io`
+ `metrics.eks.amazonaws.com`
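To be notified when objects are skipped or jobs fail, you can attach an SNS topic to the backup vault. A sketch with placeholder names (it assumes the SNS topic already exists and its access policy allows AWS Backup to publish to it):

```shell
# Send backup job events for this vault (placeholder names) to an
# existing SNS topic so failed or partially completed jobs raise alerts.
aws backup put-backup-vault-notifications \
    --backup-vault-name my-eks-vault \
    --sns-topic-arn arn:aws:sns:us-east-1:111122223333:backup-events \
    --backup-vault-events BACKUP_JOB_COMPLETED BACKUP_JOB_FAILED
```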

## Frequently Asked Questions
<a name="eks-faq"></a>

1. *"What is included as part of the Amazon EKS backup?"*

   As part of each backup of an Amazon EKS cluster, the Amazon EKS cluster state and persistent volumes supported by AWS Backup are backed up. The Amazon EKS cluster state includes details like cluster name, IAM role, Amazon VPC configuration, network settings, logging, encryption, add-ons, access entries, managed node groups, Fargate profiles, pod identity associations, and Kubernetes manifest files.

1. *"Does a `Partial` status mean the creation of my backup failed?"*

   No. A partial status indicates that some of the recovery points were backed up, while some were not. There are two conditions to check if you were expecting a `Completed` backup result:

   1. One or more of the backup jobs belonging to resources within the cluster were not successful and the job has to be rerun.

   1. A nested recovery point was deleted or disassociated from the composite recovery point.

1. *"Do I need to have an agent or Amazon EKS Add-on installed on my Amazon EKS cluster before backup?"*

   No. AWS Backup does not require any agents or add-ons to be installed on your Amazon EKS cluster. The only prerequisite is to have your EKS cluster's [authentication mode](https://docs.aws.amazon.com/eks/latest/userguide/setting-up-access-entries.html) set to `API` or `API_AND_CONFIG_MAP` so that AWS Backup can create [access entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) to access the EKS cluster.

1. *"Does Amazon EKS Backups include Amazon EKS infrastructure components or Amazon ECR images?"*

   No. Amazon EKS backups focus on the EKS cluster state and application workloads, not the underlying infrastructure components or container images.

1. *"Can I lifecycle my EKS Composite Recovery Point to cold storage?"*

   You can transition to cold storage the underlying child recovery points that support cold storage tiers. See the [AWS Backup feature availability matrix](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource) for the full list of supported resources.

1. *"Are my EKS backups incremental?"*

   AWS Backup takes incremental backups of each child recovery point where supported; this includes Amazon EBS volumes, Amazon EFS file systems, and Amazon S3 buckets. The EKS cluster state child recovery point is always a full backup. See the [AWS Backup feature availability matrix](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource).

1. *"Can I create an index and search my EKS backups?"*

   No, however you can create on-demand indexes and search persistent volumes where the underlying storage type supports this capability through AWS Backup. See the [AWS Backup feature availability matrix](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource).

# SAP HANA backup on Amazon EC2
<a name="backup-saphana"></a>

**Note**  
[Supported services by AWS Region](backup-feature-availability.md#supported-services-by-region) contains the currently supported Regions where SAP HANA database backups on Amazon EC2 instances are available.

AWS Backup supports backups and restores of SAP HANA databases on Amazon EC2 instances.

**Topics**
+ [Overview of SAP HANA databases with AWS Backup](#saphanaoverview)
+ [Prerequisites for backing up SAP HANA databases through AWS Backup](#saphanaprerequisites)
+ [SAP HANA backup operations in the AWS Backup console](#saphanabackupconsole)
+ [View SAP HANA database backups](#saphanaviewbackup)
+ [Use AWS CLI for SAP HANA databases with AWS Backup](#saphanaapicli)
+ [Troubleshooting backups of SAP HANA databases](#saphanatroubleshooting)
+ [Glossary of SAP HANA terms when using AWS Backup](#saphanaglossary)
+ [AWS Backup support of SAP HANA databases on EC2 instances release notes](#saphanareleasenotes)

## Overview of SAP HANA databases with AWS Backup
<a name="saphanaoverview"></a>

In addition to the ability to create backups and to restore databases, AWS Backup integration with Amazon EC2 Systems Manager for SAP allows customers to identify and tag SAP HANA databases.

AWS Backup is integrated with AWS Backint Agent to perform SAP HANA backups and restores. For more information, see [AWS Backint](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-sap-hana.html).

When you take backups of SAP HANA, your snapshots and on-demand backups are full backups. However, you can achieve incremental backups by enabling continuous backups for point-in-time recovery (PITR).

## Prerequisites for backing up SAP HANA databases through AWS Backup
<a name="saphanaprerequisites"></a>

Several prerequisites must be completed before backup and restore activities can be performed. Note you will need administrative access to your SAP HANA database and permissions to create new IAM roles and policies in your AWS account to perform these steps.

Complete [these prerequisites at Amazon EC2 Systems Manager](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html):

1. [Set up required permissions for the Amazon EC2 instance running the SAP HANA database](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#ec2-permissions)

1. [Register credentials in AWS Secrets Manager](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#register-secrets)

1. [Install AWS Backint and AWS Systems Manager for SAP agents](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-installing-configuring.html)

1. [Verify the SSM Agent](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#verify-ssm-agent)

1. [Verify parameters](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#verification)

1. [Register the SAP HANA database](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#register-database)

It is best practice to register each HANA instance only once. Multiple registrations can result in multiple ARNs for the same database. Maintaining a single ARN and registration simplifies backup plan creation and maintenance and can also help reduce unplanned duplication of backups.

## SAP HANA backup operations in the AWS Backup console
<a name="saphanabackupconsole"></a>

Once the prerequisites and SSM for SAP setups are complete, you can back up and restore your SAP HANA on EC2 databases.

### Opt in to protect SAP HANA resources
<a name="saphanaenableoptin"></a>

To use AWS Backup to protect your SAP HANA databases, SAP HANA must be toggled on as one of the protected resources. To opt in:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation pane, choose **Settings**.

1. Under **Service opt-in**, select **Configure resources**.

1. Opt in to **SAP HANA on Amazon EC2.**.

1. Click **Confirm**.

Service opt-in for SAP HANA on Amazon EC2 will now be enabled.

### Create a scheduled backup of SAP HANA databases
<a name="saphanascheduledbackup"></a>

You can [edit an existing backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/updating-a-backup-plan.html) and add SAP HANA resources to it, or you can [create a new backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) just for SAP HANA resources.

If you choose to create a new backup plan, you will have three options:

1. **Option 1: Start with a template**

   1. Choose a backup plan template.

   1. Specify a backup plan name.

   1. Click **Create plan**.

1. **Option 2: Build a new plan**

   1. Specify a backup plan name.

   1. Optionally specify tags to add to backup plan.

   1. Specify the backup rule configuration.

      1. Specify a backup rule name.

      1. Select an existing vault or create a new backup vault. This is where your backups are stored.

      1. Specify a backup frequency.

      1. Specify a backup window.

         *Note that transition to cold storage is currently unsupported.*

      1. Specify the retention period.

         *Copy to destination is currently unsupported.*

      1. (*Optional*) Specify tags to add to recovery points.

   1. Click **Create plan**.

1. **Option 3: Define a plan using JSON**

   1. Specify the JSON for your backup plan by either modifying the JSON expression of an existing backup plan or creating a new expression.

   1. Specify a backup plan name.

   1. Click **Validate JSON**.

   Once the backup plan is created successfully, you can assign resources to the backup plan in the next step.

Whichever option you choose, ensure that you [assign resources](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html). You can choose which SAP HANA databases to assign, including system and tenant databases. You also have the option to exclude specific resource IDs.

### Create an on-demand backup of SAP HANA databases
<a name="saphanaondemandbackup"></a>

You can [create a full on-demand backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html) that runs immediately after creation. Note that on-demand backups of SAP HANA databases on Amazon EC2 instances are full backups; incremental backups are not supported.

Your on-demand backup is now created. It will begin backing up your specified resources. The console will transition you to the **Backup jobs** page where you can view the job progress. Take note of the backup job ID from the blue banner at the top of your screen, as you will need it to easily find the status of your backup job. When the backup is completed, the status will progress to `Completed`. Backups can take up to several hours.

Refresh the **Backup jobs list** to see the status change. You can also search for and click on your **backup job ID** to view detailed job status.

### Continuous backups of SAP HANA databases
<a name="saphanacontinuousbackup"></a>

You can make [continuous backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html), which can be used with point-in-time recovery (PITR). On-demand backups preserve resources in the state in which they are taken, whereas PITR uses continuous backups that record changes over a period of time.

With continuous backups, you can restore your SAP HANA database on an EC2 instance by rewinding it back to a specific time that you choose, within 1 second of precision (going back a maximum of 35 days). Continuous backup works by first creating a full backup of your resource, and then constantly backing up your resource’s transaction logs. PITR restore works by accessing your full backup and replaying the transaction log to the time that you tell AWS Backup to recover.

You can opt in to continuous backups when you create a backup plan in AWS Backup using the AWS Backup console or the API.
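If you define the plan in JSON instead, continuous backups are enabled per backup rule with the `EnableContinuousBackup` flag. A minimal illustrative sketch; the plan name, vault name, and schedule are placeholders, and the 35-day retention matches the PITR maximum described below:

```json
{
  "BackupPlanName": "saphana-pitr-plan",
  "Rules": [
    {
      "RuleName": "ContinuousRule",
      "TargetBackupVaultName": "my-saphana-vault",
      "ScheduleExpression": "cron(0 5 ? * * *)",
      "EnableContinuousBackup": true,
      "Lifecycle": {
        "DeleteAfterDays": 35
      }
    }
  ]
}
```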

**To enable continuous backups using the console**

1. Sign in to the AWS Management Console, and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup plans**, and then choose **Create Backup plan**.

1. Under **Backup rules**, choose **Add Backup rule**.

1. In the **Backup rule configuration** section, select **Enable continuous backups for supported resources**.

After you disable [PITR (point-in-time recovery)](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html) for SAP HANA database backups, logs will continue to be sent to AWS Backup until the recovery point expires (status equals `EXPIRED`). You can change to an alternative log backup location in SAP HANA to stop the transmission of logs to AWS Backup.

A continuous recovery point with a status of `STOPPED` indicates that the continuous recovery point has been interrupted; that is, there is a gap in the logs transmitted from SAP HANA to AWS Backup that record the incremental changes to a database. Recovery points that occur within this gap have a status of `STOPPED`.

For issues you may encounter during restore jobs of continuous backups (recovery points), see the [SAP HANA restore troubleshooting](https://docs.aws.amazon.com/aws-backup/latest/devguide/saphana-restore.html#saphanarestoretroubleshooting) section of this guide.

## View SAP HANA database backups
<a name="saphanaviewbackup"></a>

**View the status of backup and restore jobs:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Jobs**.

1. Choose **Backup jobs**, **Restore jobs**, or **Copy jobs** to see the list of your jobs.

1. Search for and click on your job ID to view detailed job statuses.

**View all recovery points in a vault:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup vaults**.

1. Search for and click on a backup vault to view all the recovery points within the vault.

**View details of protected resources:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**.

1. You may also filter by resource type to view all backups of that resource type.

## Use AWS CLI for SAP HANA databases with AWS Backup
<a name="saphanaapicli"></a>

Each action within the Backup console has a corresponding API call.

To programmatically configure and manage AWS Backup and its resources, use the API call [StartBackupJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartBackupJob.html) to back up an SAP HANA database on an EC2 instance.

Use `start-backup-job` as the CLI command.
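A hedged sketch of that command; the vault name, IAM role, and database ARN below are placeholders (the actual database ARN is returned by SSM for SAP after registration):

```shell
# Start an on-demand backup of a registered SAP HANA database.
# All ARNs and names here are placeholders; substitute the database
# ARN reported by SSM for SAP and a role with the required permissions.
aws backup start-backup-job \
    --backup-vault-name my-saphana-vault \
    --resource-arn arn:aws:ssm-sap:us-east-1:111122223333:HANA/EXAMPLE-APP/DB/HBX \
    --iam-role-arn arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole
```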

## Troubleshooting backups of SAP HANA databases
<a name="saphanatroubleshooting"></a>

If you encounter errors during your workflow, consult the following example errors and suggested resolutions:

**Python prerequisites**
+ **Error: Zypper error related to Python version** since SSM for SAP and AWS Backup require Python 3.6 but SUSE 12 SP5 by default supports Python 3.4.

  **Resolution:** Install multiple versions of Python on SUSE12 SP5 by doing the following steps:

  1. Run an `update-alternatives` command to create a symlink for Python 3 in `/usr/local/bin/` instead of directly using `/usr/bin/python3`. This command sets Python 3.4 as the default version: `sudo update-alternatives --install /usr/local/bin/python3 python3 /usr/bin/python3.4 5`

  1. Add Python 3.6 to the alternatives configuration by running the following command: `sudo update-alternatives --install /usr/local/bin/python3 python3 /usr/bin/python3.6 2`

  1. Change the alternative configuration to Python 3.6 by running the following command: `sudo update-alternatives --config python3`

     The following output should be displayed:

     ```
     There are 2 choices for the alternative python3 (providing /usr/local/bin/python3).
      Selection Path Priority Status
     * 0 /usr/bin/python3.4 5 auto mode
      1 /usr/bin/python3.4 5 manual mode
      2 /usr/bin/python3.6 2 manual mode
     Press enter to keep the current choice[*], or type selection number:
     ```

  1. Enter the number corresponding to Python 3.6.

  1. Check the Python version and confirm Python 3.6 is being used.

  1. (*Optional, but recommended*) Verify Zypper commands work as expected.

**Amazon EC2 Systems Manager for SAP discovery and registration**
+ **Error: SSM for SAP failed to discover workload** due to blocked access to public endpoint for AWS Secrets Manager and SSM.

  **Resolution:** Test if endpoints are reachable from your SAP HANA database. If they cannot be reached, you can create Amazon VPC endpoints for AWS Secrets Manager and SSM for SAP.

  1. Test access to Secrets Manager from the Amazon EC2 host for the HANA database by running the following command: `aws secretsmanager get-secret-value --secret-id hanaeccsbx_hbx_database_awsbkp`. If the command fails to return a value, the firewall is blocking access to the Secrets Manager service endpoint. The log will stop at the step "Retrieving secrets from Secrets Manager".

  1. Test connectivity to the SSM for SAP endpoint by running the command `aws ssm-sap list-registration`. If the command fails to return a value, the firewall is blocking access to the SSM for SAP endpoint.

     Example error: `Connection was closed before we received a valid response from endpoint URL: "https://ssm-sap.us-west-2.amazonaws.com/register-application"`.

  There are two options to proceed if the endpoints are not reachable.
  + Open firewall ports to allow access to public service endpoint for Secrets Manager and SSM for SAP; or,
  + Create VPC endpoints for Secrets Manager and SSM for SAP, then:
    + Ensure Amazon VPC is enabled for DNSSupport and DNSHostname.
    + Ensure your VPC endpoint has enabled Allow Private DNS Name.
    + If the SSM for SAP discovery completed successfully, the log will show the host is discovered.
+ **Error: AWS Backup and Backint connection fails due to blocked access to AWS Backup service public endpoints.** `aws-backint-agent.log` can show errors similar to the following: `time="2024-01-03T11:39:15-08:00" level=error msg="Storage configuration validation failed: missing backup data plane Id"` or `level=fatal msg="Error performing backup missing backup data plane Id`. The AWS Backup console can also show `Fatal Error: An internal error occurred.`

  **Resolution:** Open firewall ports to allow access to the public service endpoints (HTTPS). After this option is used, DNS will resolve requests to AWS services through public IP addresses.
+ **Error: SSM for SAP registration fails due to a HANA password containing special characters.** Example errors include `Error connecting to database HBX/HBX when validating its credentials.` or `Discovery failed because credentials for HBX/SYSTEMDB either not provided or cannot be validated.`, even after a connection using `hdbsql` for `systemdb` and `tenantdb` was tested successfully from the HANA database Amazon EC2 instance.

  In the AWS Backup console on the **Jobs** page, the backup job details can show a status of `FAILED` with the error `Miscellaneous: b’* 10: authentication failed SQLSTATE: 28000\n’`.

  **Resolution:** Ensure your password does not contain special characters.
+ **Error: `b’* 447: backup could not be completed: [110507] Backint exited with exit code 1 instead of 0. console output: time...`**

  **Resolution:** The AWS BackInt Agent for SAP HANA installation might not have completed successfully. Retry the process to deploy the [AWS Backint Agent](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-sap-hana.html) and [Amazon EC2 Systems Manager Agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) on your SAP application server.
+ **Error: Console does not match log files after registration.**

  The discovery log shows a failed registration caused by a password containing special characters when connecting to the HANA database, but the SSM for SAP Application Manager console displays the registration as successful. The console display does not confirm that registration succeeded. If the console shows a successful registration but the logs do not, backups will fail.

  **Confirm the registration status:**

  1. Log in to the [SSM console](https://console.aws.amazon.com//systems-manager).

  1. Select **Run Command** from the left side navigation.

  1. In the **Command history** text field, enter `Instance ID:Equal:` followed by the ID of the instance you used for registration. This filters the command history.

  1. Use the command ID column to find commands with the status `Failed`. Then, find the document name **AWSSystemsManagerSAP-Discovery**.

  1. In the AWS CLI, run the command `aws ssm-sap register-application status`. If the returned value shows `Error`, the registration was unsuccessful.

  **Resolution:** Ensure your HANA password does not contain special characters.

**Creating a backup of an SAP HANA database**
+ **Error: The AWS Backup console displays the message "Fatal Error" when an on-demand backup for SystemDB or TenantDB is created.** This occurs when a client-side firewall blocks access to the public endpoint.

  `aws-backint-agent.log` can show errors such as `level=error msg="Storage configuration validation failed: missing backup data plane Id"` or `level=fatal msg="Error performing backup missing backup data plane Id."`

  **Resolution:** Open firewall access to the public endpoint.
+ **Error:** `Database cannot be backed up while it is stopped`.

  **Resolution:** Ensure the database to be backed up is active. Database data and logs can be backed up only while the database is online.
+ **Error:** `Getting backup metadata failed. Check the SSM document execution for more details.`

  **Resolution:** Ensure the database to be backed up is active. Database data and logs can be backed up only while the database is online.

**Monitoring backup logs**
+ **Error:** `Encountered an issue with log backups, please check SAP HANA for details.`

  **Resolution:** Check SAP HANA to ensure log backups are being sent to AWS Backup from SAP HANA.
+ **Error:** `One or more log backup attempts failed for recovery point.`

  **Resolution:** Check SAP HANA for details. Ensure log backups are being sent to AWS Backup from SAP HANA.
+ **Error:** `Unable to determine the status of log backups for recovery point.`

  **Resolution:** Check SAP HANA for details. Ensure log backups are being sent to AWS Backup from SAP HANA.
+ **Error:** `Log backups for recovery point %s were interrupted due to a restore operation on the database.`

  **Resolution:** Wait for the restore job to complete. The log backups should resume.

## Glossary of SAP HANA terms when using AWS Backup
<a name="saphanaglossary"></a>

**Data Backup Types:** SAP HANA supports two types of data backups: Full and INC (incremental). AWS Backup optimizes which type is used during each backup operation.

**Catalog Backups:** SAP HANA maintains its own manifest called a *catalog*. AWS Backup interacts with this catalog. Each new backup will create an entry in the catalog.

**Continuous Log Backup (Transaction Logs)**: For Point in Time Recovery (PITR) functions, SAP HANA tracks all transactions since the most recent backup. 

**System Copy:** A restore job in which the restore target database is different from the source database from which the recovery point was created.

**Destructive Restore:** A destructive restore is a type of restore job during which a restored database deletes or overwrites the source or existing database.

**FULL:** A full backup is a backup of a complete database.

**INC:** An incremental backup is a backup of all changes to an SAP HANA database since the previous backup.

## AWS Backup support of SAP HANA databases on EC2 instances release notes
<a name="saphanareleasenotes"></a>

Certain functionalities are not supported at this time:
+ Continuous backups (which use transaction logs) cannot be copied to other Regions or accounts. Snapshot (full) backups can be copied to supported Regions and accounts.
+ Backup Audit Manager and reporting are not currently supported.
+ [Supported services by AWS Region](backup-feature-availability.md#supported-services-by-region) contains the currently supported Regions for SAP HANA database backups on Amazon EC2 instances.

# Amazon S3 backups
<a name="s3-backups"></a>

## Overview
<a name="s3-backup-overview"></a>

AWS Backup supports centralized backup and restore of applications storing data in S3 alone or alongside other AWS services for database, storage, and compute. Many [features are available for S3 backups](backup-feature-availability.md#features-by-resource), including Backup Audit Manager.

You can use a single backup policy in AWS Backup to centrally automate the creation of backups of your application data. AWS Backup automatically organizes backups across different AWS services and third-party applications in one centralized, encrypted location (known as a [backup vault](https://docs.aws.amazon.com/aws-backup/latest/devguide/vaults.html)) so that you can manage backups of your entire application through a centralized experience. For S3, you can create continuous backups of your application data and restore them to a point in time with a single click.

## Backup tiering
<a name="s3-backup-tiering"></a>

Amazon S3 is the only resource that supports backup tiering to a lower cost warm storage tier. For more information, see [Backup tiering](backup-tiering.md). 

## Prerequisites for S3 backups
<a name="s3-backup-prerequisites"></a>

### Permissions and policies for Amazon S3 backup and restore
<a name="one-time-permissions-setup"></a>

To back up, copy, and restore S3 resources, you must have the correct policies in your role. To add these policies, go to [AWS managed policies](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#aws-managed-policies). Add [AWSBackupServiceRolePolicyForS3Backup](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForS3Backup.html) and [AWSBackupServiceRolePolicyForS3Restore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForS3Restore.html) to the roles that you intend to use to back up and restore S3 buckets.

If you do not have sufficient permissions, ask the administrator of your organization's management account to add the policies to the intended roles.

For more information, see [Managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html) in the *IAM User Guide*.
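As a sketch, the two managed policies can be attached with the AWS CLI; the role name below is a placeholder, and you should confirm the exact policy ARNs in the AWS managed policy reference:

```shell
# Attach the S3 backup and restore managed policies to the IAM role
# that AWS Backup assumes. "MyBackupRole" is a placeholder role name.
aws iam attach-role-policy \
    --role-name MyBackupRole \
    --policy-arn arn:aws:iam::aws:policy/AWSBackupServiceRolePolicyForS3Backup

aws iam attach-role-policy \
    --role-name MyBackupRole \
    --policy-arn arn:aws:iam::aws:policy/AWSBackupServiceRolePolicyForS3Restore
```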

### Backups and versioning
<a name="s3-backup-versioning"></a>

You must [enable S3 Versioning on your S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/manage-versioning-examples.html) to use AWS Backup for Amazon S3.
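As a sketch, you can turn on versioning from the AWS CLI with the `s3api put-bucket-versioning` command. The bucket name below is a placeholder:

```shell
# Enable S3 Versioning, a prerequisite for AWS Backup for Amazon S3.
# Replace amzn-s3-demo-bucket with your bucket name.
aws s3api put-bucket-versioning \
    --bucket amzn-s3-demo-bucket \
    --versioning-configuration Status=Enabled

# Confirm the change; the Status field should read "Enabled".
aws s3api get-bucket-versioning --bucket amzn-s3-demo-bucket
```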

We recommend that you [set a lifecycle expiration period](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html) for your S3 versions.

All objects (including all versions) in the bucket when the backup begins will be stored in the recovery point (completed backup). These can include the current version of each object, older versions, delete markers, and objects pending lifecycle actions.

Storage costs are calculated for all objects in the backup, including objects scheduled for deletion (objects that will expire). You can use the AWS CLI or scripts to exclude objects that are scheduled for expiration.

To learn more about setting up S3 lifecycle policies, see the [general considerations for expiring objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-expire-general-considerations.html).
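For example, a minimal lifecycle rule that expires noncurrent versions might look like the following sketch. The bucket name and 30-day retention period are placeholders; note that `put-bucket-lifecycle-configuration` replaces any existing lifecycle configuration, so merge this rule with rules you already have:

```shell
# Expire noncurrent object versions 30 days after they become noncurrent.
aws s3api put-bucket-lifecycle-configuration \
    --bucket amzn-s3-demo-bucket \
    --lifecycle-configuration '{
        "Rules": [{
            "ID": "ExpireNoncurrentVersions",
            "Status": "Enabled",
            "Filter": {},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
        }]
    }'
```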

### Considerations for Amazon S3 backups
<a name="S3-backup-considerations"></a>

Consider the following points when you back up S3 resources:
+ **Focused object metadata support** – AWS Backup supports the following metadata: tags, access control lists (ACLs), user-defined metadata, original creation date, and version ID. You can restore all backed-up data and metadata except original creation date, version ID, storage class, and ETags.
+ When you restore an S3 object, AWS Backup applies a checksum value, even if the original object did not use the checksum feature.
+  An S3 object key name can be made up of most UTF-8 encodable strings. The following Unicode characters are allowed: `#x9`, `#xA`, `#xD`, `#x20` to `#xD7FF`, `#xE000` to `#xFFFD`, and `#x10000` to `#x10FFFF`.

  Object key names that include characters not in this list might be excluded from backups.
+ **Cold storage transition** – Use AWS Backup lifecycle management policy to define the timeline for backup expiration. Cold storage transition of S3 backups is not supported.
+ For periodic backups, AWS Backup makes a best effort to track all changes to your object metadata. However, if you update a tag or ACL multiple times within 1 minute, AWS Backup might not capture all intermediate states.
+ AWS Backup does not offer support for backups of [SSE-C-encrypted](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html) objects. AWS Backup also does not support backups of bucket configurations, including bucket policies, settings, names, or access points.
+ AWS Backup does not support backups of S3 on AWS Outposts.
+ **CloudTrail logging** – If you log data read events, you must have CloudTrail logs delivered to a different target bucket. If you save CloudTrail logs in the bucket that they log, there is an infinite loop, which can cause unexpected charges.

  For more information, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *CloudTrail User Guide*.
+ **Server access logging** – If you enable server access logging, you must have the logs delivered to a different target bucket. If you save these logs in the bucket that they log, there is an infinite loop. For more information, see [Enabling Amazon S3 server access logging](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html).

## Supported bucket types, quantities, and object sizes
<a name="bucket-types-and-quotas"></a>

AWS Backup supports backup and restore operations for S3 objects of any size, up to the maximum object size supported by Amazon S3.

AWS Backup supports backup and restore of general purpose S3 buckets. Directory buckets are not supported at this time.

The maximum number of a given resource (known as a quota), such as buckets, allowed in an AWS account depends on the service. [Amazon S3 quotas](https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html) are different from [AWS Backup quotas](aws-backup-limits.md).

In each AWS account, you can create backups for up to 100 buckets by default. You can request a quota increase of up to 1,000 buckets.

Accounts with more than 1,000 buckets are subject to quota limits; requests that exceed the quota can result in failed jobs. As a best practice, limit an account to 1,000 buckets.

## Supported S3 storage classes
<a name="supported-s3-classes"></a>



AWS Backup allows you to back up your S3 data stored in the following [S3 storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html):
+ S3 Standard
+ S3 Standard - Infrequent Access (IA)
+ S3 One Zone-IA
+ S3 Glacier Instant Retrieval
+ S3 Intelligent-Tiering (S3 INT)

Backing up an object in the [S3 Intelligent-Tiering (INT)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-dynamic-data-access) storage class accesses that object. This access triggers S3 Intelligent-Tiering to automatically move the object to the Frequent Access tier.

Backups that access objects in infrequent access tiers, including the S3 Standard - Infrequent Access (IA) and S3 One Zone-IA storage classes, move those objects under the S3 storage charge for Frequent Access (this applies to the Infrequent Access and Archive Instant Access tiers).

The archived storage classes S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive are not supported.

For more information about storage pricing for Amazon S3, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

## S3 backup types
<a name="s3-backup-types"></a>

With AWS Backup, you can create the following types of backups of your S3 buckets, including object data, tags, Access Control Lists (ACLs), and user-defined metadata:
+ **Continuous backups** allow you to restore to any point in time within the last 35 days. Configure continuous backups for an S3 bucket in only one backup plan.

  See [Point-in-Time Recovery](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html) for a list of supported services and instructions on how to use AWS Backup to take continuous backups.
+ **Periodic backups** use snapshots of your data to allow you to retain data for your specified duration up to 99 years. You can schedule periodic backups in frequencies such as 1 hour, 12 hours, 1 day, 1 week, or 1 month. AWS Backup takes periodic backups during the backup window you define in your [backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/about-backup-plans.html).

  See [Creating a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) to understand how AWS Backup applies your backup plan to your resources.
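The two backup types map to the `EnableContinuousBackup` flag on a backup rule. As a sketch, the following AWS CLI call creates a plan whose rule takes continuous backups with the maximum 35-day retention (the plan, rule, and vault names are placeholders):

```shell
# Create a backup plan whose rule takes continuous (PITR) backups of
# assigned resources. Continuous-backup retention cannot exceed 35 days.
aws backup create-backup-plan \
    --backup-plan '{
        "BackupPlanName": "MyS3ContinuousPlan",
        "Rules": [{
            "RuleName": "ContinuousRule",
            "TargetBackupVaultName": "MyBackupVault",
            "ScheduleExpression": "cron(0 5 ? * * *)",
            "EnableContinuousBackup": true,
            "Lifecycle": {"DeleteAfterDays": 35}
        }]
    }'
```

Setting `EnableContinuousBackup` to `false` (the default) makes the rule take periodic snapshot backups instead.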

Cross-account and cross-Region copies are available for S3 backups, but copies of continuous backups do not have point-in-time restore capabilities.

Continuous and periodic backups of S3 buckets must both reside in the same backup vault.

AWS Backup for S3 relies on receiving S3 events through Amazon EventBridge. If this setting is disabled in a bucket's notification settings, continuous backups stop for that bucket. For more information, see [Using EventBridge](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventBridge.html).
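To check and, if needed, re-enable EventBridge delivery for a bucket, you can use the `s3api` notification commands (the bucket name is a placeholder):

```shell
# Show the bucket's current notification configuration; an
# "EventBridgeConfiguration" entry means delivery to EventBridge is on.
aws s3api get-bucket-notification-configuration --bucket amzn-s3-demo-bucket

# Turn EventBridge delivery on (an empty EventBridgeConfiguration enables it).
aws s3api put-bucket-notification-configuration \
    --bucket amzn-s3-demo-bucket \
    --notification-configuration '{"EventBridgeConfiguration": {}}'
```

Note that `put-bucket-notification-configuration` replaces the bucket's entire notification configuration, so include any existing topic, queue, or Lambda configurations in the JSON.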

For both backup types, the first backup is a full backup, while subsequent backups are incremental at the object level.

## Compare S3 backup types
<a name="compare-s3-backup-types"></a>

Your backup strategy for S3 resources can involve just continuous backups, just periodic (snapshot) backups, or a combination of both. The information below can help you choose what works best for your organization:

Continuous backups only:
+ After the first full backup of your existing data is complete, changes in your S3 bucket data are tracked as they occur.
+ The tracked changes allow you to use PITR (point-in-time restore) for the retention period of the continuous backup. To perform a restore job, you choose the point in time to which you wish to restore.
+ The retention period of each continuous backup has a maximum of 35 days.
+ For backup plans that you create through the CLI, advanced backup settings for Amazon S3 (which include the option to include tags and ACLs in the backup) are turned on by default. You can exclude these in the backup options. See [Advanced Amazon S3 backup settings](#s3-advanced-backup-settings) for an example of the syntax.

Periodic (snapshot) backups only, scheduled or on-demand:
+ AWS Backup scans the entire S3 bucket, retrieves each object’s ACL and tags, and initiates a HEAD request for every object that was in the prior snapshot but is not found in the snapshot being created.
+ The backup is point-in-time consistent. 
+ The recorded backup date and time is the time at which AWS Backup completes the traversal of the bucket, not the time at which the backup job was created.
+ The first backup of a bucket is a full backup. Each subsequent backup is incremental, representing the change in data since the last snapshot.
+ The snapshot made by the periodic backup can have a retention period of up to 99 years.

Continuous backups combined with periodic/snapshot backups:
+ After the first full backup of your existing data (each bucket) is complete, changes in your bucket are tracked as they occur.
+ You can perform a point-in-time restore from a continuous recovery point.
+ Snapshots are point-in-time consistent.
+ Snapshots are taken directly from the continuous recovery point, eliminating the need to rescan the bucket and making backups faster.
+ Snapshots and continuous recovery points share data lineage; storage of data between snapshot and continuous recovery points is not duplicated.
+ When advanced Amazon S3 backup settings, such as including tags and ACLs in a backup, are changed for a `continuous` recovery point, AWS Backup stops that recovery point and creates a new one with the updated setting(s).

When a continuous backup job is running for an S3 bucket, you can still initiate periodic (snapshot) backup jobs. However, the following behavior applies:
+ Snapshot backup jobs will use the same backup options (ACLs and object tags settings) as the existing continuous backup.
+ If you specify different backup options for a snapshot job than what the continuous backup uses, the snapshot job will still use the continuous backup's settings and complete with a "Completed with issues" status.

  When this occurs, you'll see the following status message: `"Periodic/snapshot backup for bucket <bucket name> has different backup options than the continuous backup. When using continuous backups along with snapshot backups for the same bucket, the snapshot will use the same settings for backing up ACLs and Object tags as the continuous backup."`

The following table shows when a full scan is required when changing BackupOptions for existing continuous recovery points:


**Full scan behavior when BackupOptions is modified**  

| Previous BackupOptions | New BackupOptions | Full scan | 
| --- | --- | --- | 
| backupACLs and backupObjectTags enabled | backupACLs and backupObjectTags disabled | No | 
| backupACLs and backupObjectTags enabled | backupACLs enabled; backupObjectTags disabled | No | 
| backupACLs and backupObjectTags enabled | backupACLs disabled; backupObjectTags enabled | No | 
| backupACLs and backupObjectTags disabled | backupACLs and backupObjectTags enabled | Yes | 
| backupACLs enabled; backupObjectTags disabled | backupACLs and backupObjectTags enabled | Yes | 
| backupACLs disabled; backupObjectTags enabled | backupACLs and backupObjectTags enabled | Yes | 

## S3 backup completion windows
<a name="s3-completion-windows"></a>

The following table shows sample buckets of various sizes to help you estimate the completion time of the initial full backup of an S3 bucket. Backup times vary with the size, content, configuration, and settings of each bucket.


| Bucket size | Number of objects | Estimated time to complete initial backup | 
| --- | --- | --- | 
| 425 GB (gigabytes) | 135 million | 31 hours | 
| 800 TB (terabytes) | 670 million | 38 hours | 
| 6 PB (petabytes) | 5 billion | 100 hours | 
| 370 TB (terabytes) | 7.5 billion | 180 hours | 

## Best practices and cost considerations for S3 backups
<a name="bestpractices-costoptimization"></a>

### Large bucket best practices
<a name="bucket-size-best-practices"></a>

For buckets with more than 300 million objects:
+ The backup rate can reach up to 17,000 objects per second during the initial full backup of the bucket (incremental backups will have a different speed). Buckets containing fewer than 300 million objects back up at a rate closer to 1,000 objects per second.
+ Continuous backups are recommended.
+ If your backup lifecycle requires retention longer than 35 days, you can also enable snapshot backups for the bucket in the same vault that stores your continuous backups.

### Backup strategy optimization
<a name="backup-strategy-optimization"></a>
+ For accounts that back up at least daily, continuous backups can reduce costs when the data changes minimally between backups.
+ Larger buckets that do not change frequently can benefit from continuous backups, because AWS Backup does not need to scan the whole bucket or make multiple requests per object for objects that are unchanged from the previous backup.
+ Buckets that contain more than 100 million objects and that have a small delete rate compared to the overall backup size might realize cost benefits with a backup plan that combines a continuous backup with a 2-day retention period and snapshots with a longer retention.
+ The recorded time of a periodic (snapshot) backup aligns with the start of the backup process when a bucket scan is not needed. A scan is not needed for a bucket that has both continuous backups and snapshots, because the snapshots are taken from the continuous recovery point.

### Object lifecycle and delete markers
<a name="object-lifecycle-considerations"></a>
+ S3 lifecycle policies have an optional setting called **Delete expired object delete markers**. When this setting is off, expired delete markers, sometimes numbering in the millions, accumulate with no cleanup plan. When buckets without this setting are backed up, two issues affect time and cost:
  + Delete markers are backed up just like objects. Backup and restore times can be affected, depending on the ratio of objects to delete markers.
  + Each object and delete marker that is backed up incurs a minimum charge. Each delete marker is charged the same as a 128 KiB object.
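A lifecycle rule that turns on this cleanup might look like the following sketch. The bucket name is a placeholder, and `put-bucket-lifecycle-configuration` replaces any existing lifecycle configuration, so merge this rule with rules you already have:

```shell
# Remove expired object delete markers so they stop accumulating
# (and stop being backed up and charged as 128 KiB objects).
aws s3api put-bucket-lifecycle-configuration \
    --bucket amzn-s3-demo-bucket \
    --lifecycle-configuration '{
        "Rules": [{
            "ID": "CleanUpExpiredDeleteMarkers",
            "Status": "Enabled",
            "Filter": {},
            "Expiration": {"ExpiredObjectDeleteMarker": true}
        }]
    }'
```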

### Storage class cost considerations
<a name="storage-class-considerations"></a>
+ For each object in the S3 Glacier Instant Retrieval (S3-GIR) storage class, AWS Backup performs multiple calls, which results in retrieval charges when a backup is conducted.

  Similar retrieval costs apply to buckets with objects in S3-IA and S3 One Zone-IA storage classes.

### AWS service cost optimization
<a name="aws-service-cost-optimization"></a>
+ Using features of AWS KMS, CloudTrail, Amazon CloudWatch, and Amazon GuardDuty as part of your backup strategy can result in additional costs beyond S3 bucket data storage. See the following for information on adjusting these features:
  + [Reducing the cost of SSE-KMS with Amazon S3 Bucket keys](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-key.html) in the *Amazon S3 User Guide*.
  + You can reduce CloudTrail costs by excluding AWS KMS events and by disabling S3 data events:
    + **Exclude AWS KMS events:** In the *CloudTrail User Guide*, [Creating a trail in the console (basic event selectors)](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html#creating-a-trail-in-the-console) describes the option to exclude AWS KMS events to filter these events out of your trail (the default setting includes all KMS events):
      + The option to log or exclude KMS events is available only if you log management events on your trail. If you choose not to log management events, KMS events are not logged, and you cannot change KMS event logging settings.
      + AWS KMS actions such as `Encrypt`, `Decrypt`, and `GenerateDataKey` typically generate a large volume (more than 99%) of events. These actions are now logged as **Read** events. Low-volume, relevant KMS actions such as `Disable`, `Delete`, and `ScheduleKey` (which typically account for less than 0.5% of KMS event volume) are logged as **Write** events.
      + To exclude high-volume events like `Encrypt`, `Decrypt`, and `GenerateDataKey`, but still log relevant events such as `Disable`, `Delete`, and `ScheduleKey`, choose to log **Write** management events, and clear the check box for **Exclude AWS KMS events**.
    + **Disable S3 data events:** By default, trails and event data stores do not log data events. Disable S3 data events before your initial backup to reduce costs.
  + To reduce CloudWatch costs, you can stop sending CloudTrail events to CloudWatch Logs when you update a trail to disable CloudWatch Logs settings.
  + [Estimating GuardDuty usage cost](https://docs.aws.amazon.com/guardduty/latest/ug/monitoring_costs.html) in the *Amazon GuardDuty User Guide*.
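As one possible sketch of the CloudTrail adjustments above, the following call configures a trail (the trail name is a placeholder) to log write-only management events and no S3 data events:

```shell
# Log write-only management events and no data events on the trail.
# WriteOnly keeps low-volume KMS actions such as Disable and Delete
# (logged as Write events) while dropping high-volume Read events
# such as Encrypt, Decrypt, and GenerateDataKey.
aws cloudtrail put-event-selectors \
    --trail-name MyTrail \
    --event-selectors '[{
        "ReadWriteType": "WriteOnly",
        "IncludeManagementEvents": true,
        "DataResources": []
    }]'
```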

## S3 backup messages
<a name="s3-backup-messages"></a>

When a backup job completes or fails, you might see one of the following messages. The following table can help you determine the possible cause of a status message.


| Scenario | Job Status | Message | Example | 
| --- | --- | --- | --- | 
| All objects failed to be backed up for a snapshot or initial continuous backup | `FAILED` | "No objects were backed up from the source bucket **BucketName**. To get notified of these failures, enable SNS event notifications." | Backup role does not have the permission to get object version ACL. Consequently, none of the objects are backed up. | 
| All objects failed to be backed up for a subsequent continuous backup. | `COMPLETED` | "No objects were backed up from the source bucket **BucketName**. To get notified of these failures, enable SNS event notifications." |  | 

## Advanced Amazon S3 backup settings
<a name="s3-advanced-backup-settings"></a>

AWS Backup provides advanced settings to control what metadata is included in your Amazon S3 backups. You can optionally exclude Access Control Lists (ACLs) and object tags, which can be helpful if your objects are set up without ACLs and object tags. In other words, if you do not use ACLs or object tags for your S3 resources, you may find it beneficial to exclude them from your backups.

### Configuring backup of ACLs and object tags
<a name="s3-backup-configuration"></a>

You can configure ACL and object tag backup options either through the AWS Backup console or through the AWS CLI.

------
#### [ Console ]

**Configure ACL and tag options using the console**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup/](https://console.aws.amazon.com/backup/home).

1. In the navigation pane, choose **Backup plans**, then choose **Create backup plan**.

1. In your backup plan settings, expand **Advanced backup settings**.

1. For Amazon S3 resources, configure the following options:
   + **Back up ACLs**: Select the check box to include ACLs in your backup, or leave it unselected to exclude them.
   + **Back up object tags**: Select the check box to include object tags in your backup.

1. Complete the backup plan configuration and choose **Create plan**.

------
#### [ AWS CLI ]

You can selectively include or exclude Access Control Lists (ACLs) and object tags from your Amazon S3 backups using the following backup options:

BackupACLs  
Controls whether object ACLs are included in the backup. Set to `disabled` to exclude ACLs. Default: `enabled`

BackupObjectTags  
Controls whether object tags are included in the backup. Set to `disabled` to exclude tags. Default: `enabled`

**Configure ACL and tag options using the AWS CLI**

To configure ACL and object tag backup options using the AWS CLI, use the `update-backup-plan` command with advanced backup settings:

```
aws backup update-backup-plan \
    --backup-plan-id "your-backup-plan-id" \
    --backup-plan '{
        "BackupPlanName": "MyS3BackupPlan",
        "Rules": [{
            "RuleName": "MyS3BackupRule",
            "TargetBackupVaultName": "MyBackupVault",
            "ScheduleExpression": "cron(0 2 ? * * *)",
            "Lifecycle": {
                "DeleteAfterDays": 30
            },
            "RecoveryPointTags": {},
            "CopyActions": [],
            "EnableContinuousBackup": false
        }],
        "AdvancedBackupSettings": [{
            "ResourceType": "S3",
            "BackupOptions": {
                "BackupACLs": "disabled",
                "BackupObjectTags": "disabled"
            }
        }]
    }'
```

The `BackupOptions` parameters control metadata inclusion:
+ `"BackupACLs": "disabled"` - Excludes ACLs from backups
+ `"BackupObjectTags": "disabled"` - Excludes object tags from backups
+ `"BackupACLs": "enabled"` - Includes ACLs in backups (default)
+ `"BackupObjectTags": "enabled"` - Includes object tags in backups (default)

------

# Amazon Timestream backups
<a name="timestream-backup"></a>

Amazon Timestream is a scalable time series database that allows storage and analysis of up to trillions of time series data points daily. Timestream is optimized for cost and time savings by keeping recent data in memory and by storing historical data in a cost-optimized storage tier in accordance with your policies.

A Timestream database contains tables. These tables contain records, and each record is a single data point in a time series. A time series is a sequence of records recorded over a time interval, such as a stock price, the memory usage of an Amazon EC2 instance, or a temperature reading. AWS Backup can centrally back up and restore Timestream tables. You can copy these table backups to other accounts and to other AWS Regions within the same organization.

Timestream does not currently offer native backup and restore services, so using AWS Backup to create secure copies of your Timestream tables can add an extra layer of security and resilience to your resources.

## Back up Timestream tables
<a name="backuptimestream"></a>

You can back up Timestream tables either through the AWS Backup console or by using the AWS CLI.

There are two ways to use the AWS Backup console to back up a Timestream table: on demand or as part of a backup plan.

### Create on-demand Timestream backups
<a name="ondemandtimestreambackups"></a>

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Using the navigation pane, choose **Protected resources**, and then **Create on-demand backup**.

1. On the **Create on-demand backup** page, choose **Amazon Timestream**.

1. For **Resource type**, choose **Timestream**, and then choose the name of the table that you want to back up.

1. In **Backup window**, ensure that **Create backup now** is selected. This initiates a backup immediately, and your table appears sooner on the **Protected resources** page.

1. In the **Transition to cold storage** drop-down menu, you can set your transition settings.

1. In **Retention period**, you can choose how long to retain your backup.

1. Choose an existing backup vault or create a new backup vault. Choosing **Create new backup vault** opens a new page to create a vault and then returns you to the **Create on-demand backup page** when you are finished.

1. Under **IAM role**, choose **Default role** (if the AWS Backup default role is not present in your account, it will be created for you with the correct permissions).

1. *Optionally,* you can add tags to your recovery point. If you want to assign one or more tags to your on-demand backup, enter a **key** and optional **value**, and then choose **Add tag**.

1. Choose **Create on-demand backup**. This takes you to the **Jobs** page, where you will see a list of jobs.

1. Choose the **Backup job ID** for the table to see the details of that job. The job displays a status of `Completed`, `In Progress`, or `Failed`. You can choose the refresh button to update the displayed status.

### Create scheduled Timestream backups in a backup plan
<a name="scheduledtimestreambackups"></a>

Your scheduled backups can include Timestream tables if they are a protected resource. To opt in to protecting Amazon Timestream tables:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Using the navigation pane, choose **Protected resources**.

1. Toggle Amazon Timestream to **On**.

1. See [Assigning resources to the console](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html#assigning-resources-console) to include Timestream tables in an existing or new plan.

Under **Manage Backup plans**, you can choose to [create a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) and include Timestream tables, or you can [update an existing one](https://docs.aws.amazon.com/aws-backup/latest/devguide/updating-a-backup-plan.html) to include Timestream tables. When adding the resource type *Timestream*, you can choose to add **All Timestream tables**, or check the boxes next to the tables you wish to add under **Select specific resource types**.

The first backup made of Timestream tables will be a full backup. Subsequent backups will be incremental backups.

After you create or modify your backup plan, choose **Backup plans** in the left navigation pane. The backup plan you specified should display your tables under **Resource assignments**.

### Backing up programmatically
<a name="timestreambackupapi"></a>

To back up a Timestream table programmatically, use the `start-backup-job` operation. Include the following parameters:

```
aws backup start-backup-job \
--backup-vault-name backup-vault-name \
--resource-arn arn:aws:timestream:region:account:database/database-name/table/table-name \
--iam-role-arn arn:aws:iam::account:role/role-name \
--region AWS Region \
--endpoint-url URL
```

## View Timestream table backups
<a name="viewtimestreambackups"></a>

To view and modify your Timestream table backups within the console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Choose **Backup vaults**. Then, choose the name of the backup vault that contains your Timestream tables.

1. The backup vault displays a summary and a list of backups.

   1. You can choose the link in the **Recovery point ID** column, or

   1. You can select the check box to the left of the recovery point ID and choose **Actions** to delete recovery points that are no longer needed.
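The same recovery points can also be listed and deleted from the AWS CLI; as a sketch (the vault name and recovery point ARN are placeholders, and the `--by-resource-type` filter is optional):

```shell
# List recovery points in a vault, filtered to Timestream tables.
aws backup list-recovery-points-by-backup-vault \
    --backup-vault-name MyBackupVault \
    --by-resource-type Timestream

# Delete a recovery point that is no longer needed.
aws backup delete-recovery-point \
    --backup-vault-name MyBackupVault \
    --recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:example-id
```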

## Restore a Timestream table
<a name="w2aac17c19c41c13"></a>

See how to [restore a Timestream table](https://docs.aws.amazon.com/aws-backup/latest/devguide/timestream-restore.html).

# Virtual machine backups
<a name="vm-backups"></a>

AWS Backup supports centralized and automated data protection for on-premises VMware virtual machines (VMs) along with VMs in the VMware Cloud™ (VMC) on AWS and VMware Cloud™ (VMC) on AWS Outposts. You can back up your on-premises and VMC virtual machines to AWS Backup. Then, you can restore from AWS Backup to on-premises VMs, VMs in the VMC, or the VMC on AWS Outposts.

AWS Backup also provides you with fully-managed, AWS-native VM backup management capabilities, such as VM discovery, backup scheduling, retention management, a low-cost storage tier, cross-Region and cross-account copy, support for AWS Backup Vault Lock and AWS Backup Audit Manager, encryption that is independent from source data, and backup access policies. For a full list of capabilities and details, see the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table.

You can use AWS Backup to protect your virtual machines on [VMware Cloud™ on AWS Outposts](https://aws.amazon.com/vmware/aws-services/). AWS Backup stores your VM backups in the AWS Region to which your VMware Cloud™ on AWS Outposts is connected. You can use AWS Backup to protect your VMware Cloud™ on AWS Outposts VMs when you’re using VMware Cloud™ on AWS Outposts to meet your low-latency and local data-processing needs for your application data. Based on your data residency requirements, you can choose AWS Backup to store backups of your application data in the parent AWS Region to which your AWS Outposts is connected.

## Supported VMs
<a name="supported-vms"></a>

AWS Backup can back up and restore virtual machines managed by a VMware vCenter.

**Currently supported:**
+ vSphere 8, 7.0, and 6.7
+ Virtual disk sizes that are multiples of 1 KiB
+ NFS, VMFS, and VSAN datastores on premises and in VMC on AWS
+ SCSI Hot-Add and Network Block Device Secure Sockets Layer (NBDSSL) transport modes for copying data from source VMs to AWS for on-premises VMware
+ Hot-Add mode to protect VMs on VMware Cloud on AWS

**Not currently supported:**
+ RDM (raw disk mapping) disks or NVMe controllers and their disks
+ Independent-persistent and independent-non persistent disk modes

## Backup consistency
<a name="backup-consistency"></a>

AWS Backup, by default, captures application-consistent backups of VMs using the VMware Tools quiescence setting on the VM. Your backups are application consistent if your applications are compatible with VMware Tools. If the quiescence capability is not available, AWS Backup captures crash-consistent backups. Validate that your backups meet your organization’s needs by testing your restores.

## Backup gateway
<a name="backup-gateway"></a>

Backup gateway is downloadable AWS Backup software that you deploy to your VMware infrastructure to connect your VMware VMs to AWS Backup. The gateway connects to your VM management server to discover your VMs, encrypts data, and efficiently transfers data to AWS Backup. The following diagram illustrates how Backup gateway connects to your VMs:

![\[A backup gateway is an OVF template that connects your VMware environment to AWS Backup.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/Horizon.png)


To download the Backup gateway software, follow the procedure for [Working with gateways](working-with-gateways.md).

### Download VM software
<a name="download-vm-software"></a>

Backup gateway is distributed as an OVF (Open Virtualization Format) template that you deploy to your VMware infrastructure. The gateway software connects your VMware VMs to AWS Backup by discovering VMs, encrypting data, and efficiently transferring data to AWS Backup.

To obtain the OVF template, use the AWS Backup console:

1. Sign in to the AWS Management Console and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation pane, under **External resources**, choose **Gateways**.

1. Choose **Create gateway**.

1. In the **Set up gateway** section, download the OVF template and deploy it to your VMware environment.

For information on VPC (Virtual Private Cloud) endpoints, see [AWS Backup and AWS PrivateLink connectivity](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-network.html#backup-privatelink).

Backup gateway comes with its own API, which is maintained separately from the AWS Backup API. To view a list of Backup gateway API actions, see [Backup gateway actions](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_Operations_AWS_Backup_Gateway.html). To view a list of Backup gateway API data types, see [Backup gateway data types](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_Types_AWS_Backup_Gateway.html).

## Endpoints
<a name="backup-gateway-endpoints"></a>

Existing users who currently use a public endpoint and who wish to switch to a VPC (Virtual Private Cloud) endpoint can [create a new gateway with a VPC endpoint](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-gateways.html#create-gateway) using [AWS PrivateLink](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-network.html#backup-privatelink), associate the existing hypervisor to the gateway, and then [delete the gateway](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-gateways.html#edit-gateway) containing the public endpoint.

# Configure your infrastructure to use Backup gateway
<a name="configure-infrastructure-bgw"></a>

Backup gateway requires the following network, firewall, and hardware configurations to back up and restore your virtual machines.

## Network configuration
<a name="bgw-network-configuration"></a>

Backup gateway requires certain ports to be allowed for its operation. Allow the following ports:

1. **TCP 443 Outbound**
   + Source: Backup gateway
   + Destination: AWS
   + Use: Allows Backup gateway to communicate with AWS.

1. **TCP 80 Inbound**
   + Source: The host you use to connect to the AWS Management Console
   + Destination: Backup gateway
   + Use: By local systems to obtain the Backup gateway activation key. Port 80 is only used during activation of Backup gateway. AWS Backup does not require port 80 to be publicly accessible. The required level of access to port 80 depends on your network configuration. If you activate your gateway from the AWS Management Console, the host from which you connect to the console must have access to your gateway's port 80.

1. **UDP 53 Outbound**
   + Source: Backup gateway
   + Destination: Domain Name Service (DNS) server
   + Use: Allows Backup gateway to communicate with the DNS.

1. **TCP 22 Outbound**
   + Source: Backup gateway
   + Destination: AWS Support
   + Use: Allows AWS Support to access your gateway to help you with issues. You don't need to open this port for the normal operation of your gateway, but you must open it for troubleshooting.

1. **UDP 123 Outbound**
   + Source: NTP client
   + Destination: NTP server
   + Use: Used by local systems to synchronize virtual machine time to the host time.

1. **TCP 443 Outbound**
   + Source: Backup gateway
   + Destination: VMware vCenter
   + Use: Allows Backup gateway to communicate with VMware vCenter.

1. **TCP 443 Outbound**
   + Source: Backup gateway
   + Destination: ESXi hosts
   + Use: Allows Backup gateway to communicate with ESXi hosts.

1. **TCP 902 Outbound**
   + Source: Backup gateway
   + Destination: VMware ESXi hosts
   + Use: Used for data transfer via Backup gateway.

The above ports are necessary for Backup gateway. See [Create a VPC endpoint](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-network.html#backup-privatelink) for more information on how to configure Amazon VPC endpoints for AWS Backup.
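As a quick sanity check before activation, you can probe the hypervisor-side outbound ports from a host on the gateway's network. The sketch below only prints `nc` probe commands for review rather than running them; the `192.0.2.x` addresses are placeholders for illustration, not values from this guide.

```shell
#!/bin/sh
# Placeholder addresses -- assumptions for illustration; replace with your
# environment's vCenter, ESXi host, and DNS server values.
VCENTER=192.0.2.10
ESXI=192.0.2.20
DNS=192.0.2.53

# Emit one 'nc' probe per required hypervisor-side outbound port, for review
# before running: -z scans without sending data, -w 5 sets a 5-second timeout.
probe_commands() {
  echo "nc -z -w 5 $VCENTER 443"      # vCenter HTTPS
  echo "nc -z -w 5 $ESXI 443"         # ESXi HTTPS
  echo "nc -z -w 5 $ESXI 902"         # ESXi data transfer
  echo "nc -z -u -w 5 $DNS 53"        # DNS (UDP)
}

probe_commands
```

Run the printed commands individually; a zero exit status means the port is reachable from that host.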

## Firewall configuration
<a name="bgw-firewall-configuration"></a>

Backup gateway requires access to the following service endpoints to communicate with Amazon Web Services. If you use a firewall or router to filter or limit network traffic, you must configure your firewall and router to allow these service endpoints for outbound communication to AWS. Use of an HTTP proxy between Backup gateway and the service endpoints is not supported.

**Endpoint types**

**Standard endpoints**: Support IPv4 traffic between your gateway appliance and AWS.

The following service endpoints are required by all gateways for control path (`anon-cp`, `client-cp`, `proxy-app`) and data path (`dp-1`) operations.

```
anon-cp.backup-gateway.region.amazonaws.com:443  
client-cp.backup-gateway.region.amazonaws.com:443  
proxy-app.backup-gateway.region.amazonaws.com:443  
dp-1.backup-gateway.region.amazonaws.com:443
```

**Dual-stack endpoints**: Support both IPv4 and IPv6 traffic between your gateway appliance and AWS.

The following dual-stack service endpoints are required by all gateways for control path (activation, controlplane, proxy) and data path (dataplane) operations.

```
activation-backup-gateway.region.api.aws:443  
controlplane-backup-gateway.region.api.aws:443  
proxy-backup-gateway.region.api.aws:443  
dataplane-backup-gateway.region.api.aws:443
```
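If you maintain your firewall allow-list as code, the dual-stack hostnames can be derived from a Region code by substituting it into the templates above. A minimal sketch (the Region value is only an example):

```shell
#!/bin/sh
# Generate the dual-stack service endpoint hostnames for a given Region code
# by substituting it into the endpoint templates listed above.
dualstack_endpoints() {
  for prefix in activation controlplane proxy dataplane; do
    echo "${prefix}-backup-gateway.${1}.api.aws:443"
  done
}

dualstack_endpoints us-east-1   # example Region; substitute your own
```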

## Configure your gateway for multiple NICs in VMware
<a name="bgw-multinic"></a>

You can maintain separate networks for your internal and external traffic by attaching multiple virtual network interfaces (NICs) to your gateway and then directing internal traffic (gateway to hypervisor) and external traffic (gateway to AWS) separately.

By default, virtual machines connected to AWS Backup gateway have one network adapter (`eth0`). This network includes the hypervisor, the virtual machines, and the network gateway (Backup gateway), which communicates with the broader internet.

Here is an example of a setup with multiple virtual network interfaces:

```
            eth0:
            - IP: 10.0.3.83
            - routes: 10.0.3.0/24
            
            eth1:
            - IP: 10.0.0.241
            - routes: 10.0.0.0/24
            - default gateway: 10.0.0.1
```
+ In this example, to connect to a hypervisor with IP `10.0.3.123`, the gateway uses `eth0`, because the hypervisor IP is part of the `10.0.3.0/24` block.
+ To connect to a hypervisor with IP `10.0.0.234`, the gateway uses `eth1`.
+ To connect to an IP outside of the local networks (for example, `34.193.121.211`), the gateway falls back to the default gateway, `10.0.0.1`, which is in the `10.0.0.0/24` block, and thus goes through `eth1`.
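The selection logic above can be sketched as a toy /24 prefix match. This is an illustration only; the appliance's real routing is handled by its operating system, and the addresses simply mirror the example.

```shell
#!/bin/sh
# Toy sketch of the interface selection above: match the destination against
# each NIC's /24 network, falling back to the NIC that owns the default
# gateway (10.0.0.1 on eth1).
pick_interface() {
  case "$1" in
    10.0.3.*) echo eth0 ;;   # 10.0.3.0/24 is directly attached to eth0
    10.0.0.*) echo eth1 ;;   # 10.0.0.0/24 is directly attached to eth1
    *)        echo eth1 ;;   # anything else goes via the default gateway
  esac
}

pick_interface 10.0.3.123     # eth0
pick_interface 10.0.0.234     # eth1
pick_interface 34.193121.211  # eth1 (via 10.0.0.1)
```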

The first sequence to add an additional network adapter occurs in the vSphere client:

1. In the VMware vSphere client, open the context menu (with a right-click) for your gateway virtual machine, and choose **Edit Settings**. 

1. On the **Virtual Hardware** tab of the **Virtual Machine Properties** dialog box, open the **Add New Device** menu, and select **Network Adapter** to add a new network adapter.

1. Configure the new adapter:

   1. Expand the **New Network** details.

   1. Ensure that **Connect At Power On** is selected.

   1. For **Adapter Type**, see Network Adapter Types in the [ESXi and vCenter Server Documentation](https://docs.vmware.com/en/VMware-vSphere/index.html).

1. Click **OK** to save the new network adapter settings.

The next sequence of steps to configure an additional adapter occurs in the Backup gateway local console (note that this is not the same interface as the AWS Management Console, where backups and other services are managed).

Once the new NIC is added to the gateway VM, you need to:
+ Go to the `Command Prompt` and turn on the new adapters
+ Configure a static IP for each new NIC
+ Set the preferred NIC as the default

To do this:

1. In the VMware vSphere client, select your gateway virtual machine and **Launch Web Console** to access the Backup gateway local console.

   1. For more information on accessing a local console, see [Accessing the Gateway Local Console with VMware ESXi](https://docs.aws.amazon.com/storagegateway/latest/tgw/accessing-local-console.html#MaintenanceConsoleWindowVMware-common).

1. Exit the Command Prompt and go to **Network Configuration** > **Configure Static IP**, then follow the setup instructions to update the routing table.

   1. Assign a static IP within the network adapter’s subnet.

   1. Set up a network mask.

   1. Enter the IP address of the default gateway. This is the network gateway that connects to all traffic outside of the local network.

1. Select **Set Default Adapter** to designate the adapter that will be connected to the cloud as the default device.

1. Verify the configuration: all IP addresses for the gateway are displayed in both the local console and on the VM summary page in VMware vSphere.

## VMware permissions
<a name="bgw-vmware-permissions"></a>

This section lists the minimum VMware permissions required to use AWS Backup gateway. These permissions are necessary for Backup gateway to discover, back up, and restore virtual machines.

To use Backup gateway with VMware Cloud™ on AWS or VMware Cloud™ on AWS Outposts, you must use the default admin user `cloudadmin@vmc.local` or assign the CloudAdmin role to your dedicated user.

To use Backup gateway with VMware on-premises virtual machines, create a dedicated user with the permissions listed below.

**Global**
+ Disable methods
+ Enable methods
+ Licenses
+ Log event
+ Manage custom attributes
+ Set custom attributes

**vSphere Tagging**
+ Assign or Unassign vSphere Tag

**DataStore**
+ Allocate space
+ Browse datastore
+ Configure datastore (for vSAN datastore)
+ Low level file operations
+ Update virtual machine files

**Host**
+ Configuration
  + Advanced settings
  + Storage partition configuration

**Folder**
+ Create folder

**Network**
+ Assign network

**dvPort Group**
+ Create
+ Delete

**Resource**
+ Assign virtual machine to resource pool

**Virtual Machine**
+ Change Configuration
  + Acquire disk lease
  + Add existing disk
  + Add new disk
  + Advanced configuration
  + Change settings
  + Configure raw device
  + Modify device settings
  + Remove disk
  + Set annotation
  + Toggle disk change tracking
+ Edit Inventory
  + Create from existing
  + Create new
  + Register
  + Remove
  + Unregister
+ Interaction
  + Power Off
  + Power On
+ Provisioning
  + Allow disk access
  + Allow read-only disk access
  + Allow virtual machine download
+ Snapshot Management
  + Create snapshot
  + Remove Snapshot
  + Revert to snapshot

# Working with gateways
<a name="working-with-gateways"></a>

To back up and restore your virtual machines (VMs) using AWS Backup, you must first install a Backup gateway. A gateway is software in the form of an OVF (Open Virtualization Format) template that connects AWS Backup to your hypervisor, allowing it to automatically detect your virtual machines and enabling you to back them up and restore them.

A single gateway can run up to 4 backup or restore jobs at once. To run more than 4 jobs at once, create more gateways and associate them with your hypervisor.
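As a quick sizing sketch, the number of gateways required for a target level of concurrency is the concurrent job count divided by 4, rounded up. The function name here is illustrative, not part of any AWS tooling:

```shell
#!/bin/sh
# Gateways needed for N concurrent backup/restore jobs, at 4 jobs per gateway.
# Integer ceiling division: (n + 3) / 4.
gateways_needed() {
  echo $(( ($1 + 3) / 4 ))
}

gateways_needed 4    # 1 gateway
gateways_needed 10   # 3 gateways
```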

## Creating a gateway
<a name="create-gateway"></a>

You can create a backup gateway using two approaches:
+ **Console method (standard)**: Creates gateways through the AWS Backup console with automatic activation
+ **Manual method**: Creates gateways from the gateway VM's local console by obtaining activation keys and using AWS CLI commands

Both methods require downloading and deploying the OVF template first (see [Download VM software](vm-backups.md#download-vm-software)).

Both methods allow the gateway to communicate over IPv6, which requires gateway appliance version 2.x and additional firewall configuration on [dual-stack endpoints](https://docs.aws.amazon.com/aws-backup/latest/devguide/configure-infrastructure-bgw.html#bgw-firewall-configuration).

**Important**  
**IPv6 hypervisor requirement:** If your gateway is activated through IPv6, you **must** create a hypervisor with an IPv6 address. For example, use `2607:fda8:1001:210::252` instead of `10.0.0.252`. If you associate an IPv6 gateway with an IPv4 hypervisor, backup and restore jobs will likely fail.

### Console method
<a name="create-gateway-console"></a>

**To create a gateway:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation pane, under the **External resources** section, choose **Gateways**.

1. Choose **Create gateway**.

1. In the **Set up gateway** section, follow these instructions to download and deploy the OVF template.

#### Downloading VMware software
<a name="downloading-vmware-software"></a>

**Connecting the hypervisor**

Gateways connect AWS Backup to your hypervisor so you can create and store backups of your virtual machines. To set up your gateway on VMware ESXi, download the [OVF template](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-AE61948B-C2EE-436E-BAFB-3C7209088552.html). The download may take about 10 minutes.

After it is complete, proceed with the following steps:

1. Connect to your virtual machine hypervisor using VMware vSphere.

1. Right-click a parent object of a virtual machine and select *Deploy OVF Template.*  
![\[The Deploy OVF Template menu item.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-deploy-ovf-template-20.png)

1. Choose **Local file**, and upload the **aws-appliance-latest.ova** file you downloaded.  
![\[The Local file option on the Select an OVF template panel.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-select-ovf-template-50.png)

1. Follow the deployment wizard steps to deploy it. On the **Select storage** page, select virtual disk format **Thick Provision Lazy Zeroed**.  
![\[The Thick Provision Lazy Zeroed option on the Select storage panel.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-thick-provision-lazy-70.png)

1. After deploying the OVF, right-click the gateway and choose **Edit Settings**.

    ![\[The Edit Settings menu item.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-edit-settings-30.png) 

   1. Under **VM Options**, go to **VM Tools**.

   1. Ensure that for **Synchronize Time with Host**, **Synchronize at start up and resume** is selected.  
![\[The Synchronize at startup and resume VM option.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-synchronize-time-60.png)

1. Turn on the virtual machine by selecting “Power On” from the **Actions** menu.  
![\[The Power On menu item.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-power-on-vm-40.png)

1. Copy the IP address from the VM summary and enter it below.  
![\[The IP Addresses field on the Summary page.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-copy-ip-address-10.png)

Once the VMware software is deployed, complete the following steps:

1. In the **Gateway connection** section, type in the **IP address** of the gateway.

   1. To find this IP address, go to the vSphere Client.

   1. Select your gateway under the **Summary** tab.

   1. Copy the **IP address** and paste it in the AWS Backup console text bar.

1. In the **Gateway settings** section,

   1. Type in a **Gateway name**.

   1. Verify the AWS Region.

   1. Choose whether the endpoint is publicly accessible or hosted with your virtual private cloud (VPC).
      + If **publicly accessible** is selected, choose the IP version (IPv4 or IPv6) for gateway connectivity.
      + If **VPC** is selected, enter the VPC endpoint DNS Name. For more information, see [Create a VPC endpoint](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-network.html#backup-privatelink).

1. *[Optional]* In the **Gateway tags** section, you can assign tags by inputting the **key** and *optional* **value**. To add more than one tag, click **Add another tag**.

1. To complete the process, click **Create gateway**, which takes you to the gateway detail page.

### Manual gateway creation
<a name="create-gateway-manual"></a>

#### Getting an activation key
<a name="bgw-activation-key"></a>

To receive an activation key for your gateway, make a web request to the gateway virtual machine (VM) or use the gateway local console. The gateway VM returns a response that contains the activation key, which is then passed as one of the parameters for the `CreateGateway` API to specify the configuration of your gateway. 

**Tip**  
Gateway activation keys expire in 30 minutes if unused.

**Getting an activation key using web request**

The following examples show you how to get an activation key using an HTTP request. You can use a web browser, or the Linux `curl` command or an equivalent, with the following URLs.

**Note**  
Replace the highlighted variables with actual values for your gateway. Acceptable values are as follows:  
*gateway_ip_address* - The IPv4 address of your gateway, for example `172.31.29.201`  
*region_code* - The Region where you want to activate your gateway. See [Regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) in the *AWS General Reference Guide*. If this parameter is not specified, or if the value provided is misspelled or doesn't match a valid Region, the command defaults to the `us-east-1` Region.

IPv4:

```
curl "http://gateway_ip_address/?activationRegion=region_code&gatewayType=BACKUP_VM&endpointType=DUALSTACK&ipVersion=ipv4&no_redirect"
```

IPv6:

```
curl "http://gateway_ip_address/?activationRegion=region_code&gatewayType=BACKUP_VM&endpointType=DUALSTACK&ipVersion=ipv6&no_redirect"
```

**Getting an activation key using local console**

The following steps show you how to get an activation key using the gateway host's local console.

1. Log in to your virtual machine console. 

1. From the **AWS Appliance Activation - Configuration** main menu, select `0` to choose **Get activation key**.

1. Select `2` (**Backup Gateway**) for the gateway family option.

1. Enter the AWS Region where you want to activate your gateway.

1. For the network type, enter `1` for Public or `2` for VPC endpoint.

1. For the endpoint type, enter `1` for a standard endpoint or `2` for a dual-stack endpoint.

   1. For a dual-stack endpoint, select `1` for IPv4 or `2` for IPv6.

1. The activation key is populated automatically.

#### Creating the gateway
<a name="bgw-create-gateway"></a>

Use the AWS CLI to create the gateway after obtaining an activation key:

1. Obtain an activation key using the curl commands or the local console method.

1. Create the gateway using the AWS CLI. For more information, see [CreateGateway](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_CreateGateway.html) in the *Backup gateway API Reference*.

   ```
   aws backup-gateway create-gateway \
                       --region region_code \
                       --activation-key activation_key \
                       --gateway-display-name gateway_name \
                       --gateway-type BACKUP_VM
   ```

1. Verify that the gateway appears in the AWS Backup console under **External resources** → **Gateways**.

## Editing or deleting a gateway
<a name="edit-gateway"></a>

**To edit or delete a gateway:**

1. In the left navigation pane, under the **External resources** section, choose **Gateways**.

1. In the **Gateways** section, choose a gateway by its **Gateway name**.

1. To edit the gateway name, choose **Edit**.

1. To delete the gateway, choose **Delete**, then choose **Delete gateway**.

   You cannot reactivate a deleted gateway. If you want to connect to the hypervisor again, follow the procedure in [Creating a gateway](#create-gateway).

1. To connect to a hypervisor, in the **Connected hypervisor** section, choose **Connect**.

   Each gateway connects to a single hypervisor. However, you can connect multiple gateways to the same hypervisor to increase the bandwidth between them beyond that of the first gateway.

1. To assign, edit, or manage tags, in the **Tags** section, choose **Manage tags**.

## Backup gateway bandwidth throttling
<a name="backup-gateway-bandwidth-throttling"></a>

**Note**  
This feature will be available on new gateways deployed after December 15, 2022. For existing gateways, this new capability will be available through an automatic software update on or before January 30, 2023. To update the gateway to the latest version manually, use the AWS CLI command [UpdateGatewaySoftwareNow](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_UpdateGatewaySoftwareNow.html).

You can limit the upload throughput from your gateway to AWS Backup to control the amount of network bandwidth the gateway uses. By default, an activated gateway has no rate limits.

You can configure a bandwidth rate-limit schedule using the AWS Backup console or through the AWS CLI ([PutBandwidthRateLimitSchedule](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_PutBandwidthRateLimitSchedule.html)). When you use a bandwidth rate-limit schedule, you can configure limits to change automatically throughout the day or week.

Bandwidth rate limiting works by balancing the throughput of all data being uploaded, averaged over each second. While it is possible for uploads to cross the bandwidth rate limit briefly for any given micro- or millisecond, this does not typically result in large spikes over longer periods of time.

You can add up to a maximum of 20 intervals. The maximum value for the upload rate is 8,000,000 Mbps.

### View and edit the bandwidth rate-limit schedule for your gateway using the AWS Backup console
<a name="backup-gateway-view-edit-bandwidth-rate-limit-schedule"></a>

This section describes how to view and edit the bandwidth rate limit schedule for your gateway.

**To view and edit the bandwidth rate limit schedule**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation pane, choose **Gateways**. In the Gateways pane, gateways are displayed by name. Click the radio button adjacent to the gateway name you want to manage.

1. Once you select a radio button, the **Actions** drop-down menu becomes available. Click **Actions**, then click **Edit bandwidth rate limit schedule**. The current schedule is displayed. By default, a new or unedited gateway has no defined bandwidth rate limits.
**Note**  
You can also click **Manage schedule** in the gateway details page to navigate to the Edit bandwidth page.

1. *(Optional)* Choose **Add interval** to add a new configurable interval to the schedule. For each interval, input the following information:

   1. **Days of week** — Select the recurring day or days on which you want the interval to apply. When chosen, the days will display below the drop-down menu. You can remove them by clicking the **X** next to the day.

   1. **Start time** — Enter the start time for the bandwidth interval, using the *HH:MM* 24-hour format. Time is rendered in Universal Coordinated Time (UTC).

      Note: Your bandwidth-rate-limit interval begins at the start of the specified minute.

   1. **End time** — Enter the end time for the bandwidth interval, using the *HH:MM* 24-hour format. Time is rendered in Universal Coordinated Time (UTC).
**Important**  
The bandwidth-rate-limit interval ends at the end of the minute specified. To schedule an interval that ends at the end of an hour, enter `59`. To schedule consecutive continuous intervals, transitioning at the start of the hour, with no interruption between the intervals, enter `59` for the end minute of the first interval. Enter `00` for the start minute of the succeeding interval. 

   1. **Upload rate** — Enter the upload rate limit, in megabits per second (Mbps). The minimum value is 102 megabits per second (Mbps).

1. *(Optional)* Repeat the previous step as desired until your bandwidth rate-limit schedule is complete. If you need to delete an interval from your schedule, choose **Remove**.
**Important**  
Bandwidth rate-limit intervals cannot overlap. The start time of an interval must occur after the end time of a preceding interval and before the start time of a following interval; its end time must occur before the start time of the following interval.

1. When you are finished, click the **Save changes** button.

### View and edit the bandwidth rate-limit schedule for your gateway using the AWS CLI
<a name="backup-gateway-view-edit-bandwidth-rate-limit-schedule-cli"></a>

The [GetBandwidthRateLimitSchedule](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_GetBandwidthRateLimitSchedule.html) action can be used to view the bandwidth throttle schedule for a specified gateway. If there is no schedule set, the schedule will be an empty list of intervals. Here is an example using the AWS CLI to fetch the bandwidth schedule of a gateway:

```
aws backup-gateway get-bandwidth-rate-limit-schedule --gateway-arn "arn:aws:backup-gateway:region:account-id:gateway/gw-id"
```

To edit a gateway’s bandwidth throttle schedule, you can use the [PutBandwidthRateLimitSchedule](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_PutBandwidthRateLimitSchedule.html) action. Note that you can only update a gateway’s schedule as a whole, rather than modifying, adding, or removing individual intervals. Calling this action will overwrite the gateway’s previous bandwidth throttle schedule.

```
aws backup-gateway put-bandwidth-rate-limit-schedule --gateway-arn "arn:aws:backup-gateway:region:account-id:gateway/gw-id" --bandwidth-rate-limit-intervals ...
```
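For illustration, the `--bandwidth-rate-limit-intervals` value might look like the following JSON: a single interval limiting weekday uploads from 09:00 through 16:59 UTC to 500,000,000 bits per second (500 Mbps, above the 102 Mbps minimum). The field names and day numbering (0 represents Sunday) are taken from the Backup gateway API reference; verify them against the PutBandwidthRateLimitSchedule documentation linked above before use.

```
[
  {
    "AverageUploadRateLimitInBitsPerSec": 500000000,
    "DaysOfWeek": [1, 2, 3, 4, 5],
    "StartHourOfDay": 9,
    "StartMinuteOfHour": 0,
    "EndHourOfDay": 16,
    "EndMinuteOfHour": 59
  }
]
```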

# Working with hypervisors
<a name="working-with-hypervisors"></a>

After you finish [Creating a gateway](working-with-gateways.md#create-gateway), you can connect it to a hypervisor to enable AWS Backup to work with the virtual machines managed by that hypervisor. For example, the hypervisor for VMware VMs is VMware vCenter Server. Ensure your hypervisor is configured with the [necessary permissions for AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/configure-infrastructure-bgw.html#bgw-vmware-permissions). 

## Adding a hypervisor
<a name="add-hypervisor"></a>

**To add a hypervisor:**

1. In the left navigation pane, under the **External resources** section, choose **Hypervisors**.

1. Choose **Add hypervisor**.

1. In the **Hypervisor settings** section, type in a **Hypervisor name**.

1. For **vCenter server host**, use the dropdown menu to select either **IP address** or **FQDN** (fully-qualified domain name). Type in the corresponding value.

1. To allow AWS Backup to discover the virtual machines on the hypervisor, enter the hypervisor’s **Username** and **Password**.

1. Encrypt your password. You can [specify this encryption](https://docs.aws.amazon.com/aws-backup/latest/devguide/bgw-hypervisor-encryption-page.html) by selecting a service-managed KMS key or a customer-managed KMS key using the dropdown menu, or choose **Create KMS key**. If you do not select a specific key, AWS Backup encrypts your password using a service-owned key.

1. In the **Connecting gateway** section, use the dropdown list to specify which Gateway to connect to your hypervisor.

1. Choose **Test gateway connection** to verify your previous inputs.

1. *Optionally*, in the **Hypervisor tags** section, you can assign tags to the hypervisor by choosing **Add new tag**.

1. *Optional*: [Add VMware tags](https://docs.aws.amazon.com/aws-backup/latest/devguide/backing-up-vms.html#backup-gateway-vmwaretags): You can add up to 10 VMware tags you currently use on your virtual machines to generate AWS tags.

1. In the **Log group setting** panel, you may choose to integrate with [ Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) to maintain logs of your hypervisor (standard [CloudWatch Logs pricing](https://aws.amazon.com/cloudwatch/pricing/) will apply based on usage). Each hypervisor can belong to one log group.

   1. If you have not yet created a log group, select the **Create a new log group** radio button. The hypervisor you are editing will be associated with this log group.

   1. If you have previously created a log group for a different hypervisor, you can use that log group for this hypervisor. Select **Use an existing log group**.

   1. If you do not want CloudWatch logging, select **Deactivate logging**. 

1. Choose **Add hypervisor**, which takes you to its detail page.

**Tip**  
You can use Amazon CloudWatch Logs (see step 11 above) to obtain information about your hypervisor, including error monitoring, network connection between the gateway and the hypervisor, and network configuration information. For information about CloudWatch log groups, see [ Working with Log Groups and Log Streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) in the *Amazon CloudWatch User Guide*.

## Viewing virtual machines managed by a hypervisor
<a name="view-vms-by-hypervisor"></a>

**To view virtual machines on a hypervisor:**

1. In the left navigation pane, under the **External resources** section, choose **Hypervisors**.

1. In the **Hypervisors** section, choose a hypervisor by its **Hypervisor name** to go to its detail page.

1. In the section under **Hypervisor summary**, choose the **Virtual machines** tab.

1. In the **Connected virtual machines** section, a list of virtual machines populates automatically.

## Viewing gateways connected to a hypervisor
<a name="view-gateways-by-hypervisor"></a>

**To view gateways connected to the hypervisor:**

1. Choose the **Gateways** tab.

1. In the **Connected gateways** section, a list of gateways populates automatically.

## Connecting a hypervisor to additional gateways
<a name="add-more-gateways"></a>

Your backup and restore speeds might be limited by the bandwidth of the connection between your gateway and hypervisor. You can increase these speeds by connecting one or more additional gateways to your hypervisor. You can do this in the **Connected gateways** section as follows:

1. Choose **Connect**.

1. Select another gateway using the dropdown menu. Alternatively, choose **Create gateway** to create a new gateway.

1. Choose **Connect**.

## Editing a hypervisor configuration
<a name="edit-hypervisor"></a>

If you do not use the **Test gateway connection** feature, you might add a hypervisor with an incorrect username or password. In that case, the hypervisor’s connection status is always `Pending`. Alternatively, you might rotate the username or password to access your hypervisor. Update this information using the following procedure:

**To edit an already-added hypervisor:**

1. In the left navigation pane, under the **External resources** section, choose **Hypervisors**.

1. In the **Hypervisors** section, choose a hypervisor by its **Hypervisor name** to go to its detail page.

1. Choose **Edit**.

1. In the top panel, **Hypervisor settings**:

   1. Under **vCenter server host**, you can edit the FQDN (fully qualified domain name) or the IP address.

   1. Optionally, enter the hypervisor’s **Username** and **Password**.

1. In the **Log group setting** panel, you may choose to integrate with [ Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) to maintain logs of your hypervisor (standard [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/) will apply based on usage). Each hypervisor can belong to one log group.

   1. If you have not yet created a log group, select the **Create a new log group** radio button. The hypervisor you are editing will be associated with this log group.

   1. If you have previously created a log group for a different hypervisor, you can use that log group for this hypervisor. Select **Use an existing log group**.

   1. If you do not want CloudWatch logging, select **Deactivate logging**. 

**Tip**  
You can use Amazon CloudWatch Logs (see step 5 above) to obtain information about your hypervisor, including error monitoring, network connection between the gateway and the hypervisor, and network configuration information. For information about CloudWatch log groups, see [ Working with Log Groups and Log Streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) in the *Amazon CloudWatch User Guide*.

To update a hypervisor programmatically, use the AWS CLI command [ update-hypervisor](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup-gateway/update-hypervisor.html) or the [ UpdateHypervisor](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_UpdateHypervisor.html) API call.
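For example, to rotate the credentials stored for a hypervisor, you might run something like the following sketch; the ARN, username, and password are placeholder values:

```
# Update the stored credentials for an existing hypervisor configuration
aws backup-gateway update-hypervisor \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--username vcenter-admin \
--password 'examplePassword' \
--region us-east-1
```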

## Deleting a hypervisor configuration
<a name="delete-hypervisor"></a>

If you need to remove an already-added hypervisor, remove the hypervisor configuration and add another. This remove operation applies only to the configuration used to connect to the hypervisor; it does not delete the hypervisor itself.

**To delete the configuration to connect to an already-added hypervisor:**

1. In the left navigation pane, under the **External resources** section, choose **Hypervisors**.

1. In the **Hypervisors** section, choose a hypervisor by its **Hypervisor name** to go to its detail page.

1. Choose **Remove**, then choose **Remove hypervisor**.

1. Optional: replace the removed hypervisor configuration using the procedure for [Adding a hypervisor](#add-hypervisor).
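To remove a hypervisor configuration programmatically, you can use the `delete-hypervisor` AWS CLI operation; the ARN below is a placeholder:

```
# Remove the configuration used to connect to a hypervisor
aws backup-gateway delete-hypervisor \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--region us-east-1
```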

## Understanding hypervisor status
<a name="understand-hypervisor-status"></a>

The following describes each of the possible hypervisor statuses and, if applicable, remediation steps. The `ONLINE` status is the normal status of the hypervisor. A hypervisor should have this status all or most of the time it’s in use for backup and recovery of VMs managed by the hypervisor.


**Hypervisor statuses**  

| Status | Meaning and remediation | 
| --- | --- | 
| ONLINE |  You added a hypervisor to AWS Backup, associated with it a gateway, and can connect with that gateway over your network to perform backup and recovery of virtual machines managed by the hypervisor. You can perform [on-demand and scheduled backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/backing-up-vms.html) of those virtual machines at any time.  | 
| PENDING |  You added a hypervisor to AWS Backup but: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html) To change a hypervisor status from `PENDING` to `ONLINE`, [create a gateway](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-gateways.html#create-gateway) and [connect your hypervisor to that gateway](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html#add-more-gateways).  | 
| OFFLINE |  You added a hypervisor to AWS Backup and associated it with a gateway, but the gateway cannot connect to the hypervisor over your network. To change a hypervisor status from `OFFLINE` to `ONLINE`, verify the correctness of your [network configuration](https://docs.aws.amazon.com/aws-backup/latest/devguide/configure-infrastructure-bgw.html#bgw-network-configuration). If the issue persists, verify that your hypervisor’s IP address or fully-qualified domain name is correct. If they are incorrect, [add your hypervisor again using the correct information and test your gateway connection](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html#add-hypervisor).   | 
| ERROR |  You added a hypervisor to AWS Backup and associated it with a gateway, but the gateway cannot communicate with the hypervisor. To change a hypervisor status from `ERROR` to `ONLINE`, verify that hypervisor’s username and password are correct. If they are incorrect, [edit your hypervisor configuration](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html#edit-hypervisor).  | 

**Next steps**

To back up virtual machines on your hypervisor, see [Backing up virtual machines](backing-up-vms.md).

# Backing up virtual machines
<a name="backing-up-vms"></a>

After [Adding a hypervisor](working-with-hypervisors.md#add-hypervisor), Backup gateway automatically lists your virtual machines. You can view your virtual machines by choosing either **Hypervisors** or **Virtual machines** in the left navigation pane.
+ Choose **Hypervisors** to view only the virtual machines managed by a specific hypervisor. With this view, you can work with one virtual machine at a time.
+ Choose **Virtual machines** to view all the virtual machines across all the hypervisors you added to your AWS account. With this view, you can work with some or all your virtual machines across multiple hypervisors.

Regardless of which view you choose, to perform a backup operation on a specific virtual machine, choose its **VM name** to open its detail page. The VM detail page is the starting point for the following procedures.

## Creating an on-demand backup of a virtual machine
<a name="create-on-demand-backup-vm"></a>

An [on-demand](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html) backup is a one-time, full backup you manually initiate. You can use on-demand backups to test AWS Backup’s backup and restore capabilities.

**To create an on-demand backup of a virtual machine:**

1. Choose **Create on-demand backup**.

1. [Configure your on-demand backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html).

1. Choose **Create on-demand backup**.

1. To check whether your backup job has the status `Completed`, choose **Jobs** in the left navigation menu.

1. Choose the **Backup Job ID** to view backup job information such as the **Backup size** and time elapsed between the **Creation date** and **Completion date**.
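The console steps above can also be performed with the AWS CLI. The following sketch starts an on-demand backup job with `aws backup start-backup-job`; the vault name, VM ARN, and IAM role ARN are example values you would replace with your own:

```
# Start an on-demand backup of a virtual machine discovered by Backup gateway
aws backup start-backup-job \
--backup-vault-name Default \
--resource-arn arn:aws:backup-gateway:us-east-1:123456789012:vm/vm-12345 \
--iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole \
--region us-east-1
```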

## Incremental VM backups
<a name="vm-incrementalbackups"></a>

Newer VMware versions contain a feature called [Changed Block Tracking (CBT)](https://kb.vmware.com/s/article/1020128), which keeps track of the storage blocks of virtual machines as they change over time. When you use AWS Backup to back up a virtual machine, AWS Backup attempts to use the CBT data if it is available. AWS Backup uses CBT data to speed up the backup process; without CBT data, backup jobs are often slower and use more hypervisor resources. The backup can still be successfully completed even when the CBT data is not valid or available. For example, the CBT data might not be valid or might be unavailable if the virtual machine or ESXi host experiences a hard shutdown.

On the occasions CBT data is invalid or unavailable, the backup status will read `Successful` with a message. In these cases, the message indicates that, in the absence of CBT data, AWS Backup used its own proprietary change detection mechanism to complete the backup. Subsequent backups will reattempt to use CBT data, and in most cases the CBT data will again be valid and available. If the issue persists, see [ VMware Troubleshooting](https://docs.aws.amazon.com/aws-backup/latest/devguide/vm-troubleshooting.html) for steps to remedy.

For CBT to function correctly, the following must be true:
+ The host must be running ESXi 4.0 or later
+ The VM that owns the disks must use hardware version 7 or later
+ CBT must be enabled for the virtual machine (it is enabled by default)

To verify if a virtual disk has CBT enabled:

1. Open the vSphere Client and select a powered-off virtual machine.

1. Right-click the virtual machine and navigate to **Edit Settings** > **Options** > **Advanced/General** > **Configuration Parameters**.

1. The option `ctkEnabled` needs to equal `True`.

## Automating virtual machine backup by assigning resources to a backup plan
<a name="automate-vm-backup"></a>

A [backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/about-backup-plans.html) is a user-defined data protection policy that automates data protection across many AWS services and third-party applications. You first create your backup plan by specifying its backup frequency, retention period, lifecycle policy, and other options. To create a backup plan, see the Getting Started tutorial.

After you create your backup plan, you assign AWS Backup-supported resources, including virtual machines, to that backup plan. AWS Backup offers [many ways to assign resources](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html), including assigning all the resources in your account, including or excluding single specific resources, or adding resources with certain tags. 

In addition to its existing resource assignment features, AWS Backup support for virtual machines introduces several new features to help you quickly assign virtual machines to backup plans. From the **Virtual machines** page, you can assign tags to multiple virtual machines or use the new **Assign resources to plan** feature. Use these features to assign your virtual machines already discovered by AWS Backup gateway.

If you anticipate discovering and assigning additional virtual machines in the future, and would like to automate the resource assignment step to include those future virtual machines, use the new **Create group assignment** feature.

## VMware Tags
<a name="backup-gateway-vmwaretags"></a>

[VMware tags](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_VmwareTag.html) are key-value pairs you can use to manage, filter, and search for your resources.

A VMware tag is composed of a **category** and a **tag name**. VMware tags are used to group virtual machines. A tag name is a label assigned to a virtual machine. A category is a collection of tag names.

AWS tags can contain UTF-8 letters, numbers, spaces, and the special characters `+ - = . _ : /`.

If you use tags on your virtual machines, you can map up to 10 VMware tags to AWS tags to help with organization. In the [AWS Backup console](https://console.aws.amazon.com/backup/), these can be found under **External resources** > **Virtual machines** > **AWS tags** or **VMware tags**.

### VMware tag mapping
<a name="vmware-tag-mapping"></a>

If you use tags on your virtual machines, you can add up to 10 matching tags in AWS Backup for additional clarity and organization. Mappings apply to any virtual machine on the hypervisor.

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the console, go to the hypervisor's tag mappings (choose **External resources**, then **Hypervisors**, then the hypervisor name, then **Manage mappings**).

1. The last pane, **VMware tag mapping**, contains four text fields into which you can enter your VMware tag information and the corresponding AWS tags. The four fields are **VMware tag category**, **VMware tag name**, **AWS tag key**, and **AWS tag value** (*example: Category = OS; Tag name = Windows; AWS tag key = OS-Windows; AWS tag value = Windows*).

1. After you have entered your preferred values, choose **Add mapping**. If you make an error, you can choose **Remove** to delete the entered information.

1. After adding mapping(s), specify the IAM role you intend to use to apply these AWS tags to the VMware virtual machines.

   The policy listed in [AWS managed policies for AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#aws-managed-policies) contains the needed permissions. You can attach this policy to the role you are using (or have an administrator attach it), or you can create a custom policy for the role.

1. Lastly, choose **Add hypervisor** or **Save**.

Modify the IAM role's trust relationship to add the backup-gateway.amazonaws.com and backup.amazonaws.com services. Without these services, you will likely experience an error when you map tags. To edit the trust relationship for an existing role:

1. Log into the [IAM console](https://console.aws.amazon.com/iamv2/home?region=us-west-2#/home).

1. In the navigation pane of the console, choose **Roles**.

1. Choose the name of the role you wish to modify, then select the **Trust relationships** tab on the details page.

1. Under **Policy Document**, paste the following:


   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Service": [
             "backup.amazonaws.com",
             "backup-gateway.amazonaws.com"
           ]
         },
         "Action": "sts:AssumeRole"
       }
     ]
   }
   ```


1. Choose **Update Trust Policy**.

See [Editing the trust relationship for an existing role](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/edit_trust.html) in the *AWS Directory Service Administration Guide* for more detail.
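If you prefer to update the trust policy from the command line, you can save the trust policy JSON to a file and apply it with the IAM `update-assume-role-policy` operation. The role and file names here are placeholders:

```
# trust-policy.json contains the trust policy JSON for the role
aws iam update-assume-role-policy \
--role-name MyBackupGatewayRole \
--policy-document file://trust-policy.json
```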

### View VMware tag mappings
<a name="w2aac17c19c43c23c15c23"></a>

In the [AWS Backup console](https://console.aws.amazon.com/backup/), choose **External resources**, then **Hypervisors**, then the hypervisor name to view properties for the selected hypervisor. Under the summary pane, there are four tabs, the last of which is **VMware tag mappings**. If you do not yet have mappings, "No VMware tag mappings" is displayed.

From here, you can sync the metadata of virtual machines discovered by the hypervisor, copy mappings to your hypervisors, add AWS tags mapped to the VMware tags to the backup selection of a backup plan, or manage mappings.

In the console, to see which tags are applied to a selected virtual machine, choose **Virtual machines**, then the virtual machine name, then **AWS tags** or **VMware tags**. You can view and manage the tags associated with the virtual machine.

### Assign virtual machines to plan using VMware tag mappings
<a name="w2aac17c19c43c23c15c31"></a>

To assign virtual machines to a backup plan using mapped tags, do the following:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the console, go to **VMware tag mappings** on the hypervisor details page (choose **External resources**, then **Hypervisors**, then the hypervisor name).

1. Select the checkbox next to multiple mapped tags to assign those tags to the same backup plan.

1. Choose **Add to resource assignment**.

1. Choose an existing **Backup plan** from the dropdown list. Alternatively, you can choose **Create backup plan** to create a new backup plan.

1. Choose **Confirm**. This opens the **Assign resources** page with the **Refine selection using tags** fields pre-populated.

### VMware tags using the AWS CLI
<a name="w2aac17c19c43c23c15c37"></a>

AWS Backup uses the API call [PutHypervisorPropertyMappings](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_PutHypervisorPropertyMappings.html) to map on-premises hypervisor entity properties to properties in AWS.

In the AWS CLI, use the operation `put-hypervisor-property-mappings`:

```
aws backup-gateway put-hypervisor-property-mappings \
--hypervisor-arn arn:aws:backup-gateway:region:account:hypervisor/hypervisorId \
--vmware-to-aws-tag-mappings list of VMware to AWS tag mappings \
--iam-role-arn arn:aws:iam::account:role/roleName \
--region AWSRegion \
--endpoint-url URL
```

Here is an example:

```
aws backup-gateway put-hypervisor-property-mappings \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--vmware-to-aws-tag-mappings VmwareCategory=OS,VmwareTagName=Windows,AwsTagKey=OS-Windows,AwsTagValue=Windows \
--iam-role-arn arn:aws:iam::123456789012:role/SyncRole \
--region us-east-1
```

You can also use [GetHypervisorPropertyMappings](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_GetHypervisorPropertyMappings.html) to retrieve property mapping information. In the AWS CLI, use the operation `get-hypervisor-property-mappings`. Here is an example template:

```
aws backup-gateway get-hypervisor-property-mappings \
--hypervisor-arn HypervisorARN \
--region AWSRegion
```

Here is an example:

```
aws backup-gateway get-hypervisor-property-mappings \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--region us-east-1
```

### Sync metadata of virtual machines discovered by the hypervisor in AWS using API, CLI, or SDK
<a name="w2aac17c19c43c23c15c57"></a>

You can sync the metadata of virtual machines. When you do, the VMware tags present on the virtual machine that are part of the mappings will be synced. Also, AWS tags mapped to the VMware tags present on the virtual machine will be applied to the AWS virtual machine resource.

AWS Backup uses the API call [StartVirtualMachinesMetadataSync](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_StartVirtualMachinesMetadataSync.html) to sync the metadata of the virtual machines discovered by the hypervisor. To sync metadata of virtual machines discovered by the hypervisor using the AWS CLI, use the operation `start-virtual-machines-metadata-sync`.

Example template:

```
aws backup-gateway start-virtual-machines-metadata-sync \
--hypervisor-arn HypervisorARN \
--region AWSRegion
```

Example:

```
aws backup-gateway start-virtual-machines-metadata-sync \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--region us-east-1
```

You can also use [GetHypervisor](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_GetHypervisor.html) to retrieve hypervisor information, such as host, state, status of the latest metadata sync, and the last successful metadata sync time. In the AWS CLI, use the operation `get-hypervisor`.

Example template:

```
aws backup-gateway get-hypervisor \
--hypervisor-arn HypervisorARN \
--region AWSRegion
```

Example:

```
aws backup-gateway get-hypervisor \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--region us-east-1
```

For more information, see API documentation [VmwareTag](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_VmwareTag.html) and [ VmwareToAwsTagMapping](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_VmwareToAwsTagMapping.html).

This feature will be available on new gateways deployed after December 15, 2022. For existing gateways, this new capability will be available through an automatic software update on or before January 30, 2023. To update the gateway to the latest version manually, use the AWS CLI command `update-gateway-software-now` ([UpdateGatewaySoftwareNow](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_UpdateGatewaySoftwareNow.html) API).

Example:

```
aws backup-gateway update-gateway-software-now \
--gateway-arn arn:aws:backup-gateway:us-east-1:123456789012:gateway/bgw-12345 \
--region us-east-1
```

## Assigning virtual machines using tags
<a name="assign-vms-tags"></a>

You can assign your virtual machines currently discovered by AWS Backup, along with other AWS Backup resources, by assigning them a tag that you have already assigned to one of your existing backup plans. You can also create a [new backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) and a new [tag-based resource assignment](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html). Backup plans check for newly-assigned resources each time they run a backup job.

**To tag multiple virtual machines with the same tag:**

1. In the left navigation pane, choose **Virtual machines**.

1. Select the checkbox next to **VM name** to choose all your virtual machines. Alternatively, select the checkbox next to the VM names you want to tag.

1. Choose **Add tags**.

1. Type in a tag **Key**.

1. Recommended: type in a tag **Value**.

1. Choose **Confirm**.
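Tagging can also be scripted. The following sketch applies an AWS tag to a single virtual machine with the Backup gateway `tag-resource` operation; the VM ARN and tag key-value pair are example values:

```
# Apply an AWS tag to a virtual machine discovered by Backup gateway
aws backup-gateway tag-resource \
--resource-arn arn:aws:backup-gateway:us-east-1:123456789012:vm/vm-12345 \
--tags Key=backup-plan,Value=daily \
--region us-east-1
```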

## Assigning virtual machines using the Assign resources to plan feature
<a name="assign-vms-to-plan"></a>

You can assign virtual machines currently discovered by AWS Backup to an existing or new backup plan using the **Assign resources to plan** feature.

**To assign virtual machines using the Assign resources to plan feature:**

1. In the left navigation pane, choose **Virtual machines**.

1. Select the checkbox next to **VM name** to choose all your virtual machines. Alternatively, select the checkbox next to multiple VM names to assign them to the same backup plan.

1. Choose **Assignments**, then choose **Assign resources to plan**.

1. Type in a **Resource assignment name**.

1. Choose a resource assignment **IAM role** to create backups and manage recovery points. If you do not have a specific IAM role to use, we recommend the **Default role**, which has the correct permissions.

1. In the **Backup plan** section, choose an existing **Backup plan** from the dropdown list. Alternatively, choose **Create backup plan** to create a new backup plan.

1. Choose **Assign resources**.

1. Optional: Verify your virtual machines are assigned to a backup plan by choosing **View Backup plan**. Then, in the **Resource assignments** section, choose the resource assignment **Name**.
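A comparable assignment can be made programmatically with `aws backup create-backup-selection`. This is a sketch under assumed placeholder values for the backup plan ID, IAM role ARN, and VM ARN:

```
# selection.json (placeholder values):
# {
#   "SelectionName": "vm-assignment",
#   "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
#   "Resources": ["arn:aws:backup-gateway:us-east-1:123456789012:vm/vm-12345"]
# }
aws backup create-backup-selection \
--backup-plan-id 11111111-2222-3333-4444-555555555555 \
--backup-selection file://selection.json \
--region us-east-1
```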

## Assigning virtual machines using the Create group assignment feature
<a name="assign-vms-group-assignment"></a>

Unlike the preceding two resource assignment features for virtual machines, the **Create group assignment** feature not only assigns virtual machines currently discovered by AWS Backup, but also virtual machines discovered in the future in a folder or hypervisor you define.

Also, you do not need to select any checkboxes to use the **Create group assignment** feature.

**To assign virtual machines using the Create group assignment feature:**

1. In the left navigation pane, choose **Virtual machines**.

1. Choose **Assignments**, then choose **Create group assignment**.

1. Type in a **Resource assignment name**.

1. Choose a resource assignment **IAM role** to create backups and manage recovery points. If you do not have a specific IAM role to use, we recommend the **Default role**, which has the correct permissions.

1. In the **Resource group** section, select the **Group type** dropdown menu. Your options are **Folder** or **Hypervisor**.

   1. Choose **Folder** to assign all the virtual machines in a folder on a hypervisor. Select a folder **Group name**, such as `datacenter/vm`, using the dropdown menu. You can also choose to include **Subfolders**.
**Note**  
To support folder-based assignments, AWS Backup tags virtual machines with the folder it finds them in during the discovery process. If you later move a virtual machine to a different folder, AWS Backup cannot update the tag for you due to AWS tagging best practices. This assignment method might result in continuing to take backups of virtual machines you moved out of your assigned folder.

   1. Choose **Hypervisor** to assign all the virtual machines managed by a hypervisor. Select a hypervisor ID **Group name** using the dropdown menu.

1. In the **Backup plan** section, choose an existing **Backup plan** from the dropdown list. Alternatively, choose **Create backup plan** to create a new backup plan.

1. Choose **Create group assignment**.

1. Optional: verify your virtual machines are assigned to a backup plan by choosing **View Backup plan**. In the **Resource assignments** section, choose the resource assignment **Name**.

**Next steps**

To restore a virtual machine, see [Restore a virtual machine using AWS Backup](restoring-vm.md).

# Information about third-party source components for Backup gateway
<a name="bgw-third-party-source"></a>

In this section, you can find information about third party tools and licenses that we depend on to deliver Backup gateway functionality.

The source code for certain third-party source software components that are included with the Backup gateway software is available for download at the following locations:
+ For gateways deployed on VMware ESXi, download [ sources.tgz](https://s3.amazonaws.com/aws-storage-gateway-terms/bgw_backup_vm/third-party-sources.tgz).

This product includes software developed by the OpenSSL project for use in the OpenSSL Toolkit ([https://www.openssl.org/](https://www.openssl.org/)).

This product includes software developed by VMware® vSphere Software Development Kit ([https://www.vmware.com](https://www.vmware.com)).

For the relevant licenses for all dependent third-party tools, see [Third-Party Licenses](https://s3.amazonaws.com/aws-storage-gateway-terms/bgw_backup_vm/third-party-licenses.txt).

## Open-source components for AWS Appliance
<a name="aws-appliance-open-source"></a>

Several third-party tools and licenses are used to deliver functionality for Backup gateway.

Use the following links to download source code for certain open-source software components that are included with AWS Appliance software:
+ For gateways deployed on VMware ESXi, download [sources.tar](https://s3.amazonaws.com/aws-storage-gateway-terms/sources.tar)

This product includes software developed by the OpenSSL project for use in the OpenSSL Toolkit ([https://www.openssl.org/](https://www.openssl.org)). For the relevant licenses for all dependent third-party tools, see [Third-Party Licenses](https://s3.amazonaws.com/aws-storage-gateway-terms/THIRD_PARTY_LICENSES.txt).

# Troubleshoot VM issues
<a name="vm-troubleshooting"></a>

## Incremental Backups / CBT issues and messages
<a name="w2aac17c19c43c27b3"></a>

**Failure message:** `"The VMware Change Block Tracking (CBT) data was invalid during this backup, but the incremental backup was successfully completed with our proprietary change detection mechanism."`

If this message continues, [reset CBT](https://knowledge.broadcom.com/external/article?legacyId=1020128) as directed by VMware.

**Message notes CBT was not turned on or was unavailable:** *"VMware Change Block Tracking (CBT) was not available for this virtual machine, but the incremental backup was successfully completed with our proprietary change mechanism."*

Check to make sure CBT is turned on. To verify if a virtual disk has CBT enabled:

1. Open the vSphere Client and select a powered-off virtual machine.

1. Right-click the virtual machine and navigate to **Edit Settings** > **Options** > **Advanced/General** > **Configuration Parameters**.

1. The option `ctkEnabled` needs to equal `True`.

If it is turned on, ensure you are using up-to-date VMware features. The host must be ESXi 4.0 or later and the virtual machine owning the disks to be tracked must be hardware version 7 or later.

If CBT is turned on (enabled) and the software and hardware are up to date, turn the virtual machine off and then on again. Ensure that CBT is turned on, and then perform the backup again.

## VMware backup failure
<a name="w2aac17c19c43c27b5"></a>

When a VMware backup fails, it may be related to one of the following:

**Failure message:** `"Failed to process backup data. Aborted backup job."` or `"Error opening disk on the virtual machine"`.

**Possible causes:** This error may occur because of a configuration issue, or because the VMware version or disk isn't supported.

**Remedy 1:** Ensure your infrastructure is configured to use a gateway and ensure all required ports are open.

1. Access the [backup gateway console](https://docs.aws.amazon.com/storagegateway/latest/tgw/accessing-local-console.html#MaintenanceConsoleWindowVMware-common). Note this is different from the AWS Backup console.

1. On the **Backup gateway configuration** page enter option **3** to test the network connectivity.

1. If the network test is successful, enter **X**.

1. Return to the Backup gateway configuration page.

1. Enter **7** to access the command prompt.

1. Run the following commands to verify network connectivity:

   `ncport -d <ESXi host> -p 902`

   `ncport -d <ESXi host> -p 443`

**Remedy 2:** Use [Supported VMs](vm-backups.md#supported-vms) versions.

**Remedy 3:** If a gateway appliance is configured with incorrect DNS servers, then the backup fails. To verify the DNS configuration, complete the following steps:

1. Access the [backup gateway console](https://docs.aws.amazon.com/storagegateway/latest/tgw/accessing-local-console.html#MaintenanceConsoleWindowVMware-common).

1. On the **Backup gateway configuration** page enter option **2** to navigate to the network configuration.

1. In **Network configuration**, enter **7** to view the DNS configuration.

1. Review the DNS server IP addresses. If the DNS server IP addresses are incorrect, exit the prompt to return to **Network configuration**.

1. In **Network Configuration**, enter **6** to edit the DNS configuration.

1. Enter the correct DNS server IP addresses. Then, enter **X** to complete your network configuration.

To obtain more information about your hypervisor, such as errors and network configuration and connection, see [Editing a hypervisor configuration](working-with-hypervisors.md#edit-hypervisor) to configure the hypervisor to integrate with Amazon CloudWatch Logs.

## Backup failures from network connection issues
<a name="w2aac17c19c43c27b7"></a>

**Failure message:** `"Failed to upload backup during data ingestion. Aborted backup job."` or `"Cloud network request timed out during data ingestion"`.

**Possible causes:** This error can occur if the network connection is insufficient to handle data uploads. If network bandwidth is low, the link between the VM and AWS Backup can become congested and cause backups to fail.

Required network bandwidth depends on several factors, including the size of the VM, the incremental data generated for each VM backup, the backup window, and restore requirements.

**Remedy:** As a best practice, we recommend a minimum upload bandwidth of 1000 Mbps for on-premises VMs connected to AWS Backup. After you confirm the bandwidth, retry the backup job.
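To sanity-check whether a link can finish a backup within its window, you can estimate the upload time from the data size and available bandwidth. A minimal sketch; the 80% utilization factor is an assumption for protocol overhead and competing traffic, not an AWS figure:

```python
def upload_hours(data_gb, bandwidth_mbps, utilization=0.8):
    """Estimate hours to upload data_gb over a link of bandwidth_mbps.

    utilization is an assumed fraction of nominal bandwidth that backup
    traffic actually achieves (protocol overhead, other traffic).
    """
    bits = data_gb * 8 * 1000**3                      # decimal GB to bits
    effective_bps = bandwidth_mbps * 1000**2 * utilization
    return bits / effective_bps / 3600

# A 500 GB transfer over a 1000 Mbps link at 80% utilization:
print(round(upload_hours(500, 1000), 2))
```

If the estimated hours exceed your backup completion window, either increase bandwidth or widen the window.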

## Aborted backup job
<a name="w2aac17c19c43c27b9"></a>

**Failure message:** `"Failed to create backup during snapshot creation. Aborted backup job."`

**Possible cause:** The VMware host where the gateway appliance resides may have an issue.

**Remedy:** Check the configuration of your VMware host and review it for issues. For additional information, see [Editing a hypervisor configuration](working-with-hypervisors.md#edit-hypervisor).

## No available gateways
<a name="w2aac17c19c43c27c11"></a>

**Failure message:** `"No gateways available to work on job."`

**Possible cause:** All connected gateways are busy with other jobs. Each gateway has a limit of four concurrent jobs (backup or restore).

For **remedies**, see the following section for steps to increase the number of gateways and to increase the backup plan window time.

## VMware backup job failure
<a name="w2aac17c19c43c27c13"></a>

**Failure message:** `"Abort signal detected"`

**Possible causes:**
+ **Low network bandwidth**: Insufficient network bandwidth can prevent backups from finishing within the completion window. When a backup job requires more bandwidth than is available, it can fail with the "Abort signal detected" error.
+ **Too few backup gateways**: If the number of backup gateways is not sufficient to handle the backup rotation for all configured VMs, queued backup jobs can fail.
+ **Backup plan completion window is too short**: Jobs that cannot start and finish within the window fail.

**Remedies:**

**Increase bandwidth:** Consider increasing the network capacity between AWS and the on-premises environment. This provides more bandwidth for the backup process, allowing data to transfer smoothly without triggering the error. It is recommended that you have at least 100-Mbps bandwidth to AWS to back up on-premises VMware VMs using AWS Backup.

If a bandwidth rate limit is configured for the backup gateway, it can restrict the flow of data and lead to backup failures. Increasing the bandwidth rate limit to ensure sufficient data transfer capacity may help reduce failures. This adjustment can mitigate the occurrence of the "Abort Signal Detected" error. For more information, see [Backup gateway Bandwidth Throttling](working-with-gateways.md#backup-gateway-bandwidth-throttling).

**Increase the number of Backup gateways:** A single backup gateway can process up to four backup and restore jobs at a time. Additional jobs queue and wait for the gateway to free up until the backup start window passes. If the backup window passes and the queued jobs have not started, those backup jobs fail with "Abort signal detected". You can increase the number of backup gateways to reduce the number of failed jobs. See [Working with gateways](working-with-gateways.md) for more detail.
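Given the four-concurrent-job limit above, you can estimate how many gateways a rotation needs. A rough sketch that treats the peak number of simultaneous jobs as the worst case:

```python
import math

MAX_JOBS_PER_GATEWAY = 4  # per-gateway concurrent job limit described above

def gateways_needed(peak_concurrent_jobs):
    """Minimum gateways so no job queues past the backup start window."""
    return max(1, math.ceil(peak_concurrent_jobs / MAX_JOBS_PER_GATEWAY))

print(gateways_needed(10))  # 10 simultaneous VM backups need 3 gateways
```

If your VMs back up on a staggered schedule, the peak concurrency (and so the gateway count) may be lower than the total VM count.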

**Increase backup plan window time:** You can increase the **complete within duration** of the backup window in your backup plan. See [Backup plan options and configuration](plan-options-and-configuration.md) for more detail.

For help resolving these issues, see [AWS Knowledge Center](https://repost.aws/knowledge-center/backup-troubleshoot-vmware-backups).

# Create Windows VSS backups
<a name="windows-backups"></a>

With AWS Backup, you can back up and restore VSS (Volume Shadow Copy Service)-enabled Windows applications running on Amazon EC2 instances. If the application has a VSS writer registered with Windows VSS, AWS Backup creates a snapshot that is consistent for that application.

You can perform consistent restores, while using the same managed backup service that is used to protect other AWS resources. With application-consistent Windows backups on EC2, you get the same consistency settings and application awareness as traditional backup tools.

**Note**  
AWS Backup only supports application-consistent backups of resources running on Amazon EC2, specifically backup scenarios where application data can be restored by replacing an existing instance with a new instance created from the backup. Not all instance types or applications are supported for Windows VSS backups. 

For more information, see [Create VSS based snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-vss-snaps.html) in the *Amazon EC2 User Guide*.

To back up and restore VSS-enabled Windows resources running on Amazon EC2, complete the following prerequisite tasks. For instructions, see [Prerequisites to create Windows VSS based EBS snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/application-consistent-snapshots-prereqs.html) in the *Amazon EC2 User Guide*.

1. Download, install, and configure the SSM agent in AWS Systems Manager. This step is required. For instructions, see [Working with SSM agent on EC2 instances for Windows Server](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-windows.html) in the *AWS Systems Manager User Guide*.

1. Add an IAM policy to the IAM role and attach the role to the Amazon EC2 instance before you take the Windows VSS (Volume Shadow Copy Service) backup. For instructions, see [Use an IAM managed policy to grant permissions for VSS based snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vss-iam-reqs.html) in the *Amazon EC2 User Guide*. For an example of the IAM policy, see [Managed policies for AWS Backup](security-iam-awsmanpol.md).

1. [Download and install VSS components](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/application-consistent-snapshots-getting-started.html) to the Windows instance on Amazon EC2.

1. Enable VSS in AWS Backup:

   1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

   1. On the dashboard, choose the type of backup you want to create, either **Create an on-demand backup** or **Manage Backup plans**. Provide the information needed for your backup type.

   1. When you're assigning resources, choose **EC2**. Windows VSS backup is currently supported for EC2 instances only. 

   1. In the **Advanced settings** section, choose **Windows VSS**. This enables you to take application-consistent Windows VSS backups. 

   1. Create your backup.

A backup job with a status of `Completed` does not guarantee that the VSS portion is successful; VSS inclusion is made on a best-effort basis. Proceed with the following steps to determine if a backup is application-consistent, crash-consistent, or failed:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Under **My account** in the left navigation, choose **Jobs**.

1. A status of `Completed` indicates a successful job that is application-consistent (VSS).

   A status of `Completed with issues` indicates that the VSS operation has failed, so only a crash-consistent backup has been successful. This status will also have a popover message `"Windows VSS Backup Job Error encountered, trying for regular backup"`. 

   If the backup was unsuccessful, the status will be `Failed`.

1. To view additional details of the backup job, choose the individual job. For example, the details may read `Windows VSS Backup attempt failed because of timeout on VSS enabled snapshot creation`.

A successful VSS-enabled backup job that targets a non-Windows instance, or a Windows instance without the VSS components installed, results in a crash-consistent backup without VSS.
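If you check jobs programmatically instead of in the console, the statuses above map to consistency levels. A hedged sketch of that mapping; the function name and the status strings mirror the console wording described above and are illustrative, not an AWS API:

```python
def vss_consistency(status, status_message=""):
    """Map a backup job status (console wording) to a consistency level."""
    if status == "Completed":
        return "application-consistent"   # VSS portion succeeded
    if status == "Completed with issues" or "trying for regular backup" in status_message:
        return "crash-consistent"         # VSS failed; regular snapshot succeeded
    if status == "Failed":
        return "failed"
    return "unknown"                      # e.g. still running
```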

## Unsupported Amazon EC2 instances
<a name="unsupported-vss-instances"></a>

The following Amazon EC2 instance types are not supported for VSS-enabled Windows backups because these smaller instance types might not complete the backup successfully.
+ t3.nano
+ t3.micro
+ t3a.nano
+ t3a.micro
+ t2.nano
+ t2.micro

# Backup and tag copy
<a name="recov-point-create-a-copy"></a>

You can copy backups to multiple AWS accounts or AWS Regions on demand or automatically as part of a scheduled backup plan for most resource types, though backups in cold storage or archive tiers cannot be copied. See [Feature availability by resource](backup-feature-availability.md#features-by-resource) and [Encryption for a backup copy to a different account or AWS Region](encryption.md#copy-encryption) for details.

Some resource types have both continuous backup capability and cross-Region and cross-account copy available. When a cross-Region or cross-account copy of a continuous backup is made, the copied recovery point (backup) becomes a snapshot (periodic) backup (not available for all resource types that support both backup types). Depending on the [resource type](backup-feature-availability.md#features-by-resource), the snapshots may be an incremental copy or a full copy. PITR (Point-in-Time Restore) is not available for these copies.

**Important**  
Copies retain their source configuration, including creation dates and retention period. The creation date refers to when the source was created, not when the copy was created. You can override the retention period.  
The configuration of the source backup being copied overrides its copy’s expiration setting if the copy retention period is set to **Always** in the AWS Backup console (or [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CopyAction.html#Backup-Type-CopyAction-Lifecycle](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CopyAction.html#Backup-Type-CopyAction-Lifecycle) value is set to `-1` in the API request); that is, a copy with a retention setting set to never expire will retain its source recovery point's expiration date.  
If you want your backup copies to never expire, either set your source backups to never expire or specify your copy to expire 100 years after its creation.

## Copy job retry
<a name="backup-copy-retry"></a>

AWS Backup implements the following retry strategy for copy jobs: If AWS Backup encounters any system errors, the copy job enters a retry phase that lasts for 2 hours. During this time, the copy job status remains in the `CREATED` state while the system periodically attempts to initiate the job. If the job successfully starts within this window, it transitions to the `RUNNING` state.

If issues persist beyond the 2-hour retry period, the AWS Backup service team is automatically alerted. The team then investigates and addresses any underlying problems. After the issue is resolved, they manually retry the copy request, ensuring that the copy jobs are completed as requested.

The copy job retry process differs from backup job retry process, which uses a defined start window with regular retry attempts until either success or expiration. The copy job mechanism provides an additional layer of reliability by incorporating direct service team intervention for persistent issues.

## Copy job concurrency
<a name="backup-copy-concurrency"></a>

Only one backup or copy job can run at a time for a given resource. Additional copy jobs for the same resource remain in `CREATED` status until the running job completes. For more information about concurrency limits, see [AWS Backup quotas](https://docs.aws.amazon.com/aws-backup/latest/devguide/aws-backup-limits.html).

Copy jobs for large resources can take several hours to complete, which can result in additional copy jobs waiting in `CREATED` status. For resource types that support incremental copies, a short retention period can lead to situations where the only recovery point in the destination vault expires. If no recovery point exists in the destination vault, then the next copy must be a full copy instead of an incremental copy. To avoid this, set the copy retention period to at least one week. For more information, see [Metering, costs, and billing](https://docs.aws.amazon.com/aws-backup/latest/devguide/metering-and-billing.html). To determine which resource types support incremental copies, see [Feature availability by resource](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource).

If copy jobs continue to queue in `CREATED` status for the same resource, reduce the copy frequency.
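The expiry situation described above comes down to simple arithmetic: if every recovery point in the destination vault can expire before the next copy arrives, there is no base for an incremental copy. A simplified sketch, assuming evenly spaced copy jobs:

```python
def next_copy_must_be_full(copy_interval_days, retention_days):
    """True if a destination recovery point can expire before the next
    copy arrives, leaving no base for an incremental copy."""
    return retention_days < copy_interval_days

# Daily copies with one-week retention always leave a base for incrementals:
assert not next_copy_must_be_full(copy_interval_days=1, retention_days=7)
```

This is why the guidance above recommends a copy retention period of at least one week.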

**Topics**
+ [Copy job retry](#backup-copy-retry)
+ [Copy job concurrency](#backup-copy-concurrency)
+ [Creating backup copies across AWS Regions](cross-region-backup.md)
+ [Creating backup copies across AWS accounts](create-cross-account-backup.md)
+ [Copy tags onto backups](tags-on-backups.md)

# Creating backup copies across AWS Regions
<a name="cross-region-backup"></a>

Using AWS Backup, you can copy backups to multiple AWS Regions on demand or automatically as part of a scheduled backup plan. Cross-Region replication is particularly valuable if you have business continuity or compliance requirements to store backups a minimum distance away from your production data. For a video tutorial, see [Managing cross-Region copies of backups](https://www.youtube.com/watch?v=qMN18Lpj3PE).

When you copy a backup to a new AWS Region for the first time, AWS Backup copies the backup in full. In general, if a service supports incremental backups, subsequent copies of that backup in the same AWS Region will be incremental. AWS Backup will re-encrypt your copy using the customer managed key of your destination vault.

An exception is Amazon EBS, where changing the encryption status of a snapshot during a copy operation [results in a full (not incremental) copy](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-copy-snapshot.html#creating-encrypted-snapshots).

**Requirements**
+ Most AWS Backup-supported resources support cross-Region backup. For specifics, see [Feature availability by resource](backup-feature-availability.md#features-by-resource).
+ Most AWS Regions support cross-Region backup. For specifics, see [Feature availability by AWS Region](backup-feature-availability.md#features-by-region).
+ AWS Backup does not support cross-Region copies for storage in cold tiers.

## Cross-Region copy encryption
<a name="cross-region-copy-encryption"></a>

See [Encryption for a backup copy to a different account or AWS Region](encryption.md#copy-encryption) for details on how encryption works for copy jobs.

## Cross-Region copy considerations with specific resources
<a name="cross-region-considerations"></a>

**Amazon RDS**  
AWS Backup does not pass the option group when performing a cross-Region copy. Instead, AWS Backup copies the default option group, even if you have configured a custom option group.

If your custom option group uses persistent options, the cross-Region copy job fails unless the destination Region has the same option group as the source Region. In this case, AWS Backup still copies the default option group.

If you attempt a cross-Region copy without a matching option group in the target Region, the copy job fails with an error message such as "The snapshot requires a target option group with the following options: ...."

## Performing on-demand cross-Region backup
<a name="on-demand-crb"></a>

**To copy an existing backup on-demand**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Choose **Backup vaults**.

1. Choose the vault that contains the recovery point you want to copy.

1. In the **Backups** section, select a recovery point to copy.

1. Using the **Actions** dropdown button, choose **Copy**.

1. Enter the following values:  
**Copy to destination**  
Choose the destination AWS Region for the copy. You can add a new copy rule per copy to a new destination.  
**Destination Backup vault**  
Choose the destination backup vault for the copy.  
**Transition to cold storage**  
Choose when to transition the backup copy to cold storage. Backups transitioned to cold storage must be stored there for a minimum of 90 days. This value cannot be changed after a copy has transitioned to cold storage.   
To see the list of resources that you can transition to cold storage, see the "Lifecycle to cold storage" section of the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table. The cold storage expression is ignored for other resources.  
**Retention period**  
Specify the number of days after creation that the copy is deleted. This value must be greater than 90 days beyond the **Transition to cold storage** value.  
**IAM role**  
Choose the IAM role that AWS Backup will use when creating the copy. The role must also have AWS Backup listed as a trusted entity, which enables AWS Backup to assume the role. If you choose **Default** and the AWS Backup default role is not present in your account, one will be created for you with the correct permissions.

1. Choose **Copy**.
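The lifecycle constraint above (retention must exceed the cold storage transition by at least 90 days, the minimum time a backup stays in cold storage) can be expressed as a small validation helper. A sketch mirroring the console rule, not an AWS API:

```python
MIN_COLD_STORAGE_DAYS = 90  # minimum time a backup must stay in cold storage

def lifecycle_is_valid(cold_storage_after_days, delete_after_days):
    """Check a copy lifecycle against the rules described above.

    None for cold_storage_after_days means no cold storage transition;
    None for delete_after_days means the copy never expires.
    """
    if cold_storage_after_days is None:
        return delete_after_days is None or delete_after_days > 0
    if delete_after_days is None:   # never expires: always valid
        return True
    return delete_after_days >= cold_storage_after_days + MIN_COLD_STORAGE_DAYS
```

For example, transitioning to cold storage after 30 days requires a retention period of at least 120 days.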

## Scheduling cross-Region backup
<a name="scheduled-crb"></a>

You can use a scheduled backup plan to copy backups across AWS Regions.<a name="copy-with-backup-plan"></a>

**To copy a backup using a scheduled backup plan**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In **My account**, choose **Backup plans**, and then choose **Create Backup plan**.

1. On the **Create Backup plan** page, choose **Build a new plan**.

1. For **Backup plan name**, enter a name for your backup plan.

1. In the **Backup rule configuration** section, add a backup rule that defines a backup schedule, backup window, and lifecycle rules. You can add more backup rules later.

   1. For **Backup rule name**, enter a name for your rule.

   1. For **Backup vault**, choose a vault from the list. Recovery points for this backup will be saved in this vault. You can create a new backup vault.

   1. For **Backup frequency**, choose how often you want to take backups.

   1. For services that support PITR, if you want this feature, choose **Enable continuous backups for point-in-time recovery (PITR)**. For a list of services that support PITR, see that section of the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table.

   1. For **Backup window**, choose **Use backup window defaults - *recommended***. You can customize the backup window.

   1. For **Copy to destination**, choose the destination AWS Region for your backup copy. Your backup will be copied to this Region. You can add a new copy rule per copy to a new destination. Then enter the following values:  
**Copy to another account's vault**  
Do not toggle this option. To learn more about cross-account copy, see [Creating backup copies across AWS accounts](https://docs.aws.amazon.com/aws-backup/latest/devguide/create-cross-account-backup.html)  
**Destination Backup vault**  
Choose the backup vault in the destination Region where AWS Backup will copy your backup.  
If you would like to create a new backup vault for cross-Region copy, choose **Create new Backup vault**. Enter the information in the wizard. Then choose **Create Backup vault**.

1. Choose **Create plan**.

# Creating backup copies across AWS accounts
<a name="create-cross-account-backup"></a>

Using AWS Backup, you can back up to multiple AWS accounts on demand or automatically as part of a scheduled backup plan. Use a cross-account backup if you want to securely copy your backups to one or more AWS accounts in your organization for operational or security reasons. If your original backup is inadvertently deleted, you can copy the backup from its destination account to its source account, and then start the restore. Before you can do this, you must have two accounts that belong to the same organization in the AWS Organizations service. For more information, see [ Tutorial: Creating and configuring an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_tutorials_basic.html) in the *Organizations User Guide*.

In your destination account, you must create a backup vault. Then, you assign a customer managed key to encrypt backups in the destination account, and a resource-based access policy to allow AWS Backup to access the resources you would like to copy. In the source account, if your resources are encrypted with a customer managed key, you must share this customer managed key with the destination account. You can then create a backup plan and choose a destination account that is part of your organizational unit in AWS Organizations. 

When you copy a backup to cross-account for the first time, AWS Backup copies the backup in full. In general, if a service supports incremental backups, subsequent copies of that backup in the same account are incremental. AWS Backup re-encrypts your copy using the customer managed key of your destination vault.

**Requirements**
+ Before you manage resources across multiple AWS accounts in AWS Backup, your accounts must belong to the same organization in the AWS Organizations service.
+ Most resources supported by AWS Backup support cross-account backup. For specifics, see [Feature availability by resource](backup-feature-availability.md#features-by-resource).
+ Most AWS Regions support cross-account backup. For specifics, see [Feature availability by AWS Region](backup-feature-availability.md#features-by-region).
+ AWS Backup does not support cross-account copies for storage in cold tiers.

## Setting up cross-account backup
<a name="prereq-cab"></a>

**What do you need to create cross-account backups?**
+  **A source account**

  The source account is the account where your production AWS resources and primary backups reside. 

  The source account user initiates the cross-account backup operation. The source account user or role must have appropriate API permissions to initiate the operation. Appropriate permissions might be the AWS managed policy `AWSBackupFullAccess`, which enables full access to AWS Backup operations, or a customer managed policy that allows actions such as `ec2:ModifySnapshotAttribute`. For more information about policy types, see [AWS Backup Managed Policies](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html).
+  **A destination account**

  The destination account is the account where you would like to keep a copy of your backup. You can choose more than one destination account. The destination account must be in the same organization as the source account in AWS Organizations. 

  You must “Allow” the access policy `backup:CopyIntoBackupVault` for your destination backup vault. The absence of this policy will deny attempts to copy into the destination account.
+  **A management account in AWS Organizations**

  The management account is the primary account in your organization, as defined by AWS Organizations, that you use to opt-in to cross-account backup across your AWS accounts. Before your organization can start with cross-account backups, you must enable cross-account backup in the AWS Backup console or through the [UpdateGlobalSettings](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_UpdateGlobalSettings.html) API.

For information about security, see [Security considerations for cross-account backup](#security-considerations-cab).

To use cross-account backup, you must enable the cross-account backup feature. Then, you must "Allow" the access policy `backup:CopyIntoBackupVault` into your destination backup vault.

Amazon EC2 offers [EC2 Allowed AMIs](https://docs.aws.amazon.com//AWSEC2/latest/UserGuide/ec2-allowed-amis.html). If this setting is enabled in your account, add your source account ID to your allowlist. Otherwise, the copy operation will fail with an error message, such as "Source AMI not found in Region".

**Enable cross-account backup**

1.  Log in using your AWS Organizations management account credentials. Cross-account backup can only be enabled or disabled using these credentials.

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In **My account**, choose **Settings**.

1. For **Cross-account backup**, choose **Enable**.

1. In **Backup vaults**, choose your destination vault.

   For cross-account copy, the source vault and the destination vault are in different accounts. Switch to the account that owns the destination vault, as necessary.

1. In the **Access policy** section, "Allow" `backup:CopyIntoBackupVault`. For an example, choose **Add permissions** and then **Allow access to a Backup vault from organization**. Any cross-account action other than `backup:CopyIntoBackupVault` will be rejected.

1.  Now, any account in your organization can share the contents of their backup vault with any other account in your organization. For more information, see [Configuring backup vault access for cross-account copies](#share-vault-cab). To limit which accounts can receive the contents of other accounts' backup vaults, see [Configuring your account as a destination account](#designate-destination-accounts-cab).

## Scheduling cross-account backup
<a name="scheduled-cab"></a>

You can use a scheduled backup plan to copy backups across AWS accounts.<a name="copy-with-backup-plan"></a>

**To copy a backup using a scheduled backup plan**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In **My account**, choose **Backup plans**, and then choose **Create Backup plan**.

1. On the **Create Backup plan** page, choose **Build a new plan**.

1. For **Backup plan name**, enter a name for your backup plan.

1. In the **Backup rule configuration** section, add a backup rule that defines a backup schedule, backup window, and lifecycle rules. You can add more backup rules later.

   For **Rule name**, enter a name for your rule.

1. In the **Schedule** section under **Frequency**, choose how often you want the backup to be taken.

1. For **Backup window**, choose **Use backup window defaults** (recommended). You can customize the backup window.

1. For **Backup vault**, choose a vault from the list. Recovery points for this backup will be saved in this vault. You can create a new backup vault.

1. In the **Generate copy - optional** section, enter the following values:  
**Destination Region**  
Choose the destination AWS Region for your backup copy. Your backup will be copied to this Region. You can add a new copy rule per copy to a new destination.  
**Copy to another account's vault**  
Toggle to choose this option. The option turns blue when selected. The **External vault ARN** option will appear.  
**External vault ARN**  
Enter the Amazon Resource Name (ARN) of the destination account. The ARN is a string that contains the account ID and its AWS Region. AWS Backup will copy the backup to the destination account's vault. The **Destination region** list automatically updates to the Region in the external vault ARN.   
For **Allow Backup vault access**, choose **Allow**. Then choose **Allow** in the wizard that opens.   
AWS Backup needs permissions to access the external account to copy the backup to the specified vault. The wizard shows the following example policy that provides this access.    

   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
       {
         "Sid": "AllowAccountCopyIntoBackupVault",
         "Effect": "Allow",
         "Action": "backup:CopyIntoBackupVault",
         "Resource": "*",
         "Principal": {
           "AWS": "arn:aws:iam::123456789012:root"
         }
       }
     ]
   }
   ```  
**Transition to cold storage**  
Choose when to transition the backup copy to cold storage and when to expire (delete) the copy. Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. This value cannot be changed after a copy has transitioned to cold storage.   
To see the list of resources that you can transition to cold storage, see the "Lifecycle to cold storage" section of the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table. The cold storage expression is ignored for other resources.  
**Expire** specifies the number of days after creation that the copy is deleted. This value must be greater than 90 days beyond the **Transition to cold storage** value.  
When backups expire and are marked for deletion as part of your lifecycle policy, AWS Backup deletes the backups at a randomly chosen point over the following 8 hours. This window helps ensure consistent performance.

1. Choose **Tags added to recovery points** to add tags to your recovery points. 

1. For **Advanced backup settings**, choose **Windows VSS** to enable application-aware snapshots for the selected third-party software running on EC2.

1. Choose **Create plan**.
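The vault access policy shown in the wizard can also be generated programmatically, for example before attaching it to the destination vault. A minimal sketch; the source account ID is a placeholder:

```python
import json

def vault_copy_policy(source_account_id):
    """Build the destination-vault policy that allows copies from
    source_account_id, matching the example shown in the console wizard."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowAccountCopyIntoBackupVault",
            "Effect": "Allow",
            "Action": "backup:CopyIntoBackupVault",
            "Resource": "*",
            "Principal": {"AWS": f"arn:aws:iam::{source_account_id}:root"},
        }],
    }, indent=2)

print(vault_copy_policy("123456789012"))
```

As noted earlier, any cross-account action other than `backup:CopyIntoBackupVault` will be rejected, so the policy grants only that action.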

## Performing on-demand cross-account backup
<a name="on-demand-cab"></a>

You can copy a backup to a different AWS account on demand.

**To copy a backup on-demand**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. For **My account**, choose **Backup vaults** to see all your backup vaults listed. You can filter by the backup vault name or tag.

1. Choose the **Recovery point ID** of the backup you want to copy.

1. Choose **Copy**.

1. Expand **Backup details** to see information about the recovery point you are copying.

1. In the **Copy configuration** section, choose an option from the **Destination region** list.

1. Choose **Copy to another account's vault**. The option turns blue when selected.

1. Enter the Amazon Resource Name (ARN) of the destination account. The ARN is a string that contains the account ID and its AWS Region. AWS Backup will copy the backup to the destination account's vault. The **Destination region** list automatically updates to the Region in the external vault ARN. 

1. For **Allow Backup vault access**, choose **Allow**. Then choose **Allow** in the wizard that opens. 

   To create the copy, AWS Backup needs permissions to access the source account. The wizard shows an example policy that provides this access. This policy is shown following.

------
#### [ JSON ]


   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
       {
         "Sid": "AllowAccountCopyIntoBackupVault",
         "Effect": "Allow",
         "Action": "backup:CopyIntoBackupVault",
         "Resource": "*",
         "Principal": {
           "AWS": "arn:aws:iam::123456789012:root"
         }
       }
     ]
   }
   ```

------

1. For **Transition to cold storage**, choose when to transition the backup copy to cold storage and when to expire (delete) the copy. Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. This value cannot be changed after a copy has transitioned to cold storage. 

   To see the list of resources that you can transition to cold storage, see the "Lifecycle to cold storage" section of the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table. The cold storage expression is ignored for other resources.

   **Expire** specifies the number of days after creation that the copy is deleted. This value must be at least 90 days greater than the **Transition to cold storage** value.

1. For **IAM role**, specify the IAM role (such as the default role) that has the permissions to make your backup available for copying. The act of copying is performed by your destination account's service-linked role. 

1. Choose **Copy**. Depending on the size of the resource you are copying, this process could take several hours to complete. When the copy job completes, you will see the copy in the **Copy jobs** tab in the **Jobs** menu.

## Encryption keys and cross-account copies
<a name="backup-cab-encryption"></a>

See [Encryption for a backup copy to a different account or AWS Region](encryption.md#copy-encryption) for details on how encryption works for copy jobs.

For additional help troubleshooting cross-account copy failures, see the [AWS Knowledge Center](https://repost.aws/knowledge-center/backup-troubleshoot-cross-account-copy).

## Restoring a backup from one AWS account to another
<a name="restore-cab"></a>

AWS Backup does not support restoring resources directly from one AWS account to another. However, you can copy a backup to a different account and then restore it there. For example, you can't restore a backup from account A to account B, but you can copy the backup from account A to account B, and then restore it in account B.

Before restoring a backup from one account to another, ensure that the destination account has the service-linked role (SLR) for the resource type you are restoring. If the destination account has never used that AWS service before, the SLR may not exist. You can create the SLR by using the service in the destination account, which automatically creates it. Once the SLR requirement is addressed, restoring a backup from one account to another is a two-step process:

**To restore a backup from one account to another**

1. Copy the backup from the source AWS account to the account you want to restore to. For instructions, see [Setting up cross-account backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/create-cross-account-backup.html#prereq-cab).

1. Use the appropriate instructions for your resource to restore the backup.

## Configuring backup vault access for cross-account copies
<a name="share-vault-cab"></a>

AWS Backup allows you to configure your backup vault to grant access to other AWS accounts, allowing them to copy recovery points to your vault for cross-account backup. This access configuration uses resource-based policies to permit specific accounts to perform backup operations.

**Note**  
This is different from AWS Backup vault sharing for Logically Air Gapped (LAG) vaults, which uses AWS Resource Access Manager (RAM) to share vault resources directly.

You can grant vault access to one or multiple accounts, or your entire organization in AWS Organizations. You can configure a destination backup vault with access for a source AWS Account, user, or IAM role.

**To configure vault access for a destination Backup vault**

1. Choose **AWS Backup**, and then choose **Backup vaults**.

1.  Choose the name of the backup vault that you want to configure access for.

1. In the **Access policy** pane, choose the **Add permissions** dropdown. 

1.  Choose **Allow account level access to a Backup vault**. Or, you can choose to allow organization-level or role-level access. 

1. Enter the **AccountID** of the account you'd like to grant access to this destination backup vault.

1.  Choose **Save policy**. 

You can use IAM policies to configure vault access.
<a name="share-vault-with-account-iam"></a>
**Configure destination backup vault access for an AWS account or IAM role**  
The following policy configures vault access for account number `444455556666` and the IAM role `SomeRole` in account number `111122223333`.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Principal":{
        "AWS":[
          "arn:aws:iam::444455556666:root",
          "arn:aws:iam::111122223333:role/SomeRole"
        ]
      },
      "Action":"backup:CopyIntoBackupVault",
      "Resource":"*"
    }
  ]
}
```

------
<a name="share-vault-with-organizational-unit"></a>
**Share a destination backup vault with an organizational unit in AWS Organizations**  
The following policy shares a backup vault with organizational units using their `PrincipalOrgPaths`.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Principal":"*",
      "Action":"backup:CopyIntoBackupVault",
      "Resource":"*",
      "Condition":{
        "ForAnyValue:StringLike":{
          "aws:PrincipalOrgPaths":[
            "o-a1b2c3d4e5/r-f6g7h8i9j0example/ou-def0-awsbbbbb/",
            "o-a1b2c3d4e5/r-f6g7h8i9j0example/ou-def0-awsbbbbb/ou-jkl0-awsddddd/*"
          ]
        }
      }
    }
  ]
}
```

------
<a name="share-vault-with-entire-organization"></a>
**Share a destination backup vault with an organization in AWS Organizations**  
The following policy shares a backup vault with the organization with `PrincipalOrgID` "o-a1b2c3d4e5".

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Principal":"*",
      "Action":"backup:CopyIntoBackupVault",
      "Resource":"*",
      "Condition":{
        "StringEquals":{
          "aws:PrincipalOrgID":[
            "o-a1b2c3d4e5"
          ]
        }
      }
    }
  ]
}
```

------

## Configuring your account as a destination account
<a name="designate-destination-accounts-cab"></a>

When you first enable cross-account backups using your AWS Organizations management account, any user of a member account can configure their account to be a destination account. We recommend setting one or more of the following service control policies (SCPs) in AWS Organizations to limit your destination accounts. To learn more about attaching service control policies to AWS Organizations nodes, see [Attaching and detaching service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_attach.html).
<a name="limit-destination-accounts-using-tags"></a>
**Limit destination accounts using tags**  
When attached to an AWS Organizations root, OU, or individual account, this policy limits the copy destinations from that root, OU, or account to only those accounts with backup vaults you’ve tagged `DestinationBackupVault`. The permission `"backup:CopyIntoBackupVault"` controls how a backup vault behaves and, in this case, which destination backup vaults are valid. Use this policy, along with the corresponding tag applied to approved destination vaults, to limit the destinations of cross-account copies to only approved accounts and backup vaults. 

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Deny",
      "Action":"backup:CopyIntoBackupVault",
      "Resource":"*",
      "Condition":{
        "Null":{
          "aws:ResourceTag/DestinationBackupVault":"true"
        }
      }
    }
  ]
}
```

------
<a name="limit-destination-accounts-using-names"></a>
**Limit destination accounts using account numbers and vault names**  
When attached to an AWS Organizations root, OU, or individual account, this policy limits copies originating from that root, OU, or account to only two destination accounts. The permission `"backup:CopyFromBackupVault"` controls how a recovery point in the backup vault behaves, and, in this case, the destinations where you can copy that recovery point to. The source vault will only permit copies to the first destination account (112233445566) if one or more destination backup vault names begin with `cab-`. The source vault will only permit copies to the second destination account (123456789012) if the destination is the single backup vault named `fort-knox`.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCopyFromBackupVault",
      "Effect": "Deny",
      "Action": "backup:CopyFromBackupVault",
      "Resource": "arn:aws:ec2:*:*:snapshot/*",
      "Condition": {
        "ForAllValues:ArnNotLike": {
          "backup:CopyTargets": [
            "arn:aws:backup:*:112233445566:backup-vault:cab-*",
            "arn:aws:backup:us-east-1:123456789012:backup-vault:fort-knox"
          ]
        }
      }
    }
  ]
}
```

------
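As an illustration, the deny statement above blocks a copy unless every destination vault matches one of the approved ARN patterns. The following Python sketch approximates that matching with `fnmatchcase` (a simplification of IAM's `ArnLike` evaluation, which segments the ARN and applies additional rules; `copy_permitted` is a hypothetical helper, not an AWS API):

```python
from fnmatch import fnmatchcase

# Vault ARN patterns from the example policy above.
APPROVED_PATTERNS = [
    "arn:aws:backup:*:112233445566:backup-vault:cab-*",
    "arn:aws:backup:us-east-1:123456789012:backup-vault:fort-knox",
]

def copy_permitted(destination_arns):
    """Approximate the ForAllValues:ArnNotLike deny: the copy escapes the
    deny only if every destination vault ARN matches an approved pattern."""
    return all(
        any(fnmatchcase(arn, pattern) for pattern in APPROVED_PATTERNS)
        for arn in destination_arns
    )
```

A copy targeting a `cab-` vault in account 112233445566 would pass, while a copy that includes any unapproved vault would be denied.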
<a name="limit-destination-accounts-using-organizational-units"></a>
**Limit destination accounts using organizational units in AWS Organizations**  
When attached to an AWS Organizations root or OU that contains your source account, or when attached to your source account, the following policy limits the destination accounts to those accounts within the two specified OUs.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Deny",
      "Action":"backup:CopyFromBackupVault",
      "Resource":"*",
      "Condition":{
        "ForAllValues:StringNotLike":{
          "backup:CopyTargetOrgPaths":[
            "o-a1b2c3d4e5/r-f6g7h8i9j0example/ou-def0-awsbbbbb/",
            "o-a1b2c3d4e5/r-f6g7h8i9j0example/ou-def0-awsbbbbb/ou-jkl0-awsddddd/*"
          ]
        }
      }
    }
  ]
}
```

------

## Security considerations for cross-account backup
<a name="security-considerations-cab"></a>

Be aware of the following when performing cross-account backups in AWS Backup:
+ The destination vault cannot be the default vault. This is because the default vault is encrypted with a key that cannot be shared with other accounts. 
+ Cross-account backups might still run for up to 15 minutes after you disable cross-account backup. This is due to eventual consistency, and might result in some cross-account jobs starting or completing even after you disable cross-account backup.
+ If the destination account leaves the organization at a later date, that account will retain the backups. To avoid potential data leakage, place a deny permission on the `organizations:LeaveOrganization` permission in a service control policy (SCP) attached to the destination account. For detailed information about SCPs, see [Removing a member account from your organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html) in the *Organizations User Guide*.
+ If you delete a copy job role during a cross-account copy, AWS Backup can't unshare snapshots from the source account when the copy job completes. In this case, the backup job finishes, but the copy job status shows as Failed to unshare snapshot.

# Copy tags onto backups
<a name="tags-on-backups"></a>

In general, AWS Backup copies tags from the resources it protects to your *recovery points*. For more information on how to copy tags during a restore, see [Copy tags during a restore](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-a-backup.html#tag-on-restore).

For example, when you back up an Amazon EBS volume, AWS Backup copies its group and individual resource tags to the resulting snapshot, subject to the following:
+ For a list of resource-specific permissions that are required to save metadata tags on backups, see [Permissions required to assign tags to backups](access-control.md#backup-tags-required-permissions).
+ Tags that are originally associated with a resource and tags that are assigned during backup are assigned to recovery points stored in a backup vault, up to a maximum of 50 (this is an AWS limitation). Tags that are assigned during backup have priority, and both sets of tags are copied in alphabetical order.

  For [continuous backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html), tags added to the backups from the primary resource won't be removed if the tags are removed from the primary resource. You will need to remove the tags from the backups manually. Make sure that the number of tags on the backup does not exceed the maximum of 50.
+ DynamoDB does not support assigning tags to backups unless you first enable [Advanced DynamoDB backup](advanced-ddb-backup.md).
+ Amazon EBS volumes that are attached to Amazon EC2 instances are nested resources. Tags on the Amazon EBS volumes that are attached to Amazon EC2 instances are nested tags. If AWS Backup can't copy nested tags, the backup job fails.
+ When an Amazon EC2 backup creates an image recovery point and a set of snapshots, AWS Backup copies tags to the resulting AMI. If AWS Backup can't copy the tags from the volumes associated with the Amazon EC2 instance to the resulting snapshots, the backup job fails.

If you copy your backup to another AWS Region, AWS Backup copies all tags of the original backup to the destination AWS Region.
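The tag-copy rules above (tags assigned during backup take priority on key collisions, both sets are copied in alphabetical order, and the total is capped at 50) can be sketched as follows. `merge_tags` is a hypothetical helper for illustration, not an AWS Backup API:

```python
MAX_TAGS = 50  # AWS limit on tags per recovery point

def merge_tags(resource_tags, backup_tags):
    """Combine a resource's tags with tags assigned during backup.
    Tags assigned during backup win on key collisions; keys are copied
    in alphabetical order, up to the 50-tag maximum."""
    merged = {**resource_tags, **backup_tags}  # backup-assigned tags take priority
    return dict(sorted(merged.items())[:MAX_TAGS])
```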

# Backup deletion
<a name="deleting-backups"></a>

We recommend that you use AWS Backup to automatically delete the backups you no longer need by configuring your lifecycle when you create your backup plan. For example, if you set your backup plan’s lifecycle to retain your recovery points for one year, AWS Backup will automatically delete, on January 1, 2022, the recovery points it created on January 1, 2021. (AWS Backup randomizes its deletions within the 8 hours following recovery point expiration to maintain performance.) To learn more about configuring your lifecycle retention policy, see [Creating a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html).
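The timing in the example above can be sketched with a small Python helper (hypothetical, assuming a creation timestamp and a retention period in days):

```python
from datetime import datetime, timedelta

def deletion_window(created_at, retention_days):
    """Return the window in which AWS Backup deletes an expired recovery
    point: deletion is randomized within the 8 hours after expiration."""
    expires = created_at + timedelta(days=retention_days)
    return expires, expires + timedelta(hours=8)

# A recovery point created January 1, 2021 with one-year retention
# expires on January 1, 2022 and is deleted within the following 8 hours.
start, end = deletion_window(datetime(2021, 1, 1), 365)
```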

However, you might want to manually delete one or more recovery points. For example:
+ You have `EXPIRED` recovery points. These are recovery points AWS Backup was unable to delete automatically because you deleted or modified the original IAM policy you used to create your backup plan. When AWS Backup attempted to delete them, it lacked permission to do so.

  Expired recovery points might also be created if an AWS managed Amazon EBS or Amazon EC2 recovery point has an Amazon EBS Snapshot Lock applied and AWS Backup is unable to complete the lifecycle process that would normally result in the recovery point being deleted. Note these expired recovery points can be restored from the Amazon EC2 console and [API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/OperationList-query-ec2.html) or Amazon EBS console and [API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/OperationList-query-ebs.html).
**Warning**  
You will continue to store expired recovery points in your account. This might increase your storage costs.

  After August 6, 2021, AWS Backup will show the target recovery point as **Expired** in its backup vault. You can hover your mouse over the red **Expired** status for a popover status message that explains why it was unable to delete the backup. You can also choose **Refresh** to receive the most recent information.
+ You no longer want a backup plan to operate the way you configured it. Updating the backup plan affects the future recovery points it will create, but does not affect the recovery point it already created. To learn more, see [Updating a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/updating-a-backup-plan.html).
+ You need to clean up after finishing a test or tutorial.

## Deleting backups manually
<a name="deleting-backups-manually"></a>

**To manually delete recovery points**

1. In the AWS Backup console, in the navigation pane, choose **Backup vaults**.

1. On the **Backup vaults** page, choose the backup vault where you stored the backups.

1. Choose recovery points, then choose **Actions**, **Delete**.

1. In the deletion dialog, do the following:

   1. If your list contains a continuous backup, choose one of following options. Each continuous backup has a single recovery point.
      + **Permanently delete my backup data** or **Delete recovery point**. By selecting one of these options, you stop future continuous backups and also delete your existing continuous backup data.
**Note**  
See [Continuous backups and point-in-time recovery (PITR)](point-in-time-recovery.md) for Amazon S3, Amazon RDS, and Aurora continuous backup considerations.
      + **Keep my continuous backup data** or **Disassociate recovery point**. By selecting one of these options, you stop future continuous backups but retain your existing continuous backup data until it expires as defined by your retention period.

        A disassociated Amazon S3 continuous recovery point (backup) will remain in its backup vault, but its state will transition to `STOPPED`.

   1. To delete all the recovery points selected, type delete, and then choose **Delete recovery points**.

   1. AWS Backup begins to submit your recovery points for deletion and displays a progress bar. Keep your browser tab open and do not navigate away from this page during the submission process.

   1. At the end of the submission process, AWS Backup presents you a status in the banner. The status can be:
      + **Successfully submitted**. You can choose to **View progress** about each recovery point’s deletion status.
      + **Failed to submit**. You can choose to **View progress** about each recovery point’s deletion status or **Try again** with your submission.
      + A mixed result where some recovery points were successfully submitted while other recovery points failed to submit.

   1. If you choose **View progress**, you can review the **Deletion status** of each backup. If a deletion status is **Failed** or **Expired**, you can click that status to see the reason. You can also choose to **Retry failed deletions**.

## Troubleshooting manual deletions
<a name="deleting-backups-troubleshooting"></a>

In rare situations, AWS Backup might not complete your delete request. AWS Backup uses the service-linked role [AWSServiceRoleForBackup](https://docs.aws.amazon.com/aws-backup/latest/devguide/using-service-linked-roles.html) to perform deletions.

If your delete request fails, verify that your IAM role has the permission to create service-linked roles. Specifically, verify your IAM role has the `iam:CreateServiceLinkedRole` action. If it does not, add this permission to the role used to create a backup. Adding this permission allows AWS Backup to perform manual deletions.

If, after you confirm that your IAM role has the `iam:CreateServiceLinkedRole` action, your recovery points are still stuck in the `DELETING` status, we are likely investigating your issue. Complete your manual deletion with the following steps:

1. Set up a reminder to come back in 2-3 days.

1. After 2-3 days, check for recently `EXPIRED` recovery points that resulted from your first manual deletion operation.

1. Manually delete those `EXPIRED` recovery points.

For more information on roles, see [Using service-linked roles](https://docs.aws.amazon.com/aws-backup/latest/devguide/using-service-linked-roles-AWSServiceRoleForBackup.html) and [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html).

# Backup and tag edits
<a name="editing-a-backup"></a>

After you create a backup using AWS Backup, you can change the lifecycle or tags of the backup. The lifecycle defines when a backup is transitioned to cold storage and when it expires. AWS Backup transitions and expires backups automatically according to the lifecycle that you define.

To see the list of resources that you can transition to cold storage, see the "Lifecycle to cold storage" section of the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table. The cold storage expression is ignored for other resources.

**Note**  
Editing the tags of a backup using the AWS Backup console is only supported for backups of Amazon Elastic File System (Amazon EFS) file systems and Advanced Amazon DynamoDB.  
Tags that were added to the recovery point on creation for other resources will still appear, but will be greyed out and uneditable. Even though these tags are not editable in the AWS Backup console, you can edit the tags of these other services' backups using the service’s console or API.

Backups that are transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, the “retention” setting must be 90 days greater than the “transition to cold after days” setting. When you update the “transition to cold after days” setting, the value must be a minimum of the backup’s age plus one day. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold. 
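The constraints above can be expressed as a small validation sketch (hypothetical helper, assuming all values are in days; AWS Backup enforces these rules server-side):

```python
def validate_lifecycle_update(cold_after_days, retention_days, backup_age_days):
    """Check the lifecycle rules described above: retention must be at least
    90 days greater than the cold-storage transition, and an updated
    transition value must be at least the backup's age plus one day."""
    if retention_days < cold_after_days + 90:
        raise ValueError("retention must be >= transition-to-cold + 90 days")
    if cold_after_days < backup_age_days + 1:
        raise ValueError("transition-to-cold must be >= backup age + 1 day")
    return True
```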

The following is an example of how to update the lifecycle of a backup.

**To edit the lifecycle of a backup**

1. Sign in to the AWS Management Console, and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup vaults**.

1. In the **Backups** section, choose a backup.

1. On the backup details page, choose **Edit**.

1. Configure the lifecycle settings, and then choose **Save**.

# Backup search
<a name="backup-search"></a>

## Overview
<a name="backup-search-overview"></a>

With AWS Backup, you can create backups, also known as recovery points, of AWS resources. You can search for backups of certain resource types such as Amazon S3 and Amazon EBS, as well as items and files within those backups using the AWS Backup console or the command line.

AWS Backup offers the ability for you to search the metadata of your backups of supported resource types at a granular level for files or objects that match the properties you define in your search, such as size, creation date, and resource type. You can dive even deeper by defining the properties of the items you want to locate.

First, create a backup index for any backup you want to include in a future search. Backup index creation can be automated through a backup plan, or you can manually create one for any existing recovery point. When you’re ready to search, set the backup and item properties you want to see in the search results. Optionally, you can restore the backup or item you found in the search.

This document outlines the steps to create a backup index, search indexed backups, restore from your search results, and troubleshoot any issues with the index and search functions in AWS Backup.

## Use cases for backup indexes and search
<a name="backup-search-usecase"></a>

You may be an administrator who wants to recover a specific file or object. Instead of manually identifying or guessing which backups contain the data, you can search the metadata of your recovery points and restore the exact backup, files, or objects you need.

Restoring a full backup just to find the specific item that might be in it can take hours or days. Instead, with a backup search, you can find and restore just the specific file or object you require.

Backup searches are useful for backup administrators, backup operators, data owners, and other IT professionals who interact with data backup, restore, and compliance.

## Access
<a name="backup-search-access"></a>

Before you create an index and a search, your account must have the required permissions for these operations.

**Index permissions**

For index operations, AWS Backup authenticates based on the IAM role, not user credentials (for IAM user and IAM role specifics, see [Authentication](authentication.md)).

The following permissions are required to create an index of an EBS backup. These permissions are contained in the managed policy [AWSBackupServiceRolePolicyForIndexing](security-iam-awsmanpol.md#AWSBackupServiceRolePolicyForIndexing): 
+ `ec2:DescribeSnapshots`
+ `ebs:ListSnapshotBlocks`
+ `ebs:GetSnapshotBlock`
+ `kms:Decrypt`

No index permissions are required to create an S3 index.

**Search permissions**

The following permissions are required to create a search. These permissions are contained in the managed policy [AWSBackupSearchOperatorAccess](security-iam-awsmanpol.md#AWSBackupSearchOperatorAccess):
+ `backup:ListIndexedRecoveryPointsForSearch`
+ `backup:SearchRecoveryPoint`

If you choose to encrypt the search results with a customer managed AWS KMS key, ensure the following permissions are in the key:
+ `kms:GenerateDataKey`
+ `kms:Decrypt`

## Process Flow
<a name="backup-search-process"></a>

A backup search involves three steps, plus an optional fourth step to restore the items returned in your search.

**Index your backups:** Enable indexing in your backup plan(s) or manually create a backup index through the console or CLI for each existing backup (recovery point) you want to be eligible for searches.

**Search backup metadata for a recovery point, file, or object:** Specify the properties of the backups and items you want to find in your search, such as S3 buckets created between April 2 and 6 with tags of `Administration`, and objects greater than 100 MB with key names containing `Admin`.

**Review search results:** If you find the recovery point or item you were seeking, you have the option to restore it. If you haven’t found the recovery point or item, you can refine the backup properties and item properties, then initiate a new search.

**Restore specific items *(optional)*:** Specify file paths or items to restore, as well as the restore conditions.

## Backup indexes
<a name="backup-search-index"></a>

To be searchable, a backup (recovery point) must first have a corresponding index.

Backup index creation can be enabled in a backup plan so that each future backup will also have an associated backup index. You can also create an index as you create an on-demand backup.

Alternatively, you can retroactively create an index for an existing recovery point, either from the Vault recovery point detail screen in the AWS Backup console or through AWS CLI.

Recovery points of supported resource types can have a backup index if they are stored in a standard backup vault (recovery points in a logical air-gapped vault do not currently support backup indexes).

**S3 backup indexes**

An S3 backup can be periodic, where it is scheduled at a fixed interval according to your backup plan. Each time a periodic backup is created, a backup index is created for it. An S3 backup can also be continuous, where each change in the backup is logged. Since there can be numerous changes daily, only one backup index is created daily for a continuous backup.

The first backup index that is created for a continuous S3 recovery point is full; subsequent indexes for the same recovery point may be incremental.

**EBS backup indexes**

Each backup index created for an EBS recovery point is full (not incremental).

AWS Backup attempts to automatically repair snapshot issues during the creation of a backup index. If a file system was in a dirty state when the recovery point was created, AWS Backup will automatically attempt to recover the file system. If this recovery fails, the index creation job will also fail.

The nature of the snapshot determines if it can be indexed:

**Can** be indexed:
+ File systems: ext2, ext3, ext4, vfat, xfs, and ntfs

**Cannot** be indexed:
+ Snapshots in archive tier (cold storage)
+ RAID and other multi-disk storage options
+ Symbolic links
+ Hard links

**Backup index creation steps**

------
#### [ Console ]

**Add backup index creation to your backup plan.**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Select **Backup plans** under **My account** in the left navigation bar.

1. Select the link in the **Backup plans** pane with the name of the plan where you want to add index creation.

1. In the second pane **Backup rules**, select **Add backup rule**.

1. Scroll down to the pane **Backup indexes**. Check the box next to the resource type(s) for which you want to create an index.

   With each new backup this plan creates, a corresponding index for that recovery point will also be concurrently created.

**Create an index for an existing recovery point**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Select **Vault** in the left navigation bar.

1. Under the **Vault name** column, select the link for the vault that contains the backup you want to index.

1. Place a checkmark next to the recovery point for which you want to create a backup index.

1. Select the **Action** button, and then select **Create index**.

While the index is being created, it will have the index status of `In progress`. Once the status has transitioned to `Available`, the recovery point can be included in a search.

**Create an index as you create an on-demand backup**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Follow the steps in [Creating an on-demand backup using AWS Backup](recov-point-create-on-demand-backup.md).

1. In **Settings**, if you have chosen a resource type that supports index and search, the line item **Backup search index** will be displayed. Toggle on **Create backup search index** to have an index created concurrently with this on-demand backup.

------
#### [ AWS CLI ]

**Create a backup index through the AWS CLI**

Use the AWS CLI command [create-backup-plan](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/create-backup-plan.html) to make a new backup plan. Or, use [update-backup-plan](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/update-backup-plan.html) to modify an existing plan.

For either operation, include `IndexActions` within the `Rules` of the `--backup-plan` parameter.

See [IndexAction](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_IndexAction.html) in [BackupRuleInput](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BackupRuleInput.html) in the *AWS Backup API Reference* for more information.

Once a recovery point has an index, you can update its settings.

Example:

```
aws backup update-recovery-point-index-settings \
    --backup-vault-name [vaultname] \
    --recovery-point-arn arn:aws:ec2:us-west-2::snapshot/snap-012345678901234567 \
    --index ENABLED \
    --endpoint-url [URL] \
    --iam-role-arn arn:aws:iam::012345678901:role/Admin
```

------

## Searches
<a name="backup-search-searches"></a>

Once you have one or more backups with an index, you can search those indexed backups through the AWS Backup console or through AWS CLI. 

As you create a search, you’ll select one resource type. The results will only return recovery points containing that type, such as S3 buckets or EBS snapshots.

You then specify the properties of the backups (recovery points) you wish to include in the search. You can specify up to 9 properties. If you include the same property type more than once with different values, the results will match any of the included values.

Specify the properties of the items you wish to find within the returned recovery points, such as bucket name or file size. Narrow your results by including multiple properties. 

If one value for an item property is included when you create a search through the AWS Backup console, the results will return only items that match that item property (AND logic). If you repeat the same item property with different values, the results will return all items that match *any* of the included values (OR logic). For example, if you include two EBS file paths, the search results will contain all items from the searched recovery points that match *either* file path.
+ S3 item properties include creation time, Etags, object key, object size, and version ID.
+ EBS item properties you can use to help filter your search include creation time, file path, last modification time, and size.

Optionally, you can include an AWS KMS key ID to encrypt your results. If a key is not included, AWS Backup will use a service-owned key to encrypt results.

------
#### [ Console ]

**Search for items in your backups**

There are multiple paths to create a search of indexed backups:

You can find your preferred recovery point by navigating to **Backup vaults** and selecting the specific recovery point you wish to search. Then, select **Search**. You can also start a search from the **Recovery point details** page.

During a restore where you have specific items you wish to include, you can search your backups to help locate the URL(s) or file paths that contain the items.

To search through more than one backup, review the following steps:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Navigate to **Search** in the left navigation menu.

1. Select **Create search** in the **Search history** section.

1. Select a **resource type**. You must select one resource type for each search. If you change the resource type after additional fields have been entered, your entries will be lost and you will need to re-enter them.

1. Choose 1 to 9 **backup properties** to help narrow the recovery points that will be returned in your search. 

   AWS Backup will scan all of your backups that have an index. It will return only recovery points that match all different backup properties. For example, `backup tag = "savings"` and `backup creation date = May 20, 2019 through May 23, 2019`, inclusive. 

   You may include multiple values of the same property, such as three different tags. If the property is repeated with different values, the search will return all items that match any of the values specified (known as "OR" logic). For example, `backup tag = "savings"`; `backup tag = "checking"`; `backup creation date = May 20, 2019 through May 23, 2019`, inclusive; and `backup creation date = May 20, 2020 through May 23, 2020`, inclusive.

   A backup creation date range counts as one backup property. Only one backup creation date range can be included as a backup property.

1. Choose 1 to 9 **item properties** to help further narrow the returns in your search. 

   You may include multiple values of the same property. If the property is repeated with different values, the search will return all items that match any of the values specified.

1. *Optional*: To encrypt your search results, you can specify an existing AWS KMS key using the dropdown menu or its ARN, or you can create a new KMS key.

1. AWS Backup recommends creating a unique search job name.

1. Select **Start search**.

   You may see a warning saying that your search may include a large number of recovery points. The best practice is to go back to the backup properties and select additional criteria to narrow the search. Fewer backups in a search may result in lower costs.

------
#### [ AWS CLI ]

Use the AWS CLI command [start-search-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backupsearch/start-search-job.html).

Required parameters:

```
--search-scope // defines the backup properties you wish to include in the search
```

Optional parameters:

```
--client-token // string
--encryption-key-arn // if not included, AWS Backup uses service-owned key to encrypt results
--item-filters // accepted keys and values depend on which resource type is included in the search
--name // If not included, AWS Backup auto-assigns a name
```

Accepted S3 item filters include:

```
--object-keys // string 
--sizes // long condition
--creation-times // time condition
--version-ids // string 
--etags // string
```

Accepted EBS item filters include:

```
--file-paths // string
--sizes // long condition
--creation-times // time condition
--last-modification-time // time condition
```
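
Putting these together, a sketch of a full `start-search-job` call follows. The dates, object-key prefix, and nested JSON field names are illustrative assumptions based on the `SearchScope` and `ItemFilters` structures in the *AWS Backup API Reference*:

```
# Sketch: search indexed S3 backups created in a date range, for objects
# whose keys begin with a given prefix. Values are placeholders.
aws backupsearch start-search-job \
--name MySearch \
--search-scope '{
  "BackupResourceTypes": ["S3"],
  "BackupResourceCreationTime": {
    "CreatedAfter": "2019-05-20T00:00:00Z",
    "CreatedBefore": "2019-05-23T23:59:59Z"
  }
}' \
--item-filters '{
  "S3ItemFilters": [{
    "ObjectKeys": [{"Value": "invoices/", "Operator": "BEGINS_WITH"}]
  }]
}'
```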

------

### Stop a search
<a name="backup-search-stop"></a>

You can stop a search job if it is in `RUNNING` status.

A search job will continue until it reaches `COMPLETED` status (or `FAILED` status if there is an error). You can interrupt a `RUNNING` search job if you wish to end an in-progress search job, which may be desirable if you have found the backup or item you were seeking before the job completed.
+ In the AWS Backup console, select the **Stop search job** button.
+ In the CLI, send the operation `stop-search-job` with the search job identifier you want to stop.
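
For example, a sketch of the CLI call (the job identifier is a placeholder):

```
# Sketch: stop an in-progress search job by its identifier.
aws backupsearch stop-search-job \
--search-job-identifier [search-job-id]
```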

## Search results
<a name="backup-search-results"></a>

Once a search job has begun, it will begin aggregating results even while it has a `RUNNING` status. While a search job is running and until it completes, partial results are available:
+ In the console, results will display as they are retrieved during the search. Results do not auto-refresh, but you can view the latest results by selecting the refresh button. To view results beyond the first 1,000 items, select **Export results**. 
+ The CLI operations [get-search-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backupsearch/get-search-job.html) and [list-search-jobs](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backupsearch/list-search-jobs.html) return search job statuses. If the job status is `RUNNING`, the operation returns an incomplete list.
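
For example, a sketch of checking statuses from the CLI (the job identifier is a placeholder):

```
# Sketch: check the status of a single search job...
aws backupsearch get-search-job \
--search-job-identifier [search-job-id]

# ...or list search jobs and their statuses.
aws backupsearch list-search-jobs
```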

Results from a search job are available in the console and through the CLI for 7 days after a search is stopped or has completed. During this time, you can export the results to your preferred Amazon S3 bucket so that you can access them past this timeframe.

Each search job contains detailed information, available in the console or through CLI, including which recovery points were searched, the search name and status, its description, its creation and completion date and time, and information about the objects or items returned as well as the number of items and recovery points scanned.

If the results do not contain the recovery point, item, or object you were seeking, you can create a new search with different backup and item properties. Each search is charged individually. 

Each resource type has unique considerations for the results the search returns:
+ A search of S3 recovery points will not return delete markers as part of its search result, even if those objects match the search’s specified item properties.
+ Results of an EBS search may have a null value for creation time for file systems in which that field is unsupported. Those file systems may include, but are not limited to, vfat, ext2/3, and versions of XFS prior to v5.

## Export search results to an S3 bucket
<a name="backup-search-export"></a>

AWS Backup retains search results for 7 days, starting from the completion date and time. These results are viewable in the AWS Backup console or retrievable through the CLI operation `list-search-job-results`.

A best practice is to export your search results to an S3 bucket to retain results beyond the 7-day retention period. The export job will create a folder named with the export job ID in your designated bucket, then export the results to that folder. Once the results are exported there, they are available for as long as you retain the bucket.

You can export the search results of any supported resource type, not just an S3 search.

------
#### [ Console ]

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Navigate to **Jobs > Search jobs**.

1. Select **Search job results**.

1. Place a checkmark next to the result(s) you wish to export.

1. Select **Export to S3**.

1. Choose the destination S3 bucket for the export job.

1. Once you have configured all the fields, select **Export**.

The export action creates an export job. These are viewable in **Jobs > Export jobs**. Once an export job has reached `COMPLETED` status, the search result information is available in the S3 bucket to retrieve or to download as one or more .csv files.

------
#### [ AWS CLI ]

Use the AWS CLI command [start-search-result-export-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backupsearch/start-search-result-export-job.html).

Required parameters:

```
--search-job-identifier  
--export-specification
```

Optional parameters:

```
--client-token 
--role-arn
--tags
```

The operation returns `ExportJobArn` and `ExportJobIdentifier`.
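
As a sketch (the job identifier, bucket, and the `--export-specification` field names are illustrative assumptions):

```
# Sketch: export search results to a designated S3 bucket and prefix.
aws backupsearch start-search-result-export-job \
--search-job-identifier [search-job-id] \
--export-specification '{
  "s3ExportSpecification": {
    "DestinationBucket": "arn:aws:s3:::[bucket-name]",
    "DestinationPrefix": "search-exports"
  }
}'
```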

Use [list-search-result-export-jobs](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backupsearch/list-search-result-export-jobs.html) to retrieve the statuses of export jobs.

------

## Cost considerations and best practices
<a name="backup-search-cost-considerations"></a>

Each backup index creation and each search job incurs a charge. Each backup index has a storage charge. Each restore from search results (as with all other restore jobs) is charged. Learn more at [AWS Backup pricing](https://aws.amazon.com//backup/pricing/).

You can narrow the possible results of a search job by including multiple backup and item properties; this may result in a lower cost than a search that spans all possible recovery points.



## Restore from search
<a name="backup-search-restore"></a>

Many customers choose to search through their backups, and the objects or files within them, to find a specific recovery point or items to restore. See [Restore a backup by resource type](restoring-a-backup.md) for information on restores in general.

You can restore from your search results in the AWS Backup console by navigating to **Jobs > Search job results > Restore**. To restore through the AWS CLI, use [StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html) with metadata specific to the resource type, recovery point, and items involved in the restore.

See [Restore S3 data using AWS Backup](restoring-s3.md) for information on how to restore a recovery point with S3 data, restore an S3 bucket, or restore up to five objects or folders within an S3 bucket.

See [Restore an Amazon EBS volume](restoring-ebs.md) for information about restoring an EBS snapshot to a new volume that attaches to an EC2 instance. 

# Backup tiering
<a name="backup-tiering"></a>

## Overview
<a name="backup-tiering-overview"></a>

AWS Backup offers a lower-cost warm storage tier for Amazon S3 backups that reduces long-term storage costs by up to 30% while maintaining enterprise-grade protection and recovery capabilities. The low-cost tier provides the same performance and features as the warm storage tier. You can configure tiering to move S3 backup data to cost-optimized storage based on the age of objects within your backup vaults.

AWS Backup tiering provides the ability to optimize storage costs for S3 backup data that is retained for extended periods due to regulatory compliance, disaster readiness, and ransomware protection strategies. You can configure tiering for all S3 backups for all vaults in an account, or create targeted configurations for specific vaults and protected resources.

First, create a tiering configuration that specifies which S3 resources should be tiered and after how many days (minimum 60 days). Tiering configurations can be automated to apply to all backups or targeted to specific resources. When backup data reaches the specified age threshold, it transitions to the lower-cost storage tier while maintaining identical restore capabilities.

This document outlines the steps to create tiering configurations, manage tiered backup data, monitor cost savings, and troubleshoot any issues with the tiering function in AWS Backup.

**Important**  
Cost considerations for backup tiering:  
Backup tiering has three cost components: warm storage tier, low-cost warm storage tier, and transition costs. When backup data transitions to the lower-cost tier, you'll incur a one-time per-object transition fee based on the number of objects eligible for tiering.
For large datasets with many objects, transition costs may be significant initially but are typically offset by ongoing storage savings for data retained beyond the minimum 60-day threshold.

## Tiering configurations
<a name="backup-tiering-tiering-configurations"></a>

S3 backup tiering involves creating a tiering configuration that specifies which resources should be tiered and the number of days before transitioning (minimum 60 days) to the lower-cost tier. To enable cost optimization, backup data must be covered by a tiering configuration.

Tiering configuration creation can be set up to apply broadly across backups of all S3 resources in your account, or targeted to specific vaults and resources. You can create multiple configurations to handle different data retention and cost optimization requirements.

Tiering configurations apply to both existing backup data in vaults and new backups created after the configuration is established.

S3 backup tiering configurations specify:
+ **Resource scope:** All resources across all vaults, all resources in a specific vault, or selected individual resources. A tiering configuration that applies to all vaults and all resources is considered the default configuration.
+ **Transition timing:** Minimum 60 days before data moves to the lower-cost tier
+ **Vault assignment:** Which backup vaults the configuration applies to (for all vaults or specific vault name)
+ **Resource selection:** Up to 5 different resource selection rules per configuration

Configuration constraints:
+ **One configuration per vault:** Each vault can only have one tiering configuration apart from the default configuration
+ **Maximum 5 resource selections:** A vault-specific configuration supports up to 5 different resource groups and corresponding tiering settings
+ **Maximum 100 resources:** Up to 100 specific resources across all resource groups can be selected per configuration
+ **Vault priority:** If both "all vaults" and specific vault configurations exist, the specific vault configuration takes priority

## Creating tiering configurations
<a name="backup-tiering-creating-tiering-configurations"></a>

**Creating tiering configurations**

------
#### [ Console ]

**To create an all vaults tiering configuration (default)**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **S3 backup tiering**.

1. Choose **Create configuration**.

1. For configuration name, enter a unique descriptive name.

1. Choose **All S3 resources in all vaults**.

1. For tiering down settings in days, enter the number of days (minimum 60) before data transitions to the lower-cost tier.

1. (Optional) Add tags.

1. Choose **Create configuration**.

**To create a specific vault tiering configuration**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **S3 backup tiering**.

1. Choose **Create configuration**.

1. For configuration name, enter a unique descriptive name.

1. Choose **S3 resources in a specific vault**.

1. For vault selection, select a specific backup vault from the dropdown.

1. For resource selection, choose either:

   1. **All S3 resources in this vault** to apply to all S3 resources in the vault

   1. **Specific S3 resources in this vault** to select individual S3 buckets

1. If selecting specific resources:

   1. Choose individual S3 resources (up to 100 total across resource groups in a configuration)

   1. Set tiering down settings in days for each resource group

   1. Choose **Add tiering setting** to create additional rules (up to 5 total)

1. (Optional) Add tags.

1. Choose **Create configuration**.

------
#### [ AWS CLI ]

To create an all vaults tiering configuration (default) using the AWS CLI

```
aws backup create-tiering-configuration \
--tiering-configuration '{
  "TieringConfigurationName":"MyTieringConfig",
  "BackupVaultName":"*",
  "ResourceSelection":[{
    "Resources":["*"],
    "TieringDownSettingsInDays":60,
    "ResourceType":"S3"
  }]
}'
```

------

## Managing tiering configurations
<a name="backup-tiering-managing-tiering-configurations"></a>

**Viewing tiering configurations**

You can view existing tiering configurations through the AWS Backup console, AWS CLI, or REST API.

------
#### [ Console ]

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **S3 backup tiering**.

1. View the list of configurations with their scope, transition settings, and status.

------
#### [ AWS CLI ]

To list all tiering configurations using the AWS CLI

```
aws backup list-tiering-configurations --max-results 50
```

To get specific tiering configuration details

```
aws backup get-tiering-configuration --tiering-configuration-name "MyTieringConfig"
```

------

**Modifying tiering configurations**

You can update existing tiering configurations to change transition timing, resource selection, or vault assignments.

------
#### [ Console ]

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **S3 backup tiering**.

1. Select the configuration to modify.

1. Choose **Edit**.

1. Update the desired settings.

1. For **Tiering down settings in days**, enter the number of days (minimum 60) before data transitions to the lower-cost tier.

1. Choose **Save changes**.

------
#### [ AWS CLI ]

To update a tiering configuration using the AWS CLI

```
aws backup update-tiering-configuration \
--tiering-configuration-name "MyTieringConfig" \
--tiering-configuration '{
  "BackupVaultName":"*",
  "ResourceSelection":[{
    "Resources":["*"],
    "TieringDownSettingsInDays":60,
    "ResourceType":"S3"
  }]
}'
```

------

**Deleting tiering configurations**

You can delete tiering configurations when they are no longer needed.

------
#### [ Console ]

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **S3 backup tiering**.

1. Select the configuration to delete.

1. Choose **Delete**.

1. Enter the tiering configuration name to confirm deletion.

1. Choose **Delete tiering configuration**.

------
#### [ AWS CLI ]

To delete a tiering configuration using the AWS CLI

```
aws backup delete-tiering-configuration \
--tiering-configuration-name "MyTieringConfig"
```

------

**Note**  
Deleting a tiering configuration keeps existing data in the lower-cost tier but prevents new data from tiering down.

## How S3 backup tiering configurations apply
<a name="backup-tiering-how-apply"></a>

AWS Backup evaluates objects within backups for tiering eligibility based on their age. The service checks object age on a daily basis and transitions eligible objects to the lower-cost tier according to your configuration settings. Tiering evaluation occurs automatically in the background. When objects within a backup reach the specified age threshold (minimum 60 days), they become eligible for transition during the next evaluation cycle.

Both objects in existing backups and newly created backup objects are subject to tiering configurations. If you have multiple configurations that could apply to the same backup objects, vault-specific settings take precedence over the configuration that applies to all vaults. The tiering process is irreversible: once objects move to the lower-cost tier, they remain there until the backup is deleted according to your retention policies.

## Cost structure and monitoring
<a name="backup-tiering-cost-structure"></a>

**Pricing model**

S3 backup tiering uses a cost-optimized pricing structure:
+ **Storage cost:** Lower per-GB-month cost compared to the standard warm tier
+ **Transition fee:** One-time per-object fee when moving to the lower-cost tier
+ **Restore cost:** Per-GB charge when restoring data, the same as a warm tier restore
+ **No retrieval fees:** There are no additional retrieval charges

**Cost monitoring**

Monitor your tiering cost savings through:
+ **AWS Cost Explorer:** Separate usage types for each storage tier
+ **AWS Cost and Usage Reports:** Detailed breakdown with cost allocation tags
+ **AWS Backup console:** Configuration information

**Example cost savings**

For a 500TB S3 bucket with 1 billion objects where 60% are eligible for tiering:
+ **Before tiering:** \$125,600/month
+ **After tiering:** \$121,000/month
+ **Monthly savings:** \$4,600 (about 4% reduction)
+ **One-time transition fee:** \$16,000

## Configuration Examples
<a name="backup-tiering-configuration-examples"></a>

**Example 1: Account-wide tiering**

Apply tiering to all S3 resources across all backup vaults:

```
{
  "TieringConfigurationName":"MyTieringConfig",
  "BackupVaultName":"*",
  "ResourceSelection":[
    {
      "Resources":["*"],
      "TieringDownSettingsInDays":60,
      "ResourceType":"S3"
    }
  ]
}
```

**Example 2:**

Tier all resources in the MyBackupVault4 vault:

```
{
  "TieringConfigurationName":"MyTieringConfig",
  "BackupVaultName":"MyBackupVault4",
  "ResourceSelection":[
    {
      "Resources":["*"],
      "TieringDownSettingsInDays":60,
      "ResourceType":"S3"
    }
  ]
}
```

**Example 3:**

Tier specific buckets with different rules:

```
{
  "TieringConfigurationName":"MyTieringConfig",
  "BackupVaultName":"MyBackupVault",
  "ResourceSelection":[
    {
      "Resources": ["arn:aws:s3:::mybucket1", "arn:aws:s3:::mybucket2"],
      "TieringDownSettingsInDays": 60,
      "ResourceType": "S3"
    },
    {
      "Resources": ["arn:aws:s3:::mybucket3"],
      "TieringDownSettingsInDays": 120,
      "ResourceType": "S3"
    }
  ]
}
```

**Example 4:**

Set the rule to not tier a bucket (set the tiering down setting to 36500 days, about 100 years):

```
{
  "TieringConfigurationName":"MyTieringConfig",
  "BackupVaultName":"*",
  "ResourceSelection":[
    {
      "Resources":["arn:aws:s3:::mybucket7", "arn:aws:s3:::mybucket8"],
      "TieringDownSettingsInDays":36500,
      "ResourceType":"S3"
    }
  ]
}
```

## Supported features and limitations
<a name="backup-tiering-supported-features"></a>

**Supported features**
+ **Backup types:** Both continuous and periodic backups
+ **Vault types:** Standard backup vaults and logically air-gapped vaults
+ **Vault lock:** Full compatibility with locked backup vaults
+ **Cross-region/account:** Copying tiered data (copies land in standard tier at destination)
+ **Restore capabilities:** Point-in-time recovery and item-level restores
+ **Search and indexing:** Full compatibility with backup search functionality
+ **Compliance:** Maintains all compliance and audit capabilities

**Limitations**
+ **Minimum transition time:** 60 days before data can be moved to lower-cost tier
+ **Resource limit:** Up to 100 specific resources per configuration
+ **Configuration limit:** Up to 5 different resource selection rules per configuration
+ **One configuration per vault:** Each vault can only have one vault-specific tiering configuration, apart from the default
+ **One-way transition:** Data moved to lower-cost tier remains there until deletion

## Troubleshooting
<a name="backup-tiering-troubleshooting"></a>

**Common issues**

Configuration not applying to existing backups
+ Verify that the configuration is properly assigned to the correct vaults
+ Check that resources are correctly selected in targeted configurations
+ Ensure backup data meets the minimum age requirement (60 days)

`AlreadyExistsException` when creating configuration
+ Ensure the tiering configuration name is unique within your account
+ Check if the target vault already has an active tiering configuration

`LimitExceededException` errors
+ Verify you have not exceeded the maximum of 5 resource selection groups per configuration
+ Check that you have not selected more than 100 specific resources

Higher than expected transition costs
+ Review the number of objects being transitioned
+ Consider the transition fee impact for frequently changing data
+ Evaluate whether the minimum threshold setting is appropriate for your use case

# Restore a backup by resource type
<a name="restoring-a-backup"></a>

## How to restore
<a name="how-to-restore"></a>

For console restore instructions and links to documentation for each AWS Backup-supported resource type, see the links at the bottom of this page.

To restore a backup programmatically, use the [StartRestoreJob](API_StartRestoreJob.md) API operation.

The configuration values ("restore metadata") that you need to restore your resource varies depending on the resource that you want to restore. To get the configuration metadata that your backup was created with, you can call [GetRecoveryPointRestoreMetadata](API_GetRecoveryPointRestoreMetadata.md). Restore metadata examples are also available in the links at the bottom of this page.

Restoring from cold storage typically takes 4 hours longer than restoring from warm storage.

For each restore, a restore job is created with a unique job ID—for example, `1323657E-2AA4-1D94-2C48-5D7A423E7394`.

**Note**  
AWS Backup does not provide any service-level agreements (SLAs) for a restore time. Restore times can vary based upon system load and capacity, even for restores containing the same resources.

## Non-destructive restores
<a name="non-destructive-restores"></a>

When you use AWS Backup to restore a backup, it creates a new resource from the backup that you are restoring. This protects your existing resources from being destroyed by your restore activity.

## Restore testing
<a name="restore-testing-intro"></a>

You can conduct tests on your resources to simulate a restore experience. This helps determine if you meet your organizational Restore Time Objective (RTO) and helps prepare for future restore needs.

For more information, see [Restore testing](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing.html).

## Copy tags during a restore
<a name="tag-on-restore"></a>

**Note**  
Restores of Amazon DynamoDB, Amazon S3, SAP HANA on Amazon EC2 instances, virtual machines, and Amazon Timestream resources currently do not have this feature available.

### Introduction
<a name="w2aac17c31b9b5"></a>

You can copy tags as you restore a resource if the tags belonged to the protected resource at the time of backup. Tags, which are labels containing a key and value pair, can help you identify and search for resources. When you start a restore job, tags that belonged to the original backed-up resources can be added to the resource being restored.

When you choose to include tags during a restore job, this step can replace the overhead and labor of manually applying tags to resources after a restore job is completed. Note this is distinct from adding new tags to restored resources.

When you restore a backup in the console flow, your source tags will be copied by default. In the console, uncheck the box if you wish to opt out of copying tags to a restored resource.

In the API operation `StartRestoreJob`, the parameter `CopySourceTagsToRestoredResource` is set to `false` by default, which will exclude the original source tags from the resource you are restoring. If you wish to *include* tags from the original source, set this to `True`.

### Considerations
<a name="w2aac17c31b9b7"></a>
+ A resource can have up to 50 tags, including restored resources. Please see [Tagging your AWS resources ](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) for more information about tag limits.
+ Ensure the correct permissions are present in the role used for restores to copy tags. The default role for restores contains the necessary permissions. A custom role must include additional permissions to tag resources. 
+ The following resources are not currently supported for restore tag inclusion: VMware Cloud™ on AWS, VMware Cloud™ on AWS Outposts, on-premises systems, SAP HANA on Amazon EC2 instances, Timestream, DynamoDB, Advanced DynamoDB, and Amazon S3.
+ For continuous backups, the tags on the original resource as of the most recent backup will be copied to the restored resource.
+ Tags will not be copied for item-level restores.
+ Tags that were added to a backup after the backup job was completed but were not present on the original resource prior to the backup will not be copied to the restored resource. Only Backups created after May 22, 2023 are eligible for tag copy on restore.

### Tag interaction with specific resources
<a name="backup-tag-resources"></a>
+ **Amazon EC2**
  + Tags applied to restored **Amazon EC2** instances are also applied to the attached restored **Amazon EBS** volumes.
  + Tags applied to the EBS volumes attached to source instances are not copied to the volumes attached to restored instances. If you have IAM policies that allow or deny users access to EBS volumes based on their tags, you must manually reassign the required tags to the restored volumes to ensure your policies remain in effect.
+ When you restore an **Amazon EFS** resource, it must be copied to a new file system. Restorations to an existing file system cannot have tags copied to it.
+ **Amazon RDS**
  + If the RDS cluster that was backed up is still active, tags from this cluster will be copied.
  + If the original cluster is no longer active, tags from the snapshot of the cluster will be copied instead.
  + Tags that were present on the resource at the time of the backup will be copied during the restore regardless of whether the Boolean parameter `CopySourceTagsToRestoredResource` is set to `True` or `False`. However, if the snapshot does not contain tags, then the above Boolean setting will be used.
+ **Amazon Redshift** clusters, by default, always include tags during a restore job. 

### Copy tags via the console
<a name="w2aac17c31b9c15"></a>

1. Open the [AWS Backup console](https://console.aws.amazon.com/backup/).

1. In the navigation pane, choose **Protected resources**, and select the Amazon S3 resource ID that you want to restore.

1. On the **Resource details** page, you will see a list of recovery points for the selected resource ID. To restore a resource:

   1. In the **Backup** pane, choose the recovery point ID of the resource.

   1. In the upper-right corner of the pane, choose **Restore** (alternatively, you can go to the backup vault, find the recovery point, and then click **Actions** then click **Restore**).

1. On the **Restore backup** page, locate the **Restore with tags** pane. To include all tags from the original resource, keep the check box selected (in the console, this box is selected by default).

1. Choose **Restore backup** after you have selected all your preferred settings and roles.

### To include tags programmatically
<a name="w2aac17c31b9c17"></a>

Use the API operation `StartRestoreJob`. Ensure the following Boolean parameter is set to `True`:

```
CopySourceTagsToRestoredResource = true
```

When the Boolean parameter `CopySourceTagsToRestoredResource` is `True`, the restore job copies the tags from the original resource to the restored resource.

**Important**  
The restore job will fail if this parameter is included for an unsupported resource (VMware, AWS Outposts, on-premises systems, SAP HANA on EC2 instances, Timestream, DynamoDB, Advanced DynamoDB, and Amazon S3).

The following example `StartRestoreJob` request restores an Amazon EC2 instance and copies the source tags to the restored instance:

```
{
    "RecoveryPointArn": "arn:aws:ec2:us-east-1::image/ami-1234567890a1b234",
    "Metadata": {
        "InstanceInitiatedShutdownBehavior": "stop",
        "DisableApiTermination": "false",
        "EbsOptimized": "false",
        "InstanceType": "t1.micro",
        "SubnetId": "subnet-123ab456cd7efgh89",
        "SecurityGroupIds": "[\"sg-0a1bc2d345ef67890\"]",
        "Placement": "{\"GroupName\":null,\"Tenancy\":\"default\"}",
        "HibernationOptions": "{\"Configured\":false}",
        "IamInstanceProfileName": "UseBackedUpValue",
        "aws:backup:request-id": "1a2345b6-cd78-90e1-2345-67f890g1h2ij"
    },
    "IamRoleArn": "arn:aws:iam::123456789012:role/EC2Restore",
    "ResourceType": "EC2",
    "IdempotencyToken": "34ab5678-9012-3c4d-5678-efg9h01f23i4",
    "CopySourceTagsToRestoredResource": true
}
```
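
If you use an AWS SDK rather than the raw API, the same request can be built as a plain dictionary and passed to the SDK's `start_restore_job` call. The following Python sketch (illustrative placeholder ARNs; the boto3 call itself appears only as a comment) shows the shape of the request, including the detail that list- and object-valued metadata entries must be JSON-encoded strings:

```
import json

# Sketch of a StartRestoreJob request as a Python dict (placeholder ARNs).
# With boto3 you would pass it as: boto3.client("backup").start_restore_job(**request)
request = {
    "RecoveryPointArn": "arn:aws:ec2:us-east-1::image/ami-1234567890a1b234",
    "Metadata": {
        "InstanceType": "t1.micro",
        "SubnetId": "subnet-123ab456cd7efgh89",
        # Nested values are JSON-encoded strings, not raw objects:
        "SecurityGroupIds": json.dumps(["sg-0a1bc2d345ef67890"]),
        "HibernationOptions": json.dumps({"Configured": False}),
    },
    "IamRoleArn": "arn:aws:iam::123456789012:role/EC2Restore",
    "ResourceType": "EC2",
    "CopySourceTagsToRestoredResource": True,  # copy tags from the source resource
}
print(request["ResourceType"])  # prints EC2
```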

### Troubleshoot tag restore issues
<a name="w2aac17c31b9c19"></a>

**ERROR:** Insufficient Permissions

**REMEDY:** Ensure you have the necessary permissions in your restore role so you can include tags on your restored resource. The default [AWS managed](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#aws-managed-policies) service role policy for restores, [AWSBackupServiceRolePolicyForRestores](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForRestores$jsonEditor), contains the necessary permissions for this task. 

If you choose to use a custom role, ensure the following permissions are present:
+ `elasticfilesystem:TagResource`
+ `storagegateway:AddTagsToResource`
+ `rds:AddTagsToResource`
+ `ec2:CreateTags`
+ `cloudformation:TagResource`

For more information, see [API permissions](https://docs.aws.amazon.com/aws-backup/latest/devguide/access-control.html#backup-api-permissions-ref).

## Restore job statuses
<a name="restore-job-statuses"></a>

You can view the status of a restore job on the **Jobs** page of the AWS Backup console. Restore job statuses include **pending**, **running**, **completed**, **aborted**, and **failed**.

**Topics**
+ [How to restore](#how-to-restore)
+ [Non-destructive restores](#non-destructive-restores)
+ [Restore testing](#restore-testing-intro)
+ [Copy tags during a restore](#tag-on-restore)
+ [Restore job statuses](#restore-job-statuses)
+ [Restoring an Amazon Aurora cluster](restoring-aur.md)
+ [Amazon Aurora DSQL restore](restore-auroradsql.md)
+ [Restore CloudFormation stacks](restore-application-stacks.md)
+ [Restoring a DocumentDB cluster](restoring-docdb.md)
+ [Restore an Amazon DynamoDB table](restoring-dynamodb.md)
+ [Restore an Amazon EBS volume](restoring-ebs.md)
+ [Restore an Amazon EC2 instance](restoring-ec2.md)
+ [Restore an Amazon EFS file system](restoring-efs.md)
+ [Restore an Amazon EKS cluster](restoring-eks.md)
+ [Restore an FSx file system](restoring-fsx.md)
+ [Restore a Neptune cluster](restoring-nep.md)
+ [Restore an RDS database](restoring-rds.md)
+ [Restore an Amazon Redshift cluster](redshift-restores.md)
+ [Amazon Redshift Serverless restore](redshift-serverless-restore.md)
+ [Restore an SAP HANA database on an Amazon EC2 instance](saphana-restore.md)
+ [Restore S3 data using AWS Backup](restoring-s3.md)
+ [Restore a Storage Gateway volume](restoring-storage-gateway.md)
+ [Restore an Amazon Timestream table](timestream-restore.md)
+ [Restore a virtual machine using AWS Backup](restoring-vm.md)

# Restoring an Amazon Aurora cluster
<a name="restoring-aur"></a>

## Use the AWS Backup console to restore Aurora recovery points
<a name="aur-restore-console"></a>

AWS Backup restores your Aurora cluster; it does not create or attach an Amazon RDS instance to your cluster. In the following steps, you will create and attach an Amazon RDS instance to your restored Aurora cluster using the CLI.

Restoring an Aurora cluster requires that you specify multiple restore options. For information about these options, see [Overview of Backing Up and Restoring an Aurora DB Cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html) in the *Amazon Aurora User Guide*. Specifications for the restore options can be found in the API guide for [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html).

**To restore an Amazon Aurora cluster**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources** and the Aurora resource ID that you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. To restore a resource, in the **Backups** pane, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. In the **Instance specifications** pane, accept the defaults or specify the options for the **DB engine**, **DB engine version**, and **Capacity type** settings. 
**Note**  
If **Serverless** capacity type is selected, a **Capacity settings** pane appears. Specify the options for the **Minimum Aurora capacity unit** and **Maximum Aurora capacity unit** settings, or choose different options from the **Additional scaling configuration** section.

1. In the **Settings** pane, specify a name that is unique for all DB cluster instances owned by your AWS account in the current Region.

1. In the **Network & Security** pane, accept the defaults or specify the options for the **Virtual Private Cloud (VPC)**, **Subnet group**, and **Availability zone** settings. 

1. In the **Database options** pane, accept the defaults or specify the options for **Database port**, **DB cluster parameter group**, and **IAM DB Authentication Enabled** settings. 

1. In the **Backup** pane, accept the default or specify the option for the **Copy tags to snapshots** setting. 

1. In the **Backtrack** pane, accept the default or specify the options for the **Enable Backtrack** or **Disable Backtrack** settings. 

1. In the **Encryption** pane, accept the default or specify the options for the **Enable encryption** or **Disable encryption** settings. 

1. In the **Log exports** pane, choose the log types to publish to Amazon CloudWatch Logs. The **IAM role** is already defined. 

1. In the **Restore role** pane, choose the IAM role that AWS Backup will assume for this restore. 

1. After specifying all your settings, choose **Restore backup**.

   The **Restore jobs** pane appears. A message at the top of the page provides information about the restore job.

1. After your restore finishes, create an Amazon RDS instance and attach it to your restored Aurora cluster.

   Using the AWS CLI:
   + For Linux, macOS, or Unix:

     ```
     aws rds create-db-instance --db-instance-identifier sample-instance \
         --db-cluster-identifier sample-cluster --engine aurora-mysql --db-instance-class db.r4.large
     ```
   + For Windows:

     ```
     aws rds create-db-instance --db-instance-identifier sample-instance ^
         --db-cluster-identifier sample-cluster --engine aurora-mysql --db-instance-class db.r4.large
     ```

For information about continuous backups and restoring to a chosen point in time, see [Continuous backups and point-in-time recovery (PITR)](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html).

## Use the AWS Backup API, CLI, or SDK to restore Amazon Aurora recovery points
<a name="aur-restore-cli"></a>

Use [`StartRestoreJob`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html). The metadata you can include for a restore job depends on whether you are restoring a continuous backup to a point in time (PITR) or restoring a snapshot.

**Restore a cluster from a snapshot**  
You can specify the following metadata for an Aurora snapshot restore job. See [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html) in the *Amazon Relational Database Service API Reference* for additional information and accepted values.

```
// Required metadata:
dbClusterIdentifier // string
engine // string

// Optional metadata:          
availabilityZones // array of strings
backtrackWindow // long
copyTagsToSnapshot // Boolean
databaseName // string
dbClusterParameterGroupName // string
dbSubnetGroupName // string
enableCloudwatchLogsExports // array of strings
enableIAMDatabaseAuthentication // Boolean
engineMode // string
engineVersion // string
kmsKeyId // string
optionGroupName // string
port // integer
scalingConfiguration // object
vpcSecurityGroupIds // array of strings
```

Example:

```
"restoreMetadata":"{\"EngineVersion\":\"5.6.10a\",\"KmsKeyId\":\"arn:aws:kms:us-east-1:234567890123:key/45678901-ab23-4567-8cd9-012d345e6f7\",\"EngineMode\":\"serverless\",\"AvailabilityZones\":\"[\\\"us-east-1b\\\",\\\"us-east-1e\\\",\\\"us-east-1c\\\"]\",\"Port\":\"3306\",\"DatabaseName\":\"\",\"DBSubnetGroupName\":\"default-vpc-05a3b07cf6e193e1g\",\"VpcSecurityGroupIds\":\"[\\\"sg-012d52c68c6e88f00\\\"]\",\"ScalingConfiguration\":\"{\\\"MinCapacity\\\":2,\\\"MaxCapacity\\\":64,\\\"AutoPause\\\":true,\\\"SecondsUntilAutoPause\\\":300,\\\"TimeoutAction\\\":\\\"RollbackCapacityChange\\\"}\",\"EnableIAMDatabaseAuthentication\":\"false\",\"DBClusterParameterGroupName\":\"default.aurora5.6\",\"CopyTagsToSnapshot\":\"true\",\"Engine\":\"aurora\",\"EnableCloudwatchLogsExports\":\"[]\"}"
```
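
The escaped string above is easier to produce than to type by hand: JSON-encode each array- or object-valued field, then JSON-encode the whole map. A minimal Python sketch with illustrative values:

```
import json

# Build the doubly escaped restoreMetadata string: inner arrays/objects are
# JSON-encoded first, then the whole metadata map is JSON-encoded again.
metadata = {
    "Engine": "aurora",
    "EngineMode": "serverless",
    "Port": "3306",
    "AvailabilityZones": json.dumps(["us-east-1b", "us-east-1e", "us-east-1c"]),
    "ScalingConfiguration": json.dumps({"MinCapacity": 2, "MaxCapacity": 64}),
}
restore_metadata = json.dumps(metadata)

# Round-trip to verify the escaping:
decoded = json.loads(restore_metadata)
print(json.loads(decoded["AvailabilityZones"]))  # prints ['us-east-1b', 'us-east-1e', 'us-east-1c']
```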

**Restore a cluster to a point in time (PITR)**  
You can specify the following metadata when you want to restore an Aurora continuous backup (recovery point) to a specific point in time (PITR). See [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html) in the *Amazon Relational Database Service API Reference* for additional information and accepted values.

```
// Required metadata:
dbClusterIdentifier // string
engine // string
restoreToTime // timestamp; must be specified if UseLatestRestorableTime parameter isn't provided

// Optional metadata:          
backtrackWindow // long
copyTagsToSnapshot // Boolean
dbClusterParameterGroupName // string
dbSubnetGroupName // string
enableCloudwatchLogsExports // array of strings
enableIAMDatabaseAuthentication // Boolean
engineMode // string
engineVersion // string
kmsKeyId // string
optionGroupName // string
port // integer
scalingConfiguration // object
vpcSecurityGroupIds // array of strings
```
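
For a PITR restore, the metadata must carry either `restoreToTime` or `UseLatestRestorableTime`. The following Python sketch assembles PITR metadata with placeholder identifiers; the ISO 8601 timestamp format is an assumption, so consult the `RestoreDBClusterToPointInTime` reference above for accepted values:

```
from datetime import datetime, timezone

# Sketch: metadata for an Aurora PITR restore (placeholder identifiers).
restore_to = datetime(2023, 5, 22, 12, 0, 0, tzinfo=timezone.utc)
metadata = {
    "DbClusterIdentifier": "restored-cluster",
    "Engine": "aurora-mysql",
    # Omit RestoreToTime if you pass UseLatestRestorableTime instead.
    "RestoreToTime": restore_to.isoformat(),
}
print(metadata["RestoreToTime"])  # prints 2023-05-22T12:00:00+00:00
```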

# Amazon Aurora DSQL restore
<a name="restore-auroradsql"></a>

**Topics**
+ [Overview](#restore-auroradsql-overview)
+ [Restore Aurora DSQL single Region cluster](#restore-auroradsql-singleregion)
+ [Restore an Aurora DSQL multi-Region cluster](#restore-auroradsql-multiregion)
+ [Troubleshoot Aurora DSQL restore issues](#restore-auroradsql-troubleshoot)
+ [Aurora DSQL restore frequently asked questions](#restore-auroradsql-faq)

## Overview
<a name="restore-auroradsql-overview"></a>

To restore an Amazon Aurora DSQL single-Region cluster, use the AWS Backup console or CLI to select the recovery point (backup) you wish to restore. To restore an Aurora DSQL multi-Region cluster, you can use either the AWS Backup console or CLI.

For a single-Region restore, specify the name, cluster encryption, and deletion protection settings, then initiate the restore to a newly created cluster.

For multi-Region restore, you'll need to specify additional parameters including a witness Region, peer Region(s), and regional configuration settings. Multi-Region restore creates a cluster that spans multiple AWS Regions, providing enhanced availability and disaster recovery capabilities.

## Restore Aurora DSQL single Region cluster
<a name="restore-auroradsql-singleregion"></a>

You can restore an Aurora DSQL cluster to a single Region by using the AWS Backup console or AWS CLI.

------
#### [ Console ]


1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Choose **Restore** next to the recovery point you wish to restore.

1. Configure the settings for the new cluster to which your recovery point will be restored.

   1. By default, the AWS managed key is used to encrypt the restored data. You can alternatively specify a different key.

   1. [Deletion protection](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_DeleteCluster.html#USER_DeletionProtection) for your Aurora clusters is enabled by default. Clear the check box to turn off this option.

1. Review the settings; when they are satisfactory, choose **Restore backup**.

AWS Backup will create a new Aurora DSQL cluster.

------
#### [ AWS CLI ]

**Single Region restore**

1. Use the CLI command `aws backup start-restore-job` to restore an Aurora cluster from the specified recovery point.

1. Include the necessary metadata for the restore job. Example:  
**Example**  

   ```
   aws backup start-restore-job \
       --recovery-point-arn "arn:aws:dsql:us-east-1:123456789012:cluster/example-cluster/backup/example-backup" \
       --iam-role-arn "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole" \
       --metadata '{"regionalConfig":"[{\"region\":\"us-east-1\",\"isDeletionProtectionEnabled\":true,\"kmsKeyId\":\"my_key\"}]"}' \
       --copy-source-tags-to-restored-resource
   ```

------

## Restore an Aurora DSQL multi-Region cluster
<a name="restore-auroradsql-multiregion"></a>

Aurora DSQL multi-Region cluster restore occurs within a closed Region triplet, which is a group of three AWS Regions peers. Multi-Region restore requires that the Regions you specify in the operation are contained in one triplet. For more information about multi-Region clusters, see [Configuring multi-Region clusters](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/configuring-multi-region-clusters.html).

Triplets from the following groups are supported. Where a group contains more than three Regions, choose three from the same group.
+ US East (N. Virginia); US East (Ohio); US West (N. California)
+ Europe (Ireland); Europe (London); Europe (Paris); Europe (Frankfurt)
+ Asia Pacific (Tokyo); Asia Pacific (Seoul); Asia Pacific (Osaka)

To complete multi-Region restore, ensure you have the following permissions:
+ `backup:StartRestoreJob`
+ `dsql:UpdateCluster`
+ `dsql:AddPeerCluster`
+ `dsql:RemovePeerCluster`

You can restore a backup of an Aurora DSQL cluster to multiple Regions using either the AWS Backup console or CLI commands.

**Tip**  
If you have a backup plan with a rule that automatically creates a cross-Region copy to one of the indicated Regions, the created copy can be used for this multi-Region restore.

Multi-Region restore starts with your current Region. You will also need a:
+ Peer Region with an identical cross-Region copy of the recovery point in your current Region
+ Witness Region, a designated AWS Region that participates in multi-Region cluster configurations by supporting transaction log-only writes without consuming storage for the actual data. For more information about witness Regions, see [Creating a multi-Region cluster](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/getting-started.html#getting-started-multi-region).

The individual steps are shown below:

------
#### [ Console ]

The AWS Backup console now supports multi-Region restore for Aurora DSQL clusters, providing a streamlined process for creating clusters that span multiple Regions. For more information about multi-Region clusters, see [Configuring multi-Region clusters](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/configuring-multi-region-clusters.html).

1. Sign in to the AWS Management Console and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup vaults**.

1. Choose the backup vault that contains the Aurora DSQL recovery point you want to restore.

1. Select the recovery point you want to restore, then choose **Restore**.

1. On the restore page, under **Restore options**, select **Add peer Regions** to enable multi-Region restore.

1. Select a **Peer cluster Region** from the dropdown menu. This Region must be within the same triplet as your current Region and must contain a cross-Region copy of the recovery point in the current (first) Region.

1. Select a **Witness Region** from the dropdown menu. This Region must also be within the same triplet.

1. Configure the **Cluster settings** for both the primary and peer Region clusters:

   1. For the primary cluster, configure:
      + **Cluster encryption** (optional): Select a KMS key for encryption.
      + **Deletion protection**: Enable or disable deletion protection.

   1. For the peer Region cluster, configure:
      + **Peer Region cluster encryption** (optional): Select a KMS key for encryption.
      + **Peer Region cluster deletion protection**: Enable or disable deletion protection.

1. Review your settings and choose **Restore backup**.

1. The console will initiate the multi-Region restore process, which creates clusters in both Regions and automatically links them together.

------
#### [ AWS CLI ]

Multi-Region restore can now be achieved using the new orchestrated restore metadata with AWS Backup CLI commands. This approach simplifies the process by handling the cluster linking automatically. For more information about creating multi-Region clusters programmatically, see [Configuring multi-Region clusters](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/configuring-multi-region-clusters.html) in the Aurora DSQL User Guide.

**Important**  
Both the primary cluster and peer cluster must be in Regions within the same group. The operation will fail if the clusters are in Regions outside the group. Supported groups include:  
US East (N. Virginia); US East (Ohio); US West (N. California)
Europe (Ireland); Europe (London); Europe (Paris); Europe (Frankfurt)
Asia Pacific (Tokyo); Asia Pacific (Seoul); Asia Pacific (Osaka)

**Multi-Region restore through AWS CLI using orchestrated restore metadata**

1. Create a restore job using the CLI command `aws backup start-restore-job` with the new multi-Region orchestration metadata:  
**Example**  

   ```
   aws backup start-restore-job \
   --recovery-point-arn "arn:aws:backup:us-east-1:123456789012:recovery-point:abcd1234" \
   --iam-role-arn "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole" \
   --metadata '{
       "witnessRegion":"us-west-1",
       "useMultiRegionOrchestration":"true",
       "peerRegion":"[\"us-east-2\"]",
       "regionalConfig":"[{\"region\":\"us-east-1\",\"isDeletionProtectionEnabled\":true,\"kmsKeyId\":\"arn:aws:kms:us-east-1:123456789012:key/ba4b3773-4bb8-4a7a-994c-46ede70202f5\"},{\"region\":\"us-east-2\",\"isDeletionProtectionEnabled\":true,\"kmsKeyId\":\"arn:aws:kms:us-east-2:123456789012:key/ba4b3773-4bb8-4a7a-994c-46ede70202f5\"}]"
   }' \
   --copy-source-tags-to-restored-resource
   ```

   The metadata structure includes:
   + `witnessRegion`: The Region that will serve as the witness for the multi-Region cluster. For more information, see [Resilience in Amazon Aurora DSQL](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/disaster-recovery-resiliency.html).
   + `useMultiRegionOrchestration`: Set to `true` to enable multi-Region orchestration.
   + `peerRegion`: An array containing the Region(s) with peer clusters in the multi-Region cluster. For more information, see [MultiRegionProperties](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_MultiRegionProperties.html) in the Aurora DSQL API Reference.
   + `regionalConfig`: An array containing configuration for each Region:
     + `region`: The AWS Region identifier.
     + `isDeletionProtectionEnabled`: Boolean flag to enable/disable deletion protection. For more information, see [CreateCluster](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_CreateCluster.html#API_CreateCluster_RequestSyntax) in the Aurora DSQL API Reference.
     + `kmsKeyId`: The KMS key ARN for encryption (optional).

     If `regionalConfig` properties are not specified, then default values will be applied: default encryption and `isDeletionProtectionEnabled` = `TRUE`.

1. Monitor the restore job status using the `aws backup describe-restore-job` command:

   ```
   aws backup describe-restore-job --restore-job-id job-12345678
   ```

1. Once the restore job completes, you can verify the multi-Region cluster configuration using the Aurora DSQL CLI:

   ```
   aws dsql describe-cluster --cluster-identifier your-cluster-id
   ```

   For more information about multi-Region cluster operations, see [UpdateCluster](https://docs.aws.amazon.com/aurora-dsql/latest/APIReference/API_UpdateCluster.html) in the Aurora DSQL API Reference.

------
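
The orchestrated restore metadata nests JSON arrays inside JSON strings, which is easy to get wrong by hand. The following Python sketch (placeholder Regions and KMS key ARNs) constructs the `--metadata` value programmatically:

```
import json

# Sketch: build the multi-Region orchestration metadata (placeholder values).
regional_config = [
    {"region": "us-east-1", "isDeletionProtectionEnabled": True,
     "kmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/example-key-id"},
    {"region": "us-east-2", "isDeletionProtectionEnabled": True,
     "kmsKeyId": "arn:aws:kms:us-east-2:123456789012:key/example-key-id"},
]
metadata = {
    "witnessRegion": "us-west-1",
    "useMultiRegionOrchestration": "true",
    "peerRegion": json.dumps(["us-east-2"]),        # array as a JSON string
    "regionalConfig": json.dumps(regional_config),  # array as a JSON string
}
# json.dumps(metadata) is the value to pass as --metadata to start-restore-job.
print(json.loads(metadata["peerRegion"]))  # prints ['us-east-2']
```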

## Troubleshoot Aurora DSQL restore issues
<a name="restore-auroradsql-troubleshoot"></a>

**Error:** Insufficient permissions

**Possible cause:** If you try to copy an Aurora DSQL recovery point into an account (cross-account copy) that has never interacted with the DSQL API, you may get a permissions error because the DSQL service-linked role isn't set up in the destination account.

**Remedy:** Attach the [DSQL managed policy](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/working-with-service-linked-roles.html) that includes the DSQL service-linked role, [AuroraDsqlServiceLinkedRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AuroraDsqlServiceLinkedRolePolicy.html), to a role in the destination account.

If you encounter any other issues with the backup or restore process, you can check the status of your backup and restore jobs in the AWS Backup console or using the AWS CLI. Additionally, you can review the AWS CloudTrail logs for any relevant error messages or events related to your AWS Backup operations.

## Aurora DSQL restore frequently asked questions
<a name="restore-auroradsql-faq"></a>

1. *"Can I use AWS Backup for Aurora DSQL from the Aurora DSQL console?"*

   No, you can only perform backups and restores, as well as manage backups, from the AWS Backup console, SDK, or CLI.

   

1. *"What backup granularity is available for Aurora DSQL? Can I back up specific tables or databases in my cluster?"*

   You can only back up and restore a whole Aurora DSQL cluster. 

   

1. *"Are backups of Aurora DSQL full backups or incremental backups?"*

   Recovery points of Aurora DSQL clusters (backups) are full backups of your clusters.

   

1. *"Can I create backups for my Aurora DSQL multi-Region clusters?"*

   Yes, you can create backups for each cluster in a multi-Region cluster using the same steps as when you create a backup of a single cluster in a single Region.

   As a best practice, AWS Backup recommends creating a cross-Region copy of your backup in the other Region from which you plan to restore the multi-Region cluster, because multi-Region restore requires an identical copy of the same recovery point (*identical* in this operation means the recovery points have the same resource name and creation time).

   

1. *"Will my restored cluster overwrite my existing cluster?"*

   No. When you restore your Aurora DSQL data, AWS Backup creates a new cluster from your snapshots; the restored cluster won’t overwrite the source cluster.

   

# Restore CloudFormation stacks
<a name="restore-application-stacks"></a>

A CloudFormation composite backup is a combination of a CloudFormation template and all associated nested recovery points. Any number of nested recovery points can be restored, but the composite recovery point (which is the top-level recovery point) cannot be restored.

When you restore a CloudFormation template recovery point, you create a new stack with a change set to represent the backup.

## Restore CloudFormation with the AWS Backup console
<a name="restoring-stack-console"></a>

From the [CloudFormation console](https://console.aws.amazon.com/cloudformation/) you can see the new stack and change set. To learn more about change sets, see [ Updating stacks using change sets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html) in the *CloudFormation User Guide*.

Determine which nested recovery points you want to restore from with your CloudFormation stack, and then restore them using the AWS Backup console.

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Go to **Backup vaults**, choose the backup vault containing your desired recovery point, then choose **Recovery points**.

1. Restore the CloudFormation template recovery point.

   1. Choose the composite recovery point containing the nested recovery points you want to restore to open its details page.

   1. Under **Nested recovery points**, each nested recovery point is displayed with its recovery point ID, status, resource ID, resource type, backup type, and creation time. Choose the radio button next to the CloudFormation recovery point, then choose **Restore**. Ensure that you select the recovery point that has **resource type: CloudFormation** and **backup type: backup**.

1. Once the restore job for the CloudFormation template is completed, your restored CloudFormation template will be visible in the [CloudFormation console](https://console.aws.amazon.com/cloudformation/) under **Stacks**.

1. Under **Stack names** you should find the restored template with the status of `REVIEW_IN_PROGRESS`.

1. Choose the name of the stack to see its details.

1. There are tabs under the stack name. Choose **Change sets**.

1. Execute the change set.

1. After the change set runs, the resources in the original stack are recreated in the new stack. Stateful resources are recreated empty. To recover the stateful resources, return to the list of recovery points in the AWS Backup console, select the recovery point you need, and initiate a restore.

**Note**  
If a CloudFormation restore operation fails, the stack may remain in `REVIEW_IN_PROGRESS` status with a `FAILED` change set. Delete these stacks manually to avoid naming conflicts when you retry the restore operation.  
 For more information, see [Deleting a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html) in the *AWS CloudFormation User Guide*.

## Restore CloudFormation with AWS CLI
<a name="restoring-cfn-cli"></a>

In the command line interface, [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/start-restore-job.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/start-restore-job.html) allows you to restore a CloudFormation stack.

The following list is the accepted metadata to restore a CloudFormation resource.

```
// Mandatory metadata:
ChangeSetName // This is the name of the change set which will be created
StackName // This is the name of the stack that will be created by the new change set
        
// Optional metadata:
ChangeSetDescription // This is the description of the new change set
StackParameters // This is the JSON of the stack parameters required by the stack
aws:backup:request-id
```
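
As a sketch, the accepted metadata above can be assembled like this in Python. The stack and change set names are placeholders, and the `StackParameters` payload shown is a hypothetical shape — match it to the parameters your stack actually requires:

```
import json

# Sketch: metadata for restoring a CloudFormation template recovery point.
metadata = {
    "ChangeSetName": "restored-change-set",   # name of the change set to create
    "StackName": "restored-stack",            # name of the stack to create
    "ChangeSetDescription": "Change set created by an AWS Backup restore",
    # Hypothetical parameter payload, JSON-encoded as a string:
    "StackParameters": json.dumps([
        {"ParameterKey": "Environment", "ParameterValue": "test"}
    ]),
}
print(metadata["StackName"])  # prints restored-stack
```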

# Restoring a DocumentDB cluster
<a name="restoring-docdb"></a>

## Use the AWS Backup console to restore Amazon DocumentDB recovery points
<a name="docdb-restore-console"></a>

Restoring an Amazon DocumentDB cluster requires that you specify multiple restore options. For information about these options, see [Restoring from a Cluster Snapshot](https://docs.aws.amazon.com/documentdb/latest/developerguide/backup_restore-restore_from_snapshot.html) in the *Amazon DocumentDB Developer Guide*.

**To restore an Amazon DocumentDB cluster**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources** and the Amazon DocumentDB resource ID that you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. To restore a resource, in the **Backups** pane, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. Ensure you are on the console page **Restore Amazon DocumentDB cluster snapshots**.

1. For **Restore options**, you can configure the following:
   + **Engine version** - Select the DocumentDB engine version for the restored cluster.
**Note**  
Instance class and number of instances cannot be configured during the restore process. The restored DocumentDB cluster will use the default instance configuration. You can modify the instance class and add or remove instances after the restore completes by using the Amazon DocumentDB console or API.

1. In the **Settings** pane, input a unique name for your DB cluster identifier.

   You can use letters, numbers, and hyphens, though you cannot have two consecutive hyphens or end the name with a hyphen. The final name will be all lowercase.

1. In the **Database options** pane, select the database port.

   This is the TCP/IP port that the DB instance or cluster will use for application connections. The connection string of any application connecting to the DB instance or cluster must specify its port number. Both the security group applied to the DB instance or cluster and your organization firewalls must allow connections to the port. All DB instances in a DB cluster use the same port.

1. Also in the **Database options** pane, select the DB cluster parameter group.

   This is the parameter group associated with this instance's DB cluster. The DB cluster parameter group acts as a container for engine configuration values that are applied to every DB instance in the cluster.

1. In the **Encryption** pane, select the key that will be used to encrypt this database volume. The default is `aws/rds`. You may alternatively use a customer managed key (CMK).

1. In the **Log exports** pane, choose the log types to publish to Amazon CloudWatch Logs. The **IAM role** is already defined. 

1. In the **Restore role** pane, choose either the default IAM role for the restore job or a different IAM role.

1. In the **Protected resource tags** pane, you can optionally choose to copy tags from the backup to the restored database cluster.

1. After specifying all your settings, choose **Restore backup**.

   The **Restore jobs** pane appears. A message at the top of the page provides information about the restore job.

1. After your restore finishes, add an instance to your restored Amazon DocumentDB cluster so that it can accept connections.

## Use the AWS Backup API, CLI, or SDK to restore Amazon DocumentDB recovery points
<a name="docdb-restore-cli"></a>

First, restore your cluster. Use `[StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html)`. You can specify the following metadata during Amazon DocumentDB restores:

```
availabilityZones
backtrackWindow
copyTagsToSnapshot // Boolean 
databaseName // string 
dbClusterIdentifier // string 
dbClusterParameterGroupName // string 
dbSubnetGroupName // string 
enableCloudwatchLogsExports // string 
enableIAMDatabaseAuthentication // Boolean 
engine // string 
engineMode // string 
engineVersion // string 
kmsKeyId // string 
port // integer 
optionGroupName // string 
scalingConfiguration
vpcSecurityGroupIds // string
```
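As a concrete sketch, the commands below assemble the restore metadata for a cluster restore and validate it locally before submission. The ARNs, cluster identifier, engine version, and port are placeholders, not values from your account.

```
# Sketch only: all ARNs and identifiers below are placeholders.
METADATA='{"dbClusterIdentifier":"restored-docdb-cluster","engine":"docdb","engineVersion":"5.0.0","port":"27017"}'

# Validate the metadata JSON locally before starting the job.
printf '%s' "$METADATA" | python3 -m json.tool > /dev/null && echo "metadata OK"

# Uncomment to start the restore in your account:
# aws backup start-restore-job \
#     --recovery-point-arn "arn:aws:backup:us-east-1:123456789012:recovery-point:1a2b3c4d-example" \
#     --iam-role-arn "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole" \
#     --metadata "$METADATA"
```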

 Then, add an instance to your restored Amazon DocumentDB cluster using `create-db-instance`.
+ For Linux, macOS, or Unix:

  ```
  aws docdb create-db-instance --db-instance-identifier sample-instance \
                    --db-cluster-identifier sample-cluster --engine docdb --db-instance-class db.r5.large
  ```
+ For Windows:

  ```
  aws docdb create-db-instance --db-instance-identifier sample-instance ^ 
                    --db-cluster-identifier sample-cluster --engine docdb --db-instance-class db.r5.large
  ```

# Restore an Amazon DynamoDB table
<a name="restoring-dynamodb"></a>

## Use the AWS Backup console to restore DynamoDB recovery points
<a name="ddb-restore-console"></a>

**To restore a DynamoDB table**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources** and the DynamoDB resource ID you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. To restore a resource, in the **Backups** pane, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. For **Settings**, **New table name** text field, enter a new table name.

1. For **Restore role**, choose the IAM role that AWS Backup will assume for this restore.

1. For **Encryption settings**:

   1. If your backup is managed by DynamoDB (its ARN begins with `arn:aws:dynamodb`), AWS Backup encrypts your restored table using an AWS-owned key.

      To choose a different key to encrypt your restored table, you can either use the AWS Backup [StartRestoreJob operation](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html) or perform the restore from the [DynamoDB console](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Restore.Tutorial.html#restoretable_console).

   1. If your backup supports full AWS Backup management (its ARN begins with `arn:aws:backup`), you can choose any of the following encryption options to protect your restored table:
      + (Default) DynamoDB-owned KMS key (no additional charge for encryption)
      + DynamoDB-managed KMS key (KMS charges apply)
      + Customer-managed KMS key (KMS charges apply)

      "DynamoDB-owned" and "DynamoDB-managed" keys are the same as "AWS-owned" and "AWS-managed" keys, respectively. For clarification, see [Encryption at Rest: How It Works](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/encryption.howitworks.html) in the *Amazon DynamoDB Developer Guide*.

      For more information about full AWS Backup management, see [Advanced DynamoDB backup](advanced-ddb-backup.md).
**Note**  
The following guidance applies only if you restore a copied backup AND want to encrypt the restored table with the same key you used to encrypt your original table.  
When restoring a cross-Region backup, to encrypt your restored table using the same key you used to encrypt your original table, your key must be a multi-Region key. AWS-owned and AWS-managed keys are not multi-Region keys. To learn more, see [Multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.  
When restoring a cross-account backup, to encrypt your restored table using the same key you used to encrypt your original table, you must share the key in your source account with your destination account. AWS-owned and AWS-managed keys cannot be shared between accounts. To learn more, see [Allowing users in other accounts to use a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the *AWS Key Management Service Developer Guide*.

1. Choose **Restore backup**.

   The **Restore jobs** pane appears. A message at the top of the page provides information about the restore job.

## Use the AWS Backup API, CLI, or SDK to restore DynamoDB recovery points
<a name="ddb-restore-cli"></a>

Use `[StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html)`. You can specify the following metadata during any DynamoDB restore. The metadata is not case-sensitive.

```
targetTableName
encryptionType
kmsMasterKeyArn
aws:backup:request-id
```

The following is an example of the `restoreMetadata` argument for a `StartRestoreJob` operation in the CLI:

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:backup:us-east-1:123456789012:recovery-point:abcdef12-g3hi-4567-8cjk-012345678901" \
--iam-role-arn "arn:aws:iam::123456789012:role/YourIamRole" \
--metadata 'TargetTableName=TestRestoreTestTable,EncryptionType=KMS,kmsMasterKeyArn=arn:aws:kms:us-east-1:123456789012:key/abcdefg' \
--region us-east-1 \
--endpoint-url https://endpointurl.com
```

The preceding example encrypts the restored table using a customer-managed key.

To encrypt your restored table using an AWS-owned key, specify the following restore metadata: `\"encryptionType\":\"Default\"`.

To encrypt your restored table using an AWS-managed key, omit the `kmsMasterKeyArn` parameter and specify the following restore metadata: `\"encryptionType\":\"KMS\"`.

To encrypt your restored table using a customer-managed key, specify the following restore metadata: `\"encryptionType\":\"KMS\",\"kmsMasterKeyArn\":\"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"`.
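Shown unescaped as shell strings, the three variants above might look like the following sketch; the table name and key ARN are placeholders.

```
# Placeholder table name and key ARN; one METADATA value per encryption option.

# AWS-owned key:
METADATA_DEFAULT='{"targetTableName":"RestoredTable","encryptionType":"Default"}'

# AWS-managed key (kmsMasterKeyArn omitted):
METADATA_AWS_MANAGED='{"targetTableName":"RestoredTable","encryptionType":"KMS"}'

# Customer-managed key:
METADATA_CMK='{"targetTableName":"RestoredTable","encryptionType":"KMS","kmsMasterKeyArn":"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"}'

# Validate each variant locally before passing it to start-restore-job.
for m in "$METADATA_DEFAULT" "$METADATA_AWS_MANAGED" "$METADATA_CMK"; do
    printf '%s' "$m" | python3 -m json.tool > /dev/null || exit 1
done
echo "all metadata variants OK"
```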

# Restore an Amazon EBS volume
<a name="restoring-ebs"></a>

When you restore an Amazon Elastic Block Store (Amazon EBS) snapshot, you can choose to restore it as an EBS volume, restore it to an AWS Storage Gateway volume, or restore selected items from it to an Amazon S3 bucket.

## Restore to an EBS volume
<a name="restore-to-ebs-volume"></a>

When you restore a snapshot (a periodic backup of EBS data) to a new volume, you specify the volume type, the size in GiB, and an Availability Zone. You can optionally encrypt the new volume with an existing or new AWS KMS key.

## Restore to a gateway volume
<a name="restore-to-gateway-volume"></a>

When you restore to a gateway volume, you specify a gateway in a reachable state and choose your iSCSI target name. You also choose a disk ID if your gateway is volume stored, or a capacity equal to or greater than your snapshot if your gateway is volume cached.

## File level restore to an Amazon S3 bucket
<a name="restore-to-s3-bucket"></a>

Prior to starting a restore job of EBS resources to an Amazon S3 bucket, review [EBS permissions](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForRestores.html) and [Amazon S3 restore permissions](restoring-s3.md#s3-restore-permissions) for access requirements. The necessary permissions are contained in the managed policy [AWSBackupServiceRolePolicyForItemRestores](security-iam-awsmanpol.md#AWSBackupServiceRolePolicyForItemRestores) and should be included in the [IAM role](authentication.md) used for the restore operation.

All new object uploads to an S3 bucket, including restored data, are automatically encrypted. When you choose this type of restore, specify SSE-S3 (server-side encryption with Amazon S3 managed keys) or SSE-KMS (server-side encryption with AWS KMS managed keys). SSE-S3 is the default.

You can input up to five paths when restoring from the AWS Backup console; you can specify more paths through the command line. A path must be less than 1,024 bytes in UTF-8 encoding, including the user-designated and AWS Backup-designated prefixes.

If your snapshot contains multiple partitions, specify the file system identifier of the partition that contains the data you plan to restore. You can find this identifier using [Backup search](backup-search.md); it is the same as the UUID or file system disk ID.


|  | To new EBS volume | To gateway | File level restore to S3 bucket | 
| --- | --- | --- | --- | 
| Encryption | Optional. You can choose an existing AWS KMS key or create a new KMS key. |  | Required. Choose from SSE-S3, SSE-KMS, or the default destination bucket encryption¹. | 
| Permissions and roles | Choose an existing role; if none exists, a default role with the correct permissions is created. | Choose an existing role; if none exists, a default role with the correct permissions is created. | The chosen role must have sufficient [EBS](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForRestores.html) and [Amazon S3 restore permissions](restoring-s3.md#s3-restore-permissions). | 
| Restore from cold storage (EBS Archive Tier) | Available | Unavailable | Unavailable | 
| Settings to specify | Volume type; size (GiB); Availability Zone; throughput | Gateway (in a reachable state); iSCSI target name; disk ID (for volume stored gateways); capacity (for volume cached gateways) | Restore type, including: destination bucket name; path(s) to restore; encryption type; file level restore KMS key ID if SSE-KMS is set as the encryption type | 

¹In the AWS Backup console, you select one of the three encryption options; if you use the CLI to restore, omit `encryptionType` to use the default destination bucket encryption.

## Restore an EBS snapshot with the AWS Backup console
<a name="ebs-restore-console"></a>

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources** and then choose the EBS resource ID you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. To restore a resource, in the **Backups** pane, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. Specify the restore parameters for your resource. The restore parameters you enter are specific to the resource type that you selected.

   For **Resource type**, choose the AWS resource to create when restoring this backup.

1. If you choose **EBS volume**, provide the values for **Volume type** and **Size (GiB)**, and choose an **Availability zone**. After **Throughput**, there is an optional **Encrypt this volume** check box. *This option remains selected if the EBS recovery point is encrypted.* You can specify an existing KMS key or create a new AWS KMS key.

   If you choose **Storage Gateway volume**, choose a **Gateway** in a reachable state. Also choose your **iSCSI target name**. For *Volume stored* gateways, choose a **Disk Id**. For *Volume cached* gateways, choose a capacity that is at least as large as your protected resource.

   If you choose **file level restore**, you can include up to 5 objects or folders from the snapshot. You can [search your indexed backups](backup-search.md) to find the file name or path.
   + Input the file paths.
   + Choose to use an existing Amazon S3 bucket or create a new bucket for the destination where the objects or folders will be restored.
   + Set the encryption of the restored object(s). You can choose the default destination bucket encryption, SSE-S3, or SSE-KMS. For additional detail, see [Restore S3 data using AWS Backup](restoring-s3.md).

1. For **Restore role**, choose the IAM role that AWS Backup will assume for this restore. If the AWS Backup default role is not present in your account, a **Default role** is created for you with the correct permissions. You can delete this default role or make it unusable.

1. Choose **Restore backup** (**Restore items** is displayed for file level restore).

   The **Restore jobs** pane appears. A message at the top of the page provides information about the restore job.

### Restore from archived EBS snapshots
<a name="restore-archived-ebs"></a>

Restoring an archived EBS snapshot moves it from cold to warm storage temporarily to create a new EBS volume. This type of restore incurs a one-time retrieval charge. Storage costs for both warm and cold storage are billed during this restore period.

**Tip**  
EBS volumes in cold storage can't be restored to a gateway volume or be restored at the file level. 

You can restore an archived EBS snapshot in cold storage by using the [AWS Backup console](https://console.aws.amazon.com/backup/) or the command line. A restore from cold storage can take up to 72 hours. For more information, see [Archive Amazon EBS snapshots](https://docs.aws.amazon.com/ebs/latest/userguide/snapshot-archive.html) in the *Amazon EBS User Guide*.

------
#### [ Console ]

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Navigate to **Backup vaults** > *Vault* > **Restore archived EBS snapshot**.

1. In the **Settings** section, input a value from 0 to 180, inclusive, that specifies the number of days to temporarily restore an archived snapshot.

1. Input other settings: volume type, size, IOPS, availability zone, throughput, and encryption.

1. Choose your **restore role**.

1. Select **Restore backup**. On the confirmation pop-up, confirm the snapshots and restore type. Then, select **Restore snapshot**.

------
#### [ AWS CLI ]

1. Use the [start-restore-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/start-restore-job.html) command.

1. Include the required parameters for archived EBS snapshot restore:

   ```
   --recovery-point-arn arn:aws:backup:region:account-id:recovery-point:recovery-point-id
   --metadata '{"temporaryRestoreDays":"value","volumeType":"value","volumeSize":"value","availabilityZone":"value"}'
   --iam-role-arn arn:aws:iam::account-id:role/service-role/AWSBackupDefaultServiceRole
   --resource-type EBS
   ```

1. Specify the temporary restore duration (0-180 days) in the `temporaryRestoreDays` parameter. This determines how long the archived snapshot will be available in warm storage.

1. Configure the new EBS volume settings including `volumeType` (gp2, gp3, io1, io2, st1, sc1), `volumeSize` in GiB, and target `availabilityZone`.

1. Monitor the restore job status using [describe-restore-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/describe-restore-job.html) with the returned restore job ID. Archive restores can take up to 72 hours to complete.
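Assembled into one command, the steps above might look like the following sketch. The Region, account ID, recovery point ID, and volume settings are placeholders.

```
# Sketch: restore an archived EBS snapshot to warm storage for 30 days.
# All identifiers below are placeholders.
METADATA='{"temporaryRestoreDays":"30","volumeType":"gp3","volumeSize":"100","availabilityZone":"us-east-1a"}'

# Validate the metadata JSON locally before starting the job.
printf '%s' "$METADATA" | python3 -m json.tool > /dev/null && echo "metadata OK"

# Uncomment to run against your account:
# aws backup start-restore-job \
#     --recovery-point-arn "arn:aws:backup:us-east-1:123456789012:recovery-point:1a2b3c4d-example" \
#     --metadata "$METADATA" \
#     --iam-role-arn "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole" \
#     --resource-type EBS
```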

------

## Restore an EBS snapshot by AWS CLI
<a name="ebs-restore-cli"></a>

To restore Amazon EBS using the API or CLI, use `[StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html)`. You can specify the following metadata during an Amazon EBS restore:

```
aws:backup:request-id
availabilityZone
encrypted // if set to true, encryption will be enabled as volume is restored
iops
kmsKeyId // if included, this key will be used to encrypt the restored volume instead of default KMS Key Id
restoreType // include for file level restore - see details below
throughput
temporaryRestoreDays
volumeType
volumeSize
```

Example:

```
"restoreMetadata": "{\"encrypted\":\"false\",\"volumeId\":\"vol-04cc95f3490b5ceea\",\"availabilityZone\":null}"
```

**File level restore specifications**

`restoreType` is required for file level restore. For this type of restore, the following unique metadata is required:

```
destinationBucketName // destination S3 bucket for the restored objects or folders
pathsToRestore // path(s) of the files or folders to restore
encryptionType // You can specify SSE-S3 or SSE-KMS; do not include if you want to restore to default encryption
kmsKeyId // include if SSE-KMS is set as the encryption type
```

The file system identifier is optional for single-partition snapshots. If this information is not passed, just the absolute path without the ":" separator (such as `{"/data/process/abc.txt", "/data/department/xyz.txt"}`) is accepted.
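As a sketch, metadata for a file level restore of the two example paths above to an S3 bucket might look like the following. The bucket name is a placeholder, and the exact `restoreType` value and `pathsToRestore` encoding shown here are assumptions; check the `StartRestoreJob` reference for the accepted values.

```
# Hypothetical values: the bucket name, restoreType value, and path encoding
# below are assumptions, not documented constants.
METADATA='{"restoreType":"ItemLevelRestoreToS3","destinationBucketName":"amzn-s3-demo-bucket","pathsToRestore":"[\"/data/process/abc.txt\",\"/data/department/xyz.txt\"]"}'

# Validate the metadata JSON locally before passing it to start-restore-job.
printf '%s' "$METADATA" | python3 -m json.tool > /dev/null && echo "metadata OK"
```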

# Restore an Amazon EC2 instance
<a name="restoring-ec2"></a>

When you restore an EC2 instance, AWS Backup creates an Amazon Machine Image (AMI), an instance, the Amazon EBS root volume, Amazon EBS data volumes (if the protected resource had data volumes), and Amazon EBS snapshots. You can customize some instance settings using the AWS Backup console, or a larger number of settings using the AWS CLI or an AWS SDK.

The following considerations apply to restoring EC2 instances:
+ AWS Backup configures the restored instance to use the same key pair that the protected resource used originally. You can't specify a different key pair for the restored instance during the restore process.
+ AWS Backup does not back up and restore user-data that is used while launching an Amazon EC2 instance.
+ When configuring the restored instance, you can choose between using the same instance profile that the protected resource used originally or launching without an instance profile. This is to prevent the possibility of privilege escalations. You can update the instance profile for the restored instance using the Amazon EC2 console.

  If you use the original instance profile, you must grant AWS Backup the following permissions, where the resource ARN is the ARN of the IAM role associated with the instance profile.

  ```
  {
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "arn:aws:iam::account-id:role/role-name"
  },
  ```

  Replace *role-name* with the name of the EC2 instance profile role that will be attached to the restored EC2 instance. This is not the AWS Backup service role, but rather the IAM role that provides permissions to applications running on the EC2 instance.
+ During a restore, all Amazon EC2 quotas and configuration restrictions apply.
+ If the vault containing your Amazon EC2 recovery points has a vault lock, see [Additional security considerations](vault-lock.md#using-vault-lock-with-backup) for more information.
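The `iam:PassRole` statement shown earlier is a fragment; embedded in a complete policy document it might look like the following sketch, where the account ID and role name are placeholders.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/MyEc2InstanceProfileRole"
    }
  ]
}
```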

## Use the AWS Backup console to restore Amazon EC2 recovery points
<a name="restoring-ec2-console"></a>

You can restore an entire Amazon EC2 instance from a single recovery point, including the root volume, data volumes, and some instance configuration settings, such as the instance type and key pair.

**To restore Amazon EC2 resources using the AWS Backup console**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**, then choose the ID of the Amazon EC2 resource to open the resource details page.

1. In the **Recovery points** pane, choose the radio button next to the ID of the recovery point to restore. In the upper-right corner of the pane, choose **Restore**.

1. In the **Network settings** pane, we use the settings from the protected instance to select the default values for the instance type, VPC, subnet, security group, and instance IAM role. You can use these default values or change them as needed.

1. In the **Restore role** pane, use the **Default role** or use **Choose an IAM role** to specify an IAM role that grants AWS Backup permission to restore the backup.

1. In the **Protected resource tags** pane, we select **Copy tags from the protected resource to the restored resource** by default. If you do not want to copy these tags, clear the check box.

1. In the **Advanced settings** pane, accept the default values for the instance settings or change them as needed. For information about these settings, choose **Info** for the setting to open its help pane.

1. When you are finished configuring the instance, choose **Restore backup**.

## Restore Amazon EC2 with AWS CLI
<a name="restoring-ec2-cli"></a>

In the command line interface, [start-restore-job](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/start-restore-job.html) allows you to restore with up to 32 parameters (including some parameters that are not customizable through the AWS Backup console).

The following list is the accepted metadata you can pass to restore an Amazon EC2 recovery point.

```
InstanceType
KeyName
SubnetId
Architecture
EnaSupport
SecurityGroupIds
IamInstanceProfileName
CpuOptions
InstanceInitiatedShutdownBehavior
HibernationOptions
DisableApiTermination
CreditSpecification
Placement
RootDeviceType
RamdiskId
KernelId
UserData
Monitoring
NetworkInterfaces
ElasticGpuSpecification
CapacityReservationSpecification
InstanceMarketOptions
LicenseSpecifications
EbsOptimized
VirtualizationType
Platform
RequireIMDSv2
BlockDeviceMappings
aws:backup:request-id
```
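For example, a minimal EC2 restore that overrides just the instance type and subnet might use metadata like the following sketch; both values are placeholders.

```
# Placeholders throughout; only two of the accepted keys are shown.
METADATA='{"InstanceType":"t3.micro","SubnetId":"subnet-0abcd1234example"}'

# Validate the metadata JSON locally before starting the job.
printf '%s' "$METADATA" | python3 -m json.tool > /dev/null && echo "metadata OK"

# Uncomment to run against your account:
# aws backup start-restore-job \
#     --recovery-point-arn "arn:aws:backup:us-east-1:123456789012:recovery-point:1a2b3c4d-example" \
#     --iam-role-arn "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole" \
#     --metadata "$METADATA"
```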

AWS Backup accepts the following information-only attributes. However, including them will not affect the restore:

```
vpcId
```

[BlockDeviceMappings](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-block-device-mapping.html#create-ami-bdm) is an optional parameter you can include. AWS Backup supports the following `BlockDeviceMappings` attributes.

**Note**  
`SnapshotId` and `OutpostArn` are not supported.

```
{
  "BlockDeviceMappings": [
    {
        "DeviceName" : string,
        "NoDevice" : string,
        "VirtualName" : string,
        "Ebs": {
            "DeleteOnTermination": boolean,
            "Iops": number,
            "VolumeSize": number,
            "VolumeType": string,
            "Throughput": number,
            "Encrypted": boolean,
            "KmsKeyId": string
        }
    }
  ]
}
```

For example:

```
{
  "BlockDeviceMappings": [
    {
      "DeviceName": "/def/tuvw",
      "Ebs": {
        "DeleteOnTermination": true,
        "Iops": 3000,
        "VolumeSize": 16,
        "VolumeType": "gp3",
        "Throughput": 125,
        "Encrypted": true,
        "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/ab3cde45-67f8-9g01-hi2j-3456klmno7p8"
      }
    },
    {
      "DeviceName": "/abc/xyz",
      "Ebs": {
        "DeleteOnTermination": false,
        "Iops": 3000,
        "VolumeSize": 16,
        "VolumeType": "gp3",
        "Throughput": 125,
        "Encrypted": false
      }
    }
  ]
}
```

You can also restore an Amazon EC2 instance without including any stored parameters. This option is available on the **Protected resource** tab on the AWS Backup console.

**Important**  
If you do not override the AWS KMS key in the `BlockDeviceMappings` when restoring from cross-account or cross-Region backups, your restore might fail. For more information, see [Troubleshoot Amazon EC2 instance restore issues](#restoring-ec2-troubleshooting).

## Troubleshoot Amazon EC2 instance restore issues
<a name="restoring-ec2-troubleshooting"></a>

**Topics**
+ [Cross-account restore failures](#cross-account-kms-issue)
+ [Cross-Region restore failures](#cross-region-kms-issue)

### Cross-account restore failures
<a name="cross-account-kms-issue"></a>

**Description:** Amazon EC2 instance restore fails when attempting to restore from a backup that is shared with your account.

**Possible issues:** Your account might not have access to the AWS KMS keys used to encrypt the source volumes in the sharing account. The KMS keys might not be shared with your account.

Or, the volumes attached to the source instance are unencrypted.

**Solution:** To resolve this issue, set the `encrypted` attribute to `true`, and do one of the following:
+ Override the KMS keys in the `BlockDeviceMappings` and specify a KMS key that you own in your account.
+ Request the owning account to grant you access to the KMS keys used to encrypt the volumes by updating the KMS key policy. For more information, see [Allow users in other accounts to use a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html).

### Cross-Region restore failures
<a name="cross-region-kms-issue"></a>

**Description:** Amazon EC2 instance restore fails when attempting to restore from a cross-Region backup.

**Issue:** The volumes in the backup might be encrypted with single-Region AWS KMS keys that are not available in the destination Region. Or, the volumes attached to the source instance are unencrypted.

**Solution:** To resolve this issue, set the `encrypted` attribute to `true`, and override the KMS key in the `BlockDeviceMappings` with a KMS key in the destination Region.

# Restore an Amazon EFS file system
<a name="restoring-efs"></a>

If you are restoring an Amazon Elastic File System (Amazon EFS) file system, you can perform a full restore or an item-level restore.

**Full Restore**

When you perform a full restore, the entire file system is restored.

AWS Backup does not support destructive restores with Amazon EFS. A destructive restore is when a restored file system deletes or overwrites the source or existing file system. Instead, AWS Backup restores your file system to a recovery directory off of the root directory. 

**Item-Level Restore**

When you perform an item-level restore, AWS Backup restores a specific file or directory. You must specify the path relative to the file system root. For example, if the file system is mounted to `/user/home/myname/efs` and the file path is `user/home/myname/efs/file1`, you enter **/file1**. Paths are case sensitive. Wildcard characters and regex strings are not supported. Your path may be different from what is in the host if the file system is mounted using an access point.

You can select up to 10 items when you use the console to perform an EFS restore. There is no item limit when you use the CLI to restore; however, there is a 200 KB limit on the length of the restore metadata that can be passed.

You can restore those items to either a new or existing file system. Either way, AWS Backup creates a new Amazon EFS directory (`aws-backup-restore_datetime`) off of the root directory to contain the items. The full hierarchy of the specified items is preserved in the recovery directory. For example, if directory A contains subdirectories B, C, and D, AWS Backup retains the hierarchical structure when A, B, C, and D are recovered. Regardless of whether you perform an Amazon EFS item-level restore to an existing file system or to a new file system, each restore attempt creates a new recovery directory off of the root directory to contain the restored files. If you attempt multiple restores for the same path, several directories containing the restored items might exist.

**Note**  
 If you only keep one weekly backup, you can only restore to the state of the file system at the time you took that backup. You can't restore to prior incremental backups.

## Use the AWS Backup console to restore an Amazon EFS recovery point
<a name="efs-restore-console"></a>

**To restore an Amazon EFS file system**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Your EFS backup vault receives the access policy `Deny` `backup:StartRestoreJob` upon creation. If you are restoring from your backup vault for the first time, you must change your access policy as follows.

   1. Choose **Backup vaults**.

   1. Choose the backup vault containing the recovery point you would like to restore.

   1. Scroll down to the vault **Access policy**.

   1. If present, delete `backup:StartRestoreJob` from the `Statement`. Do this by choosing **Edit**, deleting `backup:StartRestoreJob`, then choosing **Save policy**.

1. In the navigation pane, choose **Protected resources** and the EFS file system ID you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected file system ID is shown. To restore a file system, in the **Backups** pane, choose the radio button next to the recovery point ID of the file system. In the upper-right corner of the pane, choose **Restore**.

1. Specify the restore parameters for your file system. The restore parameters you enter are specific to the resource type that you selected. 

   You can perform a **Full restore**, which restores the entire file system. Or, you can restore specific files and directories using **Item-level restore**.
   + Choose the **Full restore** option to restore the file system in its entirety including all root level folders and files.
   + Choose the **Item-level restore** option to restore a specific file or directory. You can select and restore up to 10 items within your Amazon EFS file system.

     To restore a specific file or directory, you must specify the relative path related to the mount point. For example, if the file system is mounted to `/user/home/myname/efs` and the file path is `user/home/myname/efs/file1`, enter **/file1**. Paths are case sensitive and cannot contain special characters, wildcard characters, and regex strings. 

     1. In the **Item path** text box, enter the path for your file or folder.

      1. Choose **Add item** to add additional files or directories. You can select and restore up to 10 items within your EFS file system.

1. For **Restore location**
   + Choose **Restore to directory in source file system** if you want to restore to the source file system.
   + Choose **Restore to a new file system** if you want to restore to a different file system.

1. For **File system type**
   + (Recommended) Choose **Regional** if you want to restore your file system across multiple AWS Availability Zones.
   + Choose **One Zone** if you want to restore your file system to a single Availability Zone. Then, in the **Availability Zone** dropdown, choose the destination for your restore.

   For more information, see [Managing Amazon EFS storage classes](https://docs.aws.amazon.com/efs/latest/ug/storage-classes.html) in the *Amazon EFS User Guide.*

1. For **Performance**
   + If you chose to perform a **Regional** restore, choose either **(Recommended) General purpose** or **Max I/O**.
   + If you chose to perform a **One Zone** restore, you must choose **(Recommended) General purpose**. One Zone restores do not support **Max I/O**.

1. For **Enable encryption**:
   + Choose **Enable encryption** if you want to encrypt your file system. KMS key IDs and aliases appear in the list after they have been created using the AWS Key Management Service (AWS KMS) console.
   + In the **KMS key** text box, choose the key you want to use from the list.

1. For **Restore role**, choose the IAM role that AWS Backup will assume for this restore.
**Note**  
If the AWS Backup default role is not present in your account, a **Default role** is created for you with the correct permissions. You can delete this default role or make it unusable.

1. Choose **Restore backup**.

   The **Restore jobs** pane appears. A message at the top of the page provides information about the restore job.
**Note**  
 If you only keep one weekly backup, you can only restore to the state of the file system at the time you took that backup. You can't restore to prior incremental backups.

## Use the AWS Backup API, CLI, or SDK to restore Amazon EFS recovery points
<a name="efs-restore-cli"></a>

Use [StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html). When restoring an Amazon EFS instance, you can restore either an entire file system or specific files and directories. To restore Amazon EFS resources, you need the following information:
+ `file-system-id` — The ID of the Amazon EFS file system that is backed up by AWS Backup. Returned in `GetRecoveryPointRestoreMetadata`. This is not required when a **new** file system is restored (this value is ignored if parameter `newFileSystem` is `True`).
+ `Encrypted` — A Boolean value that, if true, specifies that the file system is encrypted. If `KmsKeyId` is specified, `Encrypted` must be set to `true`.
+ `KmsKeyId` — Specifies the AWS KMS key that is used to encrypt the restored file system.
+ `PerformanceMode` — Specifies the throughput mode of the file system. Valid values are `generalPurpose` (default) and `maxIO`. The `generalPurpose` mode provides the lowest latency per operation and can achieve up to 7,000 file operations per second. The `maxIO` mode can scale to higher levels of aggregate throughput and operations per second with a slightly higher latency for file operations.
+ `CreationToken` — A user-supplied value that ensures the uniqueness (idempotency) of the request.
+ `newFileSystem` — A Boolean value that, if true, specifies that the recovery point is restored to a new Amazon EFS file system.
+ `ItemsToRestore` — An array of up to five strings, where each string is a file path. Use `ItemsToRestore` to restore specific files or directories rather than the entire file system. This parameter is optional.

You may also include `aws:backup:request-id`.

One Zone restores can be performed by including the following parameters in the restore metadata:

```
"singleAzFilesystem": "true",
"availabilityZoneName": "ap-northeast-3a"
```

For more information about Amazon EFS configuration values, see [create-file-system](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/efs/create-file-system.html).
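Putting these parameters together, an item-level restore request might look like the following sketch. The file system ID, creation token, ARNs, and item paths are placeholders, and the string-encoded values follow the pattern used by the examples elsewhere in this guide; validating the `--metadata` JSON before submitting the job helps catch escaping mistakes.

```shell
# Hypothetical values -- substitute your own file system ID, paths, and ARNs.
METADATA='{
  "file-system-id": "fs-0123456789abcdef0",
  "Encrypted": "false",
  "PerformanceMode": "generalPurpose",
  "CreationToken": "efs-restore-2024-01-15",
  "newFileSystem": "false",
  "ItemsToRestore": "[\"/file1\",\"/dir1\"]"
}'

# Confirm the metadata parses as JSON before starting the job:
echo "$METADATA" | python3 -m json.tool > /dev/null && echo "metadata OK"

# With valid credentials, the job would then be started like this:
# aws backup start-restore-job \
#     --recovery-point-arn "arn:aws:backup:us-west-2:123456789012:recovery-point:abcdE-FGHij" \
#     --iam-role-arn "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole" \
#     --metadata "$METADATA"
```

For a One Zone restore, add `"singleAzFilesystem": "true"` and an `availabilityZoneName` entry to the same metadata map.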

## Disabling automatic backups in Amazon EFS
<a name="efs-backup-disable"></a>

By default, [Amazon EFS creates backups of data automatically](https://docs.aws.amazon.com/efs/latest/ug/awsbackup.html#automatic-backups). These backups are represented as recovery points in AWS Backup. Attempts to remove a recovery point will result in an error message stating that there are insufficient privileges to perform the action.

It is a best practice to keep automatic backups active. In the case of accidental data deletion in particular, this backup allows you to restore file system content to the date of the last recovery point created.

In the unlikely event that you want to turn automatic backups off, the access policy must be changed from `"Effect": "Deny"` to `"Effect": "Allow"`. See the *Amazon EFS User Guide* for more information about turning [automatic backups](https://docs.aws.amazon.com/efs/latest/ug/awsbackup.html#automatic-backups) on or off.
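As an illustration, the deny statement involved resembles the following sketch (the action list here is abbreviated and hypothetical; the authoritative policy is the one attached to the EFS automatic backup vault in your account):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": { "AWS": "*" },
      "Action": [
        "backup:DeleteRecoveryPoint",
        "backup:UpdateRecoveryPointLifecycle"
      ],
      "Resource": "*"
    }
  ]
}
```

Changing `"Effect": "Deny"` to `"Effect": "Allow"` in the vault's access policy lifts the restriction.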

# Restore an Amazon EKS cluster
<a name="restoring-eks"></a>

You can restore EKS cluster backups using the AWS Backup console or CLI. EKS backups are composite recovery points that include both EKS cluster state and persistent volume backups.

AWS Backup supports multiple restore experiences, including granular namespace-level restores. Restores are non-destructive and will not overwrite any existing Kubernetes objects in your target EKS cluster. Restores also do not change the Kubernetes version of the target EKS cluster.

EKS backups must be restored to a target EKS cluster, that is, an Amazon EKS cluster that has been pre-provisioned. As part of the restore workflow, you can instead opt to have AWS Backup create a new EKS cluster on your behalf.

**Note**  
AWS Backup will provide a limited set of options for creating a new EKS cluster as a part of a restore. For all EKS cluster creation functionality, customers can create a new EKS cluster using the [EKS Console](https://console.aws.amazon.com/eks/home) or APIs and select this as their restore target.

**Restore capabilities for Amazon EKS**


| Restore type | Restore target | Restore behavior | 
| --- | --- | --- | 
| Existing cluster restore | Restore to the source EKS cluster or another existing EKS cluster | Restores all Kubernetes resources and persistent volumes to the existing EKS cluster. All restores are non-destructive, and existing objects are not overwritten. For objects that are skipped, you can subscribe to [SNS notifications](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-notifications.html). | 
| New cluster restore | Creates a new Amazon EKS cluster as part of your EKS restore | The restore creates a new EKS cluster and restores all Kubernetes resources and persistent volumes to the newly created cluster. | 
| Namespace restore | Existing Amazon EKS cluster | Restores only the specified namespaces, their Kubernetes resources, and the corresponding persistent storage. Restores are non-destructive, and existing objects are not overwritten. For objects that are skipped, you can subscribe to SNS notifications. | 
| Persistent storage restore | Persistent storage dependent | Restores individual persistent storage as standalone restores. See the restore behavior of [Amazon EBS](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-ebs.html), [Amazon S3](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-s3.html), and [Amazon EFS](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-efs.html). | 

**Permissions**

The permissions required depend on the restore type and target destination.
+ AWS Backup's managed policy [AWSBackupServiceRolePolicyForRestores](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#AWSBackupServiceRolePolicyForRestores) contains the required permissions to restore your Amazon EKS cluster and EBS and EFS persistent storage.
+ If your EKS cluster contains an S3 bucket, or you are restoring the child S3 recovery point alone, ensure that [AWSBackupServiceRolePolicyForS3Restore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForS3Restore.html), or the permissions within it, is assigned to your restore role.

**Considerations before restoring**

Before you begin an EKS restore job, review the following. If you are restoring an EKS backup that has been copied across accounts or Regions, check these considerations before restoring to prevent restore failures.

1. **IAM Roles**: When restoring onto a different cluster, the IAM roles used in the source cluster (such as Pod Identity, IRSA, and OIDC provider configurations) must be present in the same account and Region as the destination cluster.

1. **Ensure EKS Version and Compatibility**: The API versions of the objects that you want to restore should match (or be as close as possible to) API versions supported in the new cluster. AWS Backup performs a best-effort restore between EKS versions, though compatibility issues may arise when restoring between significantly different versions.

1. **Matching Storage Classes**: For restores to an existing EKS cluster, ensure that the appropriate CSI storage driver add-ons are installed prior to restore.

1. **S3 Buckets**: When restoring an EKS cluster with S3 buckets, ensure that your S3 buckets are versioned and accessible in the destination account or Region.

1. **Image Repository**: When restoring an EKS cluster, ensure that the destination EKS cluster's account and Region have access to the container images referenced as part of the restore. Check that your registry has sufficient cross-Region and cross-account policy permissions.

1. **Security Groups**: Security groups should be pre-created for ALBs, Pod Identities, EKS node groups, and so on in the target account and Region if you are creating a new EKS cluster as part of your restore.

1. **EBS Availability Zones and Nodes**: The Availability Zones where you recover your EBS volumes should map to the Availability Zone of an existing EKS node.

1. **Non-destructive restores**: All EKS restores are non-destructive and do not overwrite Kubernetes objects in the target cluster.

1. **Enable EKS Audit Logs**: Enable EKS audit logs for additional logging and troubleshooting prior to restore. You can also subscribe to [SNS notifications](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-notifications.html) to be notified of skipped or failed objects on restore.

**EKS Configurations**

When you restore a composite Amazon EKS recovery point with AWS Backup, you choose the restore type and target destination. You can restore to the source EKS cluster, restore to another existing EKS cluster, or create a new EKS cluster as the restore target. For new EKS clusters, you can use the same infrastructure settings (for example, VPC and subnets) as the backed-up cluster or configure new ones. AWS Backup is designed to perform a non-destructive restore that doesn't overwrite existing resources.

For namespace restores, you can specify up to 5 namespaces to restore selectively. Only namespace-scoped resources are restored, while cluster-scoped resources are excluded except for related persistent volumes.

As an advanced setting, you can opt to change the restore order of the Kubernetes objects. By default, AWS Backup restores all Kubernetes objects in the following order:

**Cluster Scoped Kubernetes Resources**

1. Custom Resource Definitions

1. Namespaces (the namespace itself, not the resources within that namespace)

1. StorageClasses

1. PersistentVolumes

**Namespace Scoped Kubernetes Resources**

1. PersistentVolumeClaims

1. Secrets

1. ConfigMaps

1. ServiceAccounts

1. LimitRanges

1. Pods

1. ReplicaSets

**Persistent Storage Configurations**

As part of a composite Amazon EKS restore, the second step is to configure your persistent storage. This configuration varies based on the persistent storage backed up as part of your EKS cluster.

For Amazon EBS snapshots, you are required to provide the Availability Zone where the Amazon EBS volume will be created. AWS Backup then attempts to create the EKS pod in the selected Availability Zone so that your volume can be remounted to your EKS cluster as part of the restore.

As part of the restore, AWS Backup remounts your Amazon EBS volumes and Amazon S3 buckets to your restored EKS cluster. Amazon EFS file systems restore to random prefixes and require manual access point creation after restore to remount to your EKS cluster. AWS Backup does not create access points or mount targets on your behalf; for guidance, see [access points](https://docs.aws.amazon.com/efs/latest/ug/create-access-point.html) and [mount targets](https://docs.aws.amazon.com/efs/latest/ug/manage-fs-access-create-delete-mount-targets.html).
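For reference, re-creating that access after an EFS restore looks roughly like the following sketch using the Amazon EFS CLI. The file system ID, subnet ID, and restored directory name are placeholders; the actual restored directory name comes from your restore job output.

```shell
# Hypothetical IDs; AWS Backup does not create these resources for you.
FS_ID="fs-0123456789abcdef0"
SUBNET_ID="subnet-0123456789abcdef0"

# Create an access point rooted at the restored directory (placeholder name):
# aws efs create-access-point \
#     --file-system-id "$FS_ID" \
#     --root-directory 'Path=/aws-backup-restore_2024-01-15,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0755}'

# Create a mount target so the cluster's nodes can reach the file system:
# aws efs create-mount-target --file-system-id "$FS_ID" --subnet-id "$SUBNET_ID"
echo "IDs set: $FS_ID $SUBNET_ID"
```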

## Amazon EKS restore procedure
<a name="eks-restore-backup-section"></a>

Follow these steps to restore Amazon EKS backups using the AWS Backup console or AWS CLI:

------
#### [ Console ]

**To restore your Amazon EKS cluster**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup vaults**.

1. Choose the backup vault that contains your Amazon EKS backup, then select the recovery point for your Amazon EKS backup.

1. Choose **Restore**.

1. In the **Restore options** pane, choose your restore type:
   + **Restore full EKS cluster** - Restores the entire Amazon EKS composite recovery point
   + **Select namespaces to restore** - Restores up to five specific namespaces

1. Configure the target destination:
   + For cluster restore, choose to create a new cluster or use an existing cluster
   + For new clusters, specify the cluster name, Kubernetes version, VPC configuration, IAM roles, subnets, additional security groups, node group settings, Fargate profiles, and Pod Identity IAM roles
   + For existing clusters, select the target cluster from the dropdown
   + For namespace restore, specify the target cluster and namespace names

1. Optionally, configure advanced settings for custom restore order for Kubernetes resources.

1. Choose the IAM restore role for the job. If you are not using the default role, ensure that the selected role includes the `iam:PassRole` permission.

1. Choose **Restore backup**.

------
#### [ AWS CLI ]

Use the `aws backup start-restore-job` command with Amazon EKS-specific metadata.

The required metadata depends on your restore type. All restore operations require the `clusterName` parameter.

**Restore Amazon EKS recovery points through AWS CLI**

Use [StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html). You can specify the following metadata during Amazon EKS restores:

**Mandatory metadata:**
+ `clusterName` - Name of cluster to restore to

**Optional metadata:**
+ `newCluster` - (true/false) Whether to create a new EKS cluster during restore. If `newCluster` is "true", the following metadata fields apply:
  + `eksClusterVersion` - The desired Kubernetes version of the cluster, if you want to increase the cluster version during restore
  + `clusterRole` - The IAM Role ARN to attach to the created EKS cluster
  + `encryptionConfigProviderKeyArn` - Specify the KMS key ARN to encrypt the destination cluster. This can be either the KMS key from the source cluster, or a different KMS key. A different KMS key must be provided when performing cross-region or cross-account restore. Omit this metadata entirely if the source cluster is not encrypted.
  + `clusterVpcConfig` - VPC/Networking configuration for the created EKS cluster. This field has the following nested fields:
    + `vpcId` - The VPC associated with your cluster
    + `subnetIds [Required]` - The subnets associated with your cluster
    + `securityGroupIds [Required]` - The additional security groups associated with your cluster
  + `nodeGroups` - The managed node groups to be created on the EKS cluster. The node groups for restore must include all of the node groups that existed at backup time, with matching `nodeGroupId` values.
    + `nodeGroupId [Required]` - The ID of the node group
    + `subnetIds [Required]` - The subnets that were specified for the Auto Scaling group that is associated with your node group
    + `instanceTypes` - If the node group wasn't deployed with a launch template, then this is the instance type that is associated with the node group
    + `nodeRole [Required]` - The IAM role associated with your node group
    + `securityGroupIds` - The security group IDs that are allowed SSH access to the nodes
    + `remoteAccessEc2SshKey` - The Amazon EC2 SSH key name that provides access for SSH communication with the nodes in the managed node group
    + `launchTemplateId` - Specify the launch template ID to create the node group. This can be either the launch template ID from the source cluster, or a different launch template ID. If the source cluster's launch template contains hard-coded endpoint that points to the source cluster itself, you must provide a different launch template ID. Omit this metadata entirely if the source cluster does not use a launch template.
    + `launchTemplateVersion` - Launch template version associated with the specified launch template ID.
  + `fargateProfiles` - The Fargate profiles to be created on the EKS cluster. The Fargate profiles for restore must include all of the Fargate profiles that existed at backup time, with matching names.
    + `name [Required]` - The name of the Fargate profile
    + `subnetIds` - The IDs of subnets to launch a Pod into
    + `podExecutionRoleArn [Required]` - The IAM Role ARN of the Pod execution role to use for a Pod that matches the selectors in the Fargate profile
  + `podIdentityAssociations` - The Pod Identity Associations to be created on the EKS Cluster
    + `associationId` - The ID of the Pod Identity Association
    + `roleArn` - The IAM Role ARN for the Pod Identity Association
+ `kubernetesRestoreOrder` - Override the order in which the Kubernetes manifests are restored. This order takes precedence over the default service restore order. It follows the format `group/version/kind` or `version/kind`.

  Ex: `["v1/persistentvolumes","v1/pods","customresource/v2/custom"]`
+ `namespaceLevelRestore` - (true/false) Whether to perform a namespace-level restore
+ `namespaces` - A list of namespaces to restore if `namespaceLevelRestore` is "true". You can provide up to five namespaces to restore.

  Ex: `["ns-1","ns-2","ns-3","ns-4","ns-5"]`
+ `restoreKubernetesManifestsOnly` - (true/false) Whether to restore only the Kubernetes manifest files and no persistent storage systems (EBS, S3, EFS, and so on)
+ `nestedRestoreJobs` - Restore metadata configuration of all of the nested recovery points for the PersistentVolume storage systems in the composite recovery point. This is a map of `RecoveryPointArn: RestoreMetadata` for each nested recovery point.

**Restore to existing cluster**

```
aws backup start-restore-job \
    --recovery-point-arn "arn:aws:backup:us-west-2:123456789012:recovery-point:composite:eks/my-cluster-20240115" \
    --iam-role-arn "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole" \
    --metadata '{"clusterName":"existing-cluster","newCluster":"false"}' \
    --resource-type "EKS"
```

**Restore specific namespaces to an existing cluster:**

```
aws backup start-restore-job \
    --recovery-point-arn "arn:aws:backup:us-west-2:123456789012:recovery-point:composite:eks/my-cluster-20240115" \
    --iam-role-arn "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole" \
    --metadata '{"clusterName":"existing-cluster","newCluster":"false","namespaceLevelRestore":"true","namespaces":"[\"ns-1\",\"ns-2\",\"ns-3\",\"ns-4\",\"ns-5\"]"}' \
    --resource-type "EKS"
```

**Restore nested persistent volumes to an existing cluster:**

```
aws backup start-restore-job \
    --recovery-point-arn "arn:aws:backup:us-west-2:123456789012:recovery-point:composite:eks/my-cluster-20240115" \
    --iam-role-arn "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole" \
    --metadata '{"clusterName":"existing-cluster","newCluster":"false","namespaceLevelRestore":"true","nestedRestoreJobs":"{\"arn:aws:ec2:us-west-2::snapshot/snap-abc123\":\"{\\\"AvailabilityZone\\\":\\\"us-west-2a\\\"}\",\"arn:aws:backup:us-west-2:123456789012:recovery-point:fa71a304-2555-4c37-8128-f154b9578032\":\"{\\\"DestinationBucketName\\\":\\\"bucket-name\\\"}\"}"}' \
    --resource-type "EKS"
```
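The triple-escaped nested metadata value above is error-prone to type by hand. One way to generate it, sketched here with the same placeholder ARNs, is to build the structure in code and serialize the inner maps to strings:

```shell
# Serialize each nested restore-metadata map to a string, then serialize the
# whole metadata map; this produces the escaping shown in the example above.
NESTED=$(python3 - <<'EOF'
import json

# Per-recovery-point restore metadata, keyed by recovery point ARN:
jobs = {
    "arn:aws:ec2:us-west-2::snapshot/snap-abc123":
        json.dumps({"AvailabilityZone": "us-west-2a"}),
    "arn:aws:backup:us-west-2:123456789012:recovery-point:fa71a304-2555-4c37-8128-f154b9578032":
        json.dumps({"DestinationBucketName": "bucket-name"}),
}
metadata = {
    "clusterName": "existing-cluster",
    "newCluster": "false",
    "nestedRestoreJobs": json.dumps(jobs),  # serialized again as a string value
}
print(json.dumps(metadata))
EOF
)
echo "$NESTED" | python3 -m json.tool > /dev/null && echo "metadata OK"

# The generated value can then be passed directly:
# aws backup start-restore-job ... --metadata "$NESTED" --resource-type "EKS"
```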

**Restore to new cluster**

```
aws backup start-restore-job \
    --recovery-point-arn "arn:aws:backup:us-west-2:123456789012:recovery-point:composite:eks/my-cluster-20240115" \
    --iam-role-arn "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole" \
    --metadata '{"clusterName":"new-cluster","newCluster":"true","clusterRole":"arn:aws:iam::123456789012:role/EKSClusterRole","eksClusterVersion":"1.33","encryptionConfigProviderKeyArn":"arn:aws:kms:us-west-2:123456789012:key/ecb2b326-784d-4ec0-8d07-20ab826b5a13","clusterVpcConfig":"{\"vpcId\":\"vpc-1234\",\"subnetIds\":[\"subnet-1\",\"subnet-2\",\"subnet-3\"],\"securityGroupIds\":[\"sg-123\"]}","nodeGroups":"[{\"nodeGroupId\":\"nodegroup-1\",\"subnetIds\":[\"subnet-1\",\"subnet-2\",\"subnet-3\"],\"nodeRole\":\"arn:aws:iam::123456789012:role/EKSNodeGroupRole\",\"instanceTypes\":[\"t3.small\"],\"launchTemplateId\":\"lt-0b13949aae3f2b867\",\"launchTemplateVersion\":\"1\"}]","fargateProfiles":"[{\"name\":\"fargate-profile-1\",\"subnetIds\":[\"subnet-1\",\"subnet-2\",\"subnet-3\"],\"podExecutionRoleArn\":\"arn:aws:iam::123456789012:role/EKSFargateProfileRole\"}]"}' \
    --resource-type "EKS"
```

After starting the restore job, use `describe-restore-job` to monitor progress:

```
aws backup describe-restore-job --restore-job-id restore-job-id
```
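If you script restores, you might poll until the job reaches a terminal state. The following is a minimal sketch, assuming configured credentials; the job ID is a placeholder supplied by the caller.

```shell
# Poll a restore job every 30 seconds until it reaches a terminal state.
wait_for_restore() {
  local job_id="$1" status
  while true; do
    status=$(aws backup describe-restore-job \
      --restore-job-id "$job_id" \
      --query Status --output text)
    echo "Restore job $job_id: $status"
    case "$status" in
      COMPLETED|ABORTED|FAILED) break ;;
    esac
    sleep 30
  done
}

# Usage (hypothetical job ID):
# wait_for_restore "1B2C3D4E-5F6A-7B8C-9D0E-F1A2B3C4D5E6"
```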

------

You can subscribe to **Notification Events** for failed and skipped objects during restore. For more information, see [Notification options with AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-notifications.html).

# Restore an FSx file system
<a name="restoring-fsx"></a>

The restore options that are available when you use AWS Backup to restore Amazon FSx file systems are the same as using the native Amazon FSx backup. You can use a backup's recovery point to create a new file system and restore a point-in-time snapshot of another file system. 

AWS Backup supports restoring file systems that use Intelligent Tiering storage for both FSx for Lustre and FSx for OpenZFS file systems. Intelligent Tiering file systems have specific configuration requirements during restore operations.

When restoring Amazon FSx file systems, AWS Backup creates a new file system and populates it with the data (Amazon FSx for NetApp ONTAP allows restoring a volume to an existing file system). This is similar to how native Amazon FSx backs up and restores file systems. Restoring a backup to a new file system takes the same amount of time as creating a new file system. The data restored from the backup is lazy-loaded onto the file system. You might therefore experience slightly higher latency during the process.

**Note**  
You can't restore to an existing Amazon FSx file system, and you can't restore individual files or folders.  
FSx for ONTAP doesn’t support backing up certain volume types, including DP (data-protection) volumes, LS (load-sharing) volumes, full volumes, or volumes on file systems that are full. For more information, see [FSx for ONTAP Working with backups](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/using-backups.html).  
 AWS Backup vaults that contain recovery points of Amazon FSx file systems are visible outside of AWS Backup. You can restore the recovery points using Amazon FSx but you can't delete them.

You can see backups created by the built-in Amazon FSx automatic backup functionality from the AWS Backup console. You can also recover these backups using AWS Backup. However, you can't delete these backups or change the automatic backup schedules of your Amazon FSx file systems using AWS Backup.

## Use the AWS Backup console to restore Amazon FSx recovery points
<a name="fsx-restore-console"></a>

You can restore most Amazon FSx backups created by AWS Backup using the AWS Backup console, API, or AWS CLI.

This section shows you how to use the AWS Backup console to restore Amazon FSx file systems.

**Topics**

### Restoring an FSx for Windows File Server file system
<a name="fsx-windows"></a>

**To restore an FSx for Windows File Server file system**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**, and then choose the Amazon FSx resource ID that you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. Choose the recovery point ID of the resource.

1.  In the upper-right corner of the pane, choose **Restore** to open the **Restore backup** page.

1. In the **File system details** section, the ID of your backup is shown under **Backup ID**, and the file system type is shown under **File system type**. You can restore both FSx for Windows File Server and FSx for Lustre file systems.

1. For **Deployment type**, accept the default. You can't change the deployment type of a file system during restore.

1. Choose the **Storage type** to use. If the storage capacity of your file system is less than 2,000 GiB, you can't use the **HDD** storage type.

1. For **Throughput capacity**, choose **Recommended throughput capacity** to use the recommended 16 MB per second (MBps) rate, or choose **Specify throughput capacity** and enter a new rate. 

1. In the **Network and security** section, provide the required information.

1. If you are restoring an FSx for Windows File Server file system, provide the **Windows authentication** information used to access the file system, or create a new one.
**Note**  
When restoring a backup, you can't change the type of Active Directory on the file system.

   For more information about Microsoft Active Directory, see [Working with Active Directory in Amazon FSx for Windows File Server](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/aws-ad-integration-fsxW.html) in the *Amazon FSx for Windows File Server User Guide*.

1. (Optional) In the **Backup and maintenance** section, provide the information to set your backup preferences.

1. In the **Restore role** section, choose the IAM role that AWS Backup will use to create and manage your backups on your behalf. We recommend that you choose the **Default role**. If there is no default role, one is created for you with the correct permissions. You can also provide your own IAM role.

1. Verify all your entries, and choose **Restore Backup**.

### Restoring an Amazon FSx for Lustre file system
<a name="restore-fsx-lustre"></a>

AWS Backup supports Amazon FSx for Lustre file systems that use the persistent deployment type and are not linked to a data repository, such as Amazon S3.

**Note**  
You can only restore your backup to a file system of the same deployment type, storage class, throughput capacity, storage capacity, data compression type, and AWS Region as the original. You can increase your restored file system's storage capacity after it becomes available.

**To restore an Amazon FSx for Lustre file system**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**, and then choose the Amazon FSx resource ID that you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. Choose the recovery point ID of the resource. 

1.  In the upper-right corner of the pane, choose **Restore** to open the **Restore backup to new file system** page.

1. In the **Settings** section, the ID of your backup is shown under **Backup ID**, and the file system type is shown under **File system type**. **File system type** should be **Lustre**.

1. Choose a **Deployment type**. AWS Backup only supports the persistent deployment type. You can't change the deployment type of a file system during restore.

   Persistent deployment type is for long-term storage. For detailed information about FSx for Lustre deployment options, see [Using Available Deployment Options for Amazon FSx for Lustre File Systems](https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-fsx-lustre.html) in the *Amazon FSx for Lustre User Guide*.

1. Choose the **Throughput per unit storage** that you want to use.
**Note**  
Throughput per unit storage cannot be configured for file systems using Intelligent-Tiering storage class.

1. (Optional) For file systems using the Intelligent-Tiering storage class, choose the SSD read cache sizing mode and capacity. For more information, see the Amazon FSx documentation for [managing provisioned SSD read cache](https://docs.aws.amazon.com/fsx/latest/LustreGuide/managing-ssd-read-cache.html).

1. (Optional) For file systems using Intelligent-Tiering storage class, choose whether to enable EFA (Elastic Fabric Adapter). To enable EFA, make sure that your security group allows all inbound and outbound traffic within the security group.

1. Specify the **Storage capacity** to use. Enter a capacity between 32 GiB and 64,436 GiB.
**Note**  
For Intelligent Tiering file systems, storage capacity is elastic and cannot be specified during restore. The capacity will automatically scale based on your data usage.

1. In the **Network and security** section, provide the required information.

1. (Optional) In the **Backup and maintenance** section, provide the information to set your backup preferences.

1. In the **Restore role** section, choose the IAM role that AWS Backup will use to create and manage your backups on your behalf. We recommend that you choose the **Default role**. If there is no default role, one is created for you with the correct permissions. You can also provide your IAM role.

1. Verify all your entries, and choose **Restore Backup**.

## Restoring Amazon FSx for NetApp ONTAP volumes
<a name="restore-fsx-ontap"></a>

**To restore Amazon FSx for NetApp ONTAP volumes:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**, and then choose the Amazon FSx resource ID that you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. Choose the recovery point ID of the resource. 

1.  In the upper-right corner of the pane, choose **Restore** to open the **Restore** page.

   The first section, **File system details**, displays the recovery point ID, the file system ID, and the file system type.

1. Under **Restore options**, there are several selections. First, choose the **File system** from the dropdown menu.

1. Next, choose the preferred **Storage virtual machine** from the dropdown menu.

1. Enter a name for your volume.

1. Specify the **Junction Path**, which is the location within your file system where your volume will be mounted.

1. Specify the **Volume size**, in megabytes (MB), of the volume that you are creating.

1. (*Optional*) You can choose to **Enable storage efficiency** by selecting the check box. This allows deduplication, compression, and compaction.

1. In the **Capacity pool tiering policy** dropdown menu, select the tiering preference.

1. In the **Restore permissions** section, choose the IAM role that AWS Backup will use to restore backups.

1. Verify all your entries, and choose **Restore Backup**.

## Restoring an Amazon FSx for OpenZFS file system
<a name="restore-fsx-openzfs"></a>

**Note**  
Restoring from a backup with a given storage class to a file system with a different storage class is not supported.

**To restore an FSx for OpenZFS file system**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**, and then choose the Amazon FSx resource ID that you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. Choose the recovery point ID of the resource. 

1.  In the upper-right corner of the pane, choose **Restore** to open the **Restore backup** page.

   In the **File system details** section, the ID of your backup is shown under **Backup ID**, and the file system type is shown under **File system type**. File system type should be **FSx for OpenZFS**.

1. Under **Restore options**, you can select **Quick restore** or **Standard restore**. Quick restore uses the default settings of the source file system. If you choose Quick restore, skip to step 7.

   If you choose Standard restore, specify the following additional configurations:

   1. **Provisioned SSD IOPS**: You can choose the **Automatic** option, or you can choose the **User-provisioned** option if it is available.
**Note**  
SSD IOPS cannot be set for file systems that use the Intelligent-Tiering storage class.

   1. **Throughput capacity**: You can choose the **Recommended throughput capacity** of 64 MB/sec (for SSD storage class), and 160 MB/sec (for Intelligent-Tiering storage class), or you can choose to **Specify throughput capacity**.

   1. (*Optional*) **VPC security groups**: You can specify VPC security groups to associate with your file system’s network interface.

   1. **Encryption key**: Specify the AWS Key Management Service key to protect the restored file system data at rest.

   1. (*Optional*) **Root Volume configuration**: This configuration is collapsed by default. You may expand it by choosing the down-pointing caret (arrow). Creating a file system from a backup will create a new file system; the volumes and snapshots will retain their source configurations.

   1. (*Optional*) **Backup and maintenance**: To set a scheduled backup, choose the down-pointing caret (arrow) to expand the section. You may choose the backup window, hour and minute, retention period, and weekly maintenance window.

1. The **SSD Storage capacity** will display the file system’s storage capacity.
**Note**  
For Intelligent Tiering file systems, storage capacity is elastic and cannot be specified during restore. The capacity will automatically scale based on your data usage.

1. (Optional) For file systems using the Intelligent-Tiering storage class, choose the SSD read cache sizing mode and capacity. For more information, see [Managing provisioned SSD read cache](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/managing-ssd-read-cache.html) in the FSx documentation.

1. Choose the **Virtual Private Cloud** (VPC) from which your file system can be accessed.

1. In the **Subnet** dropdown menu, choose the subnet in which your file system’s network interface resides.

1. In the **Restore role** section, choose the IAM role that AWS Backup will use to create and manage your backups on your behalf. We recommend that you choose the **Default role**. If there is no default role, one is created for you with the correct permissions. You can also choose a different IAM role.

1. Verify all your entries, and choose **Restore Backup**.

## Use the AWS Backup API, CLI, or SDK to restore Amazon FSx recovery points
<a name="fsx-restore-cli"></a>

To restore Amazon FSx using the API or CLI, use `[StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html)`. You can specify the following metadata during an Amazon FSx restore:

```
StorageCapacity
StorageType
VpcId
KmsKeyId
SecurityGroupIds
SubnetIds
DeploymentType
WeeklyMaintenanceStartTime
DailyAutomaticBackupStartTime
AutomaticBackupRetentionDays
CopyTagsToBackups
WindowsConfiguration
LustreConfiguration
OntapConfiguration
OpenZFSConfiguration
aws:backup:request-id
```

**Note**  
Storage capacity cannot be specified for Intelligent Tiering file systems as they use elastic storage that scales automatically based on data usage.

### FSx for Windows File Server restore metadata
<a name="fsx-restore-metadata-windows"></a>

You can specify the following metadata during an FSx for Windows File Server restore:
+ ThroughputCapacity
+ PreferredSubnetId
+ ActiveDirectoryId

### FSx for Lustre restore metadata
<a name="fsx-restore-metadata-lustre"></a>

You can specify the following subfields of `LustreConfiguration` in the metadata during an FSx for Lustre restore:
+ `PerUnitStorageThroughput` - Specifies the throughput capacity per unit of storage provisioned, measured in MB/s per TiB of storage. 
+ `DriveCacheType` - The type of drive cache used by `PERSISTENT_1` file systems that are provisioned with HDD storage devices. This parameter is required when `StorageType` is set to HDD.
+ `DataReadCacheConfiguration` - Specifies the provisioned SSD read cache for Intelligent Tiering file systems. Required when `StorageType` is set to `INTELLIGENT_TIERING`. See [LustreReadCacheConfiguration](https://docs.aws.amazon.com/fsx/latest/APIReference/API_LustreReadCacheConfiguration.html) for more details.
+ `EfaEnabled` - Specifies whether Elastic Fabric Adapter (EFA) and GPUDirect Storage (GDS) support is enabled for the FSx for Lustre file system. 

For complete details about all available parameters in `LustreConfiguration`, see [CreateFileSystemLustreConfiguration](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateFileSystemLustreConfiguration.html) in the *Amazon FSx API Reference*.

### FSx for ONTAP restore metadata
<a name="fsx-restore-metadata-ontap"></a>

You can specify the following metadata during an FSx for ONTAP restore:
+ `Name` - the name of the volume to be created
+ `OntapConfiguration` - the ONTAP configuration, which can include the following subfields:
+ `junctionPath`
+ `sizeInMegabytes`
+ `storageEfficiencyEnabled`
+ `storageVirtualMachineId`
+ `tieringPolicy`
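As an illustration, a `start-restore-job` call for an ONTAP volume might combine these fields as follows. This is a sketch only: the ARNs, volume name, and subfield values are placeholders, and you should confirm the exact metadata key names and casing against the `StartRestoreJob` API reference.

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:backup:us-east-1:123456789012:recovery-point:example-recovery-point-id" \
--iam-role-arn "arn:aws:iam::123456789012:role/BackupRestoreRole" \
--resource-type "FSx" \
--region us-east-1 \
--metadata 'Name=restored-volume,OntapConfiguration="{\"junctionPath\": \"/restored-volume\",\"sizeInMegabytes\": \"102400\",\"storageEfficiencyEnabled\": \"true\",\"storageVirtualMachineId\": \"svm-0123456789abcdef0\"}"'
```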

### FSx for OpenZFS restore metadata
<a name="fsx-restore-metadata-openzfs"></a>

You can specify the following subfields of `OpenZFSConfiguration` in the metadata during an FSx for OpenZFS restore:
+ `ThroughputCapacity` - Specifies the throughput capacity of the restored file system, measured in MB/s.
+ `DiskIopsConfiguration` - When specifying `Iops` for the SSD storage class, use a value between 0 and 160,000. Do not include `Mode` when `Iops` is specified.
+ `ReadCacheConfiguration` - Specifies the provisioned SSD read cache for Intelligent Tiering file systems. Required when `StorageType` is set to `INTELLIGENT_TIERING`. See [OpenZFSReadCacheConfiguration](https://docs.aws.amazon.com/fsx/latest/APIReference/API_OpenZFSReadCacheConfiguration.html) for more details.

For complete details about all available parameters in `OpenZFSConfiguration`, see [CreateFileSystemOpenZFSConfiguration](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateFileSystemOpenZFSConfiguration.html) in the *Amazon FSx API Reference*.
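For instance, an OpenZFS restore that sets the throughput capacity subfield could look like the following sketch. The recovery point ARN, role, and subnet are placeholders, and the values shown are illustrative only:

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:fsx:us-west-2:123456789012:backup/backup-0123456789abcdef0" \
--iam-role-arn "arn:aws:iam::123456789012:role/BackupRestoreRole" \
--resource-type "FSx" \
--region us-west-2 \
--metadata 'SubnetIds="[\"subnet-1234\"]",OpenZFSConfiguration="{\"ThroughputCapacity\": \"64\"}"'
```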

Example CLI restore command:

```
aws backup start-restore-job --recovery-point-arn "arn:aws:fsx:us-west-2:1234:backup/backup-1234" --iam-role-arn "arn:aws:iam::1234:role/Role" --resource-type "FSx" --region us-west-2 --metadata 'SubnetIds="[\"subnet-1234\",\"subnet-5678\"]",StorageType=HDD,SecurityGroupIds="[\"sg-bb5efdc4\",\"sg-0faa52\"]",WindowsConfiguration="{\"DeploymentType\": \"MULTI_AZ_1\",\"PreferredSubnetId\": \"subnet-1234\",\"ThroughputCapacity\": \"32\"}"'
```

Example restore metadata:

```
"restoreMetadata":  "{\"StorageType\":\"SSD\",\"KmsKeyId\":\"arn:aws:kms:us-east-1:123456789012:key/123456a-123b-123c-defg-1h2i2345678\",\"StorageCapacity\":\"1200\",\"VpcId\":\"vpc-0ab0979fa431ad326\",\"FileSystemType\":\"LUSTRE\",\"LustreConfiguration\":\"{\\\"WeeklyMaintenanceStartTime\\\":\\\"4:10:30\\\",\\\"DeploymentType\\\":\\\"PERSISTENT_1\\\",\\\"PerUnitStorageThroughput\\\":50,\\\"CopyTagsToBackups\\\":true}\",\"FileSystemId\":\"fs-0ca11fb3d218a35c2\",\"SubnetIds\":\"[\\\"subnet-0e66e94eb43235351\\\"]\"}"
```

# Restore a Neptune cluster
<a name="restoring-nep"></a>

## Use the AWS Backup console to restore Amazon Neptune recovery points
<a name="nep-restore-console"></a>

Restoring an Amazon Neptune database requires that you specify multiple restore options. For information about these options, see [Restoring from a DB Cluster Snapshot](https://docs.aws.amazon.com/neptune/latest/userguide/backup-restore-restore-snapshot.html) in the *Neptune User Guide*.

**To restore a Neptune database**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources** and the Neptune resource ID that you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. To restore a resource, in the **Backups** pane, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. In the **Instance specifications** pane, accept the defaults or specify the **DB engine** and **Version**.

1. In the **Settings** pane, specify a name that is unique for all DB cluster instances owned by your AWS account in the current Region. The DB cluster identifier is case insensitive, but it is stored as all lowercase, as in "`mydbclusterinstance`". This is a required field. 

1. In the **Database options** pane, accept the defaults or specify the options for **Database port** and **DB cluster parameter group**. 

1. In the **Encryption** pane, accept the default or specify the options for the **Enable encryption** or **Disable encryption** settings.

1. In the **Log exports** pane, choose the log types to publish to Amazon CloudWatch Logs. The **IAM role** is already defined. 

1. In the **Restore role** pane, choose the IAM role that AWS Backup will assume for this restore.

1. After specifying all your settings, choose **Restore backup**.

   The **Restore jobs** pane appears. A message at the top of the page provides information about the restore job.

1. After your restore finishes, attach your restored Neptune cluster to an Amazon RDS instance.

## Use the AWS Backup API, CLI, or SDK to restore Neptune recovery points
<a name="nep-restore-cli"></a>

First, restore your cluster. Use `[StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html)`. You can specify the following metadata during Amazon Neptune restores:

```
availabilityZones
backtrackWindow
copyTagsToSnapshot // Boolean 
databaseName // string 
dbClusterIdentifier // string 
dbClusterParameterGroupName // string 
dbSubnetGroupName // string 
enableCloudwatchLogsExports // string 
enableIAMDatabaseAuthentication // Boolean 
engine // string 
engineMode // string 
engineVersion // string 
kmsKeyId // string 
port // integer 
optionGroupName // string 
scalingConfiguration
vpcSecurityGroupIds // string
```
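A minimal cluster restore call might look like the following sketch. The recovery point ARN, role, and cluster identifier are placeholders, and the metadata keys shown assume the lowercase names listed above:

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:rds:us-east-1:123456789012:cluster-snapshot:awsbackup:job-example-id" \
--iam-role-arn "arn:aws:iam::123456789012:role/BackupRestoreRole" \
--resource-type "Neptune" \
--metadata 'dbClusterIdentifier=restored-neptune-cluster,engine=neptune' \
--region us-east-1
```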

Then, attach your restored Neptune cluster to an Amazon RDS instance using `create-db-instance`.
+ For Linux, macOS, or Unix:

  ```
  aws neptune create-db-instance --db-instance-identifier sample-instance \ 
                    --db-instance-class db.r5.large --engine neptune --engine-version 1.0.5.0 --db-cluster-identifier sample-cluster --region us-east-1
  ```
+ For Windows:

  ```
  aws neptune create-db-instance --db-instance-identifier sample-instance ^
                    --db-instance-class db.r5.large --engine neptune --engine-version 1.0.5.0 --db-cluster-identifier sample-cluster --region us-east-1
  ```

For more information, see [https://docs.aws.amazon.com/neptune/latest/userguide/api-snapshots.html#RestoreDBClusterFromSnapshot](https://docs.aws.amazon.com/neptune/latest/userguide/api-snapshots.html#RestoreDBClusterFromSnapshot) in the *Neptune Management API reference* and [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/neptune/restore-db-cluster-from-snapshot.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/neptune/restore-db-cluster-from-snapshot.html) in the *Neptune CLI guide*.

# Restore an RDS database
<a name="restoring-rds"></a>

Restoring an Amazon RDS database requires specifying multiple restore options. For more information about these options, see [Backing Up and Restoring an Amazon RDS DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_CommonTasks.BackupRestore.html) in the *Amazon RDS User Guide*.

## Use the AWS Backup console to restore Amazon RDS recovery points
<a name="rds-restore-console"></a>

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources** and the Amazon RDS resource ID you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. To restore a resource, in the **Backups** pane, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. In the **Instance specifications** pane, accept the defaults or specify the options that you need.

1. In the **Settings** pane, specify a name that is unique for all DB instances and clusters owned by your AWS account in the current Region. The DB instance identifier is case insensitive, but it is stored as all lowercase, as in "`mydbinstance`". This is a required field. 

1. In the **Network & Security** pane, accept the defaults or specify the options that you need. 

1. In the **Database options** pane, accept the defaults or specify the options that you need. 

1. In the **Encryption** pane, use the default settings. If the source database instance for the snapshot was encrypted, the restored database instance will also be encrypted. This encryption cannot be removed.

1. In the **Log exports** pane, choose the log types to publish to Amazon CloudWatch Logs. The **IAM role** is already defined. 

1. In the **Maintenance** pane, accept the default or specify the option for **Auto minor version upgrade**. 

1. In the **Restore role** pane, choose the IAM role that AWS Backup will assume for this restore. 

1. Choose **Restore backup**.

   The **Restore jobs** pane appears. A message at the top of the page provides information about the restore job.

## Use the AWS Backup API, CLI, or SDK to restore Amazon RDS recovery points
<a name="rds-restore-cli"></a>

Use `[StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html)`. For information on accepted metadata and values, see [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html) and [https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html) in the *Amazon RDS API Reference*. Additionally, AWS Backup accepts the following information-only attributes. However, including them will not affect the restore:

```
EngineVersion
KmsKeyId       
Encrypted       
vpcId
```
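A minimal sketch of an RDS restore call follows. The ARNs are placeholders, and `DBInstanceIdentifier` is shown as an assumed key; see the linked Amazon RDS API references above for the authoritative metadata names and values:

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:rds:us-east-1:123456789012:snapshot:awsbackup:job-example-id" \
--iam-role-arn "arn:aws:iam::123456789012:role/BackupRestoreRole" \
--resource-type "RDS" \
--metadata 'DBInstanceIdentifier=restored-db-instance' \
--region us-east-1
```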

# Restore an Amazon Redshift cluster
<a name="redshift-restores"></a>

You can restore automated and manual snapshots in the AWS Backup console or through the AWS CLI.

When you restore an Amazon Redshift cluster, the original cluster settings are entered into the console by default. You can specify different settings for the following configurations. When restoring a table, you must specify the source and target databases. For more information on these configurations, see [Restoring a cluster from a snapshot](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html#working-with-snapshot-restore-cluster-from-snapshot) in the *Amazon Redshift Management Guide*.
+ **Single table or cluster**: You can choose to restore an entire cluster or a single table. If you choose to restore a single table, the source database, source schema, and source table name are needed, as well as the target cluster, schema, and new table name.
+ **Node type**: Each Amazon Redshift cluster consists of a leader node and at least one compute node. When you restore a cluster, you need to specify the node type that meets your requirements for CPU, RAM, storage capacity, and drive type.
+ **Number of nodes**: When restoring a cluster, you need to specify the number of nodes needed.
+ **Configuration summary**
+ **Cluster Permissions**

## To restore an Amazon Redshift cluster or table using the AWS Backup console
<a name="redshift-restore-console"></a>

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Settings** and the Amazon Redshift resource ID that you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. To restore a resource, in the **Recovery Points** pane, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. Under **Restore options**, choose one of the following:

   1. **Restore cluster from snapshot**, or

   1. **Restore single table within a snapshot to new cluster**. If you choose this option, you must configure the following:

      1. Toggle on or off case-sensitive names.

      1. Input the source table values, including the database, the schema, and the table. The source table information can be found in the [Amazon Redshift console](https://console.aws.amazon.com/redshiftv2/).

      1. Input the target table values, including the database, the schema, and the new table name.

1. Specify your new cluster configuration settings.

   1. For cluster restore: choose Cluster identifier, Node type, and number of nodes.

   1. Specify availability zone and maintenance windows.

   1. You can associate additional roles by clicking **Associate IAM roles**.

1. *Optional:* Additional configurations:

   1. **Use defaults** is toggled on by default.

   1. Use the dropdown menus to select settings for Networking and security, VPC security groups, Cluster subnet group, and Availability zone.

   1. Toggle **Enhanced VPC routing** on or off.

   1. Determine if you want to make your cluster endpoint **publicly accessible**. If it is, instances and devices outside the VPC can connect to your database through the cluster endpoint. If this is toggled on, input the elastic IP address.

1. *Optional:* Database configuration. You may choose to input 

   1. Database port (by typing into the text field)

   1. Parameter groups

1. Maintenance: You can choose the 

   1. Maintenance window

   1. Maintenance track, from among current, trailing, or preview. This controls which cluster version is applied during a maintenance window.

1. **Automated snapshot** settings are set to the defaults. You can specify the following:

   1. Automated snapshot retention period. Retention period must be 0 to 35 days. Choose 0 to not create automated snapshots.

   1. The manual snapshot retention period is 1 to 3653 days.

   1. There is an optional check box for cluster relocation. If it is selected, your cluster can be relocated to another Availability Zone. After you enable relocation, you can use the VPC endpoint.

1. Monitoring: After a cluster is restored, you can set up monitoring through CloudWatch or Amazon Redshift.

1. Choose IAM role to be passed to perform restores. You can use the default role, or you can specify a different one.

Your restore jobs will be visible under **Jobs**. You can see the current status of your restore job by choosing the refresh button or pressing CTRL-R.

## Restore an Amazon Redshift cluster using API, CLI, or SDK
<a name="redshift-restore-api"></a>

Use [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html) to restore an Amazon Redshift cluster.

To restore an Amazon Redshift cluster using the AWS CLI, use the command `start-restore-job` and specify the following metadata:

```
ClusterIdentifier // required string
AdditionalInfo // optional string
AllowVersionUpgrade // optional Boolean
AquaConfigurationStatus // optional string
AutomatedSnapshotRetentionPeriod // optional integer 0 to 35
AvailabilityZone // optional string
AvailabilityZoneRelocation // optional Boolean
ClusterParameterGroupName // optional string
ClusterSecurityGroups // optional array of strings
ClusterSubnetGroupName // optional string
DefaultIamRoleArn // optional string
ElasticIp // optional string
Encrypted // Optional TRUE or FALSE 
EnhancedVpcRouting // optional Boolean 
HsmClientCertificateIdentifier // optional string
HsmConfigurationIdentifier // optional string
IamRoles // optional array of strings
KmsKeyId // optional string
MaintenanceTrackName // optional string
ManageMasterPassword // optional Boolean
ManualSnapshotRetentionPeriod // optional integer
MasterPasswordSecretKmsKeyId // optional string
NodeType // optional string
NumberOfNodes // optional integer
OwnerAccount // optional string
Port // optional integer
PreferredMaintenanceWindow // optional string
PubliclyAccessible // optional Boolean
ReservedNodeId // optional string
SnapshotClusterIdentifier // optional string
SnapshotScheduleIdentifier // optional string
TargetReservedNodeOfferingId // optional string
VpcSecurityGroupIds // optional array of strings
RestoreType // CLUSTER_RESTORE or TABLE_RESTORE or NAMESPACE_RESTORE
```

 For more information, see [https://docs.aws.amazon.com/redshift/latest/APIReference/API_RestoreFromClusterSnapshot.html](https://docs.aws.amazon.com/redshift/latest/APIReference/API_RestoreFromClusterSnapshot.html) in the *Amazon Redshift API Reference* and [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/redshift/restore-from-cluster-snapshot.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/redshift/restore-from-cluster-snapshot.html) in the *AWS CLI guide*. 

Here is an example template:

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:backup:region:account:snapshot:name" \
--iam-role-arn "arn:aws:iam::account:role/role-name" \
--metadata \
--resource-type Redshift \
--region AWS Region \
--endpoint-url URL
```

Here is an example:

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:redshift:us-west-2:123456789012:snapshot:redshift-cluster-1/awsbackup:job-c40dda3c-fdcc-b1ba-fa56-234d23209a40" \
--iam-role-arn "arn:aws:iam::974288443796:role/Backup-Redshift-Role" \
--metadata 'RestoreType=CLUSTER_RESTORE,ClusterIdentifier=redshift-cluster-restore-78,Encrypted=true,KmsKeyId=45e261e4-075a-46c7-9261-dfb91e1c739c' \
--resource-type Redshift \
--region us-west-2
```

You can also use [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_DescribeRestoreJob.html](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_DescribeRestoreJob.html) to assist with restore information.

In the AWS CLI, use the operation `describe-restore-job` and use the following metadata:

```
Region
```

Here is an example template:

```
aws backup describe-restore-job --restore-job-id restore job ID \
--region AWS Region
```

Here is an example:

```
aws backup describe-restore-job --restore-job-id BEA3B353-576C-22C0-9E99-09632F262620 \
--region us-west-2
```

# Amazon Redshift Serverless restore
<a name="redshift-serverless-restore"></a>

You can restore manual snapshots of databases or tables using the AWS Backup console or AWS CLI.

Redshift Serverless and AWS Backup support *interchangeable restore* for data warehouse snapshots. This means you can restore Redshift Serverless backups to [Amazon Redshift provisioned clusters](redshift-backups.md) or restore provisioned backups to Redshift Serverless namespaces. This applies only to full database restore, not single table restore.


**Restore capabilities for Redshift Serverless**  

| Restore capabilities | Namespace | Single table | 
| --- | --- | --- | 
| Type of snapshot | Manual | Manual | 
| Information needed |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/redshift-serverless-restore.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/redshift-serverless-restore.html)  | 
| Restore target effect | Restores to an existing namespace through a destructive restore that overwrites existing data | Restores to a new table | 
| Interchangeable restore? |  Yes. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/redshift-serverless-restore.html)  | Not supported. | 

For more information about configurations, see [Snapshots and recovery points](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-snapshots-recovery-points.html) in the *Amazon Redshift Management Guide*.

## Considerations before restoring
<a name="redshift-serverless-restore-considerations"></a>

Before you begin a restore job, review the following:

**Configurations**

When you restore a Redshift Serverless snapshot, you choose the target namespace to which you want to restore all the databases or a single table.

When you restore the databases in a snapshot to a Serverless namespace, the restore is destructive. This means all existing data in the target namespace is overwritten when you restore to that namespace.

When you restore a single table, it is not a destructive restore. To restore a table, specify the workgroup, snapshot, source database, source table, target restore namespace, and the new table name.

**Permissions**

The permissions required are determined by the target data warehouse (that is, the namespace or provisioned cluster where you will restore the databases or table). The following table can help you determine the permissions, role, and policy to use. For more information on managing IAM policies, see [Identity and access management in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-authentication-access-control.html).


**Required permissions and roles for restore operations**  

| Restore target | Needed permission(s) | IAM role and policy | 
| --- | --- | --- | 
| Amazon Redshift provisioned cluster | redshift:RestoreFromClusterSnapshot | AWSBackupServiceRolePolicyForRestores contains this permission; it can be used for aws backup start-restore-job. | 
| Redshift Serverless namespace | redshift-serverless:RestoreFromSnapshot |  You must add this permission to the role and policy you will use to call **aws backup start-restore-job**. Since this is a destructive restore job, the service role policy for restores cannot be used.  | 
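One way to grant the serverless restore permission is to attach an inline policy to the role you will use for the restore. This is a sketch: the role and policy names are placeholders, and you may want to scope the `Resource` element more narrowly than `*` to meet your security requirements:

```
aws iam put-role-policy \
--role-name Backup-Redshift-Restore-Role \
--policy-name AllowRedshiftServerlessRestore \
--policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "redshift-serverless:RestoreFromSnapshot",
    "Resource": "*"
  }]
}'
```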

## Redshift Serverless restore procedure
<a name="redshift-serverless-restore-procedure"></a>

Follow these steps to restore Redshift Serverless backups using the AWS Backup console or AWS CLI:

------
#### [ Console ]

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Settings** and select the Redshift Serverless resource ID to restore.

1. On the **Resource details** page, select the recovery point ID in the **Recovery Points** pane, then choose **Restore**.

1. In the **Restore options** pane, choose to restore the entire data warehouse or a single table.

1. Select the destination target in the **Target data warehouse configuration** pane.
   + For a full data warehouse restore, choose between Amazon Redshift provisioned cluster or Redshift Serverless namespace.
   + For a single table restore, specify the source snapshot, database, schema, table name, and target details.

1. Choose the IAM restore role for the job. If not using the default role, ensure the selected role includes the `iam:PassRole` permission.

------
#### [ AWS CLI ]

Use the **aws backup start-restore-job** command.

AWS Backup works with Redshift Serverless to orchestrate the restore job. The CLI command will be prepended with `aws backup` but will also contain metadata relevant to Redshift Serverless or Amazon Redshift. 

The required and optional metadata depends on whether you're restoring a whole data warehouse or a single table.
+ For single table restore, see [restore-table-from-snapshot](https://docs.aws.amazon.com/cli/latest/reference/redshift-serverless/restore-table-from-snapshot.html) in the *AWS CLI Command Reference*.
+ For namespace restore, see [restore-from-snapshot](https://docs.aws.amazon.com/cli/latest/reference/redshift-serverless/restore-from-snapshot.html) in the *AWS CLI Command Reference*.
+ To restore to a Amazon Redshift provisioned cluster, see [restore-from-cluster-snapshot](https://docs.aws.amazon.com/cli/latest/reference/redshift/restore-from-cluster-snapshot.html) in the *AWS CLI Command Reference*.

**Example template for `start-restore-job` to restore to a Serverless namespace:**  

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:backup:region:account:snapshot:name" \
--iam-role-arn "arn:aws:iam::account:role/role-name" \
--metadata \
--resource-type "RedshiftServerless" \
--region Region \
--endpoint-url URL
```

**Example for `start-restore-job` to restore to a Serverless namespace:**  

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:redshift-serverless:us-east-1:123456789012:snapshot/a12bc34d-567e-890f-123g-h4ijk56l78m9" \
--iam-role-arn "arn:aws:iam::974288443796:role/Backup-Redshift-Role" \
--metadata 'RestoreType=NAMESPACE_RESTORE,NamespaceIdentifier=redshift-namespace-1-restore' \
--resource-type "RedshiftServerless" \
--region us-west-2
```

After starting the restore job, use **describe-restore-job** to monitor progress.
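For example, the following sketch retrieves the status of a single job. The job ID is a placeholder, and the `--query` filter assumes the default JSON output format:

```
aws backup describe-restore-job \
--restore-job-id a1b2c3d4-5678-90ab-cdef-example11111 \
--region us-west-2 \
--query 'Status'
```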

------

# Restore an SAP HANA database on an Amazon EC2 instance
<a name="saphana-restore"></a>

SAP HANA databases on Amazon EC2 instances can be restored using the AWS Backup console, the AWS Backup API, or the AWS CLI.

**Topics**
+ [Restore an SAP HANA database with the AWS Backup console](#w2aac17c31c43b9)
+ [[StartRestoreJob API](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html) for SAP HANA on EC2](#w2aac17c31c43c11)
+ [CLI for SAP HANA on EC2](#w2aac17c31c43c13)
+ [SAP HANA High Availability (HA) restore](#saphanarestoreha)
+ [Troubleshooting](#saphanarestoretroubleshooting)

## Restore an SAP HANA database with the AWS Backup console
<a name="w2aac17c31c43b9"></a>

Note that backup jobs and restore jobs involving the same database cannot occur concurrently. When an SAP HANA database restore job is occurring, attempts to back up the same database will likely result in an error: "Database cannot be backed up while it is stopped."

1. Access the AWS Backup console using the credentials from prerequisites.

1. Under the **Target restore location** dropdown menu, choose a database to overwrite with the recovery point you are using to restore (note that the instance hosting the restore target database must also have the permissions from the prerequisites).
**Important**  
SAP HANA database restores are destructive. Restoring a database will overwrite the database at the specified target restore location.

1. Complete this step only if you are performing a system copy restore; otherwise, skip to step 4.

   System copy restores are restore jobs which restore to a target database different from the source database which generated the recovery point. For system copy restores, notice the `aws ssm-sap put-resource-permission` command provided for you on the console. This command must be copied, pasted, and executed on the machine that completed the prerequisites. When running the command, use the credentials from the role in the prerequisite where you set up the required permissions for registering applications.

   ```
   // Example command
   aws ssm-sap put-resource-permission \
   --region us-east-1 \
   --action-type RESTORE \
   --source-resource-arn arn:aws:ssm-sap:us-east-1:112233445566:HANA/Foo/DB/HDB \
   --resource-arn arn:aws:ssm-sap:us-east-1:112233445566:HANA/Bar/DB/HDB
   ```

1. Once you choose the restore location, you can see the target database’s **Resource ID**, **Application name**, **Database type**, and the **EC2 instance**.

1. *Optionally*, you may expand **Advanced restore settings** to change your catalog restore option. Available options vary based on selected restore settings.

1. Choose **Restore backup**.

1. The target location will be overwritten during restore (**"destructive restore"**), so you must provide confirmation that you permit this in the next pop-up dialog box.

   1. To proceed, you must understand that the existing database will be overwritten by the one you are restoring.

   1. Once this is understood, you must acknowledge the existing data will be overwritten. To acknowledge this and to proceed, type **overwrite** into the text input field.

1. Choose **Restore backup**.

If the procedure was successful, a blue banner appears at the top of the console, signifying that the restore job is in progress. You are automatically redirected to the **Jobs** page, where your restore job appears in the list of restore jobs. This most recent job will have a status of `Pending`. You can search for and then choose the restore job ID to see the details of each restore job. Choose the refresh button to view changes to the restore job status.

## [StartRestoreJob API](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html) for SAP HANA on EC2
<a name="w2aac17c31c43c11"></a>

This action recovers the saved resource identified by an Amazon Resource Name (ARN).

**Request Syntax**

```
PUT /restore-jobs HTTP/1.1
Content-type: application/json
{
   "[IdempotencyToken](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html#Backup-StartRestoreJob-request-IdempotencyToken)": "string",
   "[Metadata](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html#Backup-StartRestoreJob-request-Metadata)": { 
      "string" : "string" 
   },
   "[RecoveryPointArn](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html#Backup-StartRestoreJob-request-RecoveryPointArn)": "string",
   "[ResourceType](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html#Backup-StartRestoreJob-request-ResourceType)": "string"
}
```

**URI Request Parameters**: The request does not use any URI parameters.

**Request Body**: The request accepts the following data in JSON format:

**IdempotencyToken**

A customer-chosen string that you can use to distinguish between otherwise identical calls to `StartRestoreJob`. Retrying a successful request with the same idempotency token results in a success message with no action taken.

Type: String

Required: No

**Metadata**

A set of metadata key-value pairs. Contains information, such as a resource name, required to restore a recovery point. You can get configuration metadata about a resource at the time it was backed up by calling `GetRecoveryPointRestoreMetadata`. However, values in addition to those provided by `GetRecoveryPointRestoreMetadata` might be required to restore a resource. For example, you might need to provide a new resource name if the original already exists.

You need to include specific metadata to restore an SAP HANA on Amazon EC2 instance. See [StartRestoreJob metadata](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html#API_StartRestoreJob_RequestBody) for SAP HANA-specific items.

To retrieve the relevant metadata, call [GetRecoveryPointRestoreMetadata](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_GetRecoveryPointRestoreMetadata.html).

Example of a standard SAP HANA database recovery point:

```
"RestoreMetadata": {
        "BackupSize": "1660948480", 
        "DatabaseName": "DATABASENAME",
        "DatabaseType": "SYSTEM",
        "HanaBackupEndTime": "1674838362",
        "HanaBackupId": "1234567890123",
        "HanaBackupPrefix": "1234567890123_SYSTEMDB_FULL",
        "HanaBackupStartTime": "1674838349",
        "HanaVersion": "2.00.040.00.1553674765",
        "IsCompressedBySap": "FALSE",
        "IsEncryptedBySap": "FALSE",
        "SourceDatabaseArn": "arn:aws:ssm-sap:region:accountID:HANA/applicationID/DB/DATABASENAME",
        "SystemDatabaseSid": "HDB",
        "aws:backup:request-id": "46bbtt4q-7unr-2897-m486-yn378k2mrw9c"
    }
```

Example of a continuous SAP HANA database recovery point:

```
"RestoreMetadata": {
        "AvailableRestoreBases": "[1234567890123,9876543210987,1472583691472,7418529637418,1678942598761]",
        "BackupSize": "1711284224",
        "DatabaseName": "DATABASENAME",
        "DatabaseType": "TENANT",
        "EarliestRestorablePitrTimestamp": "1674764799789",
        "HanaBackupEndTime": "1668032687",
        "HanaBackupId": "1234567890123",
        "HanaBackupPrefix": "1234567890123_HDB_FULL",
        "HanaBackupStartTime": "1668032667",
        "HanaVersion": "2.00.040.00.1553674765",
        "IsCompressedBySap": "FALSE",
        "IsEncryptedBySap": "FALSE",
        "LatestRestorablePitrTimestamp": "1674850299789",
        "SourceDatabaseArn": "arn:aws:ssm-sap:region:accountID:HANA/applicationID/DB/SystemDatabaseSid",
        "SystemDatabaseSid": "HDB",
        "aws:backup:request-id": "46bbtt4q-7unr-2897-m486-yn378k2mrw9d"
    }
```
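For illustration, the timestamp fields above can be decoded with a short helper, assuming the string encodings shown in the example: the `*PitrTimestamp` fields are epoch milliseconds, and `AvailableRestoreBases` is a JSON array serialized as a string.

```python
import json
from datetime import datetime, timezone

def pitr_window(restore_metadata):
    """Return the (earliest, latest) restorable point-in-time as UTC datetimes.

    Assumes the *PitrTimestamp fields are epoch milliseconds stored as strings,
    as in the continuous recovery point example above.
    """
    earliest = int(restore_metadata["EarliestRestorablePitrTimestamp"]) / 1000
    latest = int(restore_metadata["LatestRestorablePitrTimestamp"]) / 1000
    return (
        datetime.fromtimestamp(earliest, tz=timezone.utc),
        datetime.fromtimestamp(latest, tz=timezone.utc),
    )

def restore_bases(restore_metadata):
    """Parse AvailableRestoreBases, assumed to be a JSON array encoded as a string."""
    return json.loads(restore_metadata["AvailableRestoreBases"])
```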

## CLI for SAP HANA on EC2
<a name="w2aac17c31c43c13"></a>

The `start-restore-job` command recovers the saved resource identified by an Amazon Resource Name (ARN). The CLI follows the API guidelines above.

**Synopsis:**

```
start-restore-job
--recovery-point-arn value
--metadata value
[--idempotency-token value]
[--resource-type value]
[--cli-input-json value]
[--generate-cli-skeleton value]
[--debug]
[--endpoint-url value]
[--no-verify-ssl]
[--no-paginate]
[--output value]
[--query value]
[--profile value]
[--region value]
[--version value]
[--color value]
[--no-sign-request]
[--ca-bundle value]
[--cli-read-timeout value]
[--cli-connect-timeout value]
```

**Options**

`--recovery-point-arn` (string) is a string in the form of an Amazon Resource Name (ARN) that uniquely identifies a recovery point; for example, `arn:aws:backup:region:123456789012:recovery-point:46bbtt4q-7unr-2897-m486-yn378k2mrw9d`.

`--metadata` (map): A set of metadata key-value pairs. Contains information, such as a resource name, required to restore a recovery point. You can get configuration metadata about a resource at the time it was backed up by calling `GetRecoveryPointRestoreMetadata`. However, values in addition to those provided by `GetRecoveryPointRestoreMetadata` might be required to restore a resource. You must include the following metadata to restore an SAP HANA on Amazon EC2 instance:
+ `aws:backup:request-id`: This is any UUID string used for idempotency. It does not alter your restore experience in any way.
+ `aws:backup:TargetDatabaseArn`: Specify the database to which you want to restore. This is the SAP HANA on Amazon EC2 database ARN.
+ `CatalogRestoreOption`: Specify where to restore your catalog from. One of `NO_CATALOG`, `LATEST_CATALOG_FROM_AWS_BACKUP`, or `CATALOG_FROM_LOCAL_PATH`.
+ `LocalCatalogPath`: If the `CatalogRestoreOption` metadata value is `CATALOG_FROM_LOCAL_PATH`, specify the path to the local catalog on your EC2 instance. This must be a valid file path on your EC2 instance.
+ `RecoveryType`: Currently, `FULL_DATA_BACKUP_RECOVERY`, `POINT_IN_TIME_RECOVERY`, and `MOST_RECENT_TIME_RECOVERY` recovery types are supported.

key = (string); value = (string). Shorthand syntax:

```
KeyName1=string,KeyName2=string
```

JSON syntax:

```
{"string": "string"
  ...}
```

`--idempotency-token` is a user-chosen string that you can use to distinguish between otherwise identical calls to `StartRestoreJob`. Retrying a successful request with the same idempotency token results in a success message with no action taken.

`--resource-type` is a string that starts a job to restore a recovery point for one of the following resources: `SAP HANA on Amazon EC2` for SAP HANA on Amazon EC2. *Optionally*, SAP HANA resources can be tagged using the command `aws ssm-sap tag-resource`.

**Output**: `RestoreJobId` is a string that uniquely identifies the job that restores a recovery point.
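Putting the metadata rules together, here is a hedged boto3 sketch; the ARNs are hypothetical placeholders, and the validation simply mirrors the option list above:

```python
import uuid

VALID_CATALOG_OPTIONS = {
    "NO_CATALOG",
    "LATEST_CATALOG_FROM_AWS_BACKUP",
    "CATALOG_FROM_LOCAL_PATH",
}

def build_hana_restore_metadata(target_db_arn,
                                catalog_option="LATEST_CATALOG_FROM_AWS_BACKUP",
                                recovery_type="FULL_DATA_BACKUP_RECOVERY",
                                local_catalog_path=None):
    """Assemble the Metadata map for an SAP HANA StartRestoreJob call."""
    if catalog_option not in VALID_CATALOG_OPTIONS:
        raise ValueError(f"unsupported CatalogRestoreOption: {catalog_option}")
    metadata = {
        "aws:backup:request-id": str(uuid.uuid4()),  # any UUID, used for idempotency
        "aws:backup:TargetDatabaseArn": target_db_arn,
        "CatalogRestoreOption": catalog_option,
        "RecoveryType": recovery_type,
    }
    if catalog_option == "CATALOG_FROM_LOCAL_PATH":
        if not local_catalog_path:
            raise ValueError("LocalCatalogPath is required with CATALOG_FROM_LOCAL_PATH")
        metadata["LocalCatalogPath"] = local_catalog_path
    return metadata

def start_hana_restore(recovery_point_arn, iam_role_arn, metadata):
    """Kick off the restore job; returns the RestoreJobId."""
    import boto3  # deferred so the builder above runs without the SDK installed
    backup = boto3.client("backup")
    resp = backup.start_restore_job(
        RecoveryPointArn=recovery_point_arn,
        IamRoleArn=iam_role_arn,
        Metadata=metadata,
        ResourceType="SAP HANA on Amazon EC2",
    )
    return resp["RestoreJobId"]
```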

## SAP HANA High Availability (HA) restore
<a name="saphanarestoreha"></a>

There are important considerations and additional steps when you restore an SAP HANA high availability (HA) system. Expand the section below that best aligns with your use case.

Restore scenario:

### System database to an SAP HANA HA target
<a name="systemdbtargetha"></a>

Before you restore to the target (destination) SAP HANA HA system,

1. If a cluster is installed, put all cluster nodes in maintenance mode.

1. Stop the SAP HANA database on all nodes, including primary and secondary.

1. *(Recommended)* Disable any backup plans to ensure they don't interfere with the restore operation.

After the restore job completes, go to the restored SAP HANA HA system, then:

1. Start the SAP HANA database on the primary node.

1. If the system database was restored but its tenant databases were not, manually start those tenant databases.

1. Re-establish SAP HANA system replication (HSR) between the primary and secondary nodes.

1. Start the SAP HANA database on the secondary node.

1. If a cluster is installed, ensure all cluster nodes are online.

1. Enable any backup plans you disabled prior to the restore operation.

*(Optional)* You can keep the application in sync on [AWS Systems Manager for SAP](https://docs.aws.amazon.com/ssm-sap/latest/userguide/what-is-ssm-for-sap.html) by calling [https://docs.aws.amazon.com/ssmsap/latest/APIReference/API_StartApplicationRefresh.html](https://docs.aws.amazon.com/ssmsap/latest/APIReference/API_StartApplicationRefresh.html), or you can wait for the scheduled application refresh that will bring the latest SAP metadata.

### System database to an SAP HANA single-node target
<a name="systemdbtargetsingle"></a>

Before you begin a restore job, go to the target single-node SAP HANA system, then:

1. Stop the SAP HANA database on the target SAP HANA system.

1. *(Recommended)* Disable any backup plans to ensure they don't interfere with the restore operation.

After the restore job completes, go to the target single-node SAP HANA system, then:

1. Start SAP HANA on the target SAP HANA system.

1. Manually start each tenant database on the target node.

1. Enable any backup plans you disabled prior to the restore operation.

*(Optional)* You can keep the application in sync on [AWS Systems Manager for SAP](https://docs.aws.amazon.com/ssm-sap/latest/userguide/what-is-ssm-for-sap.html) by calling [https://docs.aws.amazon.com/ssmsap/latest/APIReference/API_StartApplicationRefresh.html](https://docs.aws.amazon.com/ssmsap/latest/APIReference/API_StartApplicationRefresh.html), or you can wait for the scheduled application refresh that will bring the latest SAP metadata.

### Tenant database (in place or system copy)
<a name="tenantdb"></a>

Before you start a restore job, go to the target SAP HANA system, then:

1. *(Optional, but recommended)* Put any installed clusters into maintenance mode to avoid an unexpected takeover during the restore operation.

1. Ensure the system database is running on the target SAP HANA system.

1. *(Recommended)* Disable any backup plans to ensure they don't interfere with the restore operation.

After the restore job completes:
+ Enable any backup plans you disabled prior to the restore operation.

## Troubleshooting
<a name="saphanarestoretroubleshooting"></a>

If any of the following errors occur while attempting a backup or restore operation, see the associated resolution.
+ **Error:** Continuous backup log error

  To maintain recovery points for continuous backups, SAP HANA creates logs for all changes. When the logs are unavailable, the status of each affected continuous recovery point is `STOPPED`. The most recent recovery point that is certain to be restorable is the one with the status `AVAILABLE`. If log data is missing for the time between recovery points with a `STOPPED` status and points with an `AVAILABLE` status, a successful restore cannot be guaranteed for those times. If you input a date and time within this range, AWS Backup will attempt the restore, but will use the closest available restorable time. This error is shown by the message `“Encountered an issue with log backups. Please check SAP HANA for details."`

  **Resolution:** In the console, the most recent restorable time, based on the logs, is displayed. You can input a time more recent than the time shown. However, if the data for this time is unavailable from the logs, AWS Backup will use the most recent restorable time.
+ **Error:** `Internal error`

  **Resolution:** Create a support case from your console or contact Support with the details of your restore such as the restore job ID.
+ **Error:** `The provided role arn:aws:iam::ACCOUNT_ID:role/ServiceLinkedRole cannot be assumed by AWS Backup`

  **Resolution:** Ensure that the role assumed when calling the restore has the required permissions to create service linked roles.
+ **Error:** `User: arn:aws:sts::ACCOUNT_ID:assumed-role/ServiceLinkedRole/AWSBackup-ServiceLinkedRole is not authorized to perform: ssm-sap:GetOperation on resource: arn:aws:ssm-sap:us-east-1:ACCOUNT_ID:...`

  **Resolution:** Ensure that the role assumed when calling the restore has the permissions outlined in the prerequisites.
+ **Error:** `b* 449: recovery strategy could not be determined: [111014] The backup with backup id '1660627536506' cannot be used for recovery SQLSTATE: HY000\n`

  **Resolution:** Ensure that the AWS Backint agent was properly installed. Check all the prerequisites, particularly [Install AWS BackInt Agent and AWS Systems Manager for SAP](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-installing-configuring.html) on your SAP application server, and then retry the installation.
+ **Error:** `IllegalArgumentException: Restore job provided is not ready to return chunks, current restore job status is: CANCELLED`

  **Resolution:** The restore job was canceled by the service workflow. Retry the restore job.
+ **Error:** Encountered an issue restoring a tenant database on an SAP HANA High Availability system: `b* -10709: Connection failed (RTE:[89006] System call 'connect' failed, rc=111:Connection refused ([::1]:40404 → localhost:30013))\n`

  **Resolution:** Check SAP HANA to ensure that the SYSTEMDB is up and running.
+ **Error:** `b'* 448: recovery could not be completed: [301102] exception 301153: Sending root key to secondary failed: connection refused. This may be caused by a stopped system replication secondary. Please keep the secondary online to receive the restored root key. Alternatively you could unregister the secondary site in case of an urgent recovery.\n SQLSTATE: HY000\n'`

  **Resolution:** On an SAP HANA High Availability system, SAP HANA may not be running on the secondary node while an active restore operation is running. Start SAP HANA on the secondary node, then retry the restore job.
+ **Error:** `RequestError: send request failed\ncaused by: read tcp 10.0.131.4:40482->35.84.99.47:443: read: connection timed out"`

  **Resolution:** Transient network instability is occurring on the instance. Retry the restore. If this issue happens consistently, try adding `ForceRetry: "true"` to the agent configuration file at `/hana/shared/aws-backint-agent/aws-backint-agent-config.yaml`.

For any other AWS Backint agent-related issues, refer to [Troubleshoot AWS Backint Agent For SAP HANA](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-troubleshooting.html).

# Restore S3 data using AWS Backup
<a name="restoring-s3"></a>

You can restore the S3 data that you backed up using AWS Backup to the S3 Standard storage class. You can restore all the objects in a bucket or specific objects. You can restore them to an existing or new bucket.

## Amazon S3 restore permissions
<a name="s3-restore-permissions"></a>

Before you begin restoring resources, ensure the role you're using has sufficient permissions.

For more information, see the following entries on policies:

1. [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForS3Restore.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForS3Restore.html)

1. [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForRestores.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForRestores.html)

1. [Managed policies for AWS Backup](security-iam-awsmanpol.md)

## Amazon S3 restore considerations
<a name="s3-restore-considerations"></a>
+ If ACLs were enabled during the backup, the destination bucket for the restore must have ACLs enabled; otherwise, the restore job fails.
+ If Block Public Access is enabled on the destination bucket, the restore job completes successfully, but objects with public ACLs are not restored.
+ Restores of objects are skipped if the destination bucket has an object with the same name or version ID.
+ When you restore to the original S3 bucket,
  + AWS Backup does not perform a destructive restore, which means AWS Backup will not put an object into a bucket in place of an object that already exists, regardless of version.
  + An object whose current version is a delete marker is treated as nonexistent, so a restore can occur.
  + AWS Backup does not delete objects (without delete markers) from a bucket during a restore (example: keys currently in the bucket which were not present during the backup will remain).
+ **Restoring cross-Region copies**
  + While S3 backups can be copied cross-Region, restore jobs only occur in the same Region in which the original backup or copy is located.  
**Example**  

    An S3 bucket created in the US East (N. Virginia) Region can be copied to the Canada (Central) Region. The restore job can be initiated using the original backup in US East (N. Virginia) and restored to that Region, or initiated using the copy in Canada (Central) and restored to that Region.
  + The original encryption method cannot be used to restore a recovery point (backup) copied from another Region. AWS KMS encryption from a cross-Region copy is not available for Amazon S3 resources; instead, use a different encryption type for the restore job.

## Restoring ACLs and object tags
<a name="s3-restore-acl-options"></a>

When restoring Amazon S3 data, you choose whether ACLs are part of the restore.

If ACLs are available in the recovery point, you can choose to restore or exclude them using the **Restore ACLs** setting; if ACLs were not in the backup, they cannot be restored regardless of the setting. If you try to create a restore job with ACLs enabled but they were not part of the backup, you may see an error such as `Unable to restore Access Control Lists (ACLs) for bucket because backup was created with the 'BackupACLs' option disabled. Please proceed with restoring without ACLs`.

Object tags are automatically restored if they were included in the original backup.

**Note**  
Restoring recovery points without ACLs  
If you attempt, through the AWS CLI, to restore ACLs from a backup that excluded ACLs, the restore operation will fail with an error message indicating invalid restore parameters.

## Restore multiple versions
<a name="s3-restore-versions"></a>

By default, AWS Backup restores only the latest version of your objects. You have the choice to restore additional or all versions of the objects.

See step 6 in the following section for how to restore up to the 10 latest versions or all versions using the AWS Backup console.

See [Restore Amazon S3 recovery points through AWS CLI](#s3-restore-cli) later on this page for metadata details to include when restoring programmatically.

## Restore through the AWS Backup console
<a name="s3-restore-console"></a>

**To restore your Amazon S3 data using the AWS Backup console:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**, and select the Amazon S3 resource ID that you want to restore.

1. On the **Resource details** page, you will see a list of recovery points for the selected resource ID. To restore a resource:

   1. In the **Backups** pane, choose the recovery point ID of the resource.

   1. In the upper-right corner of the pane, choose **Restore**.

      (Alternatively, you can go to the backup vault, find the recovery point, and then choose **Actions**, then **Restore**.)

1. If you are restoring a continuous backup, in the **Restore time** pane, select either option:

   1. Accept the default to restore to the **Latest restorable time**.

   1. **Specify date and time** to restore.

1. In the **Settings** pane, specify whether to **Restore entire bucket** or perform **Item level restore**.

   1. If you choose **Item level restore**, you restore up to 5 items (objects or folders in a bucket) per restore job by specifying each item's [S3 URI](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-bucket-intro.html) that uniquely identifies that object.

      (For more information about S3 bucket URIs, see [Methods for accessing a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-bucket-intro.html) in the *Amazon Simple Storage Service User Guide*.)

   1. Choose **Add item** to specify another item to restore.

1. By default, only the latest version of an object is restored. You can restore up to the 10 latest versions or restore all versions of the objects. Select your preference from the drop-down menu.

1. Choose your **Restore destination**. You can either **Restore to source bucket**, **Use existing bucket**, or **Create new bucket**.
**Note**  
Your restore destination bucket must have versioning turned on. AWS Backup notifies you if the bucket you select does not meet this requirement.

   1. If you choose **Use existing bucket**, select the destination S3 bucket from the menu, which shows all existing buckets within your current AWS Region.

   1. If you choose **Create new bucket**, type in the **new bucket name**. After the bucket is created, you can modify the BPA (Block Public Access) and S3 versioning default settings.

1. For the encryption of objects in your S3 bucket, you can choose your **Restored object encryption**. Use **original encryption keys** (default), **Amazon S3 key (SSE-S3)**, or **AWS Key Management Service key (SSE-KMS)**.

   These settings only apply to encryption of the objects in the S3 bucket. This does not affect the encryption for the bucket itself.

   1. **Use original encryption keys (default)** restores objects with the same encryption keys used by the source object. If a source object was unencrypted, this method restores the object without encryption.

      This restore option allows you to optionally choose a substitute encryption key to encrypt the restored object(s) if the original key is unavailable.

   1. If you choose **Amazon S3 key (SSE-S3)**, you do not need to specify any other options.

   1. If you choose **AWS Key Management Service key (SSE-KMS)**, you can make the following choices: **AWS managed key (aws/s3)**, **Choose from your AWS KMS keys**, or **Enter AWS KMS key ARN**.

      1. If you choose **AWS managed key (aws/s3)**, you do not need to specify any other options.

      1. If you **Choose from your AWS KMS keys**, select an AWS KMS key from the dropdown menu. Alternatively, choose **Create key**.

      1. If you **Enter AWS KMS key ARN**, type in the ARN into the text box. Alternatively, choose **Create key**.

1. In the **Restore role** pane, choose the IAM role that AWS Backup will assume for this restore. 

1. Choose **Restore backup**. The **Restore jobs** pane appears. A message at the top of the page provides information about the restore job.

## Restore Amazon S3 recovery points through AWS CLI
<a name="s3-restore-cli"></a>

Use `[StartRestoreJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html)`. You can specify the following metadata during Amazon S3 restores:

```
// Mandatory metadata:
DestinationBucketName // The destination bucket for your restore.
        
// Optional metadata:
RestoreACLs // Boolean. If ACLs were part of the backup, include and set to TRUE. If the backup 
does not include ACLs and this parameter is included, set to FALSE.
EncryptionType // The type of encryption to encrypt your restored objects. Options are original (same encryption as the original object), SSE-S3, or SSE-KMS).
ItemsToRestore // A list of up to five paths of individual objects to restore. Only required for item-level restore.
KMSKey // Specifies the SSE-KMS key to use. Only needed if encryption is SSE-KMS.
RestoreLatestVersionsUpTo // Include this optional parameter to restore multiple versions.
RestoreTime // The restore time (only valid for continuous recovery points where it is required, in format 2021-11-27T03:30:27Z).
```

`RestoreLatestVersionsUpTo` is an optional metadata key-value pair. By default, or if this is omitted, the latest version is restored. Include this metadata to restore additional versions of your objects. Accepted values are:
+ `1` (to restore the latest version)
+ `n`, where *n* is any positive integer greater than 1. The latest *n* versions of your objects will be restored. If the actual version count of an object is less than *n*, that number of versions will be restored for that object.
+ `all` (to restore all versions)
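As a minimal sketch, the metadata constraints above can be assembled programmatically; bucket names are hypothetical, and the serialization of `ItemsToRestore` as a JSON array string is an assumption:

```python
import json

def build_s3_restore_metadata(destination_bucket,
                              items_to_restore=None,
                              encryption_type="original",
                              kms_key=None,
                              latest_versions_up_to=None,
                              restore_time=None):
    """Assemble the Metadata map for an Amazon S3 StartRestoreJob call."""
    metadata = {"DestinationBucketName": destination_bucket}  # mandatory
    if items_to_restore is not None:
        if not 1 <= len(items_to_restore) <= 5:
            raise ValueError("ItemsToRestore accepts at most five paths")
        # Assumed serialization: a JSON array encoded as a string.
        metadata["ItemsToRestore"] = json.dumps(items_to_restore)
    metadata["EncryptionType"] = encryption_type  # original, SSE-S3, or SSE-KMS
    if encryption_type == "SSE-KMS":
        if not kms_key:
            raise ValueError("KMSKey is required when EncryptionType is SSE-KMS")
        metadata["KMSKey"] = kms_key
    if latest_versions_up_to is not None:
        # "1", a positive integer as a string, or "all"
        metadata["RestoreLatestVersionsUpTo"] = latest_versions_up_to
    if restore_time is not None:
        metadata["RestoreTime"] = restore_time  # e.g. "2021-11-27T03:30:27Z", continuous only
    return metadata
```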

## Recovery point status
<a name="s3-recovery-point-status"></a>

Recovery points will have a status indicating their state.

`EXPIRED` status indicates that the recovery point has exceeded its retention period, but AWS Backup lacks permission or is otherwise unable to delete it. To manually delete these recovery points, see [Step 3: Delete the recovery points](https://docs.aws.amazon.com/aws-backup/latest/devguide/gs-cleanup-resources.html#cleanup-backups) in the *Clean up resources* section of *Getting started*.

`STOPPED` status occurs on a continuous backup where a user has taken some action that causes the continuous backup to be disabled. This can be caused by the removal of permissions, turning off versioning, turning off events being sent to Amazon EventBridge, or disabling the EventBridge rules that are put in place by AWS Backup.

To resolve `STOPPED` status, ensure that all requested permissions are in place and that versioning is enabled on the S3 bucket. Once these conditions are met, the next instance of a backup rule running will result in a new continuous recovery point being created. The recovery points with STOPPED status do not need to be deleted.

## S3 restore messages
<a name="s3-restore-messages"></a>

When a restore job completes or fails, you may see the following message. The following table can help you determine the possible cause of the status message.


| Scenario | Job Status | Message | Example | 
| --- | --- | --- | --- | 
| All objects failed to be restored. | `FAILED` | "No objects were restored from **RecoveryPointARN** to **bucket**. To get notified of these failures, enable SNS event notifications." |  The role used to start the restore job does not have permission to put objects in the destination bucket. The restore role does not have permission to verify if object version exists in the destination bucket.  | 
| One or more (but not all) objects failed to be restored. | `COMPLETED` |  "One or more objects failed to be restored from **RecoveryPointARN** to **bucket**. To get notified of these failures, enable SNS event notifications."  |  The role used to start the restore job does not have access to the KMS key used by one or more of the original objects.  | 
| There are no objects to restore. | `COMPLETED` | "There are no objects that match the restore request for **RecoveryPointARN**." |  The recovery point (backup) of the source bucket to be restored has no objects. The prefix used for the restore job does not correspond with any object.  | 

# Restore a Storage Gateway volume
<a name="restoring-storage-gateway"></a>

If you are restoring an AWS Storage Gateway volume snapshot, you can choose to restore the snapshot as a Storage Gateway volume or as an Amazon EBS volume. This is because AWS Backup integrates with both services, and any Storage Gateway snapshot can be restored to either a Storage Gateway volume or an Amazon EBS volume.

## Restore Storage Gateway through the AWS Backup console
<a name="restoring-sgw-console"></a>

**To restore a Storage Gateway volume**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources** and then choose the Storage Gateway resource ID you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. To restore a resource, in the **Backups** pane, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. Specify the restore parameters for your resource. The restore parameters you enter are specific to the resource type that you selected.

   For **Resource type**, choose the AWS resource to create when restoring this backup.

1. If you choose **Storage Gateway volume**, choose a **Gateway** in a reachable state. Also choose your **iSCSI target name**.

   1. For "Volume stored" gateways, choose a **Disk Id**.

   1. For "Volume cached" gateways, choose a capacity that is at least as large as your protected resource.

   If you choose **EBS volume**, provide the values for **Volume type**, **Size (GiB)**, and choose an **Availability zone**.

1. For **Restore role**, choose the IAM role that AWS Backup will assume for this restore.
**Note**  
If the AWS Backup default role is not present in your account, a **Default role** is created for you with the correct permissions. You can delete this default role or make it unusable.

1. Choose **Restore backup**.

   The **Restore jobs** pane appears. A message at the top of the page provides information about the restore job.

## Restore Storage Gateway with AWS CLI
<a name="restoring-sgw-cli"></a>

In the AWS CLI, [`start-restore-job`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/start-restore-job.html) allows you to restore a Storage Gateway volume.

The following list is the accepted metadata.

```
gatewayArn // The Amazon Resource Name (ARN) of the gateway. Use the ListGateways operation to return a list of gateways for your account and AWS Region.
gatewayType // The type of the created gateway. The valid value is BACKUP_VM.
targetName
kmsKey
volumeSize
volumeSizeInBytes
diskId
```
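As a sketch, a `start-restore-job` call for a Storage Gateway volume restore might look like the following. The ARNs, target name, and volume size are placeholder assumptions to adapt to your account; verify the metadata keys against the accepted list above.

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:ec2:us-east-1::snapshot/snap-0abcd1234EXAMPLE" \
--iam-role-arn "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole" \
--metadata 'gatewayArn=arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE,gatewayType=BACKUP_VM,targetName=my-restored-volume,volumeSizeInBytes=107374182400' \
--region us-east-1
```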

# Restore an Amazon Timestream table
<a name="timestream-restore"></a>

When you restore an Amazon Timestream table, there are several options to configure, including the new table name, the destination database, your storage allocation preferences (memory and magnetic storage), and the role you’ll use to complete the restore job. You can also choose an Amazon S3 bucket in which to store error logs. Magnetic storage writes are asynchronous, so you may wish to log the errors.

Timestream data storage has two tiers: a memory store and a magnetic store. Memory store is required, but you have the option of transferring your restored table to magnetic storage after the specified memory time is finished. Memory store is optimized for high throughput data writes and fast point-in-time queries. The magnetic store is optimized for lower throughput late-arrival data writes, long-term data storage, and fast analytical queries.

When you restore a Timestream table, you determine how long you want the table to remain in each storage tier. Using the console or API, you can set the storage time for both. Note that the storage is linear and sequential. Timestream will store your restored table in memory storage first, then automatically transition it to magnetic storage when the memory storage time has been reached.

**Note**  
The magnetic store retention period must be equal or greater than the original retention period (shown at the top-right of the console), or data will be lost.

*Example:* You set the memory store allocation to hold data for one week and set the magnetic store allocation to hold the same data for one year. When the data in the memory store becomes a week old, it is automatically moved to the magnetic store. It is then retained in the magnetic store for a year. At the end of that time, it is deleted from Timestream and from AWS Backup.
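Because the CLI expresses memory store retention in hours and magnetic store retention in days, the one-week/one-year example above translates as in this quick sketch:

```shell
# Memory store: one week, expressed in hours (MemoryStoreRetentionPeriodInHours)
memory_hours=$((7 * 24))

# Magnetic store: one year, expressed in days (MagneticStoreRetentionPeriodInDays)
magnetic_days=365

echo "MemoryStoreRetentionPeriodInHours=${memory_hours}"   # 168
echo "MagneticStoreRetentionPeriodInDays=${magnetic_days}" # 365
```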

## To restore an Amazon Timestream table using the AWS Backup console
<a name="timestream-restore-console"></a>

You can restore Timestream tables in the AWS Backup console that were created by AWS Backup.

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**, and then choose the Amazon Timestream resource ID that you want to restore.

1. On the **Resource details** page, a list of recovery points for the selected resource ID is shown. To restore a resource, in the **Backups** pane, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. Specify your new table configuration settings, including:

   1. **New table name**, consisting of 2 to 256 characters (letters, numbers, dashes, periods, and underscores).

   1. **Destination database**, chosen from the drop-down menu.

1. **Storage allocation**: Set the amount of time the restored table will first reside in [memory storage](https://docs.aws.amazon.com/timestream/latest/developerguide/storage.html), and set the amount of time the restored table will then reside in [magnetic storage](https://docs.aws.amazon.com/timestream/latest/developerguide/storage.html). Memory storage can be set to hours, days, weeks, or months. Magnetic storage can be set to days, weeks, months, or years.

1. *(Optional)* **Enable magnetic storage writes**: You have the option of allowing magnetic storage writes. With this option checked, late-arriving data, which is data with a timestamp outside the memory storage retention period, will be written directly into the magnetic store.

1. *(Optional)* **Amazon S3 error logs location**: You can specify an S3 location in which your error logs will be stored. Browse your S3 files or copy and paste the S3 file path.
**Note**  
If you choose to specify an S3 error log location, the role you use for this restore must have permission to write to an S3 bucket or it must contain a policy with that permission.

1. Choose the IAM role to be passed to perform restores. You can use the default IAM role or specify a different one.

1. Choose **Restore backup**.

Your restore jobs are visible under **Protected resources**. To see the current status of your restore job, choose the refresh button or press CTRL-R.

## To restore an Amazon Timestream table using the API, CLI, or SDK
<a name="timestream-restore-api"></a>

Use [`StartRestoreJob`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html) to restore a Timestream table through the API.

To restore a Timestream table using the AWS CLI, use the operation `start-restore-job` and specify the following metadata:

```
TableName: string;
DestinationDatabase: string;
MemoryStoreRetentionPeriodInHours: value: number unit: 'hours' | 'days' | 'weeks' | 'months' 
MagneticStoreRetentionPeriodInDays: value: number unit: 'days' | 'weeks' | 'months' | 'years' 
EnableMagneticStoreWrites?: boolean;
aws:backup:request-id
```

Here is an example template:

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:backup:us-west-2:accountnumber:recovery-point:1a2b3cde-f405-6789-012g-3456hi789012_beta" \
--iam-role-arn "arn:aws:iam::accountnumber:role/rolename" \
--metadata 'TableName=tablename,DatabaseName=databasename,MagneticStoreRetentionPeriodInDays=1,MemoryStoreRetentionPeriodInHours=1,MagneticStoreWriteProperties="{\"EnableMagneticStoreWrites\":true,\"MagneticStoreRejectedDataLocation\":{\"S3Configuration\":{\"BucketName\":\"bucketname\",\"EncryptionOption\":\"SSE_S3\"}}}"' \
--region us-west-2 \
--endpoint-url url
```
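The `MagneticStoreWriteProperties` value in the template above is a JSON string with escaped quotes, which is easy to get wrong. One way to check your quoting before running the command is to parse the unescaped JSON locally, as in this sketch (assuming `python3` is on your PATH):

```shell
# The unescaped JSON that the escaped string in the template represents
props='{"EnableMagneticStoreWrites":true,"MagneticStoreRejectedDataLocation":{"S3Configuration":{"BucketName":"bucketname","EncryptionOption":"SSE_S3"}}}'

# Parsing it locally fails with a non-zero exit status if the JSON is malformed
echo "$props" | python3 -m json.tool > /dev/null && echo "valid JSON"
```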

You can also use [`DescribeRestoreJob`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_DescribeRestoreJob.html) to retrieve details about your restore job.

In the AWS CLI, use the operation `describe-restore-job` with the following metadata:

```
TableName: string;
DestinationDatabase: string;
MemoryStoreRetentionPeriodInHours: value: number unit: 'hours' | 'days' | 'weeks' | 'months' 
MagneticStoreRetentionPeriodInDays: value: number unit: 'days' | 'weeks' | 'months' | 'years' 
EnableMagneticStoreWrites?: boolean;
```

Here is an example template:

```
aws backup describe-restore-job \
--restore-job-id restore job ID \
--region awsregion \
--endpoint-url url
```

# Restore a virtual machine using AWS Backup
<a name="restoring-vm"></a>

You can restore a virtual machine to VMware, VMware Cloud on AWS, VMware Cloud on AWS Outposts, an Amazon EBS volume, or [to an Amazon EC2 instance](https://docs.aws.amazon.com/aws-backup/latest/devguide/restoring-ec2.html). Restoring (or migrating) a virtual machine to EC2 requires a license. By default, AWS will include a license (charges apply). For more information, see [Licensing options](https://docs.aws.amazon.com/vm-import/latest/userguide/licensing.html) in the *VM Import/Export User Guide*.

You can restore a VMware virtual machine using the AWS Backup console or through the AWS CLI. When a virtual machine is restored, the VMware Tools folder is not included. See VMware documentation to reinstall VMware Tools.

AWS Backup restores of virtual machines are non-destructive, meaning AWS Backup does not overwrite existing virtual machines during a restore. Instead, the restore job deploys a new virtual machine.

**Topics**
+ [Considerations when restoring a VM to an Amazon EC2 instance](#vm-restore-ec2)
+ [Use the AWS Backup console to restore virtual machine recovery points](#vm-restore-console)
+ [Use AWS CLI to restore virtual machine recovery points](#vm-restore-cli)

## Considerations when restoring a VM to an Amazon EC2 instance
<a name="vm-restore-ec2"></a>
+ Restoring (or migrating) a virtual machine to EC2 requires a license. By default, AWS will include a license (charges apply). For more information, see [Licensing options](https://docs.aws.amazon.com/vm-import/latest/userguide/licensing.html) in the *VM Import/Export User Guide*.
+ There is a maximum limit of 5 TB (terabytes) for each virtual machine disk.
+ You can't specify a key pair when you restore the virtual machine to an instance. You can add a key pair to `authorized_keys` during launch (through instance user data) or after launch (as described in [this troubleshooting section](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html#replacing-lost-key-pair) in the Amazon EC2 User Guide).
+ Confirm your [operating system is supported](https://docs.aws.amazon.com/vm-import/latest/userguide/prerequisites.html#vmimport-operating-systems) for import to and export from Amazon EC2 in the *VM Import/Export User Guide*.
+ Review limitations involved with [Importing VMs to Amazon EC2](https://docs.aws.amazon.com/vm-import/latest/userguide/prerequisites.html#limitations-image) in the *VM Import/Export User Guide*.
+ When you restore to an Amazon EC2 instance using AWS CLI, you must specify `"RestoreTo":"EC2Instance"`. All other attributes have default values.
+ Amazon EC2 offers [EC2 Allowed AMIs](https://docs.aws.amazon.com//AWSEC2/latest/UserGuide/ec2-allowed-amis.html). If this setting is enabled in your account, add the alias `aws-backup-vault` to your allowlist. Otherwise, restore operations of VM recovery points to EC2 instances will fail with an error message, such as "Source AMI not found in Region".
+ VMware restores to EC2 involving more than 21 disks are not supported. As a workaround, use [VMware Restores to EBS](https://docs.aws.amazon.com//aws-backup/latest/devguide/restoring-vm.html#restore-vm-ebs) to restore each disk individually as an EBS volume, then attach the EBS volumes to an EC2 instance.
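Putting these considerations together, a CLI restore of a VM recovery point to an EC2 instance might look like the following sketch. The ARNs, instance type, and network IDs are placeholder assumptions; all unspecified attributes take their default values.

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:backup:us-east-1:123456789012:recovery-point:EXAMPLE-1234" \
--iam-role-arn "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole" \
--metadata '{"RestoreTo":"EC2Instance","InstanceType":"t3.large","VpcId":"vpc-EXAMPLE","SubnetId":"subnet-EXAMPLE"}'
```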

## Use the AWS Backup console to restore virtual machine recovery points
<a name="vm-restore-console"></a>

You can restore a virtual machine from multiple locations in the left navigation pane of the AWS Backup console:
+ Choose **Hypervisors** to view recovery points for virtual machines managed by a hypervisor that is connected to AWS Backup.
+ Choose **Virtual machines** to view recovery points for virtual machines across all your hypervisors that are connected to AWS Backup.
+ Choose **Backup vaults** to view recovery points stored in a specific AWS Backup vault.
+ Choose **Protected resources** to view recovery points across all your AWS Backup protected resources.

If you need to restore a virtual machine that no longer has a connection with Backup gateway, choose **Backup vaults** or **Protected resources** to locate your recovery point.

**Options**
+ [Restore to VMware](#restore-vm-vmware)
+ [Restore to an Amazon EBS volume](#restore-vm-ebs)
+ [Restore to an Amazon EC2 instance](#restore-vm-ec2)<a name="restore-vm-vmware"></a>

**To restore a virtual machine to VMware, VMware Cloud on AWS, and VMware Cloud on AWS Outposts**

1. In the **Hypervisors** or **Virtual machines** views, choose the **VM name** to restore. In the **Protected resources** view, choose the virtual machine **Resource ID** to restore.

1. Choose the radio button next to the **Recovery point ID** to restore.

1. Choose **Restore**.

1. Choose the **Restore type**.

   1. **Full restore** restores all the virtual machine's disks.

   1. **Disk-level restore** restores a user-defined selection of one or more disks. Use the drop-down menu to select which disks to restore.

1. Choose the **Restore location**. The options are **VMware**, **VMware Cloud on AWS**, and **VMware Cloud on AWS Outposts**.

1. If you are doing a full restore, skip to the next step. If you are performing a disk-level restore, there will be a drop-down menu under **VM disks**. Choose one or more bootable volumes to restore.

1. Select a **Hypervisor** from the drop-down menu to manage the restored virtual machine.

1. For the restored virtual machine, use your organization’s virtual machine best practices to specify its:

   1. **Name**

   1. **Path** (such as `/datacenter/vm`)

   1. **Compute resource name** (such as VMHost or Cluster)

      If a host is part of a cluster, you cannot restore to the host directly; you can restore only to the cluster.

   1. **Datastore**

1. For **Restore role**, select either the **Default role** (recommended) or **Choose an IAM role** using the drop-down menu.

1. Choose **Restore backup**.

1. *Optional*: Check when your restore job has the status `Completed`. In the left navigation menu, choose **Jobs**.<a name="restore-vm-ebs"></a>

**To restore a virtual machine to an Amazon EBS volume**

1. In the **Hypervisors** or **Virtual machines** views, choose the **VM name** to restore. In the **Protected resources** view, choose the virtual machine **Resource ID** to restore.

1. Choose the radio button next to the **Recovery point ID** to restore.

1. Choose **Restore**.

1. Choose the **Restore type**.

   1. **Disk restore** restores a user-defined selection of one disk. Use the drop-down menu to select which disk to restore.

1. Choose the **Restore location** as **Amazon EBS**.

1. Under the **VM disk** drop-down menu, choose the bootable volume to restore.

1. Under **EBS Volume type**, choose the volume type.

1. Choose your Availability Zone.

1. *Optional*: Select the **Encryption** check box if you choose to encrypt the EBS volume.

1. Select your KMS key from the menu.

1. For **Restore role**, select either the **Default role** (recommended) or **Choose an IAM role**.

1. Choose **Restore backup**.

1. *Optional*: Check when your restore job has the status `Completed`. In the left navigation menu, choose **Jobs**.

1. *Optional*: Visit [How do I use LVM to create a logical volume on an Amazon EBS volume's partition?](https://repost.aws/knowledge-center/create-lv-on-ebs-partition) to learn more on how to mount managed volumes and access data on the restored Amazon EBS volume.<a name="restore-vm-ec2"></a>

**To restore a virtual machine to an Amazon EC2 instance**

1. In the **Hypervisors** or **Virtual machines** views, choose the **VM name** to restore. In the **Protected resources** view, choose the virtual machine **Resource ID** to restore.

1. Choose the radio button next to the **Recovery point ID** to restore.

1. Choose **Restore**.

1. Choose the **Restore type**.

   1. **Full restore** restores the file system completely, including the root-level folder and files.

1. Choose the **Restore location** as **Amazon EC2**.

1. For **Instance type**, choose the combination of compute and memory required to run your application on your new instance.
**Tip**  
Choose an instance type that matches or exceeds the specifications of the original virtual machine. For more information, see the [Amazon EC2 Instance Types Guide](https://docs.aws.amazon.com/ec2/latest/instancetypes/).

1. For **Virtual Private Cloud (VPC)**, choose a virtual private cloud (VPC), which defines the networking environment for the instance.

1. For **Subnet**, choose one of the subnets in the VPC. Your instance receives a private IP address from the subnet address range.

1. For **Security groups**, choose a security group, which acts as a firewall for traffic to your instance.

1. For **Restore role**, select either the **Default role** (recommended) or **Choose an IAM role**.

1. *Optional*: To run a script on your instance at launch, expand **Advanced settings** and enter the script in **User data**.

1. Choose **Restore backup**.

1. *Optional*: Check when your restore job has the status `Completed`. In the left navigation menu, choose **Jobs**.

## Use AWS CLI to restore virtual machine recovery points
<a name="vm-restore-cli"></a>

Use [`StartRestoreJob`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html).

You can specify the following metadata for a virtual machine restore to Amazon EC2 and Amazon EBS:

```
RestoreTo
InstanceType
VpcId
SubnetId
SecurityGroupIds
IamInstanceProfileName
InstanceInitiatedShutdownBehavior
HibernationOptions
DisableApiTermination
Placement
CreditSpecification
RamdiskId
KernelId
UserData
EbsOptimized
LicenseSpecifications
KmsKeyId
AvailabilityZone
EbsVolumeType
IsEncrypted
ItemsToRestore
RequireIMDSv2
NetworkInterfaces
```

AWS Backup supports both partial restores to Amazon EBS and full restores to Amazon EC2. For partial restores, use `ItemsToRestore` to specify which disk to restore to the specified EBS volume. When restoring to Amazon EC2, the parameter `ItemsToRestore` can be left blank because it is ignored and the full list of disks is restored.
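For example, a partial restore of one disk to an EBS volume might pass metadata like the following sketch. The recovery point ARN, volume type, Availability Zone, and disk label are placeholder assumptions to adapt to your environment.

```
aws backup start-restore-job \
--recovery-point-arn "arn:aws:backup:us-east-1:123456789012:recovery-point:EXAMPLE-1234" \
--iam-role-arn "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole" \
--metadata '{"RestoreTo":"EBS","EbsVolumeType":"gp2","AvailabilityZone":"us-east-1a","ItemsToRestore":"[\"Hard disk 1\"]"}'
```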

You can specify the following metadata for a virtual machine restore to VMware, VMware Cloud on AWS, and VMware Cloud on AWS Outposts:

```
RestoreTo
HypervisorArn
VMName
VMPath
ComputeResourceName
VMDatastore
DisksToRestore
ItemsToRestore
```

AWS Backup supports both partial and full restores to an on-premises virtual machine. You can choose to restore all disks or only a subset of disks. When performing a partial restore, specify your disk selection in `ItemsToRestore`. When performing a full restore, you must either omit both `DisksToRestore` and `ItemsToRestore`, or specify all the disks in `DisksToRestore`. The `DisksToRestore` parameter does not support subsets of disks.

This example shows how to conduct a full restore to VMware:

```
'{"RestoreTo":"VMware","HypervisorArn":"arn:aws:backup-gateway:us-east-1:209870788375:hypervisor/hype-9B1AB1F1","VMName":"name","VMPath":"/Labster/vm","ComputeResourceName":"Cluster","VMDatastore":"vsanDatastore","DisksToRestore":"[{\"DiskId\":\"2000\",\"Label\":\"Hard disk 1\"}]","vmId":"vm-101"}'
```
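A partial restore to VMware, by contrast, would specify the disk selection in `ItemsToRestore` instead of listing all disks in `DisksToRestore`. The following is a hedged sketch based on the full-restore metadata above; the disk label format is an assumption to verify against your recovery point.

```
'{"RestoreTo":"VMware","HypervisorArn":"arn:aws:backup-gateway:us-east-1:209870788375:hypervisor/hype-9B1AB1F1","VMName":"name","VMPath":"/Labster/vm","ComputeResourceName":"Cluster","VMDatastore":"vsanDatastore","ItemsToRestore":"[\"Hard disk 1\"]"}'
```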

# Restore testing
<a name="restore-testing"></a>

*Restore testing*, a feature offered by AWS Backup, provides automated and periodic evaluation of restore viability, as well as the ability to monitor restore job duration times.

**Topics**
+ [Overview](#restore-testing-overview)
+ [Restore testing compared with restore process](#restore-testing-compare)
+ [Restore testing management](#restore-testing-management)
+ [Create a restore testing plan](#restore-testing-create)
+ [Update a restore testing plan](#restore-testing-update)
+ [View existing restore testing plans](#restore-testing-view)
+ [View restore testing jobs](#restore-testing-jobs)
+ [Delete a restore testing plan](#restore-testing-delete)
+ [Audit restore testing](#restore-testing-audit)
+ [Restore testing quotas and parameters](#restore-testing-quotas)
+ [Restore testing failure troubleshooting](#restore-testing-troubleshooting)
+ [Restore testing inferred metadata](restore-testing-inferred-metadata.md)
+ [Restore testing validation](restore-testing-validation.md)

## Overview
<a name="restore-testing-overview"></a>

First, you create a restore testing plan where you provide a name for your plan, the frequency for your restore tests, and the target start time. Then, you assign the resources you want to include in your plan. You then choose to include specific or random recovery points in your test. AWS Backup intelligently [infers the metadata](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html) that will be needed for your restore job to be successful.

When the scheduled time in your plan arrives, AWS Backup starts restore jobs based on your plan and monitors the time taken to complete the restore.

After the restore test plan completes its run, you can use the results to show compliance for organizational or governance requirements such as the successful completion of restore test scenarios or the restore job completion time.

Optionally, you can use [Restore testing validation](restore-testing-validation.md) to confirm the restore test results.

Once the optional validation completes or the validation window closes, AWS Backup deletes the resources involved with the restore test in accordance with service SLAs.

At the end of the testing process, you can view the results and the completion time of the tests.

## Restore testing compared with restore process
<a name="restore-testing-compare"></a>

Restore testing runs restore jobs in the same way as on-demand restores and uses the same recovery points (backups) as an on-demand restore. You will see calls to `StartRestoreJob` in CloudTrail (if you have opted in) for each job started by restore testing.

However, there are a few differences between the operation of a scheduled restore test and an on-demand restore operation:


|  | Restore Testing | Restore | 
| --- | --- | --- | 
| **Account** | Recommended best practice is to designate an account to be used for restore tests | You can restore resources from an account | 
| **AWS Backup Audit Manager** | Can turn on a control to confirm if a restore test meets specified restore objectives |  | 
| **Cadence** | Periodically as part of a scheduled plan. | On demand | 
| **Resources** | The resource types you can assign to your testing plan include: Aurora, Amazon DocumentDB, Amazon DynamoDB, Amazon EBS, Amazon EC2, Amazon EFS, Amazon FSx (Lustre, ONTAP, OpenZFS, Windows), Amazon Neptune, Amazon RDS, and Amazon S3. | All resources can be restored. | 
| **Results** | After the restore testing job is completed, the restored resource is deleted after the [Restore testing validation](restore-testing-validation.md) window finishes. | Once the restore job is completed, the restored version of the resource remains. | 
| **Tags** | For resource types that support tags on restore, restore testing applies tags on restore. | Tags are optional for supported resources. | 

## Restore testing management
<a name="restore-testing-management"></a>

You can create, view, update, or delete a restore testing plan in the [AWS Backup console](https://console.aws.amazon.com/backup/).

You can use the [AWS CLI](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/index.html#cli-aws-backup) to programmatically carry out operations for restore testing plans. Each CLI command is specific to the AWS service in which it originates. Prepend restore testing commands with `aws backup`.
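For example, to list your restore testing plans and inspect one of them from the CLI (the plan name below is a placeholder):

```
aws backup list-restore-testing-plans

aws backup get-restore-testing-plan \
--restore-testing-plan-name MyRestoreTestingPlan
```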

### Data deletion
<a name="restore-testing-data-deletion"></a>

When a restore test is finished, AWS Backup begins deleting the resources involved in the test. This deletion is not instantaneous. Each resource has an underlying configuration that determines how those resources are stored and lifecycled. For example, if Amazon S3 buckets are part of the restore test, [lifecycle rules are added to the bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/intro-lifecycle-rules.html). It can take up to several days for the rules to execute and for the bucket and its objects to be fully deleted, but charges accrue for these resources only until the lifecycle rule initiates (by default, 1 day). The speed of deletion depends on the resource type.

Resources that are part of a restore testing plan contain a tag called `awsbackup-restore-test`. If a user removes this tag, AWS Backup cannot delete the resource at the end of the testing period and the user will have to delete it manually instead.

To check why resources may not have been deleted as expected, you can search through failed jobs in the console or use the command line interface to call the API request `DescribeRestoreJob` to retrieve deletion status messages.

Backup plans (non-restore testing plans) ignore resources created by restore testing (those with tag `awsbackup-restore-test` or a name starting with `awsbackup-restore-test`).

### Cost control
<a name="restore-testing-cost-control"></a>

Restore testing has a cost per restore test. Depending on what resources are included in your restore testing plan, the restore jobs that are part of the plan may also have a cost. See [AWS Backup Pricing](https://aws.amazon.com/backup/pricing/) for full details.

When you set up a restore testing plan for the first time, you may find it beneficial to include a minimum number of resource types and protected resources to familiarize yourself with the feature, the process, and the average costs involved. You can update a plan after its creation to add more resource types and protected resources.

## Create a restore testing plan
<a name="restore-testing-create"></a>

A restore testing plan has two parts: plan creation and assigning resources.

When you use the console, these parts are sequential. In the first part, you set the name, frequency, and start times. During the second part you assign resources to your testing plan.

When using AWS CLI and API, first use [`create-restore-testing-plan`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/create-restore-testing-plan.html). After you receive a successful response and the plan has been created, use [`create-restore-testing-selection`](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/create-restore-testing-selection.html) for each resource type to include in your plan.

When you create a restore testing plan, we create a service-linked role for you. For more information, see [Using roles for restore testing](using-service-linked-roles-AWSServiceRoleForBackupRestoreTesting.md).
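As a sketch, the two CLI calls might look like the following. The plan name, schedule, selection name, and role ARN are placeholder assumptions to adapt to your account.

```
aws backup create-restore-testing-plan \
--restore-testing-plan '{
  "RestoreTestingPlanName": "MyRestoreTestingPlan",
  "ScheduleExpression": "cron(0 1 ? * SUN *)",
  "StartWindowHours": 8,
  "RecoveryPointSelection": {
    "Algorithm": "LATEST_WITHIN_WINDOW",
    "IncludeVaults": ["*"],
    "RecoveryPointTypes": ["SNAPSHOT"],
    "SelectionWindowDays": 30
  }
}'

aws backup create-restore-testing-selection \
--restore-testing-plan-name MyRestoreTestingPlan \
--restore-testing-selection '{
  "RestoreTestingSelectionName": "EbsVolumes",
  "ProtectedResourceType": "EBS",
  "ProtectedResourceArns": ["*"],
  "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole"
}'
```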

------
#### [ Console ]

**Part I: Create a restore testing plan using the console**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left-hand navigation, locate **Restore testing** and select it.

1. Choose **Create restore testing plan**.

1. **General**

   1. **Name:** Type in a name for your new restore testing plan. The name cannot be changed after creation. The name must consist of only alphanumeric characters and underscores.

   1. **Test frequency:** Choose the frequency at which the restore tests will run. 

   1. **Start Time:** Set the time (in hour and minute) you prefer the test to begin. You can also set the local time zone in which you want the restore testing plan to operate. 

   1. **Start within:** This value (in hours) is the period of time in which the restore test is designated to begin. AWS Backup makes a best effort to commence all designated restore jobs during the start within time and randomizes start times within this period.

1. **Recovery point selection:** Here you set the source vaults, the recovery point range, and selection criteria for which recovery points (backups) you want to be part of the plan.

   1. **Source vaults:** Choose whether to include all available vaults or just specific vaults to help filter which recovery points can be in your plan. If you choose **specific vaults**, select from the drop-down menu the vaults you wish to include.

   1. **Eligible recovery points:** Specify the time frame from which recovery points will be selected. You can select 1 to 365 days, 1 to 52 weeks, 1 to 12 months, or 1 year.

   1. **Selection criteria:** Once your date range of recovery points is specified, you can choose whether to include the latest recovery point or one chosen at random in your plan. Choosing a random one lets you regularly gauge the general health of your recovery points, in case a restore to an older version is ever warranted.

   1. **Point-in-time recovery points:** If your plan includes resources that have continuous backup (point-in-time-restore/PITR) points, you can check this box to have your testing plan include continuous backups as eligible recovery points (see [Feature availability by resource](backup-feature-availability.md#features-by-resource) for which resources types have this feature).

1. *(optional)* **Tags added to restore testing plan:** You can choose to add up to 50 tags to your restore testing plan. Each tag must be added separately. To add a new tag, select **Add new tag**.

**Part II: Assign resources to the plan using the console**

In this section, you choose the resources you have backed up to include in your restore testing plan. You will choose the name of the resource assignment, choose the role you use for the restore test, and set the retention period before cleanup. Then, you will select the resource type, select the scope, and optionally refine your selection with tags.
**Tip**  
To navigate back to the restore testing plan to which you want to add resources, you can go to the [AWS Backup console](https://console.aws.amazon.com/backup), select **Restore testing**, then find your preferred testing plan and select it.

1. **General**

   1. **Resource assignment name:** Input a name for this resource assignment using a string of alphanumeric characters and underscores, with no white spaces.

   1. **Restore IAM role:** The test must use an Identity and Access Management (IAM) role you designate. You can choose the AWS Backup default role or a different one. If the AWS Backup default does not yet exist as you finish this process, AWS Backup will create it for you automatically with the necessary permissions. The IAM role you choose for restore testing must contain the permissions found in [AWS managed policies for AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#aws-managed-policies).

   1. **Retention period before cleanup:** During a restore test, backup data is temporarily restored. By default, this data is deleted after the test is complete. You have the option to delay deletion of this data if you wish to run validation on the restore.

      If you plan to run validation, select **retain for a specific number of hours** and input a value from 1 to 168 hours, inclusive. Note that validation can be run programmatically but not from the AWS Backup console.

1. **Protected resources:**

   1. **Select resource type:** Select which resource types and the scope of which backups of those types to include in the restore testing plan. Each plan can contain multiple resource types, but each type of resource must be assigned to the plan individually.

   1. **Resource selection scope:** Once the type is chosen, select if you want to include all available protected resources of that type or if you want to include specific protected resources only.

   1. *(optional)* **Refine resource selection using tags:** If your backups have tags, you can filter by tags to **select specific protected resources**. Enter the tag key, the condition for this key to be or not to be included, and the value for the key. Then, select the **Add tags** button.

      Tags on protected resources are evaluated by checking the tags on the latest recovery point within the backup vault containing the protected resource.

1. **Restore parameters:** Certain resources require specifying parameters in preparation for a restore job. In most cases, AWS Backup will infer the values based on the stored backup.

   In most cases, it is recommended to retain these parameters; however, you can change the values by choosing a different selection from the dropdown menu. Examples where changing the values may be useful include overriding encryption keys, Amazon FSx settings where data cannot be inferred, and creation of subnets.

   For example, if an RDS database is one of the resource types you assign to your restore testing plan, parameters such as availability zone, database name, database instance class, and VPC security group will appear with inferred values you can change if applicable.

------
#### [ AWS CLI ]

The API operation `CreateRestoreTestingPlan` is used to create a restore testing plan.

The testing plan must contain:
+ `RestoreTestingPlan`, which must contain a unique `RestoreTestingPlanName`
+ [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_RestoreTestingPlanForCreate.html#Backup-Type-RestoreTestingPlanForCreate-ScheduleExpression](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_RestoreTestingPlanForCreate.html#Backup-Type-RestoreTestingPlanForCreate-ScheduleExpression) cron expression
+ [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_RestoreTestingRecoveryPointSelection.html](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_RestoreTestingRecoveryPointSelection.html) 

  Though named similarly, this is **NOT** the same as `RestoreTestingSelection`.

  [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_RestoreTestingRecoveryPointSelection.html](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_RestoreTestingRecoveryPointSelection.html) has five parameters (three required and two optional). The values you specify determine which recovery point is included in the restore test. You must indicate with `Algorithm` if you want the latest recovery point within your `SelectionWindowDays` or if you want a random recovery point, and you must indicate through `IncludeVaults` from which vaults the recovery points can be chosen.

A selection can have one or more protected resource ARNs or can have one or more conditions, but it cannot have both.
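
As an illustration, a minimal request body for the plan might be shaped like the following sketch. The plan name, vault ARN, and schedule below are hypothetical placeholders, not values from this guide:

```
// Sketch of a CreateRestoreTestingPlan request body (placeholders throughout).
// RecoveryPointSelection carries the three required parameters (Algorithm,
// IncludeVaults, RecoveryPointTypes) and one optional one (SelectionWindowDays).
const restoreTestingPlanInput = {
  RestoreTestingPlan: {
    RestoreTestingPlanName: "MyRestoreTestingPlan",
    ScheduleExpression: "cron(0 1 ? * SUN *)", // weekly, Sundays at 01:00
    StartWindowHours: 168, // optional: 1 to 168 hours
    RecoveryPointSelection: {
      Algorithm: "LATEST_WITHIN_WINDOW", // or "RANDOM_WITHIN_WINDOW"
      IncludeVaults: [
        "arn:aws:backup:us-east-1:123456789012:backup-vault:Default"
      ],
      RecoveryPointTypes: ["SNAPSHOT"],
      SelectionWindowDays: 30 // optional: up to 365
    }
  }
};

console.log(JSON.stringify(restoreTestingPlanInput, null, 2));
```

The same JSON shape can be passed to the CLI with `--cli-input-json`.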

You can also include:
+ [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_RestoreTestingPlanForCreate.html#Backup-Type-RestoreTestingPlanForCreate-ScheduleExpressionTimeZone](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_RestoreTestingPlanForCreate.html#Backup-Type-RestoreTestingPlanForCreate-ScheduleExpressionTimeZone)
+ [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateRestoreTestingPlan.html#Backup-CreateRestoreTestingPlan-request-Tags](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateRestoreTestingPlan.html#Backup-CreateRestoreTestingPlan-request-Tags)
+ [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateRestoreTestingPlan.html#API_CreateRestoreTestingPlan_RequestSyntax](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateRestoreTestingPlan.html#API_CreateRestoreTestingPlan_RequestSyntax)
+ [https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateRestoreTestingPlan.html#API_CreateRestoreTestingPlan_RequestSyntax](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateRestoreTestingPlan.html#API_CreateRestoreTestingPlan_RequestSyntax)

Use CLI command [`create-restore-testing-plan`.](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/create-restore-testing-plan.html)

Once the plan has been created successfully, you need to assign resources to it using [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/create-restore-testing-selection.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/create-restore-testing-selection.html).

This consists of `RestoreTestingSelectionName`, `ProtectedResourceType`, and one of the following:
+ `ProtectedResourceArns`
+ `ProtectedResourceConditions`

**Note**  
**RestoreTestingSelectionName naming requirements:**  
Must be 1-256 characters in length  
Can contain letters (a-z, A-Z), numbers (0-9), hyphens (-), and underscores (_)  
Must start with a letter or number  
Cannot end with a hyphen or underscore  
Must be unique within the restore testing plan

Each protected resource type can have a single value. A restore testing selection can include a wildcard value ("*") for `ProtectedResourceArns` along with `ProtectedResourceConditions`. Alternatively, you can include up to 30 specific protected resource ARNs in `ProtectedResourceArns`.
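
For illustration, a selection that uses the wildcard ARN together with tag conditions might be shaped like this sketch. The selection name, role ARN, and tag values are hypothetical placeholders:

```
// Sketch of a CreateRestoreTestingSelection request body using the wildcard
// ARN plus tag conditions. Role ARN and tag values are hypothetical.
const selectionInput = {
  RestoreTestingPlanName: "MyRestoreTestingPlan",
  RestoreTestingSelection: {
    RestoreTestingSelectionName: "ec2_selection",
    ProtectedResourceType: "EC2",
    IamRoleArn: "arn:aws:iam::123456789012:role/MyRestoreTestingRole",
    ProtectedResourceArns: ["*"], // wildcard: combine with conditions below
    ProtectedResourceConditions: {
      StringEquals: [{ Key: "aws:ResourceTag/environment", Value: "prod" }]
    },
    ValidationWindowHours: 24 // optional: retain restored data for validation
  }
};

// A quick check of the naming rules in the note above: starts with a letter
// or number, allowed characters only, and no trailing hyphen or underscore.
const name = selectionInput.RestoreTestingSelection.RestoreTestingSelectionName;
const validName =
  /^[A-Za-z0-9][A-Za-z0-9_-]{0,255}$/.test(name) && !/[-_]$/.test(name);
console.log(validName);
```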

------

**Restore test frequency**

AWS Backup evaluates cron expressions between 00:00 and 23:59. If you create a restore testing plan for "every 12 hours" but provide a start time later than 11:59, it will run only once per day.
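
This behavior can be illustrated with the following sketch, which enumerates the hours at which an "every N hours" schedule would run within a single day, assuming runs repeat from the start hour only until 23:59 (the function is illustrative, not part of any AWS API):

```
// Illustrative only: list the hours (0-23) at which a plan that repeats
// every `stepHours` hours starting at `startHour` would run, given that
// cron expressions are evaluated between 00:00 and 23:59.
function dailyRunHours(startHour, stepHours) {
  const hours = [];
  for (let h = startHour; h <= 23; h += stepHours) {
    hours.push(h);
  }
  return hours;
}

console.log(dailyRunHours(1, 12));  // start at 01:00 -> runs at 01:00 and 13:00
console.log(dailyRunHours(13, 12)); // start at 13:00 -> runs only once, at 13:00
```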

**Recovery point determination**

Each time a testing plan runs (according to the frequency and start time you specified), one eligible recovery point per protected resource in the selection is restored by the restore test. If no recovery points for a resource meet the recovery point selection criteria, that resource will not be included in the test.

A recovery point for a protected resource in a testing selection is eligible if it meets the criteria for the specified time frame and included vaults in the restore testing plan.

A protected resource is selected if the restore testing selection includes the resource type and if either of the following conditions is true:
+ The resource ARN is specified in that selection; or,
+ The tag conditions on that selection match the tags on the latest recovery point for the resource
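
The selection logic above can be sketched as follows. The helper function and data shapes are hypothetical, for illustration only, and are not part of the AWS Backup API:

```
// Hypothetical sketch of the selection rule: a protected resource is selected
// if the selection covers its resource type AND either its ARN is listed in
// the selection or the selection's tag conditions match the tags on the
// resource's latest recovery point.
function isResourceSelected(resource, selection) {
  if (resource.type !== selection.resourceType) return false;
  if (selection.arns.includes(resource.arn)) return true;
  return Object.entries(selection.tagConditions).every(
    ([key, value]) => resource.latestRecoveryPointTags[key] === value
  );
}

const resource = {
  type: "EC2",
  arn: "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc",
  latestRecoveryPointTags: { environment: "prod" }
};
const selection = {
  resourceType: "EC2",
  arns: [],
  tagConditions: { environment: "prod" }
};

console.log(isResourceSelected(resource, selection)); // true
```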

## Update a restore testing plan
<a name="restore-testing-update"></a>

You can update parts of your restore testing plan and the resource selections within it through the console or AWS CLI.

------
#### [ Console ]

**Update restore testing plans and selections in the console**

When you view the restore testing plan details page in the console, you can edit (update) many of the settings of your plan. To do this:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left-hand navigation, locate **Restore testing** and select it.

1. Select the **Edit** button.

1. Adjust the frequency, the start time, and the window within which the test will begin after the chosen start time.

1. Save your changes.

------
#### [ AWS CLI ]

**Update restore testing plans and selections through AWS CLI**

Requests [UpdateRestoreTestingPlan](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_UpdateRestoreTestingPlan.html) and [UpdateRestoreTestingSelection](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_UpdateRestoreTestingSelection.html) can be used to send partial updates to a specified plan or selection. The names cannot be changed, but you can update other parameters. Include only parameters you wish to change in each request.

Before sending an update request, use [GetRestoreTestingPlan](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_GetRestoreTestingPlan.html) and [GetRestoreTestingSelection](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_GetRestoreTestingSelection.html) to determine if your RestoreTestingSelection contains specific ARNs or if it uses the wildcard and conditions.

If your restore testing selection has specified ARNs (instead of wildcard) and you wish to change it to a wildcard with conditions, the update request must include both the ARN wildcard and the conditions. A selection can have either protected resource ARNs or use the wildcard with conditions, but it cannot have both.
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/get-restore-testing-plan.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/get-restore-testing-plan.html)
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/get-restore-testing-selection.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/get-restore-testing-selection.html)
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/update-restore-testing-plan.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/update-restore-testing-plan.html)
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/update-restore-testing-selection.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/update-restore-testing-selection.html)
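
For example, a sketch of an update request body that switches a selection from specific ARNs to the wildcard with conditions might look like the following. The plan name, selection name, and tag values are hypothetical placeholders:

```
// Sketch of an UpdateRestoreTestingSelection request body that replaces
// specific ARNs with the wildcard plus conditions. Because a selection
// cannot have both, the update includes BOTH the wildcard ARN and the
// conditions together. All names and values are hypothetical.
const updateInput = {
  RestoreTestingPlanName: "MyRestoreTestingPlan",
  RestoreTestingSelectionName: "ec2_selection",
  RestoreTestingSelection: {
    ProtectedResourceArns: ["*"],
    ProtectedResourceConditions: {
      StringEquals: [{ Key: "aws:ResourceTag/environment", Value: "prod" }]
    }
  }
};

console.log(JSON.stringify(updateInput, null, 2));
```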

------

## View existing restore testing plans
<a name="restore-testing-view"></a>



------
#### [ Console ]

**View details about an existing restore testing plan and assigned resources in the console**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Select **Restore testing** from the left-hand navigation. The display shows your restore testing plans, sorted by default by last runtime.

1. Select the link from a plan to see its details, including a summary of the plan, its name, frequency, start time, and start within value.

You can also view the protected resources within this plan, the restore testing jobs from this plan over the most recent 30 days, and any tags you have created as part of this testing plan.

------
#### [ AWS CLI ]



**Get details about an existing restore testing plan and testing selection using the command line**
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/list-restore-testing-plans.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/list-restore-testing-plans.html)
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/list-restore-testing-selections.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/list-restore-testing-selections.html)
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/get-restore-testing-plan.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/get-restore-testing-plan.html)
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/get-restore-testing-selection.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/get-restore-testing-selection.html)

------

## View restore testing jobs
<a name="restore-testing-jobs"></a>

------
#### [ Console ]

**View existing restore testing jobs in the console**

Restore testing jobs are included on the restore jobs page.

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Navigate to the **Jobs** page.

   Alternatively, you can select **Restore testing**, then select a restore testing plan to see its details and the jobs associated with the plan.

1. Select the **Restore jobs** tab.

   On this page, you can view the status, restore time, restore type, resource ID, resource type, restore testing plan to which the job belongs, the creation time, and the recovery point ID of the restore job. 

   Jobs included in a restore testing plan have the restore type **Test**.

Restore testing jobs have several status categories:
+ A **status** that requires attention is underlined; hover over the status to see additional details if they are available.
+ A **validation status** displays if [Restore testing validation](restore-testing-validation.md) has been initiated on the test; validation is initiated programmatically, not from the console.
+ A **deletion status** notes the status of the data generated by a restore test. There are three possible deletion statuses: **Successful**, **Deleting**, and **Failed**.

  If a restore test job deletion failed, you will need to remove the resource manually since the restore testing flow could not complete it automatically. Often, a failed deletion is triggered if the tag `awsbackup-restore-test` is removed from the resource.

------
#### [ AWS CLI ]

**View existing restore testing jobs from the command line**
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/list-restore-jobs-by-protected-resource.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/list-restore-jobs-by-protected-resource.html)

------

## Delete a restore testing plan
<a name="restore-testing-delete"></a>



------
#### [ Console ]

**Delete restore testing plan in the console**

1. Go to [View existing restore testing plans](#restore-testing-view) to see your current restore testing plans.

1. On the restore testing plan details page, delete a plan by selecting **Delete**.

1. After you select delete, a pop-up confirmation screen will appear to ensure you want to delete your plan. On this screen, the name of your specific restore testing plan will be displayed in bold. To proceed, type in the exact case-sensitive name of the testing plan, including any underscores, dashes, and periods.

   If the option for **Delete restore testing plan** is not selectable, re-enter the name until it matches the displayed name. Once it is an exact match, the option to delete the restore testing plan will become selectable.

------
#### [ AWS CLI ]

**Delete restore testing plan through the command line**

The API request [DeleteRestoreTestingSelection](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_DeleteRestoreTestingSelection.html) can be used to delete a restore testing selection. Include `RestoreTestingPlanName` and `RestoreTestingSelectionName` in the request.

All testing selections associated with a testing plan need to be deleted before you delete the testing plan. Once all testing selections have been deleted, you can use the API request [DeleteRestoreTestingPlan](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_DeleteRestoreTestingPlan.html) to delete a restore testing plan. You need to include `RestoreTestingPlanName`.
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/delete-restore-testing-selection.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/delete-restore-testing-selection.html)
+ [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/delete-restore-testing-plan.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup/delete-restore-testing-plan.html)
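
The required order of operations can be sketched as follows. The `api` object is a hypothetical stand-in for the CLI or SDK calls named above, not a real client:

```
// Hypothetical sketch of the required deletion order: remove every testing
// selection first, then the plan itself. The `api` object stands in for
// list-restore-testing-selections, delete-restore-testing-selection, and
// delete-restore-testing-plan.
async function deletePlanWithSelections(api, planName) {
  const { RestoreTestingSelections } = await api.listRestoreTestingSelections({
    RestoreTestingPlanName: planName
  });
  for (const s of RestoreTestingSelections) {
    await api.deleteRestoreTestingSelection({
      RestoreTestingPlanName: planName,
      RestoreTestingSelectionName: s.RestoreTestingSelectionName
    });
  }
  // Only after all selections are gone can the plan itself be deleted.
  await api.deleteRestoreTestingPlan({ RestoreTestingPlanName: planName });
}
```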

------

## Audit restore testing
<a name="restore-testing-audit"></a>

Restore testing integrates with AWS Backup Audit Manager to help you evaluate whether a restored resource completed within your target restore time.

For more information, see [Restore time for resources meet target](https://docs.aws.amazon.com/aws-backup/latest/devguide/controls-and-remediation.html#restore-time-meets-target-control) control in [AWS Backup Audit Manager controls and remediation](https://docs.aws.amazon.com/aws-backup/latest/devguide/controls-and-remediation.html).

## Restore testing quotas and parameters
<a name="restore-testing-quotas"></a>
+ 100 restore testing plans
+ 50 tags can be added to each restore testing plan
+ 30 selections per plan
+ 30 protected resource ARNs per selection 
+ 30 protected resource conditions per selection (including those within both `StringEquals` and `StringNotEquals`) 
+ 30 vault selectors per selection
+ Max selection window days: 365 days 
+ Start window hours: Min: 1 hour; Max: 168 hours (7 days) 
+ Max plan name length: 50 characters 
+ Max selection name length: 50 characters

Additional information regarding limits can be viewed at [AWS Backup quotas](aws-backup-limits.md).

## Restore testing failure troubleshooting
<a name="restore-testing-troubleshooting"></a>

If you have restore testing jobs with a restore status of `Failed`, the following errors and solutions can help you determine the cause and remedy.

Error message(s) [can be viewed](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing.html#restore-testing-jobs) in the AWS Backup console in the job status details page or by using the CLI commands `list-restore-jobs-by-protected-resource` or `list-restore-jobs`.

1. ***Error:** `No default VPC for this user. GroupName is only supported for EC2-Classic and default VPC.`*

   **Solution 1:** Update your restore testing selection and [override](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html) the parameter `SubnetId`. The AWS Backup console displays this parameter as "Subnet".

   **Solution 2:** Recreate the [default VPC](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#create-default-vpc).

   **Resource types affected:** Amazon EC2

   

1. ***Error:** `No subnets found for the default VPC [vpc]. Please specify a subnet.`*

   **Solution 1:** Update your restore testing selection and [override](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html) the `SubnetId` restore parameter. The AWS Backup console displays this parameter as "Subnet".

   **Solution 2:** [Create a default subnet](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#create-default-subnet) in the default VPC.

   **Resource types affected:** Amazon EC2

   

1. ***Error:** `No default subnet detected in VPC. Please contact AWS Support to recreate default Subnets.`*

   **Solution 1:** Update your restore testing selection and [override](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html) the `DBSubnetGroupName` restore parameter. The AWS Backup console displays this parameter as Subnet group.

   **Solution 2:** [Create a default subnet](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#create-default-subnet) in the default VPC.

   **Resource types affected:** Amazon Aurora, Amazon DocumentDB, Amazon RDS, Neptune

   

1. ***Error:** `IAM Role cannot be assumed by AWS Backup`.*

   **Solution:** The restore role must be assumable by AWS Backup. Either update the role's trust policy in IAM to allow it to be assumed by `backup.amazonaws.com` or update your restore testing selection to use a role that is assumable by AWS Backup.

   **Resource types affected:** all

   

1. ***Error:** `Access denied to KMS key.` or `The specified AWS KMS key ARN does not exist, is not enabled or you do not have permissions to access it.`*

   **Solution:** Verify the following:

   1. The restore role has access to the AWS KMS key used to encrypt your backups and, if applicable, the KMS key used to encrypt the restored resource.

   1. The resource policies on the above KMS key(s) allow the restore role to access them.

   If the above conditions are not yet met, configure the restore role and the resource policies for appropriate access. Then, run the restore testing job again.

   **Resource types affected:** all

   

1. ***Errors:** `User ARN is not authorized to perform action on resource because no identity based policy allows the action.` or `Access denied performing s3:CreateBucket on awsbackup-restore-test-xxxxxx`.*

   **Solution:** The restore role does not have adequate permissions. Update the permissions in IAM for the restore role.

   **Resource types affected:** all

   

1. ***Errors:** `User ARN is not authorized to perform action on resource because no resource-based policy allows the action.` or `User ARN is not authorized to perform action on resource with an explicit deny in a resource based policy.`*

   **Solution:** The restore role does not have adequate access to the resource specified in the message. Update the resource policy on the resource mentioned.

   **Resource types affected:** all
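
For the `IAM Role cannot be assumed by AWS Backup` error above, a trust policy that allows AWS Backup to assume the restore role looks like the following:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "backup.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```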

   

# Restore testing inferred metadata
<a name="restore-testing-inferred-metadata"></a>

Restoring a recovery point requires restore metadata. To perform restore tests, AWS Backup automatically infers metadata that is likely to result in a successful restore. The command `get-restore-testing-inferred-metadata` can be used to preview what AWS Backup will infer. The command `get-restore-job-metadata` returns the set of metadata inferred by AWS Backup. Note that for some resource types, such as Amazon FSx, AWS Backup is not able to infer a complete set of metadata.

*Inferred restore metadata* is determined during the restore testing process. You can override certain restore metadata keys by including the parameter `RestoreMetadataOverrides` in the body of `RestoreTestingSelection`. Some metadata overrides are not available in the AWS Backup console.

Each supported resource has both inferred restore metadata keys and values, and overridable restore metadata keys. Only `RestoreMetadataOverrides` key value pairs or nested key value pairs marked with *required for successful restore* are necessary to include; the others are optional. Note that key values are not case sensitive.
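
For example, a restore testing selection for Amazon EC2 might override the inferred subnet as in the sketch below (this mirrors the `SubnetId` override described in the troubleshooting section). The plan name, selection name, role ARN, and subnet ID are hypothetical placeholders:

```
// Sketch: overriding inferred restore metadata in a restore testing selection.
// All identifiers below are hypothetical placeholders.
const selectionWithOverrides = {
  RestoreTestingPlanName: "MyRestoreTestingPlan",
  RestoreTestingSelection: {
    RestoreTestingSelectionName: "ec2_selection",
    ProtectedResourceType: "EC2",
    IamRoleArn: "arn:aws:iam::123456789012:role/MyRestoreTestingRole",
    ProtectedResourceArns: ["*"],
    RestoreMetadataOverrides: {
      // Keys are not case sensitive; this value replaces the inferred subnet.
      subnetId: "subnet-0abc123def456"
    }
  }
};

console.log(JSON.stringify(selectionWithOverrides, null, 2));
```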

**Important**  
AWS Backup can infer that a resource should be restored to the default setting, such as an Amazon EC2 instance or Amazon RDS cluster restored to the default VPC. However, if the default is not present, for example the default VPC or subnet has been deleted and no metadata override has been input, the restore will not be successful.


| Resource type | Inferred restore metadata keys and values | Overridable metadata | 
| --- | --- | --- | 
| **DynamoDB** |  `deletionProtection`, where value is set to `false` `encryptionType` is set to `Default` `targetTableName`, where value is set to a random value starting with `awsbackup-restore-test-`  |  `encryptionType` `kmsMasterKeyArn`  | 
| **Amazon EBS** |  `availabilityZone`, whose value is set to a random availability zone `encrypted`, whose value is set to `true`  |  `availabilityZone` `iops` `kmsKeyId` `throughput` `volumesize` `volumetype`  | 
| **Amazon EC2** |  `disableApiTermination` value is set to `false` `instanceType` value is set to the instanceType of the recovery point being restored `requiredImdsV2` value is set to `true`  |  `iamInstanceProfileName` (null or `UseBackedUpValue`) `instanceType` `requireImdsV2` `securityGroupIds` `subnetId`  | 
| **Amazon EFS** |  `encrypted` value is set to `true` `file-system-id` value is set to the file system ID of the recovery point being restored `kmsKeyId` value is set to `alias/aws/elasticfilesystem` `newFileSystem` value is set to `true` `performanceMode` value is set to `generalPurpose`  |  `kmsKeyId` `performanceMode`  | 
| **Amazon FSx for Lustre** |  `lustreConfiguration` has nested keys. One nested key is `automaticBackupRetentionDays`, the value of which is set to `0`  |  `kmsKeyId` `lustreConfiguration` has nested key `logConfiguration` `securityGroupIds` `subnetIds`, *required for successful restore*  | 
| **Amazon FSx for NetApp ONTAP** |  `name` is set to a random value starting with `awsbackup_restore_test_` `ontapConfiguration` has nested keys, including: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html)  |  `ontapConfiguration` has specific overrideable nested keys, including: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html)  | 
| **Amazon FSx for OpenZFS** |  `openZfsConfiguration`, which has nested keys, including: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html)  |  `kmsKeyId` `openZfsConfiguration` has specific overridable nested keys, including: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html) `securityGroupIds` `subnetIds`  | 
| **Amazon FSx for Windows File Server** |  `windowsConfiguration`, which has nested keys including: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html)  |  `kmsKeyId` `securityGroupIds` `subnetIds` *required for successful restore* `windowsConfiguration`, with specific overridable nested keys [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html)  | 
| **Amazon RDS, Aurora, Amazon DocumentDB, Amazon Neptune clusters** |  `availabilityZones` with value set to a list of up to three random availability zones `dbClusterIdentifier` with a random value starting with `awsbackup-restore-test` `engine` with value set to the engine of the recovery point being restored  |  `availabilityZones` `databaseName` `dbClusterParameterGroupName` `dbSubnetGroupName` `enableCloudwatchLogsExports` `enableIamDatabaseAuthentication` `engine` `engineMode` `engineVersion` `kmskeyId` `port` `optionGroupName` `scalingConfiguration` `vpcSecurityGroupIds`  | 
| **Amazon RDS instances** |  `dbInstanceIdentifier` with a random value starting with `awsbackup-restore-test-` `deletionProtection` with value set to `false` `multiAz` with value set to `false` `publiclyAccessible` with value set to `false`  |  `allocatedStorage` `availabilityZones` `dbInstanceClass` `dbName` `dbParameterGroupName` `dbSubnetGroupName` `domain` `domainIamRoleName` `enableCloudwatchLogsExports` `enableIamDatabaseAuthentication` `iops` `licensemodel` `multiAz` `optionGroupName` `port` `processorFeatures` `publiclyAccessible` `storageType` `vpcSecurityGroupIds`  | 
| **Amazon Simple Storage Service (Amazon S3)** |  `destinationBucketName` with a random value starting with `awsbackup-restore-test-` `encrypted ` with value set to `true` `encryptionType` with value set to `SSE-S3` `newBucket` with value set to `true`  |  `encryptionType` `kmsKey`  | 

# Restore testing validation
<a name="restore-testing-validation"></a>

You have the option of creating an event-driven validation that runs when a restore testing job completes.

First, create a validation workflow with any target supported by Amazon EventBridge, such as AWS Lambda. Second, add an EventBridge rule that listens for the restore job reaching the status `COMPLETED`. Third, create a restore testing plan (or let an existing one run as scheduled). Finally, after the restore test has finished, monitor the logs of the validation workflow to ensure it ran as expected (once validation has run, a validation status will display in the [AWS Backup console](https://console.aws.amazon.com/backup)).

1. **Set up validation workflow**

   You can set up a validation workflow using Lambda or any other target supported by EventBridge. For example, if you are validating a restore test containing an Amazon EC2 instance, you may include code that pings a healthcheck endpoint.

   You can use the details in the event to determine which resource(s) to validate.

   You can use [Lambda layers](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html) to include the latest SDK, because `PutRestoreValidationResult` might not be available in the SDK version bundled with the Lambda runtime.

   Here is a sample:

   ```
   import { Backup } from "@aws-sdk/client-backup";
   
   export const handler = async (event) => {
     console.log("Handling event: ", event);
   
     const restoreTestingPlanArn = event.detail.restoreTestingPlanArn;
     const resourceType = event.detail.resourceType;
     const createdResourceArn = event.detail.createdResourceArn;
   
     // TODO: Validate the resource
     
     const backup = new Backup();
     const response = await backup.putRestoreValidationResult({
       RestoreJobId: event.detail.restoreJobId,
       ValidationStatus: "SUCCESSFUL", // TODO
       ValidationStatusMessage: "" // TODO
     });
     
     console.log("PutRestoreValidationResult: ", response);
     console.log("Finished");
   };
   ```

1. **Add an EventBridge rule**

   [Create an EventBridge rule](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-get-started.html#eb-gs-create-rule) that listens for the restore job [`COMPLETED` event](https://docs.aws.amazon.com/aws-backup/latest/devguide/eventbridge.html#monitoring-events-in-eventbridge).

   Optionally, you can filter events by resource type or restore testing plan ARN. Set the target of this rule to invoke the validation workflow you defined in Step 1. Here is an example:

   ```
   {
     "source":[
       "aws.backup"
     ],
     "detail-type":[
       "Restore Job State Change"
     ],
     "detail":{
       "resourceType":[
         "..."
       ],
       "restoreTestingPlanArn":[
         "..."
       ],
       "status":[
         "COMPLETED"
       ]
     }
   }
   ```

1. **Let the restore testing plan run and complete**

   The restore testing plan will run according to the schedule you have configured.

   See [Create a restore testing plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing.html#restore-testing-create) if you do not yet have one or [Update a restore testing plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing.html#restore-testing-update) if you wish to change the settings.

1. **Monitor the results**

   Once a restore testing plan has run as scheduled, you can check the logs of your validation workflow to ensure it ran correctly.

   You can call the API `PutRestoreValidationResult` to post the results, which will then be viewable in the [AWS Backup console](https://console.aws.amazon.com/backup) and through AWS Backup API calls that describe and list restore jobs, such as `DescribeRestoreJob` or `ListRestoreJobs`.

   Once a validation status is set, it cannot be changed.

# Stop a backup job
<a name="stopping-a-backup-job"></a>

You can stop a backup job in AWS Backup after it has been initiated. When you do this, the backup is not created, and the backup job record is retained with the status of **aborted**.

**To stop a backup job using the AWS Backup console**

1. Sign in to the AWS Management Console, and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane on the left, choose **Jobs**.

1. Choose the backup job that you want to stop.

1. In the backup job details pane, choose **Stop**.

# View existing backups
<a name="listing-backups"></a>

You can view a list of your backups using the [AWS Backup console](https://console.aws.amazon.com/backup) or programmatically.

**Topics**
+ [Listing backups by protected resource in the console](#list-backups-by-protected-resources)
+ [Listing backups by backup vault in the console](#list-backups-by-vault)
+ [Listing backups programmatically](#list-backups-programmatically)

## Listing backups by protected resource in the console
<a name="list-backups-by-protected-resources"></a>

Follow these steps to view a list of backups of a particular resource on the AWS Backup console. 

1. Sign in to the AWS Management Console, and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**.

1. Choose a protected resource in the list to view the list of backups. Only resources that have been backed up by AWS Backup are listed under **Protected resources**. 

You can view the backups for the resource. From this view, you can also choose a backup and restore it.

## Listing backups by backup vault in the console
<a name="list-backups-by-vault"></a>

Follow these steps to view a list of backups organized in a backup vault.

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup vaults**.

1. In the **Backups** section, view the list of all the backups organized in this backup vault. In this view, you can sort backups by any of the column headers (including status), and you can select a backup to restore, edit, or delete it.

## Listing backups programmatically
<a name="list-backups-programmatically"></a>

You can list backups programmatically using the `ListRecoveryPoints` API operations:
+ `[ListRecoveryPointsByBackupVault](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_ListRecoveryPointsByBackupVault.html)`
+ `[ListRecoveryPointsByResource](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_ListRecoveryPointsByResource.html)`

For example, the following AWS Command Line Interface (AWS CLI) command lists all your backups with the `EXPIRED` status:

```
aws backup list-recovery-points-by-backup-vault \
  --backup-vault-name sample-vault \
  --query 'RecoveryPoints[?Status == `EXPIRED`]'
```
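If you work with the response in code instead of a JMESPath query, the same filter can be applied client-side. The sketch below assumes a response shaped like the `ListRecoveryPointsByBackupVault` output (a `RecoveryPoints` array whose entries carry a `Status` field); the sample entries and ARNs are hypothetical placeholders.

```python
# Filter a ListRecoveryPointsByBackupVault-style response for recovery
# points with a given status, mirroring the JMESPath query above.
# The sample response is illustrative; the ARNs are placeholders.
sample_response = {
    "RecoveryPoints": [
        {"RecoveryPointArn": "arn:aws:ec2:region:account:snapshot/snap-111", "Status": "COMPLETED"},
        {"RecoveryPointArn": "arn:aws:ec2:region:account:snapshot/snap-222", "Status": "EXPIRED"},
        {"RecoveryPointArn": "arn:aws:ec2:region:account:snapshot/snap-333", "Status": "EXPIRED"},
    ]
}

def recovery_points_with_status(response: dict, status: str) -> list:
    """Return recovery points whose Status matches, like RecoveryPoints[?Status == `...`]."""
    return [rp for rp in response.get("RecoveryPoints", []) if rp.get("Status") == status]

expired = recovery_points_with_status(sample_response, "EXPIRED")
print(len(expired))  # 2
```

In a real script, the response would come from the corresponding SDK call, and you would also follow the `NextToken` pagination field when the vault holds more recovery points than one page returns.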