

# Backup creation by resource type
<a name="creating-a-backup"></a>

With AWS Backup, you can create backups automatically using backup plans or manually by initiating an on-demand backup. 

## Creating automatic backups
<a name="creating-automatic-backups"></a>

When backups are created automatically by backup plans, they are configured with the lifecycle settings that are defined in the backup plan. They are organized in the backup vault that is specified in the backup plan. They are also assigned the tags that are listed in the backup plan. For more information about backup plans, see [Backup plans](about-backup-plans.md).

## Creating on-demand backups
<a name="creating-on-demand-backups"></a>

When you create an on-demand backup, you can configure settings such as the backup vault, IAM role, and retention period for the backup being created. When a backup is created either automatically or manually, a backup *job* is initiated. For how to create an on-demand backup, see [Creating an on-demand backup using AWS Backup](recov-point-create-on-demand-backup.md).

Note: An on-demand backup creates a backup job. The backup job transitions to the `Running` state within an hour (or at the time you specify). Choose an on-demand backup if you want to create a backup at a time other than the scheduled time defined in a backup plan. You can use an on-demand backup, for example, to test backup functionality at any time.

[On-demand backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html) cannot be used with [point-in-time restore (PITR)](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html) because an on-demand backup preserves a resource in the state it is in when the backup is taken, whereas PITR uses [continuous backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html#point-in-time-recovery-working-with), which record changes over a period of time.

## Backup job statuses
<a name="backup-job-statuses"></a>

Each backup job has a unique ID. For example, `D48D8717-0C9D-72DF-1F56-14E703BF2345`.

You can view the status of a backup job on the **Jobs** page of the AWS Backup console. Backup job statuses include `CREATED`, `PENDING`, `RUNNING`, `ABORTING`, `ABORTED`, `COMPLETED`, `FAILED`, `EXPIRED`, and `PARTIAL`.
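You can also monitor job statuses programmatically by filtering the output of `aws backup list-backup-jobs`. The following is a minimal sketch that groups jobs by state; the job IDs and the sample response are illustrative:

```
import json
from collections import Counter

# Sample output in the shape returned by `aws backup list-backup-jobs`
# (the job IDs here are hypothetical).
response = json.loads("""
{
  "BackupJobs": [
    {"BackupJobId": "D48D8717-0C9D-72DF-1F56-14E703BF2345", "State": "RUNNING"},
    {"BackupJobId": "A1B2C3D4-0000-1111-2222-333344445555", "State": "COMPLETED"},
    {"BackupJobId": "E5F6A7B8-9999-8888-7777-666655554444", "State": "FAILED"}
  ]
}
""")

# Count jobs by state, for example to spot FAILED jobs that need attention.
by_state = Counter(job["State"] for job in response["BackupJobs"])
failed = [j["BackupJobId"] for j in response["BackupJobs"] if j["State"] == "FAILED"]
print(by_state)
print(failed)
```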

## Incremental backups
<a name="incremental-backup-works"></a>

Many resources support incremental backup with AWS Backup. A full list is available in the incremental backup section of the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table.

Although each backup after the first (full) one is incremental (meaning it only captures changes from the previous backup), all backups made with AWS Backup retain the necessary reference data to allow a full restore. This is true even if the original (full) backup has reached the end of its lifecycle and been deleted.

For example, if your day 1 (full) backup was deleted due to a 3-day lifecycle policy, you would still be able to perform a full restore with the backups from days 2 and 3. AWS Backup maintains the necessary reference data from day 1 to do so.

**Incremental backups and Regions**

Backups of resources that are fully managed by AWS Backup can only be incremental if the vault in which the backup is created also contains an earlier backup (incremental or full). Other resource types (not fully managed by AWS Backup) can have incremental backups as long as there is a previous backup of the resource within the same *Region*.

**Note**  
Not all resource types support incremental backups. Some resources, such as Amazon Aurora, offer incremental backup only through continuous backups and point-in-time restore (PITR), not through snapshot-based backups. For a full list of which resources support incremental backups, see the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table.

## Access to source resources
<a name="source-resource-statuses"></a>

AWS Backup needs access to your source resources to back them up. For example:
+ To back up an Amazon EC2 instance, the instance can be in the `running` or `stopped` state, but not the `terminated` state. This is because a `running` or `stopped` instance can communicate with AWS Backup, but a `terminated` instance cannot.
+ To back up a virtual machine, its hypervisor must have the Backup gateway status `ONLINE`. For more information, see [Understanding hypervisor status](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html#understand-hypervisor-status).
+ To back up an Amazon RDS database, or an Amazon Aurora or Amazon DocumentDB cluster, the resource must have the status `AVAILABLE`.
+ To back up an Amazon Elastic File System (Amazon EFS) file system, it must have the status `AVAILABLE`.
+ To back up an Amazon FSx file system, it must have the status `AVAILABLE`. If the status is `UPDATING`, the backup request is queued until the file system becomes `AVAILABLE`.

  FSx for ONTAP doesn’t support backing up certain volume types, including DP (data-protection) volumes, LS (load-sharing) volumes, full volumes, or volumes on file systems that are full. For more information, see [FSx for ONTAP Working with backups](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/using-backups.html).

AWS Backup retains previously-created backups consistent with your lifecycle policy, regardless of the health of your source resource.
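The eligibility rules above can be summarized as a state check per resource type. This is an illustrative sketch only; the state values mirror the list above, and the helper function itself is hypothetical:

```
# Illustrative sketch: check whether a source resource is in a state that
# AWS Backup can back up, per the rules above. The helper is hypothetical.
BACKUPABLE_STATES = {
    "EC2": {"running", "stopped"},        # not "terminated"
    "RDS": {"AVAILABLE"},
    "Aurora": {"AVAILABLE"},
    "DocumentDB": {"AVAILABLE"},
    "EFS": {"AVAILABLE"},
    "FSx": {"AVAILABLE", "UPDATING"},     # UPDATING: request is queued
}

def can_back_up(resource_type: str, state: str) -> bool:
    return state in BACKUPABLE_STATES.get(resource_type, set())

print(can_back_up("EC2", "terminated"))  # False
print(can_back_up("FSx", "UPDATING"))    # True (queued until AVAILABLE)
```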

**Topics**
+ [Creating automatic backups](#creating-automatic-backups)
+ [Creating on-demand backups](#creating-on-demand-backups)
+ [Backup job statuses](#backup-job-statuses)
+ [Incremental backups](#incremental-backup-works)
+ [Access to source resources](#source-resource-statuses)
+ [CloudFormation stack backups](applicationstackbackups.md)
+ [Amazon Aurora DSQL backups](backup-aurora.md)
+ [Advanced DynamoDB backup](advanced-ddb-backup.md)
+ [Amazon EBS and AWS Backup](multi-volume-crash-consistent.md)
+ [Amazon Relational Database Service backups](rds-backup.md)
+ [Amazon Redshift backups](redshift-backups.md)
+ [Amazon Redshift Serverless backups](redshift-serverless-backups.md)
+ [Amazon EKS backups](eks-backups.md)
+ [SAP HANA backup on Amazon EC2](backup-saphana.md)
+ [Amazon S3 backups](s3-backups.md)
+ [Amazon Timestream backups](timestream-backup.md)
+ [Virtual machine backups](vm-backups.md)
+ [Create Windows VSS backups](windows-backups.md)

# CloudFormation stack backups
<a name="applicationstackbackups"></a>

A CloudFormation stack consists of multiple stateful and stateless resources that you can back up as a single unit. In other words, you can back up and restore an application that contains multiple resources by backing up a stack and restoring the resources within it. All the resources in a stack are defined by the stack's CloudFormation template.

When a CloudFormation stack is backed up, recovery points are created for the CloudFormation template and for each additional resource in the stack that is supported by AWS Backup. These recovery points are grouped together within an overarching recovery point called a **composite**.

This composite recovery point cannot be restored, but nested recovery points can be restored. You can restore anywhere from one to all nested backups within a composite backup using the console or the AWS CLI.

## CloudFormation application stack terminology
<a name="appstackterminology"></a>
+ **Composite recovery point**: A recovery point that groups nested recovery points together, along with other metadata.
+ **Nested recovery point**: A recovery point of a resource that is part of a CloudFormation stack and is backed up as part of the composite recovery point. Each nested recovery point belongs to exactly one composite recovery point.
+ **Composite job**: A backup, copy, or restore job for a CloudFormation stack which can trigger other backup jobs for individual resources within the stack.
+ **Nested job**: A backup, copy, or restore job for a resource within a CloudFormation stack.

## CloudFormation stack backup jobs
<a name="howtobackupcfn"></a>

The process of creating a backup is called a backup job. A CloudFormation stack backup job has a [status](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup.html#backup-job-statuses). When a backup job has finished, it has the status `Completed`. This signifies that a [CloudFormation recovery point](#cfnrecoverypoints) (a backup) has been created.

CloudFormation stacks can be backed up using the console or programmatically. To back up any resource, including a CloudFormation stack, see [Creating a backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup.html) elsewhere in this *AWS Backup Developer Guide*.

CloudFormation stacks can also be backed up using the API operation `StartBackupJob`. Note that while the documentation and console refer to composite and nested recovery points, the API uses the terms "parent" and "child" recovery points for the same relationship.
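As a sketch, a `StartBackupJob` request for a CloudFormation stack targets the stack's ARN. The account ID, Region, stack name, vault, and role below are placeholders:

```
# Minimal sketch of a StartBackupJob request for a CloudFormation stack.
# All identifiers below are placeholders, not real resources.
stack_arn = (
    "arn:aws:cloudformation:us-east-1:123456789012:"
    "stack/my-app-stack/d0a825a0-e4cd-xmpl-b9fb-061c69e99204"
)

request = {
    "BackupVaultName": "my-backup-vault",
    "ResourceArn": stack_arn,
    "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
}

# With boto3 this request would be sent as:
#   boto3.client("backup").start_backup_job(**request)
# The response includes a BackupJobId that you can poll with DescribeBackupJob.
print(request["ResourceArn"].split(":")[2])  # service namespace: "cloudformation"
```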

CloudFormation stacks contain the AWS resources that are defined in your [CloudFormation template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html). Note that your template may contain resources not yet supported by AWS Backup. If your template contains a combination of supported and unsupported resources, AWS Backup still backs up the template into a composite recovery point, but it only creates recovery points for the resources that AWS Backup supports. All resource types contained within the CloudFormation template will be included within a backup, even if you have not opted in to a particular service (toggling a service to “Enabled” in console Settings).

## CloudFormation recovery point
<a name="cfnrecoverypoints"></a>

### Recovery point status
<a name="cfnrecoverypointstatus"></a>

When the backup job of a stack is finished (the job status is `Completed`), a backup of the stack has been created. This backup is also known as a composite recovery point. A composite recovery point can have one of the following statuses: `Completed`, `Failed`, or `Partial`. Note that a backup job has a status, and a recovery point (also called a backup) also has a separate status.

A `Completed` backup job means your entire stack and the resources within it are protected by AWS Backup. A `Failed` status indicates that the backup job was unsuccessful; create the backup again after you correct the issue that caused the failure.

A `Partial` status means that not all the resources in the stack were backed up. This may happen if the CloudFormation template contains resources that are not currently supported by AWS Backup, or it may happen if one or more of the backup jobs belonging to resources within the stack (nested resources) have statuses other than `Completed`. You can manually create an on-demand backup to rerun any resources that resulted in a status other than `Completed`. If you expected the stack to have the status of `Completed` but it is marked as `Partial` instead, check to see which of the conditions above might be true about your stack.

Each nested resource within the composite recovery point has its own individual recovery point, each with its own status (either `Completed` or `Failed`). Nested recovery points with a status of `Completed` can be restored.

### Manage recovery points
<a name="cfnmanagerecoverypoints"></a>

Composite recovery points (backups) can be copied; nested recovery points can be copied, deleted, disassociated, or restored. A composite recovery point that contains nested backups cannot be deleted. After the nested recovery points within a composite recovery point have been deleted or disassociated, you can delete the composite recovery point manually or let it remain until the backup plan lifecycle deletes it.

### Delete a recovery point
<a name="cfndeleterecoverypoint"></a>

You can delete a recovery point using the AWS Backup console or using the AWS CLI.

To delete recovery points using the AWS Backup console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Choose **Protected Resources** in the left-hand navigation. In the text box, type `CloudFormation` to display only your CloudFormation stacks.

1. Composite recovery points are displayed in the **Recovery points** pane. Choose the plus sign to the left of a recovery point ID to expand the composite recovery point and show all nested recovery points it contains. Select the check box to the left of any recovery point to include it in the selection of recovery points you want to delete.

1. Choose **Delete**.

When you use the console to delete one or more composite recovery points, a warning box will pop up. This warning box requires you to confirm your intention to delete the composite recovery points, including nested recovery points within composite stacks.

To delete recovery points using the API, use the `DeleteRecoveryPoint` operation.

When you use the API or the AWS Command Line Interface, you must delete all nested recovery points before deleting a composite recovery point. If you send a request to delete a composite stack backup (recovery point) that still contains nested recovery points, the request returns an error.
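The required ordering can be sketched by sorting recovery points so that children come before their parent. The field names follow the API's parent/child terminology; the ARNs below are placeholders:

```
# Sketch of the required deletion order: nested (child) recovery points first,
# then the composite (parent). All ARNs are placeholders.
recovery_points = [
    {"RecoveryPointArn": "arn:aws:backup:us-east-1:123456789012:recovery-point:parent-1",
     "IsParent": True},
    {"RecoveryPointArn": "arn:aws:backup:us-east-1:123456789012:recovery-point:child-1",
     "ParentRecoveryPointArn": "arn:aws:backup:us-east-1:123456789012:recovery-point:parent-1"},
    {"RecoveryPointArn": "arn:aws:backup:us-east-1:123456789012:recovery-point:child-2",
     "ParentRecoveryPointArn": "arn:aws:backup:us-east-1:123456789012:recovery-point:parent-1"},
]

# Children before parents: deleting a composite recovery point that still
# contains nested recovery points returns an error.
ordered = sorted(recovery_points, key=lambda rp: rp.get("IsParent", False))
delete_order = [rp["RecoveryPointArn"] for rp in ordered]
print(delete_order)
```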

### Disassociate a nested recovery point from composite recovery point
<a name="cfndisassociaterecoverypoints"></a>

You can disassociate a nested recovery point from a composite recovery point (for example, you wish to keep the nested recovery point but delete the composite recovery point). Both recovery points will remain, but they will no longer be connected; that is, actions that occur on the composite recovery point will no longer apply to the nested recovery point once it has been disassociated.

You can disassociate the recovery point using the console, or you can call the API operation `DisassociateRecoveryPointFromParent`. Note that the API uses the term "parent" to refer to composite recovery points.

### Copy a recovery point
<a name="cfncopyrecoverypoint"></a>

You can copy a composite recovery point, or you can copy a nested recovery point if the resource supports [cross-account and cross-Region copy](backup-feature-availability.md#features-by-resource).

To copy recovery points using the AWS Backup console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Choose **Protected Resources** in the left-hand navigation. In the text box, type `CloudFormation` to display only your CloudFormation stacks.

1. Composite recovery points are displayed in the **Recovery points** pane. Choose the plus sign to the left of a recovery point ID to expand the composite recovery point and show all nested recovery points it contains. Select the radio button to the left of the recovery point you want to copy.

1. Once it is selected, choose **Copy** in the top-right corner of the pane.

When you copy a composite recovery point, nested recovery points that don’t support copy functionality won’t end up in the copied stack. The composite recovery point will have a status of `Partial`.

## Frequently Asked Questions
<a name="cfnfaq"></a>

1. *"What is included as part of the application backup?"*

   As part of each backup of an application defined using CloudFormation, the template, the processed value of each parameter in the template, and the nested resources supported by AWS Backup are backed up. A nested resource is backed up in the same way as an individual resource that is not part of a CloudFormation stack. Note that values of parameters marked as `NoEcho` are not backed up.

   

1. *"Can I back up my CloudFormation stack that has nested stacks?"*

   Yes. CloudFormation stacks that contain nested stacks can be backed up.

   

1. *"Does a `Partial` status mean the creation of my backup failed?"*

   No. A partial status indicates that some of the recovery points were backed up, while some were not. There are three conditions to check if you were expecting a `Completed` backup result:

   1. Does your CloudFormation stack contain resources currently unsupported by AWS Backup? For a list of supported resources, see [Supported AWS resources and third-party applications](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html#supported-resources) in this guide.

   1. One or more of the backup jobs belonging to resources within the stack were not successful and the job has to be rerun.

   1. A nested recovery point was deleted or disassociated from the composite recovery point.

   

1. *"How do I exclude resources in my CloudFormation stack backup?"*

   When you back up your CloudFormation stack, you can exclude resources from being part of the backup. In the console, during the [create a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) and [update a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/updating-a-backup-plan.html) processes, there is an [assign resources](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html) step. In this step, there is a **Resource selection** section. If you choose **include specific resource types** and have included CloudFormation as a resource to backup, you can **exclude specific resource IDs from the selected resource types**. You can also use tags to exclude resources within the stack.

   Using CLI, you can use
   + `NotResources` in your backup plan to exclude a specific resource from your CloudFormation stacks.
   + `StringNotLike` to exclude items through tags.

   

1. *"What types of backups are supported for nested resources?"*

   Backups of nested resources may be either full or incremental backups, depending on which kind of backup is supported by AWS Backup for these resources. For more information, see [ How incremental backups work](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup.html#how-incremental-backup-works). However, note that PITR (point-in-time restore) is [not supported](backup-feature-availability.md#features-by-resource) for Amazon S3 and Amazon RDS nested resources.

   

1. *"Are change sets that are part of the CloudFormation stack backed up?"*

   No. Change sets are not backed up as part of CloudFormation stack backup.

   

1. *"How does the status of the CloudFormation stack impact the backup?"*

   The status of the CloudFormation stack may impact the backup. A stack with a status that includes `COMPLETE` can be backed up, such as statuses `CREATE_COMPLETE`, `ROLLBACK_COMPLETE`, `UPDATE_COMPLETE`, `UPDATE_ROLLBACK_COMPLETE`, `IMPORT_COMPLETE`, or `IMPORT_ROLLBACK_COMPLETE`.

   If an upload of a new template fails and the stack moves to the status `ROLLBACK_COMPLETE`, the new template is backed up, but backups of the nested resources are based on the rolled-back resources.

   

1. *"How do application stack lifecycles differ from other recovery point lifecycles?"*

   Nested recovery point lifecycles are determined by the backup plan to which they belong. The composite recovery point's lifecycle is determined by the longest lifecycle among its nested recovery points. When the last remaining nested recovery point within a composite recovery point is deleted or disassociated, the composite recovery point is also deleted.

   

1. *“Are tags of a CloudFormation stack copied to recovery points?”*

   Yes. Those tags will be copied to each respective nested recovery point.

1. *“Is there an order for deleting composite and nested recovery points (backups)?”*

   Yes. Some backups must be deleted before others can be deleted. Composite backups that contain nested recovery points cannot be deleted until all recovery points within the composite have been deleted. Once a composite recovery point no longer contains nested recovery points, you can delete it manually. Otherwise, it is deleted in accordance with its backup plan lifecycle.

   
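As a sketch of the exclusion options described in the FAQ above, a backup selection document can combine `NotResources` with a `StringNotLike` tag condition. The ARNs, selection name, and tag key below are hypothetical placeholders:

```
{
  "BackupSelection": {
    "SelectionName": "cfn-stacks-with-exclusions",
    "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
    "Resources": [
      "arn:aws:cloudformation:us-east-1:123456789012:stack/my-app-stack/*"
    ],
    "NotResources": [
      "arn:aws:ec2:us-east-1:123456789012:volume/vol-0abcd1234xmpl5678"
    ],
    "Conditions": {
      "StringNotLike": [
        {
          "ConditionKey": "aws:ResourceTag/backup",
          "ConditionValue": "false"
        }
      ]
    }
  }
}
```

Passing a document like this to `aws backup create-backup-selection` backs up the stack while skipping the listed volume and any resources tagged `backup=false`.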

## Restore applications within a stack
<a name="restore-app-stack"></a>

See [ How to restore application stack backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-application-stacks.html) for information on restoring nested recovery points.

# Amazon Aurora DSQL backups
<a name="backup-aurora"></a>

You can use AWS Backup to create backups of your Amazon Aurora DSQL single-Region and multi-Region clusters. Amazon Aurora DSQL cluster backups are always full backups.

Backup creation for Amazon Aurora DSQL clusters uses the standard backup creation process. For more information, see the following:
+ [Creating an on-demand backup using AWS Backup](recov-point-create-on-demand-backup.md)
+ [Create a backup plan](creating-a-backup-plan.md)

To use AWS Backup to create backups of your Amazon Aurora DSQL clusters, you must enable protection for Aurora DSQL. For more information, see [Service Opt-in](getting-started.md#service-opt-in).

When you back up a multi-Region cluster, consider the following items:
+ A multi-Region cluster backup requires a separate backup for each Region within the cluster; a backup in one Region doesn't create a recovery point for all Regions in a multi-Region cluster.
+ As a best practice, AWS Backup recommends you create a recovery point in one Region and copy it to another related Region. For [multi-Region restore](restore-auroradsql.md#restore-auroradsql-multiregion), you need a recovery point in one supported Region, and a copy of that recovery point in another Region within the same Regional triplet.

  The following supported triplets are available. Where a grouping contains more than three Regions, choose three in the same grouping.
  + US East (N. Virginia); US East (Ohio); US West (N. California)
  + Europe (Ireland); Europe (London); Europe (Paris); Europe (Frankfurt)
  + Asia Pacific (Tokyo); Asia Pacific (Seoul); Asia Pacific (Osaka)

AWS Backup recommends that you add the backup copy rule to the backup plan. If you do not add the copy rule to the backup plan, you must manually copy the backup to the required Region in which to perform the restore, which will increase your Recovery Time Objective (RTO) times.
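The triplet constraint above can be sketched as a quick membership check. The Region codes are assumed from the Region names listed above:

```
# Sketch: check that two Regions fall in the same supported grouping, as
# required for multi-Region restore. Region codes are assumed from the
# Region names listed above.
GROUPINGS = [
    {"us-east-1", "us-east-2", "us-west-1"},                  # N. Virginia, Ohio, N. California
    {"eu-west-1", "eu-west-2", "eu-west-3", "eu-central-1"},  # Ireland, London, Paris, Frankfurt
    {"ap-northeast-1", "ap-northeast-2", "ap-northeast-3"},   # Tokyo, Seoul, Osaka
]

def same_grouping(region_a: str, region_b: str) -> bool:
    return any(region_a in g and region_b in g for g in GROUPINGS)

print(same_grouping("us-east-1", "us-east-2"))  # True: copy target is valid
print(same_grouping("us-east-1", "eu-west-1"))  # False: different grouping
```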

For information about restoring an Aurora DSQL recovery point (backup), see [Amazon Aurora DSQL restore](restore-auroradsql.md).

# Advanced DynamoDB backup
<a name="advanced-ddb-backup"></a>

AWS Backup supports additional, advanced features for your Amazon DynamoDB data protection needs.

Customers who started using AWS Backup after November 2021 have advanced DynamoDB backup features enabled by default. Specifically, advanced DynamoDB backup features are enabled by default for customers who had not created a backup vault prior to November 21, 2021.

It's best practice for existing AWS Backup customers to enable advanced features for DynamoDB. There is no difference in warm backup storage pricing after you enable advanced features. You can potentially save money by moving backups to cold storage and optimize your costs by using cost allocation tags. You can also start taking advantage of AWS Backup's cross-Region and cross-account copy and security features.

**Topics**
+ [Benefits of advanced DDB backup](#advanced-ddb-backup-benefits)
+ [Considerations for Advanced DynamoDB backup](#advanced-ddb-considerations)
+ [Enabling advanced DynamoDB backup using the console](#advanced-ddb-backup-enable-console)
+ [Enabling advanced DynamoDB backup programmatically](#advanced-ddb-backup-enable-cli)
+ [Editing an advanced DynamoDB backup](#advanced-ddb-backup-edit)
+ [Restoring an advanced DynamoDB backup](#advanced-ddb-backup-restore)
+ [Deleting an advanced DynamoDB backup](#advanced-ddb-backup-delete)
+ [Other benefits of full AWS Backup management when you enable advanced DynamoDB backup](#advanced-ddb-backup-other-benefits)

## Benefits of advanced DDB backup
<a name="advanced-ddb-backup-benefits"></a>

After you enable AWS Backup's advanced features in your AWS Region, you unlock the following features for all new DynamoDB table backups you create:
+ Cost savings and optimization:
  + [Tiering backups to cold storage](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_Lifecycle.html) to reduce storage costs
  + [ Cost allocation tagging for use with Cost Explorer](https://docs.aws.amazon.com/aws-backup/latest/devguide/metering-and-billing.html#cost-allocation-tags)
+ Additional copy options:
  + [Cross-Region copy](https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-region-backup.html)
  + [Cross-account copy](https://docs.aws.amazon.com/aws-backup/latest/devguide/create-cross-account-backup.html#prereq-cab)
+ Security:
  + Backups inherit tags from their source DynamoDB tables, allowing you to use those tags to set permissions and [ service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html).

## Considerations for Advanced DynamoDB backup
<a name="advanced-ddb-considerations"></a>

**Opting in**

Backups, including those of advanced DynamoDB resources, can be created by a backup plan, an on-demand backup, or a backup policy. Backups created by a plan or on demand automatically opt in your account to allow backups of advanced DynamoDB resources.

If your backup job is created by a backup policy, you need to manually opt in to advanced DynamoDB backups, either through the [Backup console](assigning-resources-console.md) or through the [CLI](assigning-resources-json.md).

**Custom policies and roles**

If you use a custom role or policy instead of AWS Backup's default service role, you must add or use the following permissions policies (or add their equivalent permissions) to your custom role:
+ `AWSBackupServiceRolePolicyForBackup` to perform advanced DynamoDB backup.
+ `AWSBackupServiceRolePolicyForRestores` to restore advanced DynamoDB backups.

To learn more about AWS-managed policies and view examples of customer-managed policies, see [Managed policies for AWS Backup](security-iam-awsmanpol.md).

## Enabling advanced DynamoDB backup using the console
<a name="advanced-ddb-backup-enable-console"></a>

You can enable AWS Backup advanced features for DynamoDB backups using either the AWS Backup or DynamoDB console.

**To enable advanced DynamoDB backup features from the AWS Backup console:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation menu, choose **Settings**.

1. Under the **Supported services** section, verify that **DynamoDB** is **Enabled**.

   If it is not, choose **Opt-in** and enable DynamoDB as an AWS Backup supported service.

1. Under the **Advanced features for DynamoDB backups** section, choose **Enable**.

1. Choose **Enable features**.

For how to enable AWS Backup advanced features using the DynamoDB console, see [ Enabling AWS Backup features](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CreateBackupAWS.html#CreateBackupAWS_enabling) in the *Amazon DynamoDB User Guide*.

## Enabling advanced DynamoDB backup programmatically
<a name="advanced-ddb-backup-enable-cli"></a>

You can also enable AWS Backup advanced features for DynamoDB backups using the AWS Command Line Interface (AWS CLI). Advanced DynamoDB backup is enabled when the `DynamoDB` values under both `ResourceTypeManagementPreference` and `ResourceTypeOptInPreference` are set to `true`.

**To programmatically enable AWS Backup advanced features for DynamoDB backups:**

1. Check if you already enabled AWS Backup advanced features for DynamoDB using the following command:

   ```
   $ aws backup describe-region-settings
   ```

   If `"DynamoDB":true` under both `"ResourceTypeManagementPreference"` and `"ResourceTypeOptInPreference"`, you have already enabled advanced DynamoDB backup.

   If, as in the following output, you have at least one instance of `"DynamoDB":false`, you have not yet enabled advanced DynamoDB backup; proceed to the next step.

   ```
   {
     "ResourceTypeManagementPreference":{
       "DynamoDB":false,
       "EFS":true
     },
     "ResourceTypeOptInPreference":{
       "Aurora":true,
       "DocumentDB":false,
       "DynamoDB":false,
       "EBS":true,
       "EC2":true,
       "EFS":true,
       "FSx":true,
       "Neptune":false,
       "RDS":true,
       "Storage Gateway":true
     }
   }
   ```

1. Use the [UpdateRegionSettings](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_UpdateRegionSettings.html) operation to set `"DynamoDB":true` under both `"ResourceTypeManagementPreference"` and `"ResourceTypeOptInPreference"`:

   ```
   aws backup update-region-settings \
       --resource-type-opt-in-preference DynamoDB=true \
       --resource-type-management-preference DynamoDB=true
   ```
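The check in step 1 can also be scripted. The following sketch parses output in the shape shown above and confirms that both preferences are `true`:

```
import json

# Sketch: determine from `aws backup describe-region-settings` output whether
# advanced DynamoDB backup is fully enabled (both preferences must be true).
# The sample below mirrors the shape shown in step 1.
settings = json.loads("""
{
  "ResourceTypeManagementPreference": {"DynamoDB": false, "EFS": true},
  "ResourceTypeOptInPreference": {"DynamoDB": false, "EBS": true}
}
""")

advanced_ddb_enabled = (
    settings["ResourceTypeManagementPreference"].get("DynamoDB", False)
    and settings["ResourceTypeOptInPreference"].get("DynamoDB", False)
)
print(advanced_ddb_enabled)  # False: run update-region-settings as in step 2
```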

## Editing an advanced DynamoDB backup
<a name="advanced-ddb-backup-edit"></a>

When you create a DynamoDB backup after you enable AWS Backup advanced features, you can use AWS Backup to:
+ Copy a backup across Regions
+ Copy a backup across accounts
+ Change when AWS Backup tiers a backup to cold storage
+ Tag the backup

To use those advanced features on an existing backup, see [ Editing a backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/editing-a-backup.html).

If you later disable AWS Backup advanced features for DynamoDB, you can continue to perform those operations on DynamoDB backups that you created while advanced features were enabled.

## Restoring an advanced DynamoDB backup
<a name="advanced-ddb-backup-restore"></a>

You can restore DynamoDB backups taken with AWS Backup advanced features enabled in the same way you restore DynamoDB backups taken prior to enabling AWS Backup advanced features. You can perform a restore using either AWS Backup or DynamoDB.

You can specify how to encrypt your newly-restored table with the following options:
+ When you restore in the same Region as your original table, you can optionally specify an encryption key for your restored table. If you do not specify an encryption key, AWS Backup will automatically encrypt your restored table using the same key that encrypted your original table.
+ When you restore in a different Region than your original table, you must specify an encryption key.

To restore using AWS Backup, see [Restore an Amazon DynamoDB table](restoring-dynamodb.md).

To restore using DynamoDB, see [Restoring a DynamoDB table from a backup](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Restore.Tutorial.html) in the *Amazon DynamoDB User Guide*.

## Deleting an advanced DynamoDB backup
<a name="advanced-ddb-backup-delete"></a>

You cannot use DynamoDB to delete backups that were created with advanced features enabled. You must use AWS Backup to delete these backups, which maintains global consistency throughout your AWS environment.

To delete a DynamoDB backup, see [Backup deletion](deleting-backups.md).

## Other benefits of full AWS Backup management when you enable advanced DynamoDB backup
<a name="advanced-ddb-backup-other-benefits"></a>

When you enable AWS Backup advanced features for DynamoDB, you give full management of your DynamoDB backups to AWS Backup. Doing so gives you the following additional benefits:

**Encryption**

AWS Backup automatically encrypts the backups with the KMS key of your destination AWS Backup vault. Previously, they were encrypted using the same encryption method as your source DynamoDB table. This increases the number of defenses you can use to safeguard your data. See [Encryption for backups in AWS Backup](encryption.md) for more information.

**Amazon Resource Name (ARN)**

Each backup ARN’s service namespace is `awsbackup`. Previously, the service namespace was `dynamodb`. Put another way, the beginning of each ARN will change from `arn:aws:dynamodb` to `arn:aws:backup`. See [ARNs for AWS Backup](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsbackup.html#awsbackup-resources-for-iam-policies) in the *Service Authorization Reference*.

With this change, you or your backup administrator can create access policies for backups using the `awsbackup` service namespace that now apply to DynamoDB backups created after you enable advanced features. By using the `awsbackup` service namespace, you can also apply policies to other backups taken by AWS Backup. See [Access control](access-control.md) for more information.
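As a sketch, the namespace change looks like the following (both ARNs are hypothetical examples, not real resources):

```shell
# A DynamoDB-managed backup ARN (before enabling advanced features):
OLD_ARN="arn:aws:dynamodb:us-east-1:123456789012:table/Books/backup/01234567890123-a1b2c3d4"

# An AWS Backup-managed recovery point ARN (after enabling advanced features):
NEW_ARN="arn:aws:backup:us-east-1:123456789012:recovery-point:1a2b3c4d-5678-90ab-cdef-111122223333"

# IAM policies written against the awsbackup namespace, for example a
# Resource entry of "arn:aws:backup:*:123456789012:recovery-point:*",
# now match DynamoDB backups created after advanced features are enabled.
```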

**Location of charges on billing statement**

Charges for backups (including storage, data transfers, restores, and early deletion) appear under “Backup” in your AWS bill. Previously, charges appeared under “DynamoDB” in your bill.

This change ensures that you can use AWS Backup billing to centrally monitor your backup costs. See [Metering, costs, and billing for AWS Backup](metering-and-billing.md) for more information.

# Amazon EBS and AWS Backup
<a name="multi-volume-crash-consistent"></a>

The backup process for Amazon EBS resources is similar to the steps used to back up other resource types:
+ [Create an on-demand backup](recov-point-create-on-demand-backup.md)
+ [Create a scheduled backup](creating-a-backup-plan.md)

Resource-specific information is noted in the following sections.

## Amazon EBS Archive Tier for cold storage
<a name="ebs-archive-tier"></a>

Amazon EBS is one of the resource types that supports transitioning backups to cold storage. For more information, see [Lifecycle and storage tiers](plan-options-and-configuration.md#backup-lifecycle).

## Amazon EBS multi-volume, crash-consistent backups
<a name="ebs-multi-volume"></a>

By default, AWS Backup creates crash-consistent backups of Amazon EBS volumes that are attached to an Amazon EC2 instance. Crash consistency means that the snapshots for every Amazon EBS volume attached to the same Amazon EC2 instance are taken at the exact same moment. You no longer have to stop your instances or coordinate between multiple Amazon EBS volumes to ensure crash-consistency of your application state.

Since multi-volume, crash-consistent snapshots are a default AWS Backup functionality, you don’t need to do anything different to use this feature.

The role used to create an EBS snapshot recovery point is associated with that snapshot. This same role must be used to delete recovery points created by it or to transition recovery points of it to an archive tier.
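For example, an on-demand backup of the EC2 instance (rather than of each volume individually) produces a crash-consistent, multi-volume backup in a single job. The following AWS CLI sketch uses hypothetical ARNs:

```shell
# Hypothetical identifiers for illustration only.
INSTANCE_ARN="arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
ROLE_ARN="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole"

# One backup job covers every EBS volume attached to the instance,
# with all snapshots taken at the same moment.
aws backup start-backup-job \
    --backup-vault-name my-backup-vault \
    --iam-role-arn "$ROLE_ARN" \
    --resource-arn "$INSTANCE_ARN"
```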

## Amazon EBS Snapshot Lock and AWS Backup
<a name="ebs-snapshotlock"></a>

AWS Backup-managed Amazon EBS snapshots, and snapshots associated with an AWS Backup-managed Amazon EC2 AMI, that have Amazon EBS Snapshot Lock applied may not be deleted as part of the recovery point lifecycle if the snapshot lock duration exceeds the backup lifecycle. Instead, these recovery points will have the status `EXPIRED`. You can [delete these recovery points manually](https://docs.aws.amazon.com/aws-backup/latest/devguide/deleting-backups.html#deleting-backups-manually) if you first remove the Amazon EBS snapshot lock.

## Restoring Amazon EBS resources
<a name="ebs-restore-link"></a>

To restore your Amazon EBS volumes, follow the steps in [Restoring an Amazon EBS volume](restoring-ebs.md).

# Amazon Relational Database Service backups
<a name="rds-backup"></a>

## Amazon RDS and AWS Backup
<a name="rds-backup-differences"></a>

When you consider the options to back up your Amazon RDS instances and clusters, it's important to clarify which kind of backup you want to create and use. Several AWS resources, including Amazon RDS, offer their own native backup solutions.

Amazon RDS gives the option of making [automated backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ManagingAutomatedBackups.html) and [manual backups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ManagingManualBackups.html). Recovery points created by AWS Backup are classified differently depending on the backup type:
+ **Periodic snapshots** created by AWS Backup are considered manual backups in Amazon RDS. These are snapshot-based backups taken according to your backup plan schedule.
+ **Continuous backups** created by AWS Backup are considered automated backups in Amazon RDS. These enable point-in-time restore (PITR) by maintaining transaction logs alongside automated snapshots.

This distinction is important because manual and automated backups have different retention behaviors and lifecycle management in Amazon RDS.

When you use AWS Backup to [create a backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html#create-backup-plan-console) (recovery point) of an Amazon RDS instance, AWS Backup checks whether you have previously used Amazon RDS to create an automated backup. If an automated backup exists, AWS Backup creates an incremental snapshot copy (`copy-db-snapshot` operation). If no backup exists, AWS Backup creates a snapshot of the instance you indicate, instead of a copy (`create-db-snapshot` operation).

The first snapshot made by AWS Backup, created by either operation, results in one full snapshot. All subsequent *copies* of this snapshot are incremental backups, as long as the full backup exists.

When using cross-account or cross-Region copies, incremental snapshot copy jobs process faster than full snapshot copy jobs. Keeping a previous snapshot copy until the new copy job is complete may reduce the copy job duration. If you choose to copy snapshots from RDS database instances, note that deleting previous copies first causes full snapshot copies to be made (instead of incremental copies). For more information on optimizing copying, see [Incremental snapshot copying](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html#USER_CopySnapshot.Incremental) in the *Amazon RDS User Guide*.
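A copy job can be started with the AWS CLI as sketched below (vault names and ARNs are hypothetical). Keeping the previous copy in the destination vault until this job completes lets the copy remain incremental:

```shell
# Hypothetical identifiers for illustration only.
RP_ARN="arn:aws:backup:us-east-1:123456789012:recovery-point:1a2b3c4d-5678-90ab-cdef-111122223333"
DEST_VAULT_ARN="arn:aws:backup:us-west-2:123456789012:backup-vault:my-destination-vault"

# Copy the recovery point to another vault (here, in another Region).
aws backup start-copy-job \
    --recovery-point-arn "$RP_ARN" \
    --source-backup-vault-name my-source-vault \
    --destination-backup-vault-arn "$DEST_VAULT_ARN" \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole
```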

**Important**  
When an AWS Backup backup plan is scheduled to create multiple daily snapshots of an Amazon RDS instance, and one of those scheduled [AWS Backup backup windows](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html#plan-options-and-configuration) coincides with the [Amazon RDS backup window](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ManagingAutomatedBackups.html#USER_WorkingWithAutomatedBackups.BackupWindow), the data lineage of the backups can branch into non-identical backups, creating unplanned and conflicting backups. To prevent this, ensure that your AWS Backup backup windows and your Amazon RDS backup window do not coincide.

### Considerations
<a name="rds-backup-considerations"></a>

RDS Custom for SQL Server and RDS Custom for Oracle are not currently supported by AWS Backup.

AWS Backup does not support backup and restore of RDS on Outposts.

## Amazon RDS continuous backups and point in time restore
<a name="rds-backup-continuous"></a>

Continuous backups involve using AWS Backup to create a full backup of your Amazon RDS resource, then capturing all changes through a transaction log. You can achieve a greater granularity by rewinding to the point in time you desire to restore to instead of choosing a previous snapshot taken at fixed time intervals.

See [continuous backups and PITR supported services](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html#point-in-time-recovery-supported-services) and [managing continuous backup settings](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html#point-in-time-recovery-managing) for more information.
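A backup rule turns on continuous backups through its `EnableContinuousBackup` flag. The following AWS CLI sketch creates such a plan using hypothetical names; note that continuous backup retention for PITR is capped at 35 days:

```shell
# Write a plan document with continuous backups enabled (names are illustrative).
cat > rds-pitr-plan.json <<'EOF'
{
  "BackupPlanName": "rds-continuous-plan",
  "Rules": [
    {
      "RuleName": "rds-pitr-rule",
      "TargetBackupVaultName": "my-backup-vault",
      "ScheduleExpression": "cron(0 5 ? * * *)",
      "EnableContinuousBackup": true,
      "Lifecycle": { "DeleteAfterDays": 35 }
    }
  ]
}
EOF

# Create the plan from the JSON document.
aws backup create-backup-plan --backup-plan file://rds-pitr-plan.json
```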

## Amazon RDS Multi-Availability Zone backups
<a name="rds-multiaz"></a>

AWS Backup supports backing up Amazon RDS for MySQL and RDS for PostgreSQL Multi-AZ (Availability Zone) deployment options with one primary and two readable standby DB instances.

For a list of Regions where Multi-Availability Zone backups are available, see the Amazon RDS Multi-AZ column in [Supported services by AWS Region](backup-feature-availability.md#supported-services-by-region).

The Multi-AZ deployment option optimizes write transactions and is ideal when your workloads require additional read capacity, lower write transaction latency, more resilience from network jitter (which impacts the consistency of write transaction latency), and high availability and durability.

To create a Multi-AZ cluster, you can choose either MySQL or PostgreSQL as the engine type.

In the AWS Backup console, there are three deployment options:
+ **Multi-AZ DB cluster:** Creates a DB cluster with a primary DB instance and two readable standby DB instances, with each DB instance in a different Availability Zone. Provides high availability, data redundancy, and increased capacity to serve read workloads.
+ **Multi-AZ DB instance:** Creates a primary DB instance and a standby DB instance in a different Availability Zone. This provides high availability and data redundancy, but the standby DB instance doesn’t support connections for read workloads.
+ **Single DB instance:** Creates a single DB instance with no standby DB instances.

**Backup behavior with instances and clusters**
+ [Point-in-time recovery](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html) (PITR) supports instances, but not clusters.
+ Copying a Multi-AZ DB cluster snapshot is not supported.
+ The Amazon Resource Name (ARN) for an RDS recovery point depends on whether an instance or cluster is used:

  An RDS instance ARN: `arn:aws:rds:region:account:db:name`

  An RDS Multi-AZ cluster ARN: `arn:aws:rds:region:account:cluster:name`

For more information, see [Multi-AZ DB cluster deployments](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) in the *Amazon RDS User Guide*.

For information on creating a snapshot, see [Creating a Multi-AZ DB cluster snapshot](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateMultiAZDBClusterSnapshot.html) in the *Amazon RDS User Guide*.

## Amazon Aurora Global Databases
<a name="rds-aurora-global"></a>

AWS recommends maintaining backups in every Region where your global database is deployed.

# Amazon Redshift backups
<a name="redshift-backups"></a>

Amazon Redshift is a fully managed, scalable cloud data warehouse that accelerates your time to insights with fast, easy, and secure analytics. You can use AWS Backup to protect your data warehouses with immutable backups, separate access policies, and centralized organizational governance of backup and restore jobs.

An Amazon Redshift data warehouse is a collection of computing resources called nodes, which are organized into a group called a cluster. AWS Backup can back up these clusters.

For information on [Amazon Redshift](https://docs.aws.amazon.com/redshift/index.html), see the [Amazon Redshift Getting Started Guide](https://docs.aws.amazon.com/redshift/latest/gsg/index.html), the [Amazon Redshift Database Developer Guide](https://docs.aws.amazon.com/redshift/latest/dg/index.html), and the [Amazon Redshift Cluster Management Guide](https://docs.aws.amazon.com/redshift/latest/mgmt/index.html).

## Back up Amazon Redshift provisioned clusters
<a name="backupredshift"></a>

You can protect your Amazon Redshift clusters using the AWS Backup console or programmatically using the API or AWS CLI. These clusters can be backed up on a regular schedule as part of a backup plan, or they can be backed up as needed via on-demand backup.

You can restore a single table (also known as item-level restore) or an entire cluster. Note that tables cannot be backed up by themselves; tables are backed up as part of a cluster when the cluster is backed up.

Using AWS Backup allows you to view your resources in a centralized way; however, if Amazon Redshift is the only resource you use, you can continue to use the automated snapshot scheduler in Amazon Redshift. Note that you cannot continue to manage manual snapshot settings using Amazon Redshift if you choose to manage these via AWS Backup.

You can back up Amazon Redshift clusters either through the AWS Backup console or using the AWS CLI.

There are two ways to use the AWS Backup console to back up an Amazon Redshift cluster: on demand or as part of a backup plan.

### Create on-demand Amazon Redshift backups
<a name="ondemandredshiftbackups"></a>

For more information, see [Creating an on-demand backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html).

To create a manual snapshot, leave the continuous backup checkbox unchecked when you create a backup plan that includes Amazon Redshift resources.

### Create scheduled Amazon Redshift backups in a backup plan
<a name="scheduledredshiftbackups"></a>

Your scheduled backups can include Amazon Redshift clusters if they are a protected resource. To opt into protecting Amazon Redshift clusters:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Using the navigation pane, choose **Protected resources**.

1. Toggle Amazon Redshift to **On**.

1. See [Assigning resources using the console](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html#assigning-resources-console) to include Amazon Redshift clusters in an existing or new plan.

Under **Manage Backup plans**, you can choose to [create a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) and include Amazon Redshift clusters, or you can [update an existing one](https://docs.aws.amazon.com/aws-backup/latest/devguide/updating-a-backup-plan.html) to include Amazon Redshift clusters. When adding the resource type *Amazon Redshift*, you can choose to add **All Amazon Redshift clusters**, or check the boxes next to the clusters you wish to include in your backup plan.

### Back up programmatically
<a name="redshiftbackupapi"></a>

You can also define your backup plan in a JSON document and provide it using the AWS Backup console or AWS CLI. See [Creating backup plans using a JSON document and the AWS Backup CLI](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html#create-backup-plan-cli) for information on how to create a backup plan programmatically.

You can perform the following operations using the API:
+ Start a backup job
+ Describe a backup job
+ Get recovery point metadata
**Note**  
`BackupSizeInBytes` metadata is supported for the following resource types: Amazon EBS volumes, Amazon EFS file systems, Amazon RDS databases, DynamoDB tables, Amazon EC2 instances, Amazon FSx file systems, and Amazon S3 buckets. This field provides the size of the backup in bytes and is available through the `DescribeRecoveryPoint` API and AWS Backup console. For unsupported resource types, this field will not be populated.
+ List recovery points by resources
+ List tags for the recovery point
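The operations above map to AWS CLI commands as in the following sketch (the cluster and recovery point ARNs are hypothetical and for illustration only):

```shell
# Hypothetical identifiers for illustration only.
CLUSTER_ARN="arn:aws:redshift:us-east-1:123456789012:cluster:my-cluster"
RP_ARN="arn:aws:redshift:us-east-1:123456789012:snapshot:my-cluster/awsbackup-job-1a2b3c4d"

# List recovery points for the cluster.
aws backup list-recovery-points-by-resource \
    --resource-arn "$CLUSTER_ARN"

# Read one recovery point's metadata.
aws backup describe-recovery-point \
    --backup-vault-name my-backup-vault \
    --recovery-point-arn "$RP_ARN"

# List the tags on the recovery point.
aws backup list-tags \
    --resource-arn "$RP_ARN"
```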

### View Amazon Redshift cluster backups
<a name="viewredshiftbackups"></a>

To view and modify your Amazon Redshift cluster backups within the console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Choose **Backup vaults**. Then, click on the backup vault name that contains your Amazon Redshift clusters.

1. The backup vault will display a summary and a list of backups. You can click on the link in the column **Recovery point ID**.

1. To delete one or more recovery points, check the box(es) you wish to delete. Under the button **Actions**, you can select **Delete**.

### Restore a Amazon Redshift cluster
<a name="w2aac17c19c31c11c11c11"></a>

See how to [Restore a Amazon Redshift cluster](https://docs.aws.amazon.com/aws-backup/latest/devguide/redshift-restores.html) for more information.

# Amazon Redshift Serverless backups
<a name="redshift-serverless-backups"></a>

## Overview
<a name="redshift-serverless-backups-overview"></a>

AWS Backup offers full backup management of your Amazon Redshift Serverless namespaces. Through AWS Backup, you can schedule and restore Redshift Serverless manual snapshots through the console or the AWS CLI.

Redshift Serverless data protection through AWS Backup provides several options for backing up and restoring your data warehouses. You can create a scheduled or on-demand snapshot of your namespace. Then, you can choose to restore all the databases in that snapshot to an Amazon Redshift provisioned cluster or a Serverless namespace. Alternatively, you can restore a single table.

Redshift Serverless offers both automated and manual snapshots. Currently, AWS Backup can be used to manage manual snapshots but not automated ones.

## Backup options for Redshift Serverless
<a name="redshift-serverless-backups-options"></a>

You can use the AWS Backup console or CLI to create backups on demand or as part of a backup plan.

### Create on-demand backup
<a name="redshift-serverless-backups-on-demand"></a>

You can create on-demand backups of Redshift Serverless namespaces through the following steps:

------
#### [ Console ]

1. Open the [AWS Backup console](https://console.aws.amazon.com//backup).

1. On the dashboard, choose **Create an on-demand backup**.

1. Choose **Redshift Serverless** in the resource type dropdown menu.

1. Select the namespace you plan to back up.

1. Ensure **Create backup now** is selected.

1. Specify the retention period for the backup.

1. Choose an existing backup vault or create a new one.

1. Select the IAM role to use for the backup.

1. Optionally, add tags to the backup. To assign a tag to your on-demand backup, expand **Tags added to recovery points**, choose **Add new tag**, and enter a tag key and tag value.

1. Select **Create on-demand backup** to begin the backup job.

1. Once the job is initiated, the console will show the Jobs screen where you can see a list of your backup jobs and their statuses.

------
#### [ AWS CLI ]

Use the **start-backup-job** command.

**Required parameters**
+ `BackupVaultName`
+ `IamRoleArn`
+ `ResourceArn`

**Optional parameters**
+ `CompleteWindowMinutes`
+ `IdempotencyToken`
+ `Lifecycle`
+ `StartWindowMinutes`

**Example**  
The following example creates an on-demand backup of a Redshift Serverless namespace.  

```
aws backup start-backup-job \
    --backup-vault-name sample-vault \
    --iam-role-arn arn:aws:iam::account:role/service-role/AWSBackupDefaultServiceRole \
    --resource-arn arn:aws:redshift-serverless:region:account:namespace/namespace-name-UUID
```

------

### Create scheduled Redshift Serverless backups in a backup plan
<a name="redshift-serverless-backups-scheduled"></a>

You can create a new backup plan for your Redshift Serverless namespaces through the AWS Backup console or the AWS CLI, or you can add Redshift Serverless to an existing backup plan.

Your scheduled backups can include Redshift Serverless namespaces if they are a protected resource.

------
#### [ Console ]

To opt into protecting Redshift Serverless in the AWS Backup console, complete the following steps:

1. Open the [AWS Backup console](https://console.aws.amazon.com//backup).

1. Using the navigation pane, choose **Protected resources**.

1. Toggle **Amazon Redshift Serverless** to **On**.

1. See [Select AWS services to backup](assigning-resources.md) to include Redshift Serverless namespaces in an existing or new plan. When you add the resource type *Redshift Serverless*, you can choose to add **All Amazon Redshift namespaces**, or check the boxes next to the namespaces you wish to back up.

Under **Manage Backup plans**, you can:
+ [Create a backup plan](creating-a-backup-plan.md) and include Redshift Serverless;
+ [Update](updating-a-backup-plan.md) an existing backup plan to include Redshift Serverless.

------
#### [ AWS CLI ]

See [Create backup plans using the AWS CLI](creating-a-backup-plan.md#create-backup-plan-cli) for guidance to use **create-backup-plan**.

If you want to alter an existing plan to include your Serverless resources, use the command **update-backup-plan**.

The ARN (Amazon Resource Name) for Serverless resources to include under `"Resources"` in `"BackupSelection"` has the following format:

```
arn:aws:redshift-serverless:Region:account:snapshot/a12bc34d-567e-890f-123g-h4ijk56l78m9
```
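As a sketch, the selection document below attaches a Serverless resource in this ARN format to an existing plan; the plan ID, role, selection name, and ARN are hypothetical:

```shell
# Write a selection document (identifiers are illustrative).
cat > serverless-selection.json <<'EOF'
{
  "SelectionName": "redshift-serverless-selection",
  "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
  "Resources": [
    "arn:aws:redshift-serverless:us-east-1:123456789012:snapshot/a12bc34d-567e-890f-123g-h4ijk56l78m9"
  ]
}
EOF

# Attach the selection to an existing backup plan.
aws backup create-backup-selection \
    --backup-plan-id 11111111-2222-3333-4444-555555555555 \
    --backup-selection file://serverless-selection.json
```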

------

See [Amazon Redshift Serverless restore](redshift-serverless-restore.md) for information to restore data from a snapshot to a Serverless namespace.

# Amazon EKS backups
<a name="eks-backups"></a>

An Amazon Elastic Kubernetes Service (Amazon EKS) cluster consists of multiple resources that you can back up as a single unit. When you back up an Amazon EKS cluster, AWS Backup creates a composite recovery point that includes both EKS cluster state and persistent volume backups.

When an Amazon EKS cluster is backed up, recovery points are created for the Amazon EKS cluster state and persistent volumes supported by AWS Backup. These recovery points are grouped together within an overarching recovery point called a **composite**.

There are two distinct components of an Amazon EKS backup:
+ *Amazon EKS Cluster State:* This is a backup of the Amazon EKS cluster state. See Amazon EKS backup terminology below for what is included.
+ *Persistent Storage:* This is a backup of persistent storage (Amazon EBS, Amazon S3, Amazon Elastic File System) attached to the Amazon EKS cluster via Persistent Volume Claims and [supported by EKS Add Ons CSI Driver](https://docs.aws.amazon.com/eks/latest/userguide/storage.html).

## Amazon EKS backup terminology
<a name="eks-backup-overview"></a>

The following terms are used throughout the Amazon EKS backup documentation. For Amazon EKS-specific terminology, see the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html).

<a name="eks-backup-terminology"></a>
+ **Composite recovery point** – A recovery point used to group nested recovery points together for an Amazon EKS cluster backup.
+ **Nested recovery point** – A recovery point of a resource that is part of an Amazon EKS cluster and is backed up as part of the composite recovery point.
+ **EKS Cluster State** – The Kubernetes manifests (YAML or JSON files) that define the desired state of Kubernetes resources in your cluster. This includes Kubernetes resources and deployments such as: secrets, config maps, stateful sets, DaemonSets, storage classes, storage maps, replica sets, persistent volume claims, custom resource definitions, roles, and role bindings.
+ **Amazon EKS Cluster Configuration Child Recovery Point** – Contains Amazon EKS cluster state.
+ **Persistent Volume Child Recovery Points** – Contains persistent volume backups for supported storage types (EBS, S3, EFS) [supported by EKS Add Ons CSI Driver](https://docs.aws.amazon.com/eks/latest/userguide/storage.html).

## Amazon EKS backup structure
<a name="eks-backup-creation"></a>

**Amazon EKS backups include the following components:**
+ Amazon EKS Cluster State
+ Persistent Storage: Backups of supported storage types including Amazon EBS, Amazon EFS, and Amazon S3

**Amazon EKS Backups will not include the following components:**
+ Container images from external repositories (ECR, Docker)
+ EKS cluster infrastructure components (e.g. VPCs, Subnets)
+ Auto-generated EKS resources like nodes, auto-generated pods, events, leases, and jobs.

**EKS backup setup and prerequisites ("Before you backup")**
+ **EKS Cluster Settings:**
  + EKS cluster [authorization mode](https://docs.aws.amazon.com/eks/latest/userguide/setting-up-access-entries.html) set to `API` or `API_AND_CONFIG_MAP` so that AWS Backup can create [Access Entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) to access the EKS cluster.
+ **Permissions:**
  + AWS Backup's managed policy [AWSBackupServiceRolePolicyForBackup](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#AWSBackupServiceRolePolicyForBackup) contains the required permissions to backup your Amazon EKS cluster and EBS and EFS persistent storage
  + If your EKS Cluster contains an S3 bucket you will need to ensure the following policies and prerequisites for your S3 bucket are added and enabled as documented:
    + [AWSBackupServiceRolePolicyForS3Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#AWSBackupServiceRolePolicyForS3Backup)
    + [Prerequisites for S3 Backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html#s3-backup-prerequisites)
+ **Encryption:**
  + Amazon EKS child recovery points are encrypted with the AWS KMS key of the target backup vault.
  + Persistent storage recovery points are encrypted according to the current support for each storage class (Amazon EBS snapshots, Amazon S3 backups, Amazon EFS backups). See [Encryption for backups in AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/encryption.html).

## Create an Amazon EKS backup
<a name="eks-backups-options"></a>

The process of creating a backup is called a backup job. An Amazon EKS cluster backup job has a status. When a backup job has finished, it has the status `Completed`, which signifies that a recovery point (a backup) has been created.

### Creating an on-demand Amazon EKS backup
<a name="eks-backups-on-demand"></a>

------
#### [ Console ]

To create an on-demand backup of your Amazon EKS cluster:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**.

1. Under **Resource type**, select **Amazon EKS**.

1. Select the checkbox next to the Amazon EKS cluster you want to back up.

1. Choose **Create on-demand backup**.

1. Configure your backup settings, including backup window, transition to cold storage, and retention period.

1. Choose **Create on-demand backup**.

------
#### [ AWS CLI ]

To create an on-demand backup of your Amazon EKS cluster using the AWS CLI:

Use the **start-backup-job** command:

```
aws backup start-backup-job \
    --backup-vault-name my-backup-vault \
    --resource-arn arn:aws:eks:us-west-2:123456789012:cluster/my-cluster \
    --iam-role-arn arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole \
    --region us-west-2
```

Optionally, specify additional parameters such as lifecycle settings:

```
aws backup start-backup-job \
    --backup-vault-name my-backup-vault \
    --resource-arn arn:aws:eks:us-west-2:123456789012:cluster/my-cluster \
    --iam-role-arn arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole \
    --lifecycle MoveToColdStorageAfterDays=30,DeleteAfterDays=365 \
    --region us-west-2
```

Monitor the backup job status:

```
aws backup describe-backup-job \
    --backup-job-id backup-job-id \
    --region us-west-2
```

------

## Amazon EKS backup ARN format
<a name="eks-recovery-points"></a>

+ Composite recovery point: arn:*partition*:backup:*region*:*accountId*:recovery-point:composite:eks/*cluster-name*-*timestamp*
+ Child recovery point: arn:*partition*:backup:*region*:*accountId*:recovery-point:eks/*cluster-name*-*timestamp*

### Amazon EKS recovery points
<a name="eks-recovery-point-status"></a>

#### Recovery point status
<a name="eks-recovery-point-status-details"></a>

When the backup job of an Amazon EKS cluster is finished (the job status is `Completed`), a backup of the cluster has been created. This backup is also known as a composite recovery point. A composite recovery point can have one of the following statuses: `Completed`, `Failed`, or `Partial`.

Each Amazon EKS backup creates a parent backup job for the composite recovery point and child backup jobs for each child recovery point (cluster configuration and persistent volumes).
+ A completed backup job means your entire Amazon EKS cluster and the resources within it are protected by AWS Backup.
+ A failed status indicates that the backup job was unsuccessful; you should create the backup again once the issue that caused the failure is corrected.
+ A `Partial` status means that not all the resources in the cluster were backed up. This may happen if one or more of the backup jobs belonging to resources within the cluster (nested resources) have statuses other than `Completed`. You can manually create an on-demand backup to rerun any resources that resulted in a status other than `Completed`.
+ A `Completed with issues` status means that not all the resources in the cluster were backed up. This can happen when we fail to backup some Kubernetes objects in the cluster. You can subscribe to **Notification Events** for failed objects for backup. For more information, see [Notification options with AWS Backup.](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-notifications.html)

Each nested resource within the composite recovery point has its own individual recovery point, each with its own status (either `Completed` or `Failed`). Nested recovery points with a status of `Completed` can be restored.

AWS Backup supports lifecycle transitions to cold storage for persistent volume recovery points. You can subscribe to notifications to receive alerts on backup job status.

## Manage recovery points
<a name="eks-manage-recovery-points"></a>

Composite recovery points (backups) can be copied; persistent volume child recovery points can be copied, deleted, disassociated, or restored. The Amazon EKS cluster state child recovery point cannot be copied, deleted, or disassociated as it maintains a 1:1 relationship with its parent composite recovery point.

A composite recovery point that contains nested backups cannot be deleted. After the nested recovery points within a composite recovery point have been deleted or disassociated, you can delete the composite recovery point manually or let it remain until the backup plan lifecycle deletes it.

### Delete a recovery point
<a name="eks-delete-recovery-point"></a>

You can delete a recovery point using the console or using the AWS CLI.

To delete recovery points using the console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**. In the search box, enter **EKS** to display only your Amazon EKS clusters.

1. Composite recovery points will be displayed in the **Recovery points** pane. The plus sign (+) to the left of each recovery point ID can be clicked to expand each composite recovery point, showing all nested recovery points contained in the composite. You can check the box to the left of any recovery point to include it in your selection of recovery points you wish to delete.

1. Click the Delete button.

When you use the console to delete one or more composite recovery points, a warning box will pop up. This warning box requires you to confirm your intention to delete the composite recovery points, including nested recovery points within composite stacks.

To delete recovery points using the API, use the `DeleteRecoveryPoint` command.

When you use the API with the AWS Command Line Interface, you must delete all nested recovery points before deleting a composite recovery point.
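As a sketch, that deletion sequence might look like the following; the vault name and recovery point ARNs are placeholders, and the `ParentRecoveryPointArn` field is used to identify which recovery points are nested under a composite:

```shell
# List recovery points in the vault; nested recovery points reference
# their composite through ParentRecoveryPointArn (names are illustrative).
aws backup list-recovery-points-by-backup-vault \
    --backup-vault-name my-eks-vault \
    --query 'RecoveryPoints[].{Arn:RecoveryPointArn,Parent:ParentRecoveryPointArn}'

# Delete each nested recovery point first...
aws backup delete-recovery-point \
    --backup-vault-name my-eks-vault \
    --recovery-point-arn arn:aws:backup:us-east-1:111122223333:recovery-point:nested-example

# ...then delete the composite recovery point itself.
aws backup delete-recovery-point \
    --backup-vault-name my-eks-vault \
    --recovery-point-arn arn:aws:backup:us-east-1:111122223333:recovery-point:composite-example
```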

### Disassociate a nested recovery point from composite recovery point
<a name="eks-disassociate-recovery-point"></a>

You can disassociate a nested recovery point from a composite recovery point (for example, you wish to keep the nested recovery point but delete the composite recovery point). Both recovery points will remain, but they will no longer be connected; that is, actions that occur on the composite recovery point will no longer apply to the nested recovery point once it has been disassociated. The Amazon EKS cluster state child recovery point cannot be disassociated as it maintains a 1:1 relationship with its parent composite recovery point.

You can disassociate the recovery point using the console, or you can call the API DisassociateRecoveryPointFromParent.
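With the AWS CLI, the corresponding command is `disassociate-recovery-point-from-parent`; the vault name and recovery point ARN below are placeholders:

```shell
# Detach a nested recovery point from its composite parent; both
# recovery points remain after this call, but are no longer linked.
aws backup disassociate-recovery-point-from-parent \
    --backup-vault-name my-eks-vault \
    --recovery-point-arn arn:aws:backup:us-east-1:111122223333:recovery-point:nested-example
```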

## Copy a recovery point
<a name="eks-copy-recovery-point"></a>

You can copy a composite recovery point, or you can copy a nested recovery point if the resource supports [cross-account and cross-Region copy](backup-feature-availability.md#features-by-resource).

To copy recovery points using the AWS Backup console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Click on **Vaults** in the left-hand navigation, and go to the vault that contains the recovery point you want to copy. In the text box, type `EKS` to display only your recovery points for Amazon EKS clusters.

1. Both composite and nested recovery points will be displayed under the Recovery point ID pane. Note you cannot select and copy a nested EKS recovery point.

1. The arrow to the left of each composite recovery point ID can be clicked to expand it, showing all nested recovery points contained in the composite. Select the checkbox to the left of the recovery point you want to copy.

1. Once it is selected, click the **Actions** dropdown in the top-right corner of the pane and click **Copy**.

Amazon EKS backups support all copy types:
+ Same Region/same account
+ Cross-account
+ Cross-Region
+ Opt-in Regions
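For example, a cross-Region copy of a composite recovery point could be started with `start-copy-job`; all names and ARNs below are placeholders:

```shell
# Copy a recovery point from a vault in us-east-1 to a vault in us-west-2.
aws backup start-copy-job \
    --recovery-point-arn arn:aws:backup:us-east-1:111122223333:recovery-point:composite-example \
    --source-backup-vault-name my-eks-vault \
    --destination-backup-vault-arn arn:aws:backup:us-west-2:111122223333:backup-vault:my-dr-vault \
    --iam-role-arn arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole
```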

## Limitations
<a name="eks-limitations"></a>
+ Persistent volumes using a CSI driver via CSI migration, in-tree storage plugins, or ACK controllers are not supported. Note that the annotation `volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com` is metadata indicating which provisioner could manage the volume, not that the volume uses CSI. The actual provisioner is determined by the `storageClass`.
+ Amazon S3 buckets with specific prefixes attached to CSI Driver MountPoints cannot be backed up. Only Amazon S3 buckets as targets are supported, not specific prefixes.
+ Amazon S3 bucket backups as part of an EKS cluster backup will only support snapshot backups.
+ Cross-account backups of EFS file systems are not supported via EKS backups.
+ Amazon FSx via CSI driver is not supported via EKS Backups.
+ AWS Backup does not support Amazon EKS on AWS Outposts.
+ Amazon EKS backups are subject to [backup and restore quotas](aws-backup-limits.md).

## Backup Jobs Completed with Issues
<a name="eks-backup-jobs-completed-with-issues"></a>

When backing up an Amazon EKS cluster, some Kubernetes objects may fail to be retrieved. In this case, the backup job will complete with a `Completed with issues` status rather than failing entirely, with the following status message:
+ Some Kubernetes Objects failed to be backed up. To get notified of these failures, [enable SNS event notifications](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-notifications.html).

The following Kubernetes object types may be skipped during a backup job when the [Amazon EKS Metrics Server add-on](https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html) is unavailable, which results in a 503 Service Unavailable error. See the [troubleshooting guidance](https://repost.aws/knowledge-center/eks-resolve-http-503-errors-kubernetes) for resolution steps.
+ `metrics.k8s.io`
+ `custom.metrics.k8s.io`
+ `external.metrics.k8s.io`
+ `metrics.eks.amazonaws.com`
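To receive these failure notifications, you can attach an Amazon SNS topic to the backup vault; a minimal sketch, assuming a topic named `backup-events` that already grants AWS Backup permission to publish:

```shell
# Send backup job completion and failure events from the vault to SNS.
aws backup put-backup-vault-notifications \
    --backup-vault-name my-eks-vault \
    --sns-topic-arn arn:aws:sns:us-east-1:111122223333:backup-events \
    --backup-vault-events BACKUP_JOB_COMPLETED BACKUP_JOB_FAILED
```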

## Frequently Asked Questions
<a name="eks-faq"></a>

1. *"What is included as part of the Amazon EKS backup?"*

   As part of each backup of an Amazon EKS cluster, the Amazon EKS cluster state and persistent volumes supported by AWS Backup are backed up. The Amazon EKS cluster state includes details like cluster name, IAM role, Amazon VPC configuration, network settings, logging, encryption, add-ons, access entries, managed node groups, Fargate profiles, pod identity associations, and Kubernetes manifest files.

1. *"Does a `Partial` status mean the creation of my backup failed?"*

   No. A `Partial` status indicates that some of the recovery points were backed up, while others were not. There are two conditions to check if you were expecting a `Completed` backup result:

   1. One or more of the backup jobs belonging to resources within the cluster were not successful, and the job must be rerun.

   1. A nested recovery point was deleted or disassociated from the composite recovery point.

1. *"Do I need to have an agent or Amazon EKS Add-on installed on my Amazon EKS cluster before backup?"*

   No. AWS Backup does not require any agents or add-ons to be installed on your Amazon EKS cluster. The only prerequisite is to have your EKS cluster's [authentication mode](https://docs.aws.amazon.com/eks/latest/userguide/setting-up-access-entries.html) set to `API` or `API_AND_CONFIG_MAP` so that AWS Backup can create [access entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) to access the EKS cluster.

1. *"Does Amazon EKS Backups include Amazon EKS infrastructure components or Amazon ECR images?"*

   No. Amazon EKS backups focus on the EKS cluster state and application workloads, not the underlying infrastructure components or container images.

1. *"Can I lifecycle my EKS Composite Recovery Point to cold storage?"*

   You can transition underlying child recovery points that support cold storage tiers to cold storage. See the [AWS Backup feature availability matrix](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource) for the full list of supported resources.

1. *"Are my EKS backups incremental?"*

   AWS Backup takes incremental backups of each child recovery point where supported; today this includes EBS volumes, EFS file systems, and S3 buckets. The EKS cluster state child recovery point is always a full backup. See the [AWS Backup feature availability matrix](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource).

1. *"Can I create an index and search my EKS backups?"*

   No, however you can create on-demand indexes and search persistent volumes where the underlying storage type supports this capability through AWS Backup. See the [AWS Backup feature availability matrix](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource).

# SAP HANA backup on Amazon EC2
<a name="backup-saphana"></a>

**Note**  
[Supported services by AWS Region](backup-feature-availability.md#supported-services-by-region) contains the currently supported Regions where SAP HANA database backups on Amazon EC2 instances are available.

AWS Backup supports backups and restores of SAP HANA databases on Amazon EC2 instances.

**Topics**
+ [Overview of SAP HANA databases with AWS Backup](#saphanaoverview)
+ [Prerequisites for backing up SAP HANA databases through AWS Backup](#saphanaprerequisites)
+ [SAP HANA backup operations in the AWS Backup console](#saphanabackupconsole)
+ [View SAP HANA database backups](#saphanaviewbackup)
+ [Use AWS CLI for SAP HANA databases with AWS Backup](#saphanaapicli)
+ [Troubleshooting backups of SAP HANA databases](#saphanatroubleshooting)
+ [Glossary of SAP HANA terms when using AWS Backup](#saphanaglossary)
+ [AWS Backup support of SAP HANA databases on EC2 instances release notes](#saphanareleasenotes)

## Overview of SAP HANA databases with AWS Backup
<a name="saphanaoverview"></a>

In addition to the ability to create backups and to restore databases, AWS Backup integration with AWS Systems Manager for SAP allows customers to identify and tag SAP HANA databases.

AWS Backup is integrated with AWS Backint Agent to perform SAP HANA backups and restores. For more information, see [AWS Backint](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-sap-hana.html).

When you take backups of SAP HANA, your snapshots and on-demand backups are full backups. However, you can achieve incremental backups by enabling continuous backups for point-in-time recovery (PITR).

## Prerequisites for backing up SAP HANA databases through AWS Backup
<a name="saphanaprerequisites"></a>

Several prerequisites must be completed before backup and restore activities can be performed. Note you will need administrative access to your SAP HANA database and permissions to create new IAM roles and policies in your AWS account to perform these steps.

Complete [these prerequisites in AWS Systems Manager for SAP](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html):

1. [ Set up required permissions for Amazon EC2 instance running SAP HANA database](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#ec2-permissions)

1. [ Register credentials in AWS Secrets Manager](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#register-secrets)

1. [ Install AWS Backint and AWS Systems Manager for SAP Agents](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-installing-configuring.html)

1. [ Verify SSM Agent](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#verify-ssm-agent)

1. [ Verify parameters](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#verification)

1. [ Register SAP HANA database](https://docs.aws.amazon.com/ssm-sap/latest/userguide/get-started.html#register-database)

It is best practice to register each HANA instance only once. Multiple registrations can result in multiple ARNs for the same database. Maintaining a single ARN and registration simplifies backup plan creation and maintenance and can also help reduce unplanned duplication of backups.

## SAP HANA backup operations in the AWS Backup console
<a name="saphanabackupconsole"></a>

Once the prerequisites and SSM for SAP setups are complete, you can back up and restore your SAP HANA on EC2 databases.

### Opt in to protect SAP HANA resources
<a name="saphanaenableoptin"></a>

To use AWS Backup to protect your SAP HANA databases, SAP HANA must be toggled on as one of the protected resources. To opt in:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation pane, choose **Settings**.

1. Under **Service opt-in**, select **Configure resources**.

1. Opt in to **SAP HANA on Amazon EC2**.

1. Click **Confirm**.

Service opt-in for SAP HANA on Amazon EC2 will now be enabled.

### Create a scheduled backup of SAP HANA databases
<a name="saphanascheduledbackup"></a>

You can [ edit an existing backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/updating-a-backup-plan.html) and add SAP HANA resources to it, or you can [create a new backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) just for SAP HANA resources.

If you choose to create a new backup plan, you will have three options:

1. **Option 1: Start with a template**

   1. Choose a backup plan template.

   1. Specify a backup plan name.

   1. Click **Create plan**.

1. **Option 2: Build a new plan**

   1. Specify a backup plan name.

   1. Optionally specify tags to add to backup plan.

   1. Specify the backup rule configuration.

      1. Specify a backup rule name.

      1. Select an existing vault or create a new backup vault. This is where your backups are stored.

      1. Specify a backup frequency.

      1. Specify a backup window.

         *Note transition to cold storage is currently unsupported*.

      1. Specify the retention period.

          *Copy to destination is currently unsupported.*

      1. (*Optional*) Specify tags to add to recovery points.

   1. Click **Create plan**.

1. **Option 3: Define a plan using JSON**

   1. Specify the JSON for your backup plan by either modifying the JSON expression of an existing backup plan or creating a new expression.

   1. Specify a backup plan name.

   1. Click **Validate JSON**.

   Once the backup plan is created successfully, you can assign resources to the backup plan in the next step.

Whichever plan you use, ensure you [assign resources](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html). You can choose which SAP HANA databases to assign, including system and tenant databases. You also have the option to exclude specific resource IDs.
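As an illustration of Option 3, a minimal backup plan JSON might look like the following; the plan name, vault name, schedule, and retention are placeholder values, and cold storage transition and copy destinations are omitted because they are unsupported for SAP HANA:

```json
{
  "BackupPlanName": "saphana-daily",
  "Rules": [
    {
      "RuleName": "daily-full",
      "TargetBackupVaultName": "my-saphana-vault",
      "ScheduleExpression": "cron(0 5 ? * * *)",
      "StartWindowMinutes": 60,
      "CompletionWindowMinutes": 720,
      "Lifecycle": { "DeleteAfterDays": 35 }
    }
  ]
}
```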

### Create an on-demand backup of SAP HANA databases
<a name="saphanaondemandbackup"></a>

You can [ create a full on-demand backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html) that runs immediately after creation. Note that on-demand backups of SAP HANA databases on Amazon EC2 instances are full backups; incremental backups are not supported.

Your on-demand backup is now created. It will begin backing up your specified resources. The console will transition you to the **Backup jobs** page where you can view the job progress. Take note of the backup job ID from the blue banner at the top of your screen, as you will need it to easily find the status of your backup job. When the backup is completed, the status will progress to `Completed`. Backups can take up to several hours.

Refresh the **Backup jobs list** to see the status change. You can also search for and click on your **backup job ID** to view detailed job status.

### Continuous backups of SAP HANA databases
<a name="saphanacontinuousbackup"></a>

You can make [continuous backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html), which can be used with point-in-time restore (PITR). Note that on-demand backups preserve resources in the state in which they are taken, whereas PITR uses continuous backups, which record changes over a period of time.

With continuous backups, you can restore your SAP HANA database on an EC2 instance by rewinding it back to a specific time that you choose, within 1 second of precision (going back a maximum of 35 days). Continuous backup works by first creating a full backup of your resource, and then constantly backing up your resource’s transaction logs. PITR restore works by accessing your full backup and replaying the transaction log to the time that you tell AWS Backup to recover.

You can opt in to continuous backups when you create a backup plan in AWS Backup using the AWS Backup console or the API.
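In the API, continuous backups are enabled per backup rule with the `EnableContinuousBackup` flag; a sketch with placeholder names (PITR retention cannot exceed 35 days):

```json
{
  "BackupPlanName": "saphana-pitr",
  "Rules": [
    {
      "RuleName": "continuous",
      "TargetBackupVaultName": "my-saphana-vault",
      "ScheduleExpression": "cron(0 5 ? * * *)",
      "EnableContinuousBackup": true,
      "Lifecycle": { "DeleteAfterDays": 35 }
    }
  ]
}
```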

**To enable continuous backups using the console**

1. Sign in to the AWS Management Console, and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup plans**, and then choose **Create Backup plan**.

1. Under **Backup rules**, choose **Add Backup rule**.

1. In the **Backup rule configuration** section, select **Enable continuous backups for supported resources**.

After you disable [PITR (point-in-time restore)](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html) for SAP HANA database backups, logs will continue to be sent to AWS Backup until the recovery point expires (status equals `EXPIRED`). You can change to an alternative log backup location in SAP HANA to stop the transmission of logs to AWS Backup.

A continuous recovery point with a status of `STOPPED` indicates that a continuous recovery point has been interrupted; that is, the logs transmitted from SAP HANA to AWS Backup that show the incremental changes to a database have a gap. The recovery points that occur within this gap have a status of `STOPPED`.

For issues you may encounter during restore jobs of continuous backups (recovery points), see the [ SAP HANA Restore troubleshooting](https://docs.aws.amazon.com/aws-backup/latest/devguide/saphana-restore.html#saphanarestoretroubleshooting) section of this guide.

## View SAP HANA database backups
<a name="saphanaviewbackup"></a>

**View the status of backup and restore jobs:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Jobs**.

1. Choose **Backup jobs**, **Restore jobs**, or **Copy jobs** to see the list of your jobs.

1. Search for and click on your job ID to view detailed job statuses.

**View all recovery points in a vault:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Backup vaults**.

1. Search for and click on a backup vault to view all the recovery points within the vault.

**View details of protected resources:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the navigation pane, choose **Protected resources**.

1. You may also filter by resource type to view all backups of that resource type.

## Use AWS CLI for SAP HANA databases with AWS Backup
<a name="saphanaapicli"></a>

Each action within the Backup console has a corresponding API call.

To programmatically configure and manage AWS Backup and its resources, use the [StartBackupJob](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartBackupJob.html) API call to back up an SAP HANA database on an EC2 instance.

Use `start-backup-job` as the CLI command.
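A sketch of that CLI call, assuming a placeholder SAP HANA database ARN (as returned by SSM for SAP) and the default backup service role:

```shell
# Start an on-demand backup of a registered SAP HANA database;
# the resource ARN and role ARN are illustrative placeholders.
aws backup start-backup-job \
    --backup-vault-name my-saphana-vault \
    --resource-arn arn:aws:ssm-sap:us-east-1:111122223333:HANA/HBX/DB/HBX \
    --iam-role-arn arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole
```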

## Troubleshooting backups of SAP HANA databases
<a name="saphanatroubleshooting"></a>

If you encounter errors during your workflow, consult the following example errors and suggested resolutions:

**Python prerequisites**
+ **Error: Zypper error related to Python version.** SSM for SAP and AWS Backup require Python 3.6, but SUSE 12 SP5 supports Python 3.4 by default.

  **Resolution:** Install multiple versions of Python on SUSE12 SP5 by doing the following steps:

  1. Run an `update-alternatives` command to create a symlink for Python 3 in `/usr/local/bin/` instead of directly using `/usr/bin/python3`. This command sets Python 3.4 as the default version: `sudo update-alternatives --install /usr/local/bin/python3 python3 /usr/bin/python3.4 5`

  1. Add Python 3.6 to the alternatives configuration by running the following command: `sudo update-alternatives --install /usr/local/bin/python3 python3 /usr/bin/python3.6 2`

  1. Change the alternative configuration to Python 3.6 by running the following command: `sudo update-alternatives --config python3`

     The following output should be displayed:

     ```
     There are 2 choices for the alternative python3 (providing /usr/local/bin/python3).
      Selection Path Priority Status
     * 0 /usr/bin/python3.4 5 auto mode
      1 /usr/bin/python3.4 5 manual mode
      2 /usr/bin/python3.6 2 manual mode
     Press enter to keep the current choice[*], or type selection number:
     ```

  1. Enter the number corresponding to Python 3.6.

  1. Check the Python version and confirm Python 3.6 is being used.

  1. (*Optional, but recommended*) Verify Zypper commands work as expected.

**AWS Systems Manager for SAP discovery and registration**
+ **Error: SSM for SAP failed to discover workload** due to blocked access to public endpoint for AWS Secrets Manager and SSM.

  **Resolution:** Test if endpoints are reachable from your SAP HANA database. If they cannot be reached, you can create Amazon VPC endpoints for AWS Secrets Manager and SSM for SAP.

  1. Test access to Secrets Manager from the Amazon EC2 host for the HANA DB by running the following command: `aws secretsmanager get-secret-value --secret-id hanaeccsbx_hbx_database_awsbkp`. If the command fails to return a value, the firewall is blocking access to the Secrets Manager service endpoint. The log will stop at the step "Retrieving secrets from Secrets Manager".

  1. Test connectivity to the SSM for SAP endpoint by running the command `aws ssm-sap list-registration`. If the command fails to return a value, the firewall is blocking access to the SSM for SAP endpoint.

     Example error: `Connection was closed before we received a valid response from endpoint URL: "https://ssm-sap.us-west-2.amazonaws.com/register-application"`.

  There are two options to proceed if the endpoints are not reachable:
  + Open firewall ports to allow access to the public service endpoints for Secrets Manager and SSM for SAP; or
  + Create VPC endpoints for Secrets Manager and SSM for SAP, then:
    + Ensure the Amazon VPC has DNS support and DNS hostnames enabled.
    + Ensure your VPC endpoint has private DNS names enabled.

  If the SSM for SAP discovery completes successfully, the log will show that the host was discovered.
+ **Error: AWS Backup and Backint connection fails due to blocked access to AWS Backup service public endpoints.** `aws-backint-agent.log` can show errors similar to `time="2024-01-03T11:39:15-08:00" level=error msg="Storage configuration validation failed: missing backup data plane Id"` or `level=fatal msg="Error performing backup missing backup data plane Id"`. Also, the AWS Backup console can show `Fatal Error: An internal error occurred.`

  **Resolution:** Open firewall ports to allow access to the public service endpoints (HTTPS). After this option is used, DNS resolves requests to AWS services through public IP addresses.
+ **Error: SSM for SAP registration fails due to the HANA password containing special characters.** Example errors include `Error connecting to database HBX/HBX when validating its credentials.` or `Discovery failed because credentials for HBX/SYSTEMDB either not provided or cannot be validated.`, even after a connection using `hdbsql` for `systemdb` and `tenantdb` was tested successfully from the HANA database Amazon EC2 instance.

  In the AWS Backup console, on the **Jobs** page, the backup job details can show a status of `FAILED` with the error `Miscellaneous: b'* 10: authentication failed SQLSTATE: 28000\n'`.

  **Resolution:** Ensure your password does not contain special characters.
+ **Error: `b'* 447: backup could not be completed: [110507] Backint exited with exit code 1 instead of 0. console output: time...`**

  **Resolution:** The AWS BackInt Agent for SAP HANA installation might not have completed successfully. Retry the process to deploy the [AWS Backint Agent](https://docs.aws.amazon.com/sap/latest/sap-hana/aws-backint-agent-sap-hana.html) and [Amazon EC2 Systems Manager Agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) on your SAP application server.
+ **Error: Console does not match log files after registration.**

  The discovery log shows failed registration when trying to connect to the HANA DB because the password contains special characters, even though the SSM for SAP Application Manager console displays successful registration. A successful-looking console status does not confirm that registration succeeded; if the console shows successful registration but the logs do not, backups will fail.

  **Confirm the registration status:**

  1. Log in to the [SSM console](https://console.aws.amazon.com/systems-manager).

  1. Select **Run Command** from the left side navigation.

  1. In the **Command history** text field, input `Instance ID:Equal:` with the value set to the instance you used for registration. This filters the command history.

  1. Use the command ID column to find commands with status `Failed`. Then, find the document name **AWSSystemsManagerSAP-Discovery**.

  1. In the AWS CLI, run the command `aws ssm-sap register-application status`. If the returned value shows `Error`, the registration was unsuccessful.

  **Resolution:** Ensure your HANA password does not contain special characters.
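If you take the VPC endpoint route described earlier for blocked Secrets Manager and SSM for SAP endpoints, interface endpoints can be created as in the following sketch; the VPC, subnet, and security group IDs are placeholders, and the `ssm-sap` service name should be verified for your Region:

```shell
# Interface endpoint for Secrets Manager (private DNS enabled so the
# default endpoint hostname resolves to the VPC endpoint).
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc123 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.secretsmanager \
    --subnet-ids subnet-0abc123 \
    --security-group-ids sg-0abc123 \
    --private-dns-enabled

# Interface endpoint for SSM for SAP.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc123 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ssm-sap \
    --subnet-ids subnet-0abc123 \
    --security-group-ids sg-0abc123 \
    --private-dns-enabled
```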

**Creating a backup of an SAP HANA database**
+ **Error: AWS Backup console displays the message "Fatal Error" when an on-demand backup for SystemDB or TenantDB is created.** This occurs because the public endpoint cannot be accessed, typically because a client-side firewall blocks access to that endpoint.

  `aws-backint-agent.log` can show errors such as `level=error msg="Storage configuration validation failed: missing backup data plane Id"` or `level=fatal msg="Error performing backup missing backup data plane Id."`

  **Resolution:** Open firewall access to the public endpoint.
+ **Error:** `Database cannot be backed up while it is stopped`.

  **Resolution:** Ensure the database to be backed up is active. Database data and logs can be backed up only while the database is online.
+ **Error:** `Getting backup metadata failed. Check the SSM document execution for more details.`

  **Resolution:** Ensure the database to be backed up is active. Database data and logs can be backed up only while the database is online.

**Monitoring backup logs**
+ **Error:** `Encountered an issue with log backups, please check SAP HANA for details.`

  **Resolution:** Check SAP HANA to ensure log backups are being sent to AWS Backup from SAP HANA.
+ **Error:** `One or more log backup attempts failed for recovery point.`

  **Resolution:** Check SAP HANA for details. Ensure log backups are being sent to AWS Backup from SAP HANA.
+ **Error:** `Unable to determine the status of log backups for recovery point.`

  **Resolution:** Check SAP HANA for details. Ensure log backups are being sent to AWS Backup from SAP HANA.
+ **Error:** `Log backups for recovery point %s were interrupted due to a restore operation on the database.`

  **Resolution:** Wait for the restore job to complete. The log backups should resume.

## Glossary of SAP HANA terms when using AWS Backup
<a name="saphanaglossary"></a>

**Data Backup Types:** SAP HANA supports two types of data backups: Full and INC (incremental). AWS Backup optimizes which type is used during each backup operation.

**Catalog Backups:** SAP HANA maintains its own manifest called a *catalog*. AWS Backup interacts with this catalog. Each new backup will create an entry in the catalog.

**Continuous Log Backup (Transaction Logs)**: For Point in Time Recovery (PITR) functions, SAP HANA tracks all transactions since the most recent backup. 

**System Copy:** A restore job in which the restore target database is different from the source database from which the recovery point was created.

**Destructive Restore:** A destructive restore is a type of restore job during which a restored database deletes or overwrites the source or existing database.

**FULL:** A full backup is a backup of a complete database.

**INC:** An incremental backup is a backup of all changes to an SAP HANA database since the previous backup.

## AWS Backup support of SAP HANA databases on EC2 instances release notes
<a name="saphanareleasenotes"></a>

Certain functionalities are not supported at this time:
+ Continuous backups (which use transaction logs) cannot be copied to other Regions or accounts. Snapshot (full) backups can be copied to supported Regions and accounts.
+ Backup Audit Manager and reporting are not currently supported.
+ [Supported services by AWS Region](backup-feature-availability.md#supported-services-by-region) contains the currently supported Regions for SAP HANA database backups on Amazon EC2 instances.

# Amazon S3 backups
<a name="s3-backups"></a>

## Overview
<a name="s3-backup-overview"></a>

AWS Backup supports centralized backup and restore of applications storing data in S3 alone or alongside other AWS services for database, storage, and compute. Many [features are available for S3 backups](backup-feature-availability.md#features-by-resource), including Backup Audit Manager.

You can use a single backup policy in AWS Backup to centrally automate the creation of backups of your application data. AWS Backup automatically organizes backups across different AWS services and third-party applications in one centralized, encrypted location (known as a [backup vault](https://docs.aws.amazon.com/aws-backup/latest/devguide/vaults.html)) so that you can manage backups of your entire application through a centralized experience. For S3, you can create continuous backups of your application data and restore them to a point in time with a single click.

## Backup tiering
<a name="s3-backup-tiering"></a>

Amazon S3 is the only resource that supports backup tiering to a lower-cost warm storage tier. For more information, see [Backup tiering](backup-tiering.md).

## Prerequisites for S3 backups
<a name="s3-backup-prerequisites"></a>

### Permissions and policies for Amazon S3 backup and restore
<a name="one-time-permissions-setup"></a>

To back up, copy, and restore S3 resources, you must have the correct policies attached to your role. To add these policies, see [AWS managed policies](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#aws-managed-policies). Add [AWSBackupServiceRolePolicyForS3Backup](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForS3Backup.html) and [AWSBackupServiceRolePolicyForS3Restore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSBackupServiceRolePolicyForS3Restore.html) to the roles that you intend to use to back up and restore S3 buckets.

If you do not have sufficient permissions, ask the administrator of your organization's admin account to add the policies to the intended roles.

For more information, please see [Managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html) in the *IAM User Guide*.

### Backups and versioning
<a name="s3-backup-versioning"></a>

You must [ enable S3 Versioning on your S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/manage-versioning-examples.html) to use AWS Backup for Amazon S3.

We recommend that you [ set a lifecycle expiration period](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html) for your S3 versions.
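As a sketch of both prerequisites, versioning can be enabled and a version expiration period set with the AWS CLI. The bucket name and the 30-day period below are illustrative, not prescriptive:

```shell
# Enable S3 Versioning on the bucket (required for AWS Backup for S3).
aws s3api put-bucket-versioning \
    --bucket amzn-s3-demo-bucket \
    --versioning-configuration Status=Enabled

# Expire noncurrent versions after 30 days and remove expired delete markers.
# Note: this call replaces the bucket's existing lifecycle configuration.
aws s3api put-bucket-lifecycle-configuration \
    --bucket amzn-s3-demo-bucket \
    --lifecycle-configuration '{
        "Rules": [{
            "ID": "ExpireNoncurrentVersions",
            "Status": "Enabled",
            "Filter": {},
            "NoncurrentVersionExpiration": { "NoncurrentDays": 30 },
            "Expiration": { "ExpiredObjectDeleteMarker": true }
        }]
    }'
```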

All objects (including all versions) in the bucket when the backup begins will be stored in the recovery point (completed backup). These can include the current version of each object, older versions, delete markers, and objects pending lifecycle actions.

The storage cost is calculated for all objects in the backup, including objects scheduled for deletion (objects that will expire). You can use the AWS CLI or scripts to remove objects scheduled for expiration so that they are not included in the backup.

To learn more about setting up S3 lifecycle policies, see [Expiring objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-expire-general-considerations.html) in the *Amazon S3 User Guide*.

### Considerations for Amazon S3 backups
<a name="S3-backup-considerations"></a>

Consider the following points when you back up S3 resources:
+ **Focused object metadata support** – AWS Backup supports the following metadata: tags, access control lists (ACLs), user-defined metadata, original creation date, and version ID. You can restore all backed-up data and metadata except the original creation date, version ID, storage class, and ETags.
+ When you restore an S3 object, AWS Backup applies a checksum value, even if the original object did not use the checksum feature.
+  An S3 object key name can be made up of most UTF-8 encodable strings. The following Unicode characters are allowed: `#x9`, `#xA`, `#xD`, `#x20` to `#xD7FF`, `#xE000` to `#xFFFD`, and `#x10000` to `#x10FFFF`.

  Object key names that include characters not in this list might be excluded from backups.
+ **Cold storage transition** – Use AWS Backup lifecycle management policy to define the timeline for backup expiration. Cold storage transition of S3 backups is not supported.
+ For periodic backups, AWS Backup makes a best effort to track all changes to your object metadata. However, if you update a tag or ACL multiple times within 1 minute, AWS Backup might not capture all intermediate states.
+ AWS Backup does not offer support for backups of [ SSE-C-encrypted](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html) objects. AWS Backup also does not support backups of bucket configurations, including bucket policies, settings, names, or access points.
+ AWS Backup does not support backups of S3 on AWS Outposts.
+ **CloudTrail logging** – If you log data read events, you must have CloudTrail logs delivered to a different target bucket. If you save CloudTrail logs in the bucket that they log, there is an infinite loop, which can cause unexpected charges.

  For more information, see [Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) in the *CloudTrail User Guide*.
+ **Server access logging** – If you enable server access logging, you must have the logs delivered to a different target bucket. If you save these logs in the bucket that they log, there is an infinite loop. For more information, see [Enabling Amazon S3 server access logging](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html).

## Supported bucket types, quantities, and object sizes
<a name="bucket-types-and-quotas"></a>

AWS Backup supports backup and restore operations for S3 objects of any size, up to the maximum object size supported by Amazon S3.

AWS Backup supports backup and restore of general purpose S3 buckets. Directory buckets are not supported at this time.

The upper limit on the quantity of a resource, such as a bucket, allowed in an AWS account is known as a quota and depends on the service. [Amazon S3 quotas](https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html) are different from [AWS Backup quotas](aws-backup-limits.md).

In each AWS account, you can create backups for up to 100 buckets by default. You can request a quota increase of up to 1,000 buckets.

Accounts with more than 1,000 buckets are subject to quota limits; requests that exceed the quota can result in failed jobs. As a best practice, limit an account to 1,000 buckets.
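To see how close an account is to the limit, one option is to count its buckets with the AWS CLI:

```shell
# Count the buckets in the current account, to compare against the
# AWS Backup per-account bucket quota.
aws s3api list-buckets --query 'length(Buckets)'
```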

## Supported S3 Storage Classes
<a name="supported-s3-classes"></a>



AWS Backup allows you to back up your S3 data stored in the following [S3 Storage Classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html):
+ S3 Standard
+ S3 Standard - Infrequent Access (IA)
+ S3 One Zone-IA
+ S3 Glacier Instant Retrieval
+ S3 Intelligent-Tiering (S3 INT)

Backing up an object in the [S3 Intelligent-Tiering (INT)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-dynamic-data-access) storage class accesses that object. This access triggers S3 Intelligent-Tiering to automatically move the object to Frequent Access.

Backups that access objects in Infrequent Access tiers, including the S3 Standard - Infrequent Access (IA) and S3 One Zone-IA classes, move those objects under the S3 storage charge for Frequent Access (this applies to the Infrequent Access and Archive Instant Access tiers).

The archived storage classes S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive are not supported.

For more information about storage pricing for Amazon S3, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

## S3 backup types
<a name="s3-backup-types"></a>

With AWS Backup, you can create the following types of backups of your S3 buckets, including object data, tags, Access Control Lists (ACLs), and user-defined metadata:
+ **Continuous backups** allow you to restore to any point in time within the last 35 days. Continuous backups for an S3 bucket should only be configured in one backup plan.

  See [Point-in-Time Recovery](https://docs.aws.amazon.com/aws-backup/latest/devguide/point-in-time-recovery.html) for a list of supported services and instructions on how to use AWS Backup to take continuous backups.
+ **Periodic backups** use snapshots of your data to allow you to retain data for your specified duration up to 99 years. You can schedule periodic backups in frequencies such as 1 hour, 12 hours, 1 day, 1 week, or 1 month. AWS Backup takes periodic backups during the backup window you define in your [backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/about-backup-plans.html).

  See [Creating a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) to understand how AWS Backup applies your backup plan to your resources.
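A continuous-backup rule can be sketched with the AWS CLI as follows. The plan, rule, and vault names are hypothetical placeholders; note that a continuous backup's retention (`DeleteAfterDays`) cannot exceed 35 days:

```shell
# Create a backup plan with continuous backups enabled for the rule.
# Names are placeholders; retention for continuous backups is capped at 35 days.
aws backup create-backup-plan \
    --backup-plan '{
        "BackupPlanName": "S3ContinuousPlan",
        "Rules": [{
            "RuleName": "ContinuousRule",
            "TargetBackupVaultName": "MyBackupVault",
            "ScheduleExpression": "cron(0 2 ? * * *)",
            "EnableContinuousBackup": true,
            "Lifecycle": { "DeleteAfterDays": 35 }
        }]
    }'
```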

Cross-account and cross-Region copies are available for S3 backups, but copies of continuous backups do not have point-in-time restore capabilities.

Continuous and periodic backups of S3 buckets must both reside in the same backup vault.

AWS Backup for S3 relies on receiving S3 events through Amazon EventBridge. If this setting is disabled in S3 bucket notification settings, continuous backups will stop for those buckets with the setting turned off. For more information, see [Using EventBridge](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventBridge.html).
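You can verify whether EventBridge delivery is enabled for a bucket with the AWS CLI; the bucket name below is a placeholder:

```shell
# If EventBridge delivery is enabled, the output includes an
# "EventBridgeConfiguration" key; empty output means it is turned off.
aws s3api get-bucket-notification-configuration \
    --bucket amzn-s3-demo-bucket

# To enable EventBridge delivery for the bucket. Caution: this call
# replaces the bucket's existing notification configuration.
aws s3api put-bucket-notification-configuration \
    --bucket amzn-s3-demo-bucket \
    --notification-configuration '{ "EventBridgeConfiguration": {} }'
```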

For both backup types, the first backup is a full backup, while subsequent backups are incremental at the object level.

## Compare S3 backup types
<a name="compare-s3-backup-types"></a>

Your backup strategy for S3 resources can involve just continuous backups, just periodic (snapshot) backups, or a combination of both. The information below can help you choose what works best for your organization:

Continuous backups only:
+ After the first full backup of your existing data is complete, changes in your S3 bucket data are tracked as they occur.
+ The tracked changes allow you to use PITR (point-in-time restore) for the retention period of the continuous backup. To perform a restore job, you choose the point in time to which you wish to restore.
+ The retention period of each continuous backup has a maximum of 35 days.
+ For backup plans that you create through the CLI, advanced backup settings for Amazon S3 (which include the option to include tags and ACLs in the backup) are turned on by default. You can exclude these in the backup options. See [Advanced Amazon S3 backup settings](#s3-advanced-backup-settings) for an example of the syntax.

Periodic (snapshot) backups only, scheduled or on-demand:
+ AWS Backup scans the entire S3 bucket, retrieves each object’s ACL and tags, and initiates a `HEAD` request for every object that was in the prior snapshot but was not found in the snapshot being created.
+ The backup is point-in-time consistent. 
+ The recorded backup date and time is the time at which AWS Backup completes the traversal of the bucket, not the time at which the backup job was created.
+ The first backup of a bucket is a full backup. Each subsequent backup is incremental, representing the change in data since the last snapshot.
+ The snapshot made by the periodic backup can have a retention period of up to 99 years.

Continuous backups combined with periodic/snapshot backups:
+ After the first full backup of your existing data (each bucket) is complete, changes in your bucket are tracked as they occur.
+ You can perform a point-in-time restore from a continuous recovery point.
+ Snapshots are point-in-time consistent.
+ Snapshots are taken directly from the continuous recovery point, which eliminates the need to rescan the bucket and results in faster backups.
+ Snapshots and continuous recovery points share data lineage; storage of data between snapshot and continuous recovery points is not duplicated.
+ When advanced Amazon S3 backup settings, such as including tags and ACLs in a backup, are changed for a `continuous` recovery point, AWS Backup stops that recovery point and creates a new one with the updated setting(s).

When a continuous backup job is running for an S3 bucket, you can still initiate periodic (snapshot) backup jobs. However, the following behavior applies:
+ Snapshot backup jobs will use the same backup options (ACLs and object tags settings) as the existing continuous backup.
+ If you specify different backup options for a snapshot job than what the continuous backup uses, the snapshot job will still use the continuous backup's settings and complete with a "Completed with issues" status.

  When this occurs, you'll see the following status message: `"Periodic/snapshot backup for bucket <bucket name> has different backup options than the continuous backup. When using continuous backups along with snapshot backups for the same bucket, the snapshot will use the same settings for backing up ACLs and Object tags as the continuous backup."`

The following table shows when a full scan is required when changing BackupOptions for existing continuous recovery points:


**Full scan behavior when BackupOptions is modified**  

| Previous BackupOptions | New BackupOptions | Full scan | 
| --- | --- | --- | 
| backupACLs and backupObjectTags enabled | backupACLs and backupObjectTags disabled | No | 
| backupACLs and backupObjectTags enabled | backupACLs enabled; backupObjectTags disabled | No | 
| backupACLs and backupObjectTags enabled | backupACLs disabled; backupObjectTags enabled | No | 
| backupACLs and backupObjectTags disabled | backupACLs and backupObjectTags enabled | Yes | 
| backupACLs enabled; backupObjectTags disabled | backupACLs and backupObjectTags enabled | Yes | 
| backupACLs disabled; backupObjectTags enabled | backupACLs and backupObjectTags enabled | Yes | 

## S3 backup completion windows
<a name="s3-completion-windows"></a>

The table below shows sample buckets of various sizes to help guide your estimates of the time to complete the initial full backup of an S3 bucket. Backup times vary with the size, content, configuration, and settings of each bucket.


| Bucket size | Number of objects | Estimated time to complete initial backup | 
| --- | --- | --- | 
| 425 GB (gigabytes) | 135 million | 31 hours | 
| 800 TB (terabytes) | 670 million | 38 hours | 
| 6 PB (petabytes) | 5 billion | 100 hours | 
| 370 TB (terabytes) | 7.5 billion | 180 hours | 

## Best practices and cost considerations for S3 backups
<a name="bestpractices-costoptimization"></a>

### Large bucket best practices
<a name="bucket-size-best-practices"></a>

For buckets with more than 300 million objects:
+ The backup rate can reach up to 17,000 objects per second during the initial full backup of the bucket (incremental backups proceed at a different speed). Buckets containing fewer than 300 million objects back up at a rate close to 1,000 objects per second.
+ Continuous backups are recommended.
+ If backup lifecycle is planned for more than 35 days, you can also enable snapshot backups for the bucket in the same vault in which your continuous backups are stored.

### Backup strategy optimization
<a name="backup-strategy-optimization"></a>
+ For accounts that create backups daily or more frequently, continuous backups can provide cost benefits if the data within the backups has minimal changes between backups.
+ Larger buckets that do not change frequently can benefit from continuous backups, which can lower costs because whole-bucket scans, along with multiple requests per object, don't need to be performed on pre-existing objects (objects that are unchanged from the previous backup).
+ Buckets that contain more than 100 million objects and that have a small delete rate compared to the overall backup size might realize cost benefits with a backup plan that combines a continuous backup with a retention period of 2 days and snapshots with a longer retention.
+ Periodic (snapshot) backup time aligns with the start of the backup process when a bucket scan is not needed. Scans are not needed in a bucket that has both continuous backups and snapshots, because in those cases snapshots are taken from a continuous recovery point.

### Object lifecycle and delete markers
<a name="object-lifecycle-considerations"></a>
+ S3 lifecycle policies have an optional feature called **Delete expired object delete markers**. When this feature is turned off, expired delete markers, sometimes in the millions, accumulate with no cleanup plan. When buckets without this feature are backed up, two issues impact time and cost:
  + Delete markers are backed up, just like objects. Backup time and restore time can be affected depending on the ratio of objects to delete markers.
  + Each object and marker that is backed up incurs a minimum charge. Each delete marker is charged the same as a 128 KiB object.
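To gauge how many delete markers a bucket carries before backing it up, one option is a quick CLI check; the bucket name is a placeholder, and for very large buckets a paginated script or S3 Inventory is more practical:

```shell
# Compare the number of delete markers to object versions in a bucket.
# The "|| `[]`" fallback avoids errors when a key is absent from the response.
aws s3api list-object-versions \
    --bucket amzn-s3-demo-bucket \
    --query '{deleteMarkers: length(DeleteMarkers || `[]`), versions: length(Versions || `[]`)}'
```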

### Storage class cost considerations
<a name="storage-class-considerations"></a>
+ For each object in the S3 Glacier Instant Retrieval (S3-GIR) storage class, AWS Backup performs multiple calls, which results in retrieval charges when a backup is conducted.

  Similar retrieval costs apply to buckets with objects in S3-IA and S3 One Zone-IA storage classes.

### AWS service cost optimization
<a name="aws-service-cost-optimization"></a>
+ Using features of AWS KMS, CloudTrail, Amazon CloudWatch, and Amazon GuardDuty as part of your backup strategy can result in additional costs beyond S3 bucket data storage. See the following for information on adjusting these features:
  + [Reducing the cost of SSE-KMS with Amazon S3 Bucket keys](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-key.html) in the *Amazon S3 User Guide*.
  + You can reduce CloudTrail costs by excluding AWS KMS events and by disabling S3 data events:
    + **Exclude AWS KMS events:** In the *CloudTrail User Guide*, [Creating a trail in the console (basic event selectors)](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html#creating-a-trail-in-the-console) describes the option to exclude AWS KMS events to filter these events out of your trail (the default setting includes all KMS events):
      + The option to log or exclude KMS events is available only if you log management events on your trail. If you choose not to log management events, KMS events are not logged, and you cannot change KMS event logging settings.
      + AWS KMS actions such as `Encrypt`, `Decrypt`, and `GenerateDataKey` typically generate a large volume (more than 99%) of events. These actions are now logged as **Read** events. Low-volume, relevant KMS actions such as `Disable`, `Delete`, and `ScheduleKey` (which typically account for less than 0.5% of KMS event volume) are logged as **Write** events.
      + To exclude high-volume events like `Encrypt`, `Decrypt`, and `GenerateDataKey`, but still log relevant events such as `Disable`, `Delete`, and `ScheduleKey`, choose to log **Write** management events, and clear the check box for **Exclude AWS KMS events**.
    + **Disable S3 data events:** By default, trails and event data stores do not log data events. Disable S3 data events before your initial backup to reduce costs.
  + To reduce CloudWatch costs, you can stop sending CloudTrail events to CloudWatch Logs when you update a trail to disable CloudWatch Logs settings.
  + [Estimating GuardDuty usage cost](https://docs.aws.amazon.com/guardduty/latest/ug/monitoring_costs.html) in the *Amazon GuardDuty User Guide*.
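The CloudTrail adjustments above can be sketched with one CLI call on an existing trail; the trail name is a placeholder:

```shell
# Log only write management events, exclude AWS KMS events, and log no
# S3 data events (empty DataResources). "my-trail" is a placeholder.
aws cloudtrail put-event-selectors \
    --trail-name my-trail \
    --event-selectors '[{
        "ReadWriteType": "WriteOnly",
        "IncludeManagementEvents": true,
        "ExcludeManagementEventSources": ["kms.amazonaws.com"],
        "DataResources": []
    }]'
```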

## S3 backup messages
<a name="s3-backup-messages"></a>

When a backup job completes or fails, you may see the following message. The following table can help you determine the possible cause of the status message.


| Scenario | Job Status | Message | Example | 
| --- | --- | --- | --- | 
| All objects failed to be backed up for a snapshot or initial continuous backup | `FAILED` | "No objects were backed up from the source bucket **BucketName**. To get notified of these failures, enable SNS event notifications." | Backup role does not have the permission to get object version ACL. Consequently, none of the objects are backed up. | 
| All objects failed to be backed up for a subsequent continuous backup. | `COMPLETED` | "No objects were backed up from the source bucket **BucketName**. To get notified of these failures, enable SNS event notifications." |  | 

## Advanced Amazon S3 backup settings
<a name="s3-advanced-backup-settings"></a>

AWS Backup provides advanced settings to control what metadata is included in your Amazon S3 backups. You can optionally exclude Access Control Lists (ACLs) and object tags, which can be helpful if your objects are set up without ACLs and object tags. In other words, if you do not use ACLs or object tags for your S3 resources, you may find it beneficial to exclude them from your backups.

### Configuring backup of ACLs and object tags
<a name="s3-backup-configuration"></a>

You can configure ACL and object tag backup options either through the AWS Backup console or through the AWS CLI.

------
#### [ Console ]

**Configure ACL and tag options using the console**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup/](https://console.aws.amazon.com/backup/home).

1. In the navigation pane, choose **Backup plans**, then choose **Create backup plan**.

1. In your backup plan settings, expand **Advanced backup settings**.

1. For Amazon S3 resources, configure the following options:
   + **Back up ACLs**: Select the check box to include ACLs in your backup, or leave it unselected to exclude them.
   + **Back up object tags**: Select the check box to include object tags in your backup, or leave it unselected to exclude them.

1. Complete the backup plan configuration and choose **Create plan**.

------
#### [ AWS CLI ]

You can selectively include or exclude Access Control Lists (ACLs) and object tags from your Amazon S3 backups using the following backup options:

BackupACLs  
Controls whether object ACLs are included in the backup. Set to `disabled` to exclude ACLs. Default: `enabled`

BackupObjectTags  
Controls whether object tags are included in the backup. Set to `disabled` to exclude tags. Default: `enabled`

**Configure ACL and tag options using the AWS CLI**

To configure ACL and object tag backup options using the AWS CLI, use the `update-backup-plan` command with advanced backup settings:

```
aws backup update-backup-plan \
    --backup-plan-id "your-backup-plan-id" \
    --backup-plan '{
        "BackupPlanName": "MyS3BackupPlan",
        "Rules": [{
            "RuleName": "MyS3BackupRule",
            "TargetBackupVaultName": "MyBackupVault",
            "ScheduleExpression": "cron(0 2 ? * * *)",
            "Lifecycle": {
                "DeleteAfterDays": 30
            },
            "RecoveryPointTags": {},
            "CopyActions": [],
            "EnableContinuousBackup": false
        }],
        "AdvancedBackupSettings": [{
            "ResourceType": "S3",
            "BackupOptions": {
                "BackupACLs": "disabled",
                "BackupObjectTags": "disabled"
            }
        }]
    }'
```

The `BackupOptions` parameters control metadata inclusion:
+ `"BackupACLs": "disabled"` - Excludes ACLs from backups
+ `"BackupObjectTags": "disabled"` - Excludes object tags from backups
+ `"BackupACLs": "enabled"` - Includes ACLs in backups (default)
+ `"BackupObjectTags": "enabled"` - Includes object tags in backups (default)

------

# Amazon Timestream backups
<a name="timestream-backup"></a>

Amazon Timestream is a scalable time series database that allows storage and analysis of up to trillions of time series data points daily. Timestream is optimized for cost and time savings by keeping recent data in memory and by storing historical data in a cost-optimized storage tier in accordance with your policies.

A Timestream database contains tables. These tables contain records, and each record is a single data point in a time series. A time series is a sequence of records recorded over a time interval, such as a stock price, the memory usage of an Amazon EC2 instance, or a temperature reading. AWS Backup can centrally back up and restore Timestream tables. You can copy these table backups to other accounts and to several other AWS Regions within the same organization.

Timestream does not currently offer native backup and restore services, so using AWS Backup to create secure copies of your Timestream tables can add an extra layer of security and resilience to your resources.

## Back up Timestream tables
<a name="backuptimestream"></a>

You can back up Timestream tables either through the AWS Backup console or using the AWS CLI.

There are two ways to use the AWS Backup console to back up a Timestream table: on demand or as part of a backup plan.

### Create on-demand Timestream backups
<a name="ondemandtimestreambackups"></a>

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Using the navigation pane, choose **Protected resources**, and then **Create on-demand backup**.

1. On the **Create on-demand backup** page, choose Amazon Timestream.

1. Choose **Resource type** Timestream, and then choose the table name you want to back up.

1. In Backup window, ensure that **Create backup now** is selected. This initiates a backup immediately and enables you to see your table sooner on the **Protected resources** page.

1. In the drop-down menu **Transition to cold storage**, you can set your transition settings.

1. In **Retention Period**, you can choose how long to retain your backup.

1. Choose an existing backup vault or create a new backup vault. Choosing **Create new backup vault** opens a new page to create a vault and then returns you to the **Create on-demand backup page** when you are finished.

1. Under **IAM role**, choose **Default role** (if the AWS Backup default role is not present in your account, it will be created for you with the correct permissions).

1. *Optional:* If you want to assign one or more tags to your on-demand backup, enter a **key** and optional **value**, and choose **Add tag**.

1. Choose **Create on-demand backup**. This takes you to the **Jobs** page, where you will see a list of jobs.

1. Choose the **Backup job ID** for the table to see the details of that job. The job will display a status of `Completed`, `In Progress`, or `Failed`. You can choose the refresh button to update the displayed status.

### Create scheduled Timestream backups in a backup plan
<a name="scheduledtimestreambackups"></a>

Your scheduled backups can include Timestream tables if they are a protected resource. To opt into protecting Amazon Timestream tables:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Using the navigation pane, choose **Protected resources**.

1. Toggle Amazon Timestream to **On**.

1. See [ Assigning resources to the console](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html#assigning-resources-console) to include Timestream tables in an existing or new plan.
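The console opt-in toggle above can also be set per Region with the AWS CLI, as a sketch:

```shell
# Opt the current Region in to Timestream protection, then verify the setting.
aws backup update-region-settings \
    --resource-type-opt-in-preference Timestream=true

aws backup get-region-settings \
    --query 'ResourceTypeOptInPreference.Timestream'
```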

Under **Manage Backup plans**, you can choose to [create a backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) and include Timestream tables, or you can [update an existing one](https://docs.aws.amazon.com/aws-backup/latest/devguide/updating-a-backup-plan.html) to include Timestream tables. When adding the resource type *Timestream*, you can choose to add **All Timestream tables**, or check the boxes next to the tables you wish to add under **Select specific resource types**.

The first backup made of Timestream tables will be a full backup. Subsequent backups will be incremental backups.

After you’ve created or modified your backup plan, navigate to **Backup plans** in the left navigation. The backup plan you specified should display your tables under **Resource assignments**.

### Backing up programmatically
<a name="timestreambackupapi"></a>

You can use the operation name `start-backup-job`. Include the following parameters:

```
aws backup start-backup-job \
--backup-vault-name backup-vault-name \
--resource-arn arn:aws:timestream:region:account:database/database-name/table/table-name \
--iam-role-arn arn:aws:iam::account:role/role-name \
--region AWS Region \
--endpoint-url URL
```
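The `start-backup-job` command returns a `BackupJobId`, which you can use to monitor the job; the job ID below is a placeholder:

```shell
# Poll the status of a backup job started with start-backup-job.
aws backup describe-backup-job \
    --backup-job-id "your-backup-job-id" \
    --query 'State'
```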

## View Timestream table backups
<a name="viewtimestreambackups"></a>

To view and modify your Timestream table backups within the console:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Choose **Backup vaults**. Then, choose the backup vault name that contains your Timestream tables.

1. The backup vault will display a summary and a list of backups.

   1. You can choose the link in the **Recovery point ID** column, or

   1. You can select the check box to the left of the recovery point ID and choose **Actions** to delete recovery points that are no longer needed.

## Restore a Timestream table
<a name="w2aac17c19c41c13"></a>

See how to [restore a Timestream table](https://docs.aws.amazon.com/aws-backup/latest/devguide/timestream-restore.html).

# Virtual machine backups
<a name="vm-backups"></a>

AWS Backup supports centralized and automated data protection for on-premises VMware virtual machines (VMs), along with VMs in the VMware Cloud™ (VMC) on AWS and VMC on AWS Outposts. You can back up your on-premises and VMC virtual machines to AWS Backup, and then restore from AWS Backup to on-premises VMs, VMs in the VMC, or the VMC on AWS Outposts.

AWS Backup also provides you with fully-managed, AWS-native VM backup management capabilities, such as VM discovery, backup scheduling, retention management, a low-cost storage tier, cross-Region and cross-account copy, support for AWS Backup Vault Lock and AWS Backup Audit Manager, encryption that is independent from source data, and backup access policies. For a full list of capabilities and details, see the [Feature availability by resource](backup-feature-availability.md#features-by-resource) table.

You can use AWS Backup to protect your virtual machines on [VMware Cloud™ on AWS Outposts](https://aws.amazon.com/vmware/aws-services/). AWS Backup stores your VM backups in the AWS Region to which your VMware Cloud™ on AWS Outposts is connected. You can use AWS Backup to protect these VMs when you’re using VMware Cloud™ on AWS Outposts to meet low-latency and local data-processing needs for your application data. Based on your data residency requirements, you can choose AWS Backup to store backups of your application data in the parent AWS Region to which your AWS Outposts is connected.

## Supported VMs
<a name="supported-vms"></a>

AWS Backup can back up and restore virtual machines managed by a VMware vCenter.

**Currently supported:**
+ vSphere 8, 7.0, and 6.7
+ Virtual disk sizes that are multiples of 1 KiB
+ NFS, VMFS, and VSAN datastores on premises and in VMC on AWS
+ SCSI Hot-Add and Network Block Device Secure Sockets Layer (NBDSSL) transport modes for copying data from source VMs to AWS for on-premises VMware
+ Hot-Add mode to protect VMs on VMware Cloud on AWS

**Not currently supported:**
+ RDM (raw disk mapping) disks or NVMe controllers and their disks
+ Independent-persistent and independent-non persistent disk modes

## Backup consistency
<a name="backup-consistency"></a>

AWS Backup, by default, captures application-consistent backups of VMs using the VMware Tools quiescence setting on the VM. Your backups are application consistent if your applications are compatible with VMware Tools. If the quiescence capability is not available, AWS Backup captures crash-consistent backups. Validate that your backups meet your organization’s needs by testing your restores.

## Backup gateway
<a name="backup-gateway"></a>

Backup gateway is downloadable AWS Backup software that you deploy to your VMware infrastructure to connect your VMware VMs to AWS Backup. The gateway connects to your VM management server to discover your VMs, encrypts data, and efficiently transfers the data to AWS Backup. The following diagram illustrates how Backup gateway connects to your VMs:

![\[A backup gateway is an OVF template that connects your VMware environment to AWS Backup.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/Horizon.png)


To download the Backup gateway software, follow the procedure for [Working with gateways](working-with-gateways.md).

### Download VM software
<a name="download-vm-software"></a>

Backup gateway is distributed as an OVF (Open Virtualization Format) template that you deploy to your VMware infrastructure. The gateway software connects your VMware VMs to AWS Backup by discovering VMs, encrypting data, and efficiently transferring data to AWS Backup.

To obtain the OVF template, use the AWS Backup console:

1. Sign in to the AWS Management Console and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation pane, under **External resources**, choose **Gateways**.

1. Choose **Create gateway**.

1. In the **Set up gateway** section, download the OVF template and deploy it to your VMware environment.

For information on VPC (Virtual Private Cloud) endpoints, see [AWS Backup and AWS PrivateLink connectivity](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-network.html#backup-privatelink).

Backup gateway comes with its own API which is separately maintained from the AWS Backup API. To view a list of Backup gateway API actions, see [Backup gateway actions](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_Operations_AWS_Backup_Gateway.html). To view a list of Backup gateway API data types, see [Backup gateway data types](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_Types_AWS_Backup_Gateway.html).

## Endpoints
<a name="backup-gateway-endpoints"></a>

Existing users who currently use a public endpoint and who wish to switch to a VPC (Virtual Private Cloud) endpoint can [ create a new gateway with a VPC endpoint](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-gateways.html#create-gateway) using [AWS PrivateLink](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-network.html#backup-privatelink), associate the existing hypervisor to the gateway, and then [ delete the gateway](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-gateways.html#edit-gateway) containing the public endpoint.

# Configure your infrastructure to use Backup gateway
<a name="configure-infrastructure-bgw"></a>

Backup gateway requires the following network, firewall, and hardware configurations to back up and restore your virtual machines.

## Network configuration
<a name="bgw-network-configuration"></a>

Backup gateway requires certain ports to be allowed for its operation. Allow the following ports:

1. **TCP 443 Outbound**
   + Source: Backup gateway
   + Destination: AWS
   + Use: Allows Backup gateway to communicate with AWS.

1. **TCP 80 Inbound**
   + Source: The host you use to connect to the AWS Management Console
   + Destination: Backup gateway
   + Use: By local systems to obtain the Backup gateway activation key. Port 80 is only used during activation of Backup gateway. AWS Backup does not require port 80 to be publicly accessible. The required level of access to port 80 depends on your network configuration. If you activate your gateway from the AWS Management Console, the host from which you connect to the console must have access to your gateway's port 80.

1. **UDP 53 Outbound**
   + Source: Backup gateway
   + Destination: Domain Name Service (DNS) server
   + Use: Allows Backup gateway to communicate with the DNS.

1. **TCP 22 Outbound**
   + Source: Backup gateway
   + Destination: Support
   + Use: Allows Support to access your gateway to help you with issues. You don't need to open this port for the normal operation of your gateway, but you must open it for troubleshooting.

1. **UDP 123 Outbound**
   + Source: NTP client
   + Destination: NTP server
   + Use: Used by local systems to synchronize virtual machine time to the host time.

1. **TCP 443 Outbound**
   + Source: Backup gateway
   + Destination: VMware vCenter
   + Use: Allows Backup gateway to communicate with VMware vCenter.

1. **TCP 443 Outbound**
   + Source: Backup gateway
   + Destination: ESXi hosts
   + Use: Allows Backup gateway to communicate with ESXi hosts.

1. **TCP 902 Outbound**
   + Source: Backup gateway
   + Destination: VMware ESXi hosts
   + Use: Used for data transfer via Backup gateway.

The above ports are necessary for Backup gateway. See [Create a VPC endpoint](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-network.html#backup-privatelink) for more information on how to configure Amazon VPC endpoints for AWS Backup.

## Firewall configuration
<a name="bgw-firewall-configuration"></a>

Backup gateway requires access to the following service endpoints to communicate with Amazon Web Services. If you use a firewall or router to filter or limit network traffic, you must configure your firewall and router to allow these service endpoints for outbound communication to AWS. Use of an HTTP proxy between Backup gateway and the service endpoints is not supported.

**Endpoint types**

**Standard endpoints**: Support IPv4 traffic between your gateway appliance and AWS.

The following service endpoints are required by all gateways for control path (`anon-cp`, `client-cp`, `proxy-app`) and data path (`dp-1`) operations.

```
anon-cp.backup-gateway.region.amazonaws.com:443
client-cp.backup-gateway.region.amazonaws.com:443
proxy-app.backup-gateway.region.amazonaws.com:443
dp-1.backup-gateway.region.amazonaws.com:443
```

**Dual-stack endpoints**: Support both IPv4 and IPv6 traffic between your gateway appliance and AWS.

The following dual-stack service endpoints are required by all gateways for control path (activation, controlplane, proxy) and data path (dataplane) operations.

```
activation-backup-gateway.region.api.aws:443  
controlplane-backup-gateway.region.api.aws:443  
proxy-backup-gateway.region.api.aws:443  
dataplane-backup-gateway.region.api.aws:443
```
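When building firewall allow-lists for several Regions, it can help to generate the per-Region hostnames by substituting your Region code into the dual-stack endpoint patterns above. This is a minimal sketch; the patterns are copied from this section, and the Region codes are examples:

```python
# Dual-stack endpoint patterns from this section; {region} is the Region code.
DUAL_STACK_PATTERNS = [
    "activation-backup-gateway.{region}.api.aws:443",
    "controlplane-backup-gateway.{region}.api.aws:443",
    "proxy-backup-gateway.{region}.api.aws:443",
    "dataplane-backup-gateway.{region}.api.aws:443",
]

def allow_list(region: str) -> list[str]:
    """Return the dual-stack endpoints to allow outbound for one Region."""
    return [pattern.format(region=region) for pattern in DUAL_STACK_PATTERNS]

for endpoint in allow_list("us-east-1"):
    print(endpoint)
```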

## Configure your gateway for multiple NICs in VMware
<a name="bgw-multinic"></a>

You can maintain separate networks for your internal and external traffic by attaching multiple virtual network interface connections (NICs) to your gateway and then directing internal traffic (gateway to hypervisor) and external traffic (gateway to AWS) separately.

By default, virtual machines connected to AWS Backup gateway have one network adapter (`eth0`). This network includes the hypervisor, the virtual machines, and the network gateway (Backup gateway), which communicates with the broader internet.

Here is an example of a setup with multiple virtual network interfaces:

```
            eth0:
            - IP: 10.0.3.83
            - routes: 10.0.3.0/24
            
            eth1:
            - IP: 10.0.0.241
            - routes: 10.0.0.0/24
            - default gateway: 10.0.0.1
```
+ In this example, to connect to a hypervisor with IP `10.0.3.123`, the gateway uses `eth0`, because that IP falls within the `10.0.3.0/24` block.
+ To connect to a hypervisor with IP `10.0.0.234`, the gateway uses `eth1`.
+ To connect to an IP outside the local networks (for example, `34.193.121.211`), the gateway falls back to the default gateway, `10.0.0.1`. Because that address is in the `10.0.0.0/24` block, the traffic goes through `eth1`.
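The interface-selection behavior described above can be sketched as a longest-match-style lookup over the directly connected subnets, falling back to the NIC that holds the default gateway. This is an illustrative sketch using the hypothetical addresses from the example, not gateway code:

```python
import ipaddress

# Interface subnets from the example above (hypothetical values).
INTERFACES = {
    "eth0": ipaddress.ip_network("10.0.3.0/24"),
    "eth1": ipaddress.ip_network("10.0.0.0/24"),
}
DEFAULT_NIC = "eth1"  # eth1 holds the default gateway (10.0.0.1)

def nic_for(destination: str) -> str:
    """Return the NIC the gateway would use to reach a destination IP."""
    dest = ipaddress.ip_address(destination)
    for nic, network in INTERFACES.items():
        if dest in network:
            return nic  # destination is on a directly connected subnet
    return DEFAULT_NIC  # otherwise traffic goes via the default gateway

print(nic_for("10.0.3.123"))     # hypervisor on eth0's subnet
print(nic_for("10.0.0.234"))     # hypervisor on eth1's subnet
print(nic_for("34.193.121.211")) # external IP, via the default gateway
```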

The first sequence to add an additional network adapter occurs in the vSphere client:

1. In the VMware vSphere client, open the context menu (with a right-click) for your gateway virtual machine, and choose **Edit Settings**. 

1. On the **Virtual Hardware** tab of the **Virtual Machine Properties** dialog box, open the **Add New Device** menu, and select **Network Adapter** to add a new network adapter.

1. Configure the new adapter:

   1. Expand the **New Network** details.

   1. Ensure that **Connect At Power On** is selected.

   1. For **Adapter Type**, see Network Adapter Types in the [ ESXi and vCenter Server Documentation](https://docs.vmware.com/en/VMware-vSphere/index.html).

1. Choose **OK** to save the new network adapter settings.

The next sequence of steps to configure an additional adapter occurs in the Backup gateway local console (note that this is not the same interface as the AWS Management Console, where backups and other services are managed).

Once the new NIC is added to the gateway VM, you need to:
+ Go to the **Command Prompt** and turn on the new adapters
+ Configure static IPs for each new NIC
+ Set the preferred NIC as the default

To do this:

1. In the VMware vSphere client, select your gateway virtual machine and **Launch Web Console** to access the Backup gateway local console.

   1. For more information on accessing a local console, see [Accessing the Gateway Local Console with VMware ESXi](https://docs.aws.amazon.com/storagegateway/latest/tgw/accessing-local-console.html#MaintenanceConsoleWindowVMware-common).

1. Exit the Command Prompt and go to **Network Configuration** > **Configure Static IP**, then follow the setup instructions to update the routing table.

   1. Assign a static IP within the network adapter’s subnet.

   1. Set up a network mask.

   1. Enter the IP address of the default gateway. This is the network gateway that connects to all traffic outside of the local network.

1. Select **Set Default Adapter** to designate the adapter that will be connected to the cloud as the default device.

1. All IP addresses for the gateway are displayed in both the local console and on the VM summary page in VMware vSphere.

## VMware permissions
<a name="bgw-vmware-permissions"></a>

This section lists the minimum VMware permissions required to use AWS Backup gateway. These permissions are necessary for Backup gateway to discover, back up, and restore virtual machines.

To use Backup gateway with VMware Cloud™ on AWS or VMware Cloud™ on AWS Outposts, you must use the default admin user `cloudadmin@vmc.local` or assign the CloudAdmin role to your dedicated user.

To use Backup gateway with VMware on-premises virtual machines, create a dedicated user with the permissions listed below.

**Global**
+ Disable methods
+ Enable methods
+ Licenses
+ Log event
+ Manage custom attributes
+ Set custom attributes

**vSphere Tagging**
+ Assign or Unassign vSphere Tag

**DataStore**
+ Allocate space
+ Browse datastore
+ Configure datastore (for vSAN datastore)
+ Low level file operations
+ Update virtual machine files

**Host**
+ Configuration
  + Advanced settings
  + Storage partition configuration

**Folder**
+ Create folder

**Network**
+ Assign network

**dvPort Group**
+ Create
+ Delete

**Resource**
+ Assign virtual machine to resource pool

**Virtual Machine**
+ Change Configuration
  + Acquire disk lease
  + Add existing disk
  + Add new disk
  + Advanced configuration
  + Change settings
  + Configure raw device
  + Modify device settings
  + Remove disk
  + Set annotation
  + Toggle disk change tracking
+ Edit Inventory
  + Create from existing
  + Create new
  + Register
  + Remove
  + Unregister
+ Interaction
  + Power Off
  + Power On
+ Provisioning
  + Allow disk access
  + Allow read-only disk access
  + Allow virtual machine download
+ Snapshot Management
  + Create snapshot
  + Remove Snapshot
  + Revert to snapshot

# Working with gateways
<a name="working-with-gateways"></a>

To back up and restore your virtual machines (VMs) using AWS Backup, you must first install a Backup gateway. A gateway is software, in the form of an OVF (Open Virtualization Format) template, that connects AWS Backup to your hypervisor, automatically detects your virtual machines, and enables you to back up and restore them.

A single gateway can run up to 4 backup or restore jobs at once. To run more than 4 jobs at once, create more gateways and associate them with your hypervisor.
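The sizing rule above can be expressed directly: the number of gateways you need grows with the peak number of concurrent jobs. A quick sketch (the 4-job limit is from this section; the function name is illustrative):

```python
import math

MAX_JOBS_PER_GATEWAY = 4  # a single gateway runs up to 4 jobs at once

def gateways_needed(concurrent_jobs: int) -> int:
    """Minimum number of gateways for a target level of job concurrency."""
    return max(1, math.ceil(concurrent_jobs / MAX_JOBS_PER_GATEWAY))

print(gateways_needed(4))   # one gateway suffices
print(gateways_needed(10))  # three gateways needed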

## Creating a gateway
<a name="create-gateway"></a>

You can create a backup gateway using two approaches:
+ **Console method (standard)**: Creates the gateway through the AWS Backup console with automatic activation
+ **Manual method**: Creates the gateway from the gateway VM's local console by obtaining an activation key and running AWS CLI commands

Both methods require downloading and deploying the OVF template first (see [Download VM software](vm-backups.md#download-vm-software)).

Both methods allow the gateway to communicate over IPv6, which requires gateway appliance version 2.x and additional firewall configuration on [dual-stack endpoints](https://docs.aws.amazon.com/aws-backup/latest/devguide/configure-infrastructure-bgw.html#bgw-firewall-configuration).

**Important**  
**IPv6 hypervisor requirement:** If your gateway is activated through IPv6, you **must** create a hypervisor with an IPv6 address. For example, use `2607:fda8:1001:210::252` instead of `10.0.0.252`. If you associate an IPv6 gateway with an IPv4 hypervisor, backup and restore jobs will likely fail.

### Console method
<a name="create-gateway-console"></a>

**To create a gateway:**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation pane, under the **External resources** section, choose **Gateways**.

1. Choose **Create gateway**.

1. In the **Set up gateway** section, follow these instructions to download and deploy the OVF template.

#### Downloading VMware software
<a name="downloading-vmware-software"></a>

**Connecting the hypervisor**

Gateways connect AWS Backup to your hypervisor so you can create and store backups of your virtual machines. To set up your gateway on VMware ESXi, download the [OVF template](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-AE61948B-C2EE-436E-BAFB-3C7209088552.html). The download may take about 10 minutes.

After it is complete, proceed with the following steps:

1. Connect to your virtual machine hypervisor using VMware vSphere.

1. Right-click a parent object of a virtual machine and select **Deploy OVF Template**.  
![\[The Deploy OVF Template menu item.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-deploy-ovf-template-20.png)

1. Choose **Local file**, and upload the **aws-appliance-latest.ova** file you downloaded.  
![\[The Local file option on the Select an OVF template panel.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-select-ovf-template-50.png)

1. Follow the deployment wizard steps to deploy it. On the **Select storage** page, select virtual disk format **Thick Provision Lazy Zeroed**.  
![\[The Thick Provision Lazy Zeroed option on the Select storage panel.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-thick-provision-lazy-70.png)

1. After deploying the OVF, right-click the gateway and choose **Edit Settings**.

    ![\[The Edit Settings menu item.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-edit-settings-30.png) 

   1. Under **VM Options**, go to **VM Tools**.

   1. Ensure that for **Synchronize Time with Host**, **Synchronize at start up and resume** is selected.  
![\[The Synchronize at startup and resume VM option.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-synchronize-time-60.png)

1. Turn on the virtual machine by selecting **Power On** from the **Actions** menu.  
![\[The Power On menu item.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-power-on-vm-40.png)

1. Copy the IP address from the VM summary and enter it below.  
![\[The IP Addresses field on the Summary page.\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/images/gateway-user-copy-ip-address-10.png)

Once the VMware software is deployed, complete the following steps:

1. In the **Gateway connection** section, type in the **IP address** of the gateway.

   1. To find this IP address, go to the vSphere Client.

   1. Select your gateway under the **Summary** tab.

   1. Copy the **IP address** and paste it in the AWS Backup console text bar.

1. In the **Gateway settings** section,

   1. Type in a **Gateway name**.

   1. Verify the AWS Region.

   1. Choose whether the endpoint is publicly accessible or hosted with your virtual private cloud (VPC).
      + If **publicly accessible** is selected, choose the IP version (IPv4 or IPv6) for gateway connectivity.
      + If **VPC** is selected, enter the VPC endpoint DNS Name. For more information, see [Create a VPC endpoint](https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-network.html#backup-privatelink).

1. *[Optional]* In the **Gateway tags** section, you can assign tags by inputting the **key** and *optional* **value**. To add more than one tag, click **Add another tag**.

1. To complete the process, click **Create gateway**, which takes you to the gateway detail page.

### Manual gateway creation
<a name="create-gateway-manual"></a>

#### Getting an activation key
<a name="bgw-activation-key"></a>

To receive an activation key for your gateway, make a web request to the gateway virtual machine (VM) or use the gateway local console. The gateway VM returns a response that contains the activation key, which is then passed as one of the parameters for the `CreateGateway` API to specify the configuration of your gateway. 

**Tip**  
Gateway activation keys expire in 30 minutes if unused.

**Getting an activation key using web request**

The following examples show how to get an activation key using an HTTP request. You can use either a web browser or the `curl` command (or an equivalent) with the following URLs.

**Note**  
Replace the highlighted variables with actual values for your gateway. Acceptable values are as follows:  
*gateway_ip_address* - The IPv4 address of your gateway, for example `172.31.29.201`
*region_code* - The Region where you want to activate your gateway. See [Regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints) in the *AWS General Reference Guide*. If this parameter is not specified, or if the value provided is misspelled or doesn't match a valid Region, the command defaults to the `us-east-1` Region.

IPv4:

```
curl "http://gateway_ip_address/?activationRegion=region_code&gatewayType=BACKUP_VM&endpointType=DUALSTACK&ipVersion=ipv4&no_redirect"
```

IPv6:

```
curl "http://gateway_ip_address/?activationRegion=region_code&gatewayType=BACKUP_VM&endpointType=DUALSTACK&ipVersion=ipv6&no_redirect"
```
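If you script activation, you can assemble these URLs programmatically before issuing the request. A minimal sketch, assuming a hypothetical gateway IP and Region; the query parameters are exactly those shown in the curl examples above:

```python
# Hypothetical values; substitute your gateway's IPv4 address and target Region.
gateway_ip = "172.31.29.201"
region = "us-east-1"

def activation_url(ip_version: str) -> str:
    """Build the activation-key request URL for 'ipv4' or 'ipv6'."""
    return (
        f"http://{gateway_ip}/?activationRegion={region}"
        f"&gatewayType=BACKUP_VM&endpointType=DUALSTACK"
        f"&ipVersion={ip_version}&no_redirect"
    )

print(activation_url("ipv4"))
print(activation_url("ipv6"))
```

You could then fetch the key with any HTTP client; the gateway VM responds with the activation key to pass to `CreateGateway`.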

**Getting an activation key using local console**

The following steps show how to get an activation key using the gateway host's local console.

1. Log in to your virtual machine console. 

1. From the **AWS Appliance Activation - Configuration** main menu, enter `0` to choose **Get activation key**.

1. Enter `2` to choose **Backup Gateway** for the gateway family option.

1. Enter the AWS Region where you want to activate your gateway.

1. For the network type, enter `1` for Public or `2` for VPC endpoint.

1. For the endpoint type, enter `1` for a standard endpoint or `2` for a dual-stack endpoint.

   1. For a dual-stack endpoint, enter `1` for IPv4 or `2` for IPv6.

1. The activation key is populated automatically.

#### Creating the gateway
<a name="bgw-create-gateway"></a>

Use the AWS CLI to create the gateway after obtaining an activation key:

1. Obtain an activation key using the curl commands or the local console method.

1. Create the gateway using the AWS CLI. For more information, see [CreateGateway](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_CreateGateway.html) in the *Backup gateway API Reference*.

   ```
   aws backup-gateway create-gateway \
                       --region region_code \
                       --activation-key activation_key \
                       --gateway-display-name gateway_name \
                       --gateway-type BACKUP_VM
   ```

1. Verify that the gateway appears in the AWS Backup console under **External resources**, **Gateways**.

## Editing or deleting a gateway
<a name="edit-gateway"></a>

**To edit or delete a gateway:**

1. In the left navigation pane, under the **External resources** section, choose **Gateways**.

1. In the **Gateways** section, choose a gateway by its **Gateway name**.

1. To edit the gateway name, choose **Edit**.

1. To delete the gateway, choose **Delete**, then choose **Delete gateway**.

   You cannot reactivate a deleted gateway. If you want to connect to the hypervisor again, follow the procedure in [Creating a gateway](#create-gateway) .

1. To connect to a hypervisor, in the **Connected hypervisor** section, choose **Connect**.

   Each gateway connects to a single hypervisor. However, you can connect multiple gateways to the same hypervisor to increase the aggregate bandwidth beyond that of a single gateway.

1. To assign, edit, or manage tags, in the **Tags** section, choose **Manage tags**.

## Backup gateway bandwidth throttling
<a name="backup-gateway-bandwidth-throttling"></a>

**Note**  
This feature is available on new gateways deployed after December 15, 2022. For existing gateways, this capability became available through an automatic software update on or before January 30, 2023. To update the gateway to the latest version manually, use the AWS CLI command [UpdateGatewaySoftwareNow](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_UpdateGatewaySoftwareNow.html).

You can limit the upload throughput from your gateway to AWS Backup to control the amount of network bandwidth the gateway uses. By default, an activated gateway has no rate limits.

You can configure a bandwidth rate-limit schedule using the AWS Backup console or through the AWS CLI ([PutBandwidthRateLimitSchedule](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_PutBandwidthRateLimitSchedule.html)). When you use a bandwidth rate-limit schedule, you can configure limits to change automatically throughout the day or week.

Bandwidth rate limiting works by balancing the throughput of all data being uploaded, averaged over each second. While it is possible for uploads to cross the bandwidth rate limit briefly for any given micro- or millisecond, this does not typically result in large spikes over longer periods of time.

You can add up to 20 intervals. The maximum value for the upload rate is 8,000,000 Mbps.

### View and edit the bandwidth rate-limit schedule for your gateway using the AWS Backup console
<a name="backup-gateway-view-edit-bandwidth-rate-limit-schedule"></a>

This section describes how to view and edit the bandwidth rate limit schedule for your gateway.

**To view and edit the bandwidth rate limit schedule**

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the left navigation pane, choose **Gateways**. In the Gateways pane, gateways are displayed by name. Click the radio button adjacent to the gateway name you want to manage.

1. Once you select a radio button, the **Actions** drop-down menu becomes available. Choose **Actions**, then choose **Edit bandwidth rate limit schedule**. The current schedule is displayed. By default, a new or unedited gateway has no defined bandwidth rate limits.
**Note**  
You can also choose **Manage schedule** on the gateway details page to navigate to the Edit bandwidth page.

1. *(Optional)* Choose **Add interval** to add a new configurable interval to the schedule. For each interval, input the following information:

   1. **Days of week** — Select the recurring day or days on which you want the interval to apply. When chosen, the days will display below the drop-down menu. You can remove them by clicking the **X** next to the day.

   1. **Start time** — Enter the start time for the bandwidth interval, using the *HH:MM* 24-hour format. Time is rendered in Universal Coordinated Time (UTC).

      Note: Your bandwidth-rate-limit interval begins at the start of the specified minute.

   1. **End time** — Enter the end time for the bandwidth interval, using the *HH:MM* 24-hour format. Time is rendered in Universal Coordinated Time (UTC).
**Important**  
The bandwidth-rate-limit interval ends at the end of the minute specified. To schedule an interval that ends at the end of an hour, enter `59`. To schedule consecutive continuous intervals, transitioning at the start of the hour, with no interruption between the intervals, enter `59` for the end minute of the first interval. Enter `00` for the start minute of the succeeding interval. 

   1. **Upload rate** — Enter the upload rate limit, in megabits per second (Mbps). The minimum value is 102 Mbps.

1. *(Optional)* Repeat the previous step as desired until your bandwidth rate-limit schedule is complete. If you need to delete an interval from your schedule, choose **Remove**.
**Important**  
Bandwidth rate-limit intervals cannot overlap. The start time of an interval must occur after the end time of a preceding interval and before the start time of a following interval; its end time must occur before the start time of the following interval.

1. When you are finished, click the **Save changes** button.
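The interval rules above (intervals must not overlap, and each interval must end before the next one starts) can be sketched as a validation check. This is an illustrative sketch, not console or API code; it represents times as minutes since midnight (UTC) and omits the days-of-week dimension for brevity:

```python
def validate_schedule(intervals: list[tuple[int, int]]) -> bool:
    """Check a rate-limit schedule against the console rules.

    Intervals are (start, end) pairs in minutes since midnight (UTC).
    """
    if len(intervals) > 20:  # a schedule holds at most 20 intervals
        return False
    ordered = sorted(intervals)
    if any(end < start for start, end in ordered):
        return False
    # Each interval must end before the next one starts (no overlaps).
    return all(prev_end < next_start
               for (_, prev_end), (next_start, _) in zip(ordered, ordered[1:]))

# 00:00-00:59 followed by 01:00-02:00 is a valid back-to-back pair,
# matching the 59 / 00 end-and-start convention described above.
print(validate_schedule([(0, 59), (60, 120)]))
# Overlapping intervals are rejected.
print(validate_schedule([(0, 61), (60, 120)]))
```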

### View and edit the bandwidth rate-limit schedule for your gateway using the AWS CLI
<a name="backup-gateway-view-edit-bandwidth-rate-limit-schedule-cli"></a>

The [GetBandwidthRateLimitSchedule](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_GetBandwidthRateLimitSchedule.html) action can be used to view the bandwidth throttle schedule for a specified gateway. If no schedule is set, the schedule is an empty list of intervals. Here is an example using the AWS CLI to fetch the bandwidth schedule of a gateway:

```
aws backup-gateway get-bandwidth-rate-limit-schedule --gateway-arn "arn:aws:backup-gateway:region:account-id:gateway/gw-id"
```

To edit a gateway’s bandwidth throttle schedule, use the [PutBandwidthRateLimitSchedule](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_PutBandwidthRateLimitSchedule.html) action. Note that you can only update a gateway’s schedule as a whole, rather than modifying, adding, or removing individual intervals. Calling this action overwrites the gateway’s previous bandwidth throttle schedule.

```
aws backup-gateway put-bandwidth-rate-limit-schedule --gateway-arn "arn:aws:backup-gateway:region:account-id:gateway/gw-id" --bandwidth-rate-limit-intervals ...
```

# Working with hypervisors
<a name="working-with-hypervisors"></a>

After you finish [Creating a gateway](working-with-gateways.md#create-gateway), you can connect it to a hypervisor to enable AWS Backup to work with the virtual machines managed by that hypervisor. For example, the hypervisor for VMware VMs is VMware vCenter Server. Ensure your hypervisor is configured with the [necessary permissions for AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/configure-infrastructure-bgw.html#bgw-vmware-permissions). 

## Adding a hypervisor
<a name="add-hypervisor"></a>

**To add a hypervisor:**

1. In the left navigation pane, under the **External resources** section, choose **Hypervisors**.

1. Choose **Add hypervisor**.

1. In the **Hypervisor settings** section, type in a **Hypervisor name**.

1. For **vCenter server host**, use the dropdown menu to select either **IP address** or **FQDN** (fully-qualified domain name). Type in the corresponding value.

1. To allow AWS Backup to discover the virtual machines on the hypervisor, enter the hypervisor’s **Username** and **Password**.

1. Encrypt your password. You can [ specify this encryption](https://docs.aws.amazon.com/aws-backup/latest/devguide/bgw-hypervisor-encryption-page.html) by selecting a specific service-managed KMS key or a customer-managed KMS key using the dropdown menu or choose **Create KMS key**. If you do not select a specific key, AWS Backup will encrypt your password using a service-owned key.

1. In the **Connecting gateway** section, use the dropdown list to specify which Gateway to connect to your hypervisor.

1. Choose **Test gateway connection** to verify your previous inputs.

1. *Optionally*, in the **Hypervisor tags** section, you can assign tags to the hypervisor by choosing **Add new tag**.

1. *Optional* [VMware tags](https://docs.aws.amazon.com/aws-backup/latest/devguide/backing-up-vms.html#backup-gateway-vmwaretags): You can add up to 10 VMware tags you currently use on your virtual machines to generate AWS tags.

1. In the **Log group setting** panel, you may choose to integrate with [ Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) to maintain logs of your hypervisor (standard [CloudWatch Logs pricing](https://aws.amazon.com/cloudwatch/pricing/) will apply based on usage). Each hypervisor can belong to one log group.

   1. If you have not yet created a log group, select the **Create a new log group** radio button. The hypervisor you are editing will be associated with this log group.

   1. If you have previously created a log group for a different hypervisor, you can use that log group for this hypervisor. Select **Use an existing log group**.

   1. If you do not want CloudWatch logging, select **Deactivate logging**. 

1. Choose **Add hypervisor**, which takes you to its detail page.

**Tip**  
You can use Amazon CloudWatch Logs (see step 11 above) to obtain information about your hypervisor, including error monitoring, network connection between the gateway and the hypervisor, and network configuration information. For information about CloudWatch log groups, see [ Working with Log Groups and Log Streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) in the *Amazon CloudWatch User Guide*.

## Viewing virtual machines managed by a hypervisor
<a name="view-vms-by-hypervisor"></a>

**To view virtual machines on a hypervisor:**

1. In the left navigation pane, under the **External resources** section, choose **Hypervisors**.

1. In the **Hypervisors** section, choose a hypervisor by its **Hypervisor name** to go to its detail page.

1. In the section under **Hypervisor summary**, choose the **Virtual machines** tab.

1. In the **Connected virtual machines** section, a list of virtual machines populates automatically.

## Viewing gateways connected to a hypervisor
<a name="view-gateways-by-hypervisor"></a>

**To view gateways connected to the hypervisor:**

1. Choose the **Gateways** tab.

1. In the **Connected gateways** section, a list of gateways populates automatically.

## Connecting a hypervisor to additional gateways
<a name="add-more-gateways"></a>

Your backup and restore speeds might be limited by the bandwidth of the connection between your gateway and hypervisor. You can increase these speeds by connecting one or more additional gateways to your hypervisor. You can do this in the **Connected gateways** section as follows:

1. Choose **Connect**.

1. Select another gateway using the dropdown menu. Alternatively, choose **Create gateway** to create a new gateway.

1. Choose **Connect**.

## Editing a hypervisor configuration
<a name="edit-hypervisor"></a>

If you do not use the **Test gateway connection** feature, you might add a hypervisor with an incorrect username or password. In that case, the hypervisor’s connection status is always `Pending`. Alternatively, you might rotate the username or password to access your hypervisor. Update this information using the following procedure:

**To edit an already-added hypervisor:**

1. In the left navigation pane, under the **External resources** section, choose **Hypervisors**.

1. In the **Hypervisors** section, choose a hypervisor by its **Hypervisor name** to go to its detail page.

1. Choose **Edit**.

1. The top panel is named **Hypervisor settings**.

   1. Under **vCenter server host**, you can edit the fully qualified domain name (FQDN) or the IP address.

   1. *Optionally,* enter the hypervisor’s **Username** and **Password**.

1. In the **Log group setting** panel, you may choose to integrate with [ Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) to maintain logs of your hypervisor (standard [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/) will apply based on usage). Each hypervisor can belong to one log group.

   1. If you have not yet created a log group, select the **Create a new log group** radio button. The hypervisor you are editing will be associated with this log group.

   1. If you have previously created a log group for a different hypervisor, you can use that log group for this hypervisor. Select **Use an existing log group**.

   1. If you do not want CloudWatch logging, select **Deactivate logging**. 

**Tip**  
You can use Amazon CloudWatch Logs (see step 5 above) to obtain information about your hypervisor, including error monitoring, network connection between the gateway and the hypervisor, and network configuration information. For information about CloudWatch log groups, see [ Working with Log Groups and Log Streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) in the *Amazon CloudWatch User Guide*.

To update a hypervisor programmatically, use the CLI command [update-hypervisor](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/backup-gateway/update-hypervisor.html) or the [UpdateHypervisor](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_UpdateHypervisor.html) API call.
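For example, to rotate the credentials of an already-added hypervisor from the CLI (all values shown are placeholders):

```shell
# Update the stored username and password for a hypervisor
# (ARN, username, and password are placeholders).
aws backup-gateway update-hypervisor \
  --hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
  --username vcenter-admin \
  --password 'new-password'
```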

## Deleting a hypervisor configuration
<a name="delete-hypervisor"></a>

If you need to remove an already-added hypervisor, remove the hypervisor configuration and add another. This remove operation applies to the configuration to connect to the hypervisor. It does not delete the hypervisor.

**To delete the configuration to connect to an already-added hypervisor:**

1. In the left navigation pane, under the **External resources** section, choose **Hypervisors**.

1. In the **Hypervisors** section, choose a hypervisor by its **Hypervisor name** to go to its detail page.

1. Choose **Remove**, then choose **Remove hypervisor**.

1. Optional: replace the removed hypervisor configuration using the procedure for [Adding a hypervisor](#add-hypervisor).
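The remove operation is also available from the CLI as `delete-hypervisor`. A sketch with a placeholder ARN; as in the console, this deletes only the configuration, not the hypervisor itself:

```shell
# Remove the configuration for an already-added hypervisor
# (the ARN is a placeholder -- substitute your own).
aws backup-gateway delete-hypervisor \
  --hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345
```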

## Understanding hypervisor status
<a name="understand-hypervisor-status"></a>

The following table describes each possible hypervisor status and, if applicable, remediation steps. The `ONLINE` status is the normal status of the hypervisor. A hypervisor should have this status all or most of the time it’s in use for backup and recovery of VMs managed by the hypervisor.


**Hypervisor statuses**  

| Status | Meaning and remediation | 
| --- | --- | 
| ONLINE |  You added a hypervisor to AWS Backup, associated with it a gateway, and can connect with that gateway over your network to perform backup and recovery of virtual machines managed by the hypervisor. You can perform [on-demand and scheduled backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/backing-up-vms.html) of those virtual machines at any time.  | 
| PENDING |  You added a hypervisor to AWS Backup but: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html) To change a hypervisor status from `PENDING` to `ONLINE`, [create a gateway](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-gateways.html#create-gateway) and [connect your hypervisor to that gateway](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html#add-more-gateways).  | 
| OFFLINE |  You added a hypervisor to AWS Backup and associated it with a gateway, but the gateway cannot connect to the hypervisor over your network. To change a hypervisor status from `OFFLINE` to `ONLINE`, verify the correctness of your [network configuration](https://docs.aws.amazon.com/aws-backup/latest/devguide/configure-infrastructure-bgw.html#bgw-network-configuration). If the issue persists, verify that your hypervisor’s IP address or fully-qualified domain name is correct. If they are incorrect, [add your hypervisor again using the correct information and test your gateway connection](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html#add-hypervisor).   | 
| ERROR |  You added a hypervisor to AWS Backup and associated it with a gateway, but the gateway cannot communicate with the hypervisor. To change a hypervisor status from `ERROR` to `ONLINE`, verify that hypervisor’s username and password are correct. If they are incorrect, [edit your hypervisor configuration](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-hypervisors.html#edit-hypervisor).  | 

**Next steps**

To back up virtual machines on your hypervisor, see [Backing up virtual machines](backing-up-vms.md).

# Backing up virtual machines
<a name="backing-up-vms"></a>

After [Adding a hypervisor](working-with-hypervisors.md#add-hypervisor), Backup gateway automatically lists your virtual machines. You can view your virtual machines by choosing either **Hypervisors** or **Virtual machines** in the left navigation pane.
+ Choose **Hypervisors** to view only the virtual machines managed by a specific hypervisor. With this view, you can work with one virtual machine at a time.
+ Choose **Virtual machines** to view all the virtual machines across all the hypervisors you added to your AWS account. With this view, you can work with some or all your virtual machines across multiple hypervisors.

Regardless of which view you choose, to perform a backup operation on a specific virtual machine, choose its **VM name** to open its detail page. The VM detail page is the starting point for the following procedures.

## Creating an on-demand backup of a virtual machine
<a name="create-on-demand-backup-vm"></a>

An [on-demand](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html) backup is a one-time, full backup you manually initiate. You can use on-demand backups to test AWS Backup’s backup and restore capabilities.

**To create an on-demand backup of a virtual machine:**

1. Choose **Create on-demand backup**.

1. [Configure your on-demand backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/recov-point-create-on-demand-backup.html).

1. Choose **Create on-demand backup**.

1. Check when your backup job has the status `Completed`. In the left navigation menu, choose **Jobs**.

1. Choose the **Backup Job ID** to view backup job information such as the **Backup size** and time elapsed between the **Creation date** and **Completion date**.
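The same job can be started from the AWS CLI with `start-backup-job`. A sketch; the vault name, VM ARN, and IAM role ARN are placeholders:

```shell
# Start an on-demand backup of a virtual machine
# (vault name, VM ARN, and IAM role ARN are placeholders).
aws backup start-backup-job \
  --backup-vault-name Default \
  --resource-arn arn:aws:backup-gateway:us-east-1:123456789012:vm/vm-12345 \
  --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole

# Check the job status using the BackupJobId returned above
aws backup describe-backup-job --backup-job-id <backup-job-id>
```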

## Incremental VM backups
<a name="vm-incrementalbackups"></a>

Newer VMware versions contain a feature called [Changed Block Tracking (CBT)](https://kb.vmware.com/s/article/1020128), which keeps track of the storage blocks of virtual machines as they change over time. When you use AWS Backup to back up a virtual machine, AWS Backup attempts to use this CBT data if it is available. AWS Backup uses CBT data to speed up the backup process; without it, backup jobs are often slower and use more hypervisor resources. A backup can still complete successfully even when the CBT data is not valid or available. For example, the CBT data might be invalid or unavailable after the virtual machine or ESXi host experiences a hard shutdown.

On occasions when CBT data is invalid or unavailable, the backup status reads `Successful` with a message. The message indicates that, in the absence of CBT data, AWS Backup used its own proprietary change detection mechanism to complete the backup. Subsequent backups reattempt to use CBT data, and in most cases the CBT data will again be valid and available. If the issue persists, see [VMware Troubleshooting](https://docs.aws.amazon.com/aws-backup/latest/devguide/vm-troubleshooting.html) for remediation steps.

For CBT to function correctly, the following must be true:
+ The host must be ESXi 4.0 or later.
+ The VM that owns the disks must be hardware version 7 or later.
+ CBT must be enabled for the virtual machine (it is enabled by default).

To verify if a virtual disk has CBT enabled:

1. Open the vSphere Client and select a powered-off virtual machine.

1. Right-click the virtual machine and navigate to **Edit Settings** > **Options** > **Advanced/General** > **Configuration Parameters**.

1. The option `ctkEnabled` needs to equal `True`.

## Automating virtual machine backup by assigning resources to a backup plan
<a name="automate-vm-backup"></a>

A [backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/about-backup-plans.html) is a user-defined data protection policy that automates data protection across many AWS services and third-party applications. You first create your backup plan by specifying its backup frequency, retention period, lifecycle policy, and other options. To create a backup plan, see the Getting started tutorial.

After you create your backup plan, you assign AWS Backup-supported resources, including virtual machines, to that backup plan. AWS Backup offers [many ways to assign resources](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html), including assigning all the resources in your account, including or excluding single specific resources, or adding resources with certain tags. 

In addition to its existing resource assignment features, AWS Backup support for virtual machines introduces several new features to help you quickly assign virtual machines to backup plans. From the **Virtual machines** page, you can assign tags to multiple virtual machines or use the new **Assign resources to plan** feature. Use these features to assign your virtual machines already discovered by AWS Backup gateway.

If you anticipate discovering and assigning additional virtual machines in the future, and would like to automate the resource assignment step to include those future virtual machines, use the new **Create group assignment** feature.

## VMware Tags
<a name="backup-gateway-vmwaretags"></a>

[VMware tags](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_VmwareTag.html) are key-value pairs you can use to manage, filter, and search for your resources.

A VMware tag is composed of a **category** and a **tag name**. VMware tags are used to group virtual machines. A tag name is a label assigned to a virtual machine. A category is a collection of tag names.

AWS tags can contain UTF-8 letters, numbers, spaces, and the special characters `+ - = . _ : /`.

If you use tags on your virtual machines, you can map up to 10 VMware tags to AWS tags to help with organization. In the [AWS Backup console](https://console.aws.amazon.com/backup/), these can be found under **External resources** > **Virtual machines** > **AWS tags** or **VMware tags**.

### VMware tag mapping
<a name="vmware-tag-mapping"></a>

If you use tags on your virtual machines, you can add up to 10 matching tags in AWS Backup for additional clarity and organization. Mappings apply to any virtual machine on the hypervisor.

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the console, go to the hypervisor edit page: choose **External resources**, then **Hypervisors**, then choose the hypervisor name, and then choose **Manage mappings**.

1. The last pane, **VMware tag mapping**, contains four text fields in which you map your VMware tag information to corresponding AWS tags. The four fields are **VMware tag category**, **VMware tag name**, **AWS tag key**, and **AWS tag value** (*example: Category = OS; Tag name = Windows; AWS tag key = OS-Windows; AWS tag value = Windows*).

1. After you have entered your preferred values, choose **Add mapping**. If you make an error, choose **Remove** to delete the entered information.

1. After adding mapping(s), specify the IAM role you intend to use to apply these AWS tags to the VMware virtual machines.

   The policy described in [AWS managed policies for AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/security-iam-awsmanpol.html#aws-managed-policies) contains the needed permissions. You can attach this policy to the role you are using (or have an administrator attach it), or you can create a custom policy for the role.

1. Lastly, choose **Add hypervisor** or **Save**.

The IAM role's trust relationship must include the backup-gateway.amazonaws.com and backup.amazonaws.com services. Without these services, you will likely experience an error when you map tags. To edit the trust relationship for an existing role:

1. Sign in to the [IAM console](https://console.aws.amazon.com/iamv2/home?region=us-west-2#/home).

1. In the navigation pane of the console, choose **Roles**.

1. Choose the name of the role you wish to modify, then select the **Trust relationships** tab on the details page.

1. Under **Policy Document**, paste the following:


   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Service": [
             "backup.amazonaws.com",
             "backup-gateway.amazonaws.com"
           ]
         },
         "Action": "sts:AssumeRole"
       }
     ]
   }
   ```


1. Choose **Update Trust Policy**.

See [Editing the trust relationship for an existing role](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/edit_trust.html) in the *AWS Directory Service Administration Guide* for more detail.
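Alternatively, you can update the trust policy from the CLI with the IAM `update-assume-role-policy` operation. A sketch, assuming the policy document above is saved locally as `trust-policy.json` and the role name is a placeholder:

```shell
# Replace the trust policy of an existing role
# (role name and file path are placeholders).
aws iam update-assume-role-policy \
  --role-name MyBackupGatewayRole \
  --policy-document file://trust-policy.json
```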

### View VMware tag mappings
<a name="w2aac17c19c43c23c15c23"></a>

In the [AWS Backup console](https://console.aws.amazon.com/backup/), choose **External resources**, then **Hypervisors**, then the hypervisor name to view the properties of the selected hypervisor. Under the summary pane, there are four tabs, the last of which is **VMware tag mappings**. If you do not yet have mappings, "No VMware tag mappings" is displayed.

From here, you can sync the metadata of virtual machines discovered by the hypervisor, copy mappings to your hypervisor(s), add the AWS tags mapped to the VMware tags to the backup selection of a backup plan, or manage mappings.

In the console, to see which tags are applied to a selected virtual machine, choose **Virtual machines**, then the virtual machine name, then **AWS tags** or **VMware tags**. You can view and manage the tags associated with this virtual machine.

### Assign virtual machines to plan using VMware tag mappings
<a name="w2aac17c19c43c23c15c31"></a>

To assign virtual machines to a backup plan using mapped tags, do the following:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. In the console, go to **VMware tag mappings** on the hypervisor details page (choose **External resources**, then **Hypervisors**, then the hypervisor name).

1. Select the checkbox next to multiple mapped tags to assign those tags to the same backup plan.

1. Choose **Add to resource assignment**.

1. Choose an existing **Backup plan** from the dropdown list. Alternatively, you can choose **Create backup plan** to create a new backup plan.

1. Choose **Confirm**. This opens the **Assign resources** page with the **Refine selection using tags** fields pre-populated.

### VMware tags using the AWS CLI
<a name="w2aac17c19c43c23c15c37"></a>

AWS Backup uses the API call [PutHypervisorPropertyMappings](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_PutHypervisorPropertyMappings.html) to map on-premises hypervisor entity properties to properties in AWS.

In the AWS CLI, use the operation `put-hypervisor-property-mappings`:

```
aws backup-gateway put-hypervisor-property-mappings \
--hypervisor-arn arn:aws:backup-gateway:region:account:hypervisor/hypervisorId \
--vmware-to-aws-tag-mappings list of VMware to AWS tag mappings \
--iam-role-arn arn:aws:iam::account:role/roleName \
--region AWSRegion \
--endpoint-url URL
```

Here is an example:

```
aws backup-gateway put-hypervisor-property-mappings \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--vmware-to-aws-tag-mappings VmwareCategory=OS,VmwareTagName=Windows,AwsTagKey=OS-Windows,AwsTagValue=Windows \
--iam-role-arn arn:aws:iam::123456789012:role/SyncRole \
--region us-east-1
```

You can also use [GetHypervisorPropertyMappings](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_GetHypervisorPropertyMappings.html) to retrieve property mapping information. In the AWS CLI, use the operation `get-hypervisor-property-mappings`. Here is an example template:

```
aws backup-gateway get-hypervisor-property-mappings \
--hypervisor-arn HypervisorARN \
--region AWSRegion
```

Here is an example:

```
aws backup-gateway get-hypervisor-property-mappings \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--region us-east-1
```

### Sync metadata of virtual machines discovered by the hypervisor in AWS using API, CLI, or SDK
<a name="w2aac17c19c43c23c15c57"></a>

You can sync the metadata of virtual machines. When you do, the VMware tags present on the virtual machine that are part of the mappings are synced, and the AWS tags mapped to those VMware tags are applied to the AWS virtual machine resource.

AWS Backup uses the API call [StartVirtualMachinesMetadataSync](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_StartVirtualMachinesMetadataSync.html) to sync the metadata of the virtual machines discovered by the hypervisor. To sync metadata of virtual machines discovered by the hypervisor using the AWS CLI, use the operation `start-virtual-machines-metadata-sync`.

Example template:

```
aws backup-gateway start-virtual-machines-metadata-sync \
--hypervisor-arn HypervisorARN \
--region AWSRegion
```

Example:

```
aws backup-gateway start-virtual-machines-metadata-sync \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--region us-east-1
```

You can also use [GetHypervisor](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_GetHypervisor.html) to retrieve hypervisor information, such as host, state, status of the latest metadata sync, and the last successful metadata sync time. In the AWS CLI, use the operation `get-hypervisor`.

Example template:

```
aws backup-gateway get-hypervisor \
--hypervisor-arn HypervisorARN \
--region AWSRegion
```

Example:

```
aws backup-gateway get-hypervisor \
--hypervisor-arn arn:aws:backup-gateway:us-east-1:123456789012:hypervisor/hype-12345 \
--region us-east-1
```

For more information, see API documentation [VmwareTag](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_VmwareTag.html) and [ VmwareToAwsTagMapping](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_VmwareToAwsTagMapping.html).

This feature will be available on new gateways deployed after December 15, 2022. For existing gateways, this new capability will be available through an automatic software update on or before January 30, 2023. To update the gateway to the latest version manually, use the AWS CLI command `update-gateway-software-now` ([UpdateGatewaySoftwareNow](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BGW_UpdateGatewaySoftwareNow.html)).

Example:

```
aws backup-gateway update-gateway-software-now \
--gateway-arn arn:aws:backup-gateway:us-east-1:123456789012:gateway/bgw-12345 \
--region us-east-1
```

## Assigning virtual machines using tags
<a name="assign-vms-tags"></a>

You can assign your virtual machines currently discovered by AWS Backup, along with other AWS Backup resources, by assigning them a tag that you have already assigned to one of your existing backup plans. You can also create a [new backup plan](https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html) and a new [tag-based resource assignment](https://docs.aws.amazon.com/aws-backup/latest/devguide/assigning-resources.html). Backup plans check for newly-assigned resources each time they run a backup job.
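Tag-based assignment can also be done programmatically with the AWS Backup `create-backup-selection` operation. A minimal sketch; the plan ID, role ARN, and tag key/value are placeholders:

```shell
# Assign all resources carrying the tag backup=daily to a backup plan
# (plan ID, IAM role ARN, and the tag key/value are placeholders).
aws backup create-backup-selection \
  --backup-plan-id 1a2b3c4d-5678-90ab-cdef-EXAMPLE11111 \
  --backup-selection '{
    "SelectionName": "vm-tag-selection",
    "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
    "ListOfTags": [
      {"ConditionType": "STRINGEQUALS", "ConditionKey": "backup", "ConditionValue": "daily"}
    ]
  }'
```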

**To tag multiple virtual machines with the same tag:**

1. In the left navigation pane, choose **Virtual machines**.

1. Select the checkbox next to **VM name** to choose all your virtual machines. Alternatively, select the checkbox next to the VM names you want to tag.

1. Choose **Add tags**.

1. Type in a tag **Key**.

1. Recommended: type in a tag **Value**.

1. Choose **Confirm**.
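You can also apply a tag to a discovered virtual machine from the CLI with the Backup gateway `tag-resource` operation. A sketch; the VM ARN and tag values are placeholders:

```shell
# Tag a discovered virtual machine
# (the ARN and tag key/value are placeholders).
aws backup-gateway tag-resource \
  --resource-arn arn:aws:backup-gateway:us-east-1:123456789012:vm/vm-12345 \
  --tags Key=backup,Value=daily
```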

## Assigning virtual machines using the Assign resources to plan feature
<a name="assign-vms-to-plan"></a>

You can assign virtual machines currently discovered by AWS Backup to an existing or new backup plan using the **Assign resources to plan** feature.

**To assign virtual machines using the Assign resources to plan feature:**

1. In the left navigation pane, choose **Virtual machines**.

1. Select the checkbox next to **VM name** to choose all your virtual machines. Alternatively, select the checkbox next to multiple VM names to assign them to the same backup plan.

1. Choose **Assignments**, then choose **Assign resources to plan**.

1. Type in a **Resource assignment name**.

1. Choose a resource assignment **IAM role** to create backups and manage recovery points. If you do not have a specific IAM role to use, we recommend the **Default role** which has the correct permissions.

1. In the **Backup plan** section, choose an existing **Backup plan** from the dropdown list. Alternatively, choose **Create backup plan** to create a new backup plan.

1. Choose **Assign resources**.

1. Optional: Verify your virtual machines are assigned to a backup plan by choosing **View Backup plan**. Then, in the **Resource assignments** section, choose the resource assignment **Name**.
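A resource assignment that names specific virtual machines can also be created from the CLI with `create-backup-selection`, listing the VM ARNs directly. A sketch; the plan ID, role ARN, and VM ARN are placeholders:

```shell
# Assign specific virtual machines to an existing backup plan
# (plan ID, IAM role ARN, and VM ARN are placeholders).
aws backup create-backup-selection \
  --backup-plan-id 1a2b3c4d-5678-90ab-cdef-EXAMPLE11111 \
  --backup-selection '{
    "SelectionName": "specific-vms",
    "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
    "Resources": [
      "arn:aws:backup-gateway:us-east-1:123456789012:vm/vm-12345"
    ]
  }'
```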

## Assigning virtual machines using the Create group assignment feature
<a name="assign-vms-group-assignment"></a>

Unlike the preceding two resource assignment features for virtual machines, the **Create group assignment** feature not only assigns virtual machines currently discovered by AWS Backup, but also virtual machines discovered in the future in a folder or hypervisor you define.

Also, you do not need to select any checkboxes to use the **Create group assignment** feature.

**To assign virtual machines using the Create group assignment feature:**

1. In the left navigation pane, choose **Virtual machines**.

1. Choose **Assignments**, then choose **Create group assignment**.

1. Type in a **Resource assignment name**.

1. Choose a resource assignment **IAM role** to create backups and manage recovery points. If you do not have a specific IAM role to use, we recommend the **Default role** which has the correct permissions.

1. In the **Resource group** section, select the **Group type** dropdown menu. Your options are **Folder** or **Hypervisor**.

   1. Choose **Folder** to assign all the virtual machines in a folder on a hypervisor. Select a folder **Group name**, such as `datacenter/vm`, using the dropdown menu. You can also choose to include **Subfolders**.
**Note**  
To make folder-based assignments, AWS Backup tags virtual machines with the folder in which it finds them during the discovery process. If you later move a virtual machine to a different folder, AWS Backup cannot update the tag for you, due to AWS tagging best practices. This assignment method might result in continuing to take backups of virtual machines you moved out of your assigned folder.

   1. Choose **Hypervisor** to assign all the virtual machines managed by a hypervisor. Select a hypervisor ID **Group name** using the dropdown menu.

1. In the **Backup plan** section, choose an existing **Backup plan** from the dropdown list. Alternatively, choose **Create backup plan** to create a new backup plan.

1. Choose **Create group assignment**.

1. Optional: verify your virtual machines are assigned to a backup plan by choosing **View Backup plan**. In the **Resource assignments** section, choose the resource assignment **Name**.

**Next steps**

To restore a virtual machine, see [Restore a virtual machine using AWS Backup](restoring-vm.md).

# Information about third-party source components for Backup gateway
<a name="bgw-third-party-source"></a>

In this section, you can find information about the third-party tools and licenses that we depend on to deliver Backup gateway functionality.

The source code for certain third-party source software components that are included with the Backup gateway software is available for download at the following locations:
+ For gateways deployed on VMware ESXi, download [ sources.tgz](https://s3.amazonaws.com/aws-storage-gateway-terms/bgw_backup_vm/third-party-sources.tgz).

This product includes software developed by the OpenSSL project for use in the OpenSSL Toolkit ([https://www.openssl.org/](https://www.openssl.org/)).

This product includes software developed by VMware® vSphere Software Development Kit ([https://www.vmware.com](https://www.vmware.com)).

For the relevant licenses for all dependent third-party tools, see [Third-Party Licenses](https://s3.amazonaws.com/aws-storage-gateway-terms/bgw_backup_vm/third-party-licenses.txt).

## Open-source components for AWS Appliance
<a name="aws-appliance-open-source"></a>

Several third-party tools and licenses are used to deliver functionality for Backup gateway.

Use the following links to download source code for certain open-source software components that are included with AWS Appliance software:
+ For gateways deployed on VMware ESXi, download [sources.tar](https://s3.amazonaws.com/aws-storage-gateway-terms/sources.tar)

This product includes software developed by the OpenSSL project for use in the OpenSSL Toolkit ([https://www.openssl.org/](https://www.openssl.org)). For the relevant licenses for all dependent third-party tools, see [Third-Party Licenses](https://s3.amazonaws.com/aws-storage-gateway-terms/THIRD_PARTY_LICENSES.txt).

# Troubleshoot VM issues
<a name="vm-troubleshooting"></a>

## Incremental Backups / CBT issues and messages
<a name="w2aac17c19c43c27b3"></a>

**Failure message:** `"The VMware Change Block Tracking (CBT) data was invalid during this backup, but the incremental backup was successfully completed with our proprietary change detection mechanism."`

If this message persists, [reset CBT](https://knowledge.broadcom.com/external/article?legacyId=1020128) as directed by VMware.

**Message notes CBT was not turned on or was unavailable:** *"VMware Change Block Tracking (CBT) was not available for this virtual machine, but the incremental backup was successfully completed with our proprietary change mechanism."*

Check to make sure CBT is turned on. To verify if a virtual disk has CBT enabled:

1. Open the vSphere Client and select a powered-off virtual machine.

1. Right-click the virtual machine and navigate to **Edit Settings** > **Options** > **Advanced/General** > **Configuration Parameters**.

1. The option `ctkEnabled` needs to equal `True`.

If it is turned on, ensure you are using up-to-date VMware features. The host must be ESXi 4.0 or later and the virtual machine owning the disks to be tracked must be hardware version 7 or later.

If CBT is turned on and the software and hardware are up to date, turn the virtual machine off and then on again. Verify that CBT is still turned on, and then perform the backup again.

## VMware backup failure
<a name="w2aac17c19c43c27b5"></a>

When a VMware backup fails, it may be related to one of the following:

**Failure message:** `"Failed to process backup data. Aborted backup job."` or `"Error opening disk on the virtual machine"`.

**Possible causes:** This error may occur because of a configuration issue, or because the VMware version or disk type isn't supported.

**Remedy 1:** Ensure your infrastructure is configured to use a gateway and ensure all required ports are open.

1. Access the [backup gateway console](https://docs.aws.amazon.com/storagegateway/latest/tgw/accessing-local-console.html#MaintenanceConsoleWindowVMware-common). Note this is different from the AWS Backup console.

1. On the **Backup gateway configuration** page enter option **3** to test the network connectivity.

1. If the network test is successful, enter **X**.

1. Return to the Backup gateway configuration page.

1. Enter **7** to access the command prompt.

1. Run the following commands to verify network connectivity:

   `ncport -d <ESXi host> -p 902`

   `ncport -d <ESXi host> -p 443`

**Remedy 2:** Use [Supported VMs](vm-backups.md#supported-vms) versions.

**Remedy 3:** If a gateway appliance is configured with incorrect DNS servers, then the backup fails. To verify the DNS configuration, complete the following steps:

1. Access the [backup gateway console](https://docs.aws.amazon.com/storagegateway/latest/tgw/accessing-local-console.html#MaintenanceConsoleWindowVMware-common).

1. On the **Backup gateway configuration** page enter option **2** to navigate to the network configuration.

1. In **Network configuration**, enter **7** to view the DNS configuration.

1. Review the DNS server IP addresses. If the DNS server IP addresses are incorrect, exit the prompt to return to **Network Configuration**.

1. In **Network Configuration**, enter **6** to edit the DNS configuration.

1. Enter the correct DNS server IP addresses. Then, enter **X** to complete your network configuration.

To obtain more information about your hypervisor, such as errors and network configuration and connection, see [Editing a hypervisor configuration](working-with-hypervisors.md#edit-hypervisor) to configure the hypervisor to integrate with Amazon CloudWatch Logs.

## Backup failures from network connection issues
<a name="w2aac17c19c43c27b7"></a>

**Failure message:** `"Failed to upload backup during data ingestion. Aborted backup job."` or `"Cloud network request timed out during data ingestion"`.

**Possible causes:** This error can occur if the network connection is insufficient to handle data uploads. If network bandwidth is low, the link between the VM and AWS Backup can become congested and cause backups to fail.

Required network bandwidth depends on several factors, including the size of the VM, the incremental data generated for each VM backup, the backup window, and restore requirements.

**Remedy:** Best practices and recommendations include a minimum upload bandwidth of 1000 Mbps for on-premises VMs connected to AWS Backup. Once the bandwidth is confirmed, retry the backup job.
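As a rough sizing check, you can estimate whether a given link can move a backup's data within the backup window. This sketch uses illustrative numbers and a simplified model (it ignores protocol overhead and incremental-change variability), not AWS-published formulas:

```python
def transfer_hours(data_gb: float, bandwidth_mbps: float) -> float:
    """Estimate hours to upload `data_gb` gigabytes over a link of
    `bandwidth_mbps` megabits per second (decimal units, no overhead)."""
    data_megabits = data_gb * 8 * 1000  # GB -> megabits
    return data_megabits / bandwidth_mbps / 3600

def fits_window(data_gb: float, bandwidth_mbps: float, window_hours: float) -> bool:
    """True if the upload is expected to finish within the backup window."""
    return transfer_hours(data_gb, bandwidth_mbps) <= window_hours

# Example: 500 GB of backup data over a 1000 Mbps link with an 8-hour window
print(fits_window(500, 1000, 8))  # ~1.1 hours of transfer -> True
```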

## Aborted backup job
<a name="w2aac17c19c43c27b9"></a>

**Failure message:** `"Failed to create backup during snapshot creation. Aborted backup job."`

**Possible cause:** The VMware host where the gateway appliance resides may have an issue.

**Remedy:** Check the configuration of your VMware host and review it for issues. For additional information, see [Editing a hypervisor configuration](working-with-hypervisors.md#edit-hypervisor).

## No available gateways
<a name="w2aac17c19c43c27c11"></a>

**Failure message:** `"No gateways available to work on job."`

**Possible cause:** All connected gateways are busy with other jobs. Each gateway has a limit of four concurrent jobs (backup or restore).

For **remedies**, see the next section for steps to increase the number of gateways and to extend the backup plan window.
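Because each gateway processes at most four concurrent jobs, a back-of-the-envelope gateway count for a VM fleet can be sketched as follows. The four-job limit comes from this guide; the even-distribution scheduling model is a simplifying assumption:

```python
import math

MAX_JOBS_PER_GATEWAY = 4  # concurrent backup/restore jobs per gateway (per this guide)

def gateways_needed(vm_count: int, avg_job_hours: float, window_hours: float) -> int:
    """Estimate gateways required so every VM backup job can start and
    finish inside the backup window. Assumes jobs are evenly distributed
    and each gateway runs MAX_JOBS_PER_GATEWAY jobs at a time."""
    rounds_per_window = max(1, int(window_hours // avg_job_hours))
    vms_per_gateway = MAX_JOBS_PER_GATEWAY * rounds_per_window
    return math.ceil(vm_count / vms_per_gateway)

# Example: 60 VMs, 2-hour jobs, 8-hour window -> 4 rounds of 4 jobs = 16 VMs/gateway
print(gateways_needed(60, 2, 8))  # 4
```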

## VMware backup job failure
<a name="w2aac17c19c43c27c13"></a>

**Failure message:** `"Abort signal detected"`

**Possible causes:**
+ **Low Network Bandwidth**: Insufficient network bandwidth can impede the completion of backups within the completion window. When the backup job requires more bandwidth than available, it can result in failure and trigger the "Abort Signal Detected" error.
+ **Inadequate Number of Backup Gateways**: If the number of backup gateways is not sufficient to handle the backup rotation for all the configured VMs, the backup job may fail. This can occur when the backup plan's window for completing backups is too short or the number of backup gateways are not enough.
+ **Backup plan completion window is too small**: Queued jobs cannot start before the window closes.

**Remedies:**

**Increase bandwidth:** Consider increasing the network capacity between AWS and the on-premises environment. This step provides more bandwidth for the backup process, allowing data to transfer smoothly without triggering the error. It is recommended you have at least 100-Mbps bandwidth to AWS to back up on-premises VMware VMs using AWS Backup.

If a bandwidth rate limit is configured for the backup gateway, it can restrict the flow of data and lead to backup failures. Increasing the bandwidth rate limit to ensure sufficient data transfer capacity may help reduce failures. This adjustment can mitigate the occurrence of the "Abort Signal Detected" error. For more information, see [Backup gateway Bandwidth Throttling](working-with-gateways.md#backup-gateway-bandwidth-throttling).

**Increase the number of Backup gateways:** A single backup gateway can process up to 4 backup and restore jobs at a time. Additional jobs queue and wait for the gateway to free up. If the backup window passes before a queued job starts, that backup job fails with `"Abort signal detected"`. You can increase the number of backup gateways to reduce the number of failed jobs. See [Working with gateways](working-with-gateways.md) for more detail.

**Increase backup plan window time:** You can increase the **complete within duration** of the backup window in your backup plan. See [Backup plan options and configuration](plan-options-and-configuration.md) for more detail.

For help resolving these issues, see [AWS Knowledge Center](https://repost.aws/knowledge-center/backup-troubleshoot-vmware-backups).

# Create Windows VSS backups
<a name="windows-backups"></a>

With AWS Backup, you can back up and restore VSS (Volume Shadow Copy Service)-enabled Windows applications running on Amazon EC2 instances. If the application has a VSS writer registered with Windows VSS, then AWS Backup creates a snapshot that is consistent for that application.

You can perform consistent restores, while using the same managed backup service that is used to protect other AWS resources. With application-consistent Windows backups on EC2, you get the same consistency settings and application awareness as traditional backup tools.

**Note**  
AWS Backup only supports application-consistent backups of resources running on Amazon EC2, specifically backup scenarios where application data can be restored by replacing an existing instance with a new instance created from the backup. Not all instance types or applications are supported for Windows VSS backups. 

For more information, see [Create VSS based snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-vss-snaps.html) in the *Amazon EC2 User Guide*.

To back up and restore VSS-enabled Windows resources running on Amazon EC2, follow these steps to complete the required prerequisite tasks. For instructions, see [ Prerequisites to create Windows VSS based EBS snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/application-consistent-snapshots-prereqs.html) in the *Amazon EC2 User Guide*.

1. Download, install, and configure the SSM agent in AWS Systems Manager. This step is required. For instructions, see [Working with SSM agent on EC2 instances for Windows Server](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-windows.html) in the *AWS Systems Manager User Guide*.

1. Add an IAM policy to the IAM role and attach the role to the Amazon EC2 instance before you take the Windows VSS (Volume Shadow Copy Service) backup. For instructions, see [Use an IAM managed policy to grant permissions for VSS based snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vss-iam-reqs.html) in the *Amazon EC2 User Guide*. For an example of the IAM policy, see [Managed policies for AWS Backup](security-iam-awsmanpol.md).

1. [ Download and install VSS components](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/application-consistent-snapshots-getting-started.html) to the Windows instance on Amazon EC2.

1. Enable VSS in AWS Backup:

   1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

   1. On the dashboard, choose the type of backup you want to create, either **Create an on-demand backup** or **Manage Backup plans**. Provide the information needed for your backup type.

   1. When you're assigning resources, choose **EC2**. Windows VSS backup is currently supported for EC2 instances only. 

   1. In the **Advanced settings** section, choose **Windows VSS**. This enables you to take application-consistent Windows VSS backups. 

   1. Create your backup.
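Outside the console, the same setting can be expressed in a backup plan's advanced settings. The fragment below is a sketch based on the `AdvancedBackupSettings` field of the AWS Backup API; verify the exact shape against the API reference before use:

```json
{
  "AdvancedBackupSettings": [
    {
      "ResourceType": "EC2",
      "BackupOptions": {
        "WindowsVSS": "enabled"
      }
    }
  ]
}
```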

A backup job with a status of `Completed` does not guarantee that the VSS portion is successful; VSS inclusion is made on a best-effort basis. Proceed with the following steps to determine if a backup is application-consistent, crash-consistent, or failed:

1. Open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).

1. Under **My account** in the left navigation, choose **Jobs**.

1. A status of `Completed` indicates a successful job that is application-consistent (VSS).

   A status of `Completed with issues` indicates that the VSS operation has failed, so only a crash-consistent backup has been successful. This status will also have a popover message `"Windows VSS Backup Job Error encountered, trying for regular backup"`. 

   If the backup was unsuccessful, the status will be `Failed`.

1. To view additional details of the backup job, choose the individual job. For example, the details may read `Windows VSS Backup attempt failed because of timeout on VSS enabled snapshot creation`.
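The status checks above can be sketched as a small helper. The status strings come from this guide; the function itself is illustrative and not part of any AWS SDK:

```python
def vss_consistency(status: str) -> str:
    """Map an AWS Backup job status to the consistency level this guide
    describes for Windows VSS backups on EC2."""
    if status == "Completed":
        return "application-consistent"  # VSS portion succeeded
    if status == "Completed with issues":
        return "crash-consistent"        # VSS failed; regular backup succeeded
    if status == "Failed":
        return "failed"
    return "unknown"

print(vss_consistency("Completed with issues"))  # crash-consistent
```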

If a VSS-enabled backup targets a non-Windows instance, or a Windows instance without the VSS components installed, a successful job produces a crash-consistent backup without VSS.

## Unsupported Amazon EC2 instances
<a name="unsupported-vss-instances"></a>

The following Amazon EC2 instance types are not supported for VSS-enabled Windows backups because these small instance types might not complete the backup successfully.
+ t3.nano
+ t3.micro
+ t3a.nano
+ t3a.micro
+ t2.nano
+ t2.micro