

For similar capabilities to Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics. Learn more [here](https://docs.aws.amazon.com//timestream/latest/developerguide/timestream-for-influxdb.html).

# Working with AWS Backup
<a name="backups"></a>

The data protection functionality in Amazon Timestream for LiveAnalytics is a fully managed solution to help you meet your regulatory compliance and business continuity requirements. The functionality is enabled through native integration with AWS Backup, a unified backup service designed to simplify the creation, migration, restoration, and deletion of backups, while providing improved reporting and auditing. Through integration with AWS Backup, you can use a fully managed, policy-driven centralized data protection solution to create immutable backups and centrally manage data protection of your application data spanning Timestream and other AWS services supported by AWS Backup.

To use the functionality, you must [opt-in](https://docs.aws.amazon.com/aws-backup/latest/devguide/service-opt-in.html) to allow AWS Backup to protect your Timestream resources. Opt-in choices apply to the specific account and AWS Region, so you might have to opt in to multiple Regions using the same account. For more information on AWS Backup, see the [AWS Backup Developer Guide](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html).
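The opt-in can also be performed programmatically. The following is a minimal sketch using the AWS Backup `UpdateRegionSettings` API via boto3; the `"Timestream"` resource-type key is an assumption about how the service appears in the region settings, so verify it against `DescribeRegionSettings` output in your account.

```python
"""Sketch: opting in to AWS Backup protection for Timestream resources.

Assumes the AWS Backup UpdateRegionSettings API via boto3. The
"Timestream" resource-type key is an assumption; confirm the exact key
with DescribeRegionSettings in your account.
"""

def optin_request(resource_type="Timestream", enabled=True):
    """Build the ResourceTypeOptInPreference payload for UpdateRegionSettings."""
    return {"ResourceTypeOptInPreference": {resource_type: enabled}}

if __name__ == "__main__":
    import boto3  # requires AWS credentials and a default Region configured
    backup = boto3.client("backup")
    # Opt-in applies to this account and Region only; repeat per Region.
    backup.update_region_settings(**optin_request())
    print(backup.describe_region_settings()["ResourceTypeOptInPreference"])
```

Because opt-in choices are per account and per Region, you would run this once in each Region where you want AWS Backup to protect Timestream resources.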

Data protection functionality available through AWS Backup includes the following.

**Scheduled backups**—You can set up regularly scheduled backups of your Timestream for LiveAnalytics tables using backup plans.

**Cross-account and cross-Region copying**—You can automatically copy your backups to another backup vault in a different AWS Region or account, which allows you to support your data protection requirements.

**Cold storage tiering**—You can configure your backups to implement life cycle rules to delete or transition backups to colder storage. This can help you optimize your backup costs.

**Tags**—You can automatically tag your backups for billing and cost allocation purposes.

**Encryption**—Your backup data is stored in the AWS Backup vault. This allows you to encrypt and secure your backups by using an AWS KMS key that is independent from your Timestream for LiveAnalytics table encryption key.

**Secure backups using the WORM model**—You can use AWS Backup Vault Lock to enable a write-once-read-many (WORM) setting for your backups. With AWS Backup Vault Lock, you can add an additional layer of defense that protects backups from inadvertent or malicious delete operations, changes to backup retention periods, and updates to lifecycle settings. To learn more, see [AWS Backup Vault Lock](https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html).

The data protection functionality is available in all Regions. To learn more about the functionality, see the [AWS Backup Developer Guide](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html).

# Backing up and restoring Timestream tables: How it works
<a name="backups-how-it-works"></a>

You can create backups of your Amazon Timestream tables. This section provides an overview of what happens during the backup and restore process.

**Topics**
+ [Backups](#backups-backups)
+ [Restores](#backups-restores)

## Backups
<a name="backups-backups"></a>

You can use the on-demand backup feature to create full backups of your Amazon Timestream for LiveAnalytics tables.

You can create a backup of your Timestream data at table granularity. You can initiate a backup of the selected table from the Timestream console, or from the AWS Backup console, SDK, or AWS CLI. The backup is created asynchronously, and all data ingested into the table before the backup initiation time is included in the backup. Some of the data ingested while the backup is in progress might also be included. To protect your data, you can either create a one-time on-demand backup or schedule recurring backups of your table.
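Initiating an on-demand backup through the SDK can be sketched with the AWS Backup `StartBackupJob` API. The vault name, role ARN, account ID, and database/table names below are illustrative placeholders, and the Timestream table ARN format is assembled from standard AWS ARN conventions, so verify it for your resources.

```python
"""Sketch: starting an on-demand backup of a Timestream table with the
AWS Backup StartBackupJob API. Vault, role, account, and table names are
placeholders; the ARN format is an assumption based on AWS conventions."""

def table_arn(region, account_id, database, table):
    """Assemble a Timestream table ARN."""
    return (f"arn:aws:timestream:{region}:{account_id}:"
            f"database/{database}/table/{table}")

if __name__ == "__main__":
    import boto3  # requires AWS credentials
    backup = boto3.client("backup")
    job = backup.start_backup_job(
        BackupVaultName="my-vault",  # an existing backup vault
        ResourceArn=table_arn("us-east-1", "123456789012",
                              "sensordb", "readings"),
        IamRoleArn="arn:aws:iam::123456789012:role/BackupRole",
    )
    print("Backup job started:", job["BackupJobId"])
```

The job runs asynchronously; you can poll its status with `DescribeBackupJob` using the returned job ID.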

While a backup is in progress, you cannot do the following.
+ Pause or cancel the backup operation.
+ Delete the source table of the backup.
+ Disable backups on a table if a backup for that table is in progress.

Once configured, AWS Backup provides automated backup schedules, retention management, and lifecycle management, removing the need for custom scripts and manual processes. For more information, see the [AWS Backup Developer Guide](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html).

All Timestream for LiveAnalytics backups are incremental: the first backup of a table is a full backup, and every subsequent backup of the same table copies only the changes to the data since the last backup. Because data in Timestream for LiveAnalytics is stored in a collection of partitions, each subsequent backup copies all partitions that changed since the last backup, whether from ingesting new data or from updates to existing data.

If you are using the Timestream for LiveAnalytics console, the backups created for all resources in the account are listed on the **Backups** tab. The backups are also listed in the table details.

## Restores
<a name="backups-restores"></a>

You can restore a table from the Timestream for LiveAnalytics console, or from the AWS Backup console, SDK, or AWS CLI. You can either restore all of the data from your backup, or configure the table retention settings to restore select data. When you initiate a restore, you can configure the following table settings.
+ Database name
+ Table name
+ Memory store retention
+ Magnetic store retention
+ Enable magnetic storage writes
+ S3 error logs location (optional)
+ IAM role that AWS Backup will assume when restoring the backup

The preceding configurations are independent of the source table. To restore all the data in your backup, we recommend that you configure the new table settings so that the sum of the memory store retention period and the magnetic store retention period is greater than the difference between the oldest timestamp and now. When you restore a backup that is incremental, all data (the incremental backup plus the underlying full backup) is restored. Upon successful restore, the table is in the active state and you can perform ingestion and query operations on the restored table. However, you cannot perform these operations while the restore is in progress. Once restored, the table is similar to any other table in your account.

**Example Restore all the data from a backup**  
This example has the following assumptions.  
+ *Oldest timestamp*—`August 1, 2021 0:00:00`
+ *Now*—`November 9, 2022 0:00:00`

To restore all data from a backup, enter and compare values as follows.  

1. Enter **Memory store retention** and **Magnetic store retention**. For example, assume these values.
   + *Memory store retention*—12 hours
   + *Magnetic store retention*—500 days

1. Find the sum of **Memory store retention** and **Magnetic store retention**.

   ```
   12 hours + (500 * 24 hours) =
   12 hours + 12,000 hours =
   12,012 hours
   ```

1. Find the difference between **Oldest timestamp** and now.

   ```
   November 9, 2022 0:00:00 - August 1, 2021 0:00:00 =
   465 days =
   465 * 24 hours =
   11,160 hours
   ```

1. Ensure the sum of retention values in the second step is greater than the difference of times in the third step. Adjust the retention times if necessary.

   ```
   12,012 > 11,160
   true
   ```
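The comparison above can be expressed as a small helper; this is a minimal sketch of the same arithmetic, not an API call, using the example's timestamps.

```python
"""Sketch: check whether a restored table's retention settings span all
data in a backup, mirroring the worked example above."""
from datetime import datetime

def covers_all_data(memory_hours, magnetic_days, oldest, now):
    """True if memory + magnetic retention exceeds the age of the oldest data."""
    retention_hours = memory_hours + magnetic_days * 24
    data_age_hours = (now - oldest).total_seconds() / 3600
    return retention_hours > data_age_hours

oldest = datetime(2021, 8, 1)   # oldest timestamp in the backup
now = datetime(2022, 11, 9)
print(covers_all_data(12, 500, oldest, now))  # 12,012 h > 11,160 h -> True
```

If the check returns `False`, increase the magnetic store retention until the sum exceeds the age of the oldest data.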

**Example Restore select data from a backup**  
This example has the following assumption.  
+ *Now*—`November 9, 2022 0:00:00`

To restore only select data from a backup, enter and compare values as follows.  

1. Determine the earliest timestamp required. For example, assume `December 4, 2021 0:00:00`.

1. Find the difference between the earliest timestamp required and now.

   ```
   November 9, 2022 0:00:00 - December 4, 2021 0:00:00 =
   340 days =
   340 * 24 hours =
   8,160 hours
   ```

1. Enter the desired value for **Memory store retention**. For example, enter 12 hours.

1. Subtract the value from the difference in the second step.

   ```
   8,160 hours - 12 hours =
8,148 hours
   ```

1. Enter that value for **Magnetic store retention**.
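The steps above reduce to one subtraction; a minimal sketch of that calculation, using the example's values:

```python
"""Sketch: compute the magnetic store retention (in hours) needed to
restore data back to a chosen earliest timestamp, per the steps above."""
from datetime import datetime

def magnetic_retention_hours(earliest_required, now, memory_hours=12):
    """Hours of magnetic retention so the restored table keeps data back
    to `earliest_required`, given the memory store retention."""
    total_hours = (now - earliest_required).total_seconds() / 3600
    return total_hours - memory_hours

now = datetime(2022, 11, 9)
earliest = datetime(2021, 12, 4)   # earliest timestamp to keep
print(magnetic_retention_hours(earliest, now))  # 8,160 - 12 = 8,148 hours
```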

You can copy a backup of your Timestream for LiveAnalytics table data to a different AWS Region and then restore it in that new Region. You can copy and then restore backups between AWS commercial Regions, and AWS GovCloud (US) Regions. You pay only for the data you copy from the source Region and the data you restore to a new table in the destination Region.

Once the table is restored, you must manually set up the following on the restored table.
+ AWS Identity and Access Management (IAM) policies
+ Tags
+ Scheduled Queries

Restore times are directly related to the configuration of your tables. These include the size of your tables, the number of underlying partitions, the amount of data restored to memory store, and other variables. A best practice when planning for disaster recovery is to regularly document average restore completion times and establish how these times affect your overall Recovery Time Objective (RTO).
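Documenting average restore times can be done from restore-job records; the sketch below assumes records shaped like the AWS Backup `ListRestoreJobs` output (`CreationDate`, `CompletionDate`, `Status` fields) and uses synthetic data for illustration.

```python
"""Sketch: estimate average restore completion time for RTO planning.
Job records are assumed to follow the AWS Backup ListRestoreJobs shape;
the sample data below is synthetic."""
from datetime import datetime

def average_restore_minutes(jobs):
    """Mean duration of completed restore jobs, in minutes (None if no jobs)."""
    durations = [(j["CompletionDate"] - j["CreationDate"]).total_seconds() / 60
                 for j in jobs if j.get("Status") == "COMPLETED"]
    return sum(durations) / len(durations) if durations else None

jobs = [  # synthetic records for illustration
    {"Status": "COMPLETED",
     "CreationDate": datetime(2023, 1, 1, 10, 0),
     "CompletionDate": datetime(2023, 1, 1, 10, 42)},
    {"Status": "COMPLETED",
     "CreationDate": datetime(2023, 1, 8, 10, 0),
     "CompletionDate": datetime(2023, 1, 8, 10, 38)},
]
print(average_restore_minutes(jobs))  # → 40.0
```

In practice you would feed this the `RestoreJobs` list returned by `boto3.client("backup").list_restore_jobs()` and track the result against your RTO.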

All backup and restore console and API actions are captured and recorded in AWS CloudTrail for logging, continuous monitoring, and auditing.

# Creating backups of Amazon Timestream tables
<a name="backups-creating"></a>

This section describes how to enable AWS Backup and create on-demand and scheduled backups for Amazon Timestream.

**Topics**
+ [Enabling AWS Backup to protect Timestream for LiveAnalytics data](#backups-enabling)
+ [Creating on-demand backups](#backups-on-demand)
+ [Scheduled backups](#backups-scheduled)

## Enabling AWS Backup to protect Timestream for LiveAnalytics data
<a name="backups-enabling"></a>

You must enable AWS Backup to use it with Timestream for LiveAnalytics.

To enable AWS Backup in the Timestream for LiveAnalytics console, perform the following steps.

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/timestream).

1. A pop-up banner appears at the top of your Timestream for LiveAnalytics dashboard page prompting you to enable AWS Backup support for Timestream for LiveAnalytics data. Otherwise, from the navigation pane, choose **Backups**.

1. In the **Backup** window, you will see the banner to enable AWS Backup. Choose **Enable**.

   Data Protection through AWS Backup is now available for your Timestream for LiveAnalytics tables.

To enable the integration through AWS Backup instead, refer to the AWS Backup documentation for console and programmatic instructions.

If you choose to disable AWS Backup from protecting your Timestream for LiveAnalytics data after it has been enabled, sign in to the AWS Backup console and turn the toggle off.

If you can't enable or disable the AWS Backup features, your AWS administrator might need to perform those actions.

## Creating on-demand backups
<a name="backups-on-demand"></a>

To create an on-demand backup of a Timestream for LiveAnalytics table, follow these steps.

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/timestream).

1. In the navigation pane on the left side of the console, choose **Backups**.

1. Choose **Create on-demand backup**.

1. Continue to select the settings in the backup window.

1. You can either choose to create a backup now, which initiates a backup immediately, or select a backup window to start the backup.

1. Select the lifecycle management policy for your backup. You can transition your backup data into cold storage, where you must retain the backup for a minimum of 90 days. Set the required retention period for your backup. You can either select an existing vault or select **Create new backup vault** to navigate to the AWS Backup console and create a new backup vault.

1. Select the appropriate IAM role.

1. If you want to assign one or more tags to your on-demand backup, enter a **key** and optional **value**, and choose **Add tag**.

1. Choose **Create on-demand backup**. This takes you to the **Backup** page, where you will see a list of jobs.

1. Choose the **Backup job ID** for the resource that you chose to back up to see the details of that job.

## Scheduled backups
<a name="backups-scheduled"></a>

To schedule a backup, refer to [Create a scheduled backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/create-a-scheduled-backup.html).
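A scheduled backup can also be sketched programmatically with the AWS Backup `CreateBackupPlan` and `CreateBackupSelection` APIs. The plan name, vault name, role ARN, table ARN, and the daily 05:00 UTC schedule below are illustrative assumptions.

```python
"""Sketch: a scheduled backup plan for Timestream tables via the AWS
Backup CreateBackupPlan / CreateBackupSelection APIs. Names, ARNs, and
the schedule are placeholders."""

def daily_plan(plan_name, vault_name, retention_days=35):
    """Build a CreateBackupPlan request body with one daily rule."""
    return {
        "BackupPlanName": plan_name,
        "Rules": [{
            "RuleName": "DailyBackups",
            "TargetBackupVaultName": vault_name,
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": retention_days},
        }],
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials
    backup = boto3.client("backup")
    plan = backup.create_backup_plan(
        BackupPlan=daily_plan("timestream-daily", "my-vault"))
    # Assign Timestream tables to the plan by ARN.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "timestream-tables",
            "IamRoleArn": "arn:aws:iam::123456789012:role/BackupRole",
            "Resources": ["arn:aws:timestream:us-east-1:123456789012:"
                          "database/sensordb/table/readings"],
        },
    )
```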

# Restoring a backup of an Amazon Timestream table
<a name="backups-restoring"></a>

This section describes how to restore a backup of an Amazon Timestream table.

**Topics**
+ [Restoring a Timestream for LiveAnalytics table from AWS Backup](#backups-restoring-from)
+ [Restoring a Timestream for LiveAnalytics table to another Region or account](#backups-restoring-to)

## Restoring a Timestream for LiveAnalytics table from AWS Backup
<a name="backups-restoring-from"></a>

To restore your Timestream for LiveAnalytics table from AWS Backup using the Timestream for LiveAnalytics console, follow these steps.

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/timestream).

1. In the navigation pane on the left side of the console, choose **Backups**.

1. To restore a resource, choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, choose **Restore**.

1. Enter the table configuration settings, namely **Database name** and **Table name**. Note that the restored table name must be different from the original source table name.

1. Configure the memory and magnetic store retention settings.

1. For **Restore role**, choose the IAM role that AWS Backup will assume for this restore.

1. Choose **Restore backup**. A message at the top of the page provides information about the restore job.

**Note**  
You are charged for restoring the entire backup irrespective of the configured memory and magnetic store retention periods. However, once the restore is completed, your restored table will only contain the data within the configured retention periods.
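A restore can also be initiated with the AWS Backup `StartRestoreJob` API. The following is a sketch only: the recovery point ARN and role ARN are placeholders, and the `Metadata` keys mirroring the console settings (database name, table name, retention periods) are assumptions rather than documented key names, so confirm them with `GetRecoveryPointRestoreMetadata` before relying on them.

```python
"""Sketch: restoring a Timestream table from a recovery point with the
AWS Backup StartRestoreJob API. The Metadata key names are assumptions,
not documented values; ARNs are placeholders."""

def restore_metadata(database, table, memory_hours, magnetic_days):
    """Illustrative restore settings; actual key names may differ."""
    return {
        "DatabaseName": database,
        "TableName": table,  # must differ from the source table name
        "MemoryStoreRetentionPeriodInHours": str(memory_hours),
        "MagneticStoreRetentionPeriodInDays": str(magnetic_days),
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials
    backup = boto3.client("backup")
    job = backup.start_restore_job(
        RecoveryPointArn=("arn:aws:backup:us-east-1:123456789012:"
                          "recovery-point:example"),
        IamRoleArn="arn:aws:iam::123456789012:role/BackupRestoreRole",
        Metadata=restore_metadata("sensordb", "readings_restored", 12, 500),
    )
    print("Restore job:", job["RestoreJobId"])
```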

## Restoring a Timestream for LiveAnalytics table to another Region or account
<a name="backups-restoring-to"></a>

To restore a Timestream for LiveAnalytics table to another Region or account, you will first need to copy the backup to that new Region or account. In order to copy to another account, that account must first grant you permission. After you have copied your Timestream for LiveAnalytics backup to the new Region or account, it can be restored with the process in the previous section.

# Copying a backup of an Amazon Timestream table
<a name="backups-copying"></a>

You can make a copy of a current backup. You can copy backups to multiple AWS accounts or AWS Regions on demand or automatically as part of a scheduled backup plan. Cross-Region replication is especially valuable if you have business continuity or compliance requirements to store backups a minimum distance away from your production data.

Cross-account backups are useful for securely copying your backups to one or more AWS accounts in your organization for operational or security reasons. If your original backup is inadvertently deleted, you can copy the backup from its destination account to its source account, and then start the restore. Before you can do this, you must have two accounts that belong to the same organization in the Organizations service and required permissions for the accounts. When you copy an incremental backup into another account or Region, the associated full backup is also copied.

Copies inherit the source backup's configuration unless you specify otherwise, with one exception: if you specify that your new copy should "Never" expire, the copy still inherits its source's expiration date. If you want your new backup copy to be permanent, either set your source backups to never expire, or specify that your new copy expires 100 years after its creation.
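A copy to another Region or account can be sketched with the AWS Backup `StartCopyJob` API; the recovery point ARN, vault names, and role ARN below are placeholders.

```python
"""Sketch: copying a recovery point to a vault in another Region or
account with the AWS Backup StartCopyJob API. All ARNs and vault names
are placeholders."""

def copy_request(recovery_point_arn, source_vault, dest_vault_arn, role_arn):
    """Build the StartCopyJob parameters."""
    return {
        "RecoveryPointArn": recovery_point_arn,
        "SourceBackupVaultName": source_vault,
        "DestinationBackupVaultArn": dest_vault_arn,
        "IamRoleArn": role_arn,
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials
    backup = boto3.client("backup")
    job = backup.start_copy_job(**copy_request(
        "arn:aws:backup:us-east-1:123456789012:recovery-point:example",
        "my-vault",  # vault holding the source recovery point
        "arn:aws:backup:eu-west-1:123456789012:backup-vault:dr-vault",
        "arn:aws:iam::123456789012:role/BackupCopyRole",
    ))
    print("Copy job:", job["CopyJobId"])
```

For a cross-account copy, the destination vault ARN belongs to the other account, which must have granted copy permission on that vault first.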

To copy a backup from Timestream console, follow these steps.

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/timestream).

1. In the navigation pane on the left side of the console, choose **Backups**.

1. Choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, select **Actions** and choose **Copy**.

1. Select **Continue to AWS Backup** and follow the steps for [Cross account backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-region-backup.html).

Copying on-demand and scheduled backups across accounts and Regions is not currently supported natively in the Timestream for LiveAnalytics console; you must navigate to AWS Backup to perform the operation.

# Deleting backups
<a name="backups-deleting"></a>

This section describes how to delete a backup of a Timestream for LiveAnalytics table.

To delete a backup from Timestream console, follow these steps.

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/timestream).

1. In the navigation pane on the left side of the console, choose **Backups**.

1. Choose the radio button next to the recovery point ID of the resource. In the upper-right corner of the pane, select **Actions** and choose **Delete**.

1. Select **Continue to AWS Backup** and follow the steps for deleting backups at [Deleting backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/deleting-backups.html).

**Note**  
When you delete a backup that is incremental, only the incremental backup is deleted and the underlying full backup is not deleted.

# Quota and limits
<a name="backups-limits"></a>

AWS Backup limits backups to one concurrent backup per resource. Therefore, additional scheduled or on-demand backup requests for the resource are queued and start only after the existing backup job is completed. If the backup job does not start or complete within the backup window, the request fails. For more information about AWS Backup limits, see [AWS Backup Limits](https://docs.aws.amazon.com/aws-backup/latest/devguide/aws-backup-limits.html) in the AWS Backup Developer Guide.

When creating backups, you can run up to four concurrent backups per account. Similarly, you can run one concurrent restore per account. When you initiate more than four backup jobs simultaneously, only four backup jobs start and the remaining jobs are retried periodically. Once started, if a backup job does not complete within the configured backup window duration, the backup job fails. If the failed backup job is an on-demand backup, you can retry it; for scheduled backups, the job is attempted again at the next scheduled run.