

For similar capabilities to Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics. Learn more [here](https://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influxdb.html).

# Monitoring with Amazon CloudWatch
<a name="monitoring-cloudwatch"></a>

You can monitor Timestream for LiveAnalytics using Amazon CloudWatch, which collects and processes raw data from Timestream for LiveAnalytics into readable, near-real-time metrics. It records these statistics for two weeks so that you can access historical information and gain a better perspective on how your web application or service is performing. By default, Timestream for LiveAnalytics metric data is automatically sent to CloudWatch in 1-minute or 15-minute periods. For more information, see [What Is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatch.html) in the *Amazon CloudWatch User Guide*.

**Topics**
+ [How do I use Timestream for LiveAnalytics metrics?](#how-to-use-metrics)
+ [Timestream for LiveAnalytics metrics and dimensions](metrics-dimensions.md)
+ [Creating CloudWatch alarms to monitor Timestream for LiveAnalytics](creating-alarms.md)

## How do I use Timestream for LiveAnalytics metrics?
<a name="how-to-use-metrics"></a>

The metrics reported by Timestream for LiveAnalytics provide information that you can analyze in different ways. The following list shows some common uses for the metrics. These are suggestions to get you started, not a comprehensive list.



|  How can I?  |  Relevant metrics  | 
| --- | --- | 
|  How can I determine if any system errors occurred?  |  You can monitor `SystemErrors` to determine whether any requests resulted in a server error code. Typically, this metric should be equal to zero. If it isn't, you might want to investigate.  | 
|  How can I monitor the amount of data in the memory store?  |  You can monitor `MemoryCumulativeBytesMetered` over the specified time period to monitor the amount of data, in bytes, stored in the memory store. This metric is emitted every hour, and you can track the bytes stored at account as well as at database granularity. The memory store is metered in GB-hours (the cost of storing 1GB of data for one hour). Converting the hourly value of `MemoryCumulativeBytesMetered` to GB and multiplying it by the GB-hour pricing in your Region gives you the cost incurred for that hour. Dimensions: Operation (Storage), DatabaseName, Metric name  | 
|  How can I monitor the amount of data in the magnetic store?  |  You can monitor `MagneticCumulativeBytesMetered` over the specified time period to monitor the amount of data, in bytes, stored in the magnetic store. This metric is emitted every hour, and you can track the bytes stored at account as well as at database granularity. The magnetic store is metered in GB-months (the cost of storing 1GB of data for one month), so the hourly charge for 1GB is the GB-month pricing in your Region divided by the hours in a month. For example, if the value of `MagneticCumulativeBytesMetered` is 107374182400 bytes (100GB), then the hourly charge for 1GB of data in the magnetic store = \$0.03 (us-east-1 pricing) / (30.4 \* 24). Multiplying this value by the `MagneticCumulativeBytesMetered` value in GB gives approximately \$0.004 for that hour. Dimensions: Operation (Storage), DatabaseName, Metric name  | 
|  How can I monitor the data scanned by queries?  |  You can monitor `CumulativeBytesMetered` over the specified time period to monitor the data scanned, in bytes, by queries sent to Timestream for LiveAnalytics. This metric is emitted after query execution, and you can track the data scanned at account and database granularity. You can calculate the query cost for a particular period by multiplying the value of the metric, converted to GB, by the per-GB scanned pricing in your Region. The bytes scanned by scheduled queries are also accounted for in this metric. Dimensions: Operation (Query), DatabaseName, Metric name  | 
|  How can I monitor the data scanned by scheduled queries?  |  You can monitor `CumulativeBytesMetered` over the specified time period to monitor the data scanned, in bytes, by scheduled queries executed by Timestream for LiveAnalytics. This metric is emitted after query execution, and you can track the data scanned at account and database granularity. You can calculate the query cost for a particular period by multiplying the value of the metric, converted to GB, by the per-GB scanned pricing in your Region. The bytes metered are also accounted for in the query `CumulativeBytesMetered` metric. Dimensions: Operation (TriggeredScheduledQuery), DatabaseName, Metric name  | 
|  How can I monitor the number of records ingested?  |  You can monitor `NumberOfRecords` over the specified time period to monitor the number of records ingested. You can track the records ingested at account as well as at database granularity. You can also use this metric to monitor the writes made by scheduled queries when query results are written into a separate table. When using the `WriteRecords` API, the metric is emitted for each `WriteRecords` request, with the CloudWatch Operation dimension set to `WriteRecords`. When using the `BatchLoad` or `ScheduledQuery` APIs, the metric is emitted at intervals determined by the service until the task completes, with the Operation dimension set to `BatchLoad` or `ScheduledQuery`, depending on which API is used. Dimensions: Operation (WriteRecords, BatchLoad, or ScheduledQuery), DatabaseName, Metric name  | 
|  How can I monitor the cost of records ingested?  |  You can monitor `CumulativeBytesMetered` to monitor the number of bytes ingested that accrue cost. You can track the bytes ingested at account as well as at database granularity. Ingested records are metered in cumulative bytes; multiplying the value of `CumulativeBytesMetered` by the Writes pricing in your Region gives you the ingestion cost incurred. When using the `WriteRecords` API, this metric is emitted for each `WriteRecords` request, with the CloudWatch Operation dimension set to `WriteRecords`. When using the `BatchLoad` or `ScheduledQuery` APIs, the metric is emitted at intervals determined by the service until the task completes, with the Operation dimension set to `BatchLoad` or `ScheduledQuery`, depending on which API is used. Dimensions: Operation (WriteRecords, BatchLoad, or ScheduledQuery), DatabaseName, Metric name  | 
| How can I monitor the Timestream Compute Units (TCUs) used in my account? |  You can monitor `QueryTCU` over the desired time period to monitor the compute units provisioned in your account. This metric is emitted every 15 minutes. Units: `Count` Valid Statistics: Minimum, Maximum Metric: `ResourceCount` Dimensions: `Service: Timestream`, `Namespace: AWS/Usage`, `Resource: QueryTCU`, `Type: Resource`, `Class: OnDemand`  | 
| How can I monitor the number of provisioned Timestream Compute Units (TCUs) used in my account? |  Provisioned TCU is available only in the Asia Pacific (Mumbai) Region. You can monitor `QueryTCU` to monitor the number of provisioned TCUs used for the query workload in the account. This metric is emitted every minute during active query workload from the account. Units: `Count` Valid Statistics: Minimum, Maximum Metric: `ResourceCount` Dimensions: `Service: Timestream`, `Namespace: AWS/Usage`, `Resource: ProvisionedQueryTCU`, `Class: None`  | 
| How can I monitor the provisioned Timestream Compute Units (TCUs) used in my account? |  Provisioned TCU is available only in the Asia Pacific (Mumbai) Region. You can monitor `QueryTCU` over the specified time period to monitor the compute units consumed for the query workload in the account. This metric is emitted with the maximum and minimum compute units for every minute during active query workload from the account. Units: `Count` Valid Statistics: Minimum, Maximum Metric: `ResourceCount` Dimensions: `Service: Timestream`, `Namespace: AWS/Usage`, `Resource: QueryTCU`, `Class: Provisioned`  | 
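The storage and query cost calculations described in the table above can be sketched as a few helper functions. This is an illustrative sketch, not official billing logic: the \$0.03 per GB-month figure is the us-east-1 magnetic store price quoted in the table, and the other prices passed in are placeholders you would replace with the rates for your Region. The metric values themselves would come from CloudWatch (for example, via `GetMetricStatistics`).

```python
# Illustrative cost estimation from Timestream for LiveAnalytics CloudWatch
# metrics. Prices are placeholder assumptions; check the pricing page for
# your Region. Metric values are reported in bytes.

GIB = 1024 ** 3  # bytes per GB

def memory_store_hourly_cost(memory_bytes_metered, gb_hour_price):
    """Cost for one hour of memory store, from the hourly
    MemoryCumulativeBytesMetered value and GB-hour pricing."""
    return (memory_bytes_metered / GIB) * gb_hour_price

def magnetic_store_hourly_cost(magnetic_bytes_metered, gb_month_price,
                               days_per_month=30.4):
    """Cost for one hour of magnetic store: GB-month pricing divided by
    the hours in a month, times the stored GB."""
    gb_hour_price = gb_month_price / (days_per_month * 24)
    return (magnetic_bytes_metered / GIB) * gb_hour_price

def query_scan_cost(cumulative_bytes_metered, per_gb_scanned_price):
    """Query cost for a period from CumulativeBytesMetered
    (Operation: Query), times per-GB scanned pricing."""
    return (cumulative_bytes_metered / GIB) * per_gb_scanned_price

# Worked example from the table: 100GB in the magnetic store at the
# assumed us-east-1 price of $0.03 per GB-month.
hourly = magnetic_store_hourly_cost(100 * GIB, 0.03)
print(round(hourly, 3))  # 0.004
```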
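To retrieve the TCU metrics described in the last rows, you query the `AWS/Usage` namespace for the `ResourceCount` metric with the dimensions listed in the table. The following is a minimal sketch using boto3; the actual `get_metric_statistics` call is shown commented out because it requires AWS credentials, and the parameter-building helper mirrors the on-demand dimensions above (the provisioned variants use `Resource: ProvisionedQueryTCU` or `Class: Provisioned` as listed in the table).

```python
from datetime import datetime, timedelta, timezone

def tcu_usage_params(resource="QueryTCU", class_value="OnDemand", period=900):
    """Build GetMetricStatistics parameters covering the last hour of TCU
    usage. Defaults mirror the on-demand row in the table; pass
    resource="ProvisionedQueryTCU" or class_value="Provisioned" for the
    provisioned variants."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/Usage",
        "MetricName": "ResourceCount",
        "Dimensions": [
            {"Name": "Service", "Value": "Timestream"},
            {"Name": "Resource", "Value": resource},
            {"Name": "Type", "Value": "Resource"},
            {"Name": "Class", "Value": class_value},
        ],
        "StartTime": end - timedelta(hours=1),
        "EndTime": end,
        "Period": period,  # the on-demand metric is emitted every 15 minutes
        "Statistics": ["Minimum", "Maximum"],
    }

# With credentials configured, the call would look like:
# import boto3
# cloudwatch = boto3.client("cloudwatch")
# response = cloudwatch.get_metric_statistics(**tcu_usage_params())
# for point in response["Datapoints"]:
#     print(point["Timestamp"], point["Maximum"])
```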