Amazon Managed Service for Apache Flink was previously known as Amazon Kinesis Data Analytics for Apache Flink.
Use CloudWatch Alarms with Amazon Managed Service for Apache Flink
Using Amazon CloudWatch metric alarms, you watch a CloudWatch metric over a time period that you specify. The alarm performs one or more actions based on the value of the metric or expression relative to a threshold over a number of time periods. An example of an action is sending a notification to an Amazon Simple Notification Service (Amazon SNS) topic.
For more information about CloudWatch alarms, see Using Amazon CloudWatch Alarms.
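For illustration, the following is a minimal sketch of creating such an alarm with the AWS SDK for Python (Boto3). It creates an alarm on the downtime metric discussed in the next section and notifies an Amazon SNS topic when the alarm fires. The application name, SNS topic ARN, and account and Region values are placeholders, and the AWS/KinesisAnalytics namespace and Application dimension are stated here as assumptions about how the service publishes its metrics to CloudWatch.

```python
# A minimal sketch, assuming Boto3 is installed and AWS credentials are configured.
# The application name, SNS topic ARN, and account/Region values are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the downtime metric rises above 0, which indicates that the
# application has failed and is not processing data.
cloudwatch.put_metric_alarm(
    AlarmName="flink-app-downtime",
    AlarmDescription="Managed Service for Apache Flink application is down",
    Namespace="AWS/KinesisAnalytics",  # assumed namespace for these metrics
    MetricName="downtime",
    Dimensions=[{"Name": "Application", "Value": "my-flink-application"}],
    Statistic="Average",
    Period=60,  # evaluate over 1-minute periods
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:flink-alerts"],  # placeholder
)
```

You can add further actions (for example, OKActions) or adjust the period and evaluation settings to match your environment.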
Review recommended alarms
This section contains the recommended alarms for monitoring Managed Service for Apache Flink applications.
The table describes the recommended alarms and has the following columns:
- Metric Expression: The metric or metric expression to test against the threshold.
- Statistic: The statistic used to check the metric (for example, Average).
- Threshold: The threshold that defines the limit of expected application performance. Determine this value by monitoring your application under normal conditions.
- Description: Causes that might trigger this alarm, and possible solutions for the condition.
| Metric Expression | Statistic | Threshold | Description |
| --- | --- | --- | --- |
| downtime > 0 | Average | 0 | Recommended for all applications. The downtime metric measures the duration of an outage. A downtime greater than zero indicates that the application has failed and is not processing any data. For troubleshooting, see Application is restarting. |
| RATE(numberOfFailedCheckpoints) > 0 | Average | 0 | Recommended for all applications. The numberOfFailedCheckpoints metric counts the number of failed checkpoints since the application started. Depending on the application, occasional checkpoint failures can be tolerable, but if checkpoints fail regularly, the application is likely unhealthy and needs further attention. We recommend monitoring RATE(numberOfFailedCheckpoints) to alarm on the gradient rather than on absolute values. Use this metric to monitor application health and checkpointing progress. The application saves state data to checkpoints when it's healthy. Checkpointing can fail due to timeouts if the application isn't making progress in processing the input data. For troubleshooting, see Checkpointing is timing out. |
| Operator.numRecordsOutPerSecond < threshold | Average | The minimum number of records emitted from the application during normal conditions. | Recommended for all applications. Falling below this threshold can indicate that the application isn't making expected progress on the input data. For troubleshooting, see Throughput is too slow. |
| records_lag_max\|millisbehindLatest > threshold | Maximum | The maximum expected latency during normal conditions. | Recommended for all applications. If the application consumes from Kinesis or Kafka, these metrics indicate whether the application is falling behind and needs to be scaled to keep up with the current load. This is a good generic metric that is easy to track for all kinds of applications, but it can only be used for reactive scaling, that is, when the application has already fallen behind. Use the records_lag_max metric for a Kafka source, or millisbehindLatest for a Kinesis stream source. Rising above this threshold can indicate that the application isn't making expected progress on the input data. For troubleshooting, see Throughput is too slow. |
| lastCheckpointDuration > threshold | Maximum | The maximum expected checkpoint duration during normal conditions. | Monitors how much data is stored in state and how long it takes to take a checkpoint. If checkpoints grow or take longer, the application continuously spends time on checkpointing and has fewer cycles for actual processing. At some point, checkpoints might grow too large or take so long that they fail. In addition to monitoring absolute values, consider monitoring the change rate with RATE(lastCheckpointSize) and RATE(lastCheckpointDuration). If lastCheckpointDuration continuously increases, rising above this threshold can indicate that the application isn't making expected progress on the input data, or that there are problems with application health, such as backpressure. For troubleshooting, see Unbounded state growth. |
| lastCheckpointSize > threshold | Maximum | The maximum expected checkpoint size during normal conditions. | Monitors how much data is stored in state and how long it takes to take a checkpoint. If checkpoints grow or take longer, the application continuously spends time on checkpointing and has fewer cycles for actual processing. At some point, checkpoints might grow too large or take so long that they fail. In addition to monitoring absolute values, consider monitoring the change rate with RATE(lastCheckpointSize) and RATE(lastCheckpointDuration). If lastCheckpointSize continuously increases, rising above this threshold can indicate that the application is accumulating state data. If the state data grows too large, the application can run out of memory when recovering from a checkpoint, or recovering from a checkpoint might take too long. For troubleshooting, see Unbounded state growth. |
| heapMemoryUtilization > threshold | Maximum | The maximum expected heapMemoryUtilization during normal conditions, with a recommended value of 90 percent. | This metric gives a good indication of the overall resource utilization of the application and can be used for proactive scaling unless the application is I/O bound. You can use it to monitor the maximum memory utilization of task managers across the application. If the application reaches this threshold, you need to provision more resources, either by enabling automatic scaling or by increasing the application parallelism. For more information about increasing resources, see Implement application scaling in Managed Service for Apache Flink. |
| cpuUtilization > threshold | Maximum | The maximum expected cpuUtilization during normal conditions, with a recommended value of 80 percent. | This metric gives a good indication of the overall resource utilization of the application and can be used for proactive scaling unless the application is I/O bound. You can use it to monitor the maximum CPU utilization of task managers across the application. If the application reaches this threshold, you need to provision more resources, either by enabling automatic scaling or by increasing the application parallelism. For more information about increasing resources, see Implement application scaling in Managed Service for Apache Flink. |
| threadsCount > threshold | Maximum | The maximum expected threadsCount during normal conditions. | You can use this metric to watch for thread leaks in task managers across the application. If this metric reaches the threshold, check your application code for threads being created without being closed. |
| (oldGarbageCollectionTime * 100)/60_000 over 1 min period > threshold | Maximum | The maximum expected oldGarbageCollectionTime duration. We recommend setting a threshold such that typical garbage collection time is 60 percent of the specified threshold, but the correct threshold for your application will vary. | If this metric is continually increasing, it can indicate that there is a memory leak in task managers across the application. |
| RATE(oldGarbageCollectionCount) > threshold | Maximum | The maximum expected oldGarbageCollectionCount under normal conditions. The correct threshold for your application will vary. | If this metric is continually increasing, it can indicate that there is a memory leak in task managers across the application. |
| Operator.currentOutputWatermark - Operator.currentInputWatermark > threshold | Minimum | The minimum expected watermark increment under normal conditions. The correct threshold for your application will vary. | If this metric is continually increasing, it can indicate that either the application is processing increasingly older events, or that an upstream subtask has not sent a watermark in an increasingly long time. |
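Some of the alarms in the preceding table, such as RATE(numberOfFailedCheckpoints), are defined on a CloudWatch metric math expression rather than on a raw metric. The following sketch, under the same placeholder and namespace assumptions as the earlier example, shows one way to create an alarm on such an expression by passing the Metrics parameter to put_metric_alarm.

```python
# A minimal sketch of an alarm on a metric math expression, assuming the same
# placeholder application name, topic ARN, and AWS/KinesisAnalytics namespace.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the rate of failed checkpoints rather than the absolute count.
# "m1" pulls the raw metric; "e1" applies the RATE() metric math function.
cloudwatch.put_metric_alarm(
    AlarmName="flink-app-failed-checkpoint-rate",
    AlarmDescription="Checkpoints are failing repeatedly",
    Metrics=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/KinesisAnalytics",  # assumed namespace
                    "MetricName": "numberOfFailedCheckpoints",
                    "Dimensions": [
                        {"Name": "Application", "Value": "my-flink-application"}
                    ],
                },
                "Period": 60,
                "Stat": "Average",
            },
            "ReturnData": False,
        },
        {
            "Id": "e1",
            "Expression": "RATE(m1)",
            "Label": "Rate of failed checkpoints",
            "ReturnData": True,  # the alarm evaluates this expression
        },
    ],
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:flink-alerts"],  # placeholder
)
```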