

# Working with Lambda function logs
<a name="monitoring-logs"></a>

To help you troubleshoot failures, AWS Lambda automatically monitors Lambda functions on your behalf. You can view logs for Lambda functions using the Lambda console, the CloudWatch console, the AWS Command Line Interface (AWS CLI), or the CloudWatch API. You can also configure Lambda to send logs to Amazon S3 or Firehose.

As long as your function's [execution role](lambda-intro-execution-role.md) has the necessary permissions, Lambda captures logs for all requests handled by your function and sends them to Amazon CloudWatch Logs, which is the default destination. You can also use the Lambda console to configure Amazon S3 or Firehose as logging destinations.
+ **CloudWatch Logs** is the default logging destination for Lambda functions. CloudWatch Logs provides real-time log viewing and analysis capabilities, with support for creating metrics and alarms based on your log data.
+ **Amazon S3** is economical for long-term storage, and services like Athena can be used to analyze logs. Latency is typically higher.
+ **Firehose** offers managed streaming of logs to various destinations. If you need to send logs to other AWS services (for example, OpenSearch Service or Amazon Redshift) or third-party platforms (like Datadog, New Relic, or Splunk), Firehose simplifies that process by providing pre-built integrations. You can also stream to custom HTTP endpoints without setting up additional infrastructure.

## Choosing a service destination to send logs to
<a name="choosing-log-destination"></a>

Consider the following key factors when choosing a destination for your function logs:
+ **Cost management varies by service.** Amazon S3 typically provides the most economical option for long-term storage, while CloudWatch Logs allows you to view logs, process logs, and set up alerts in real time. Firehose costs include both the streaming service and cost associated with what you configure it to stream to.
+ **Analysis capabilities differ across services.** CloudWatch Logs excels at real-time monitoring and integrates natively with other CloudWatch features, such as Logs Insights and Live Tail. Amazon S3 works well with analysis tools like Athena and can integrate with various services, though it may require additional setup. Firehose simplifies direct streaming to specific AWS services (like OpenSearch Service and Amazon Redshift) and supported third-party platforms (such as Datadog and Splunk) by providing pre-built integrations, potentially reducing configuration work.
+ **Setup and ease of use vary by service.** CloudWatch Logs is the default log destination - it works immediately with no additional configuration and provides straightforward log viewing and analysis through the CloudWatch console. If you need logs sent to Amazon S3, you'll need to do some initial setup in the Lambda console and configure bucket permissions. If you need logs sent directly to services like OpenSearch Service or third-party analytics platforms, Firehose can simplify that process.

## Configuring log destinations
<a name="configuring-log-destinations"></a>

AWS Lambda supports multiple destinations for your function logs. This guide explains the available logging destinations and helps you choose the right option for your needs. Regardless of your chosen destination, Lambda provides options to control log format, filtering, and delivery.

Lambda supports both JSON and plain text formats for your function's logs. JSON structured logs provide enhanced searchability and enable automated analysis, while plain text logs offer simplicity and potentially reduced storage costs. You can control which logs Lambda sends to your chosen destination by configuring log levels for both system and application logs. Filtering helps you manage storage costs and makes it easier to find relevant log entries during debugging.

For detailed setup instructions for each destination, refer to the following sections:
+ [Sending Lambda function logs to CloudWatch Logs](monitoring-cloudwatchlogs.md)
+ [Sending Lambda function logs to Firehose](logging-with-firehose.md)
+ [Sending Lambda function logs to Amazon S3](logging-with-s3.md)

## Configuring advanced logging controls for Lambda functions
<a name="monitoring-cloudwatchlogs-advanced"></a>

To give you more control over how your function logs are captured, processed, and consumed, Lambda offers the following logging configuration options:
+ **Log format** - select between plain text and structured JSON format for your function’s logs.
+ **Log level** - for JSON structured logs, choose the detail level of the logs Lambda sends to CloudWatch, such as `FATAL`, `ERROR`, `WARN`, `INFO`, `DEBUG`, and `TRACE`.
+ **Log group** - choose the CloudWatch log group your function sends logs to.

To learn more about configuring advanced logging controls, refer to the following sections:
+ [Configuring JSON and plain text log formats](monitoring-cloudwatchlogs-logformat.md)
+ [Log-level filtering](monitoring-cloudwatchlogs-log-level.md)
+ [Configuring CloudWatch log groups](monitoring-cloudwatchlogs-loggroups.md)

# Configuring JSON and plain text log formats
<a name="monitoring-cloudwatchlogs-logformat"></a>

Capturing your log outputs as JSON key value pairs makes it easier to search and filter when debugging your functions. With JSON formatted logs, you can also add tags and contextual information to your logs. This can help you to perform automated analysis of large volumes of log data. Unless your development workflow relies on existing tooling that consumes Lambda logs in plain text, we recommend that you select JSON for your log format.

**Lambda Managed Instances**  
Lambda Managed Instances only support JSON log format. When you create a Managed Instances function, Lambda automatically configures the log format to JSON and you cannot change it to plain text. For more information about Managed Instances, see [Lambda Managed Instances](lambda-managed-instances.md).

For all Lambda managed runtimes, you can choose whether your function's system logs are sent to CloudWatch Logs in unstructured plain text or JSON format. System logs are the logs that Lambda generates and are sometimes known as platform event logs.

For [supported runtimes](#monitoring-cloudwatchlogs-logformat-supported), when you use one of the supported built-in logging methods, Lambda can also output your function's application logs (the logs your function code generates) in structured JSON format. When you configure your function's log format for these runtimes, the configuration you choose applies to both system and application logs.

For supported runtimes, if your function uses a supported logging library or method, you don't need to make any changes to your existing code for Lambda to capture logs in structured JSON.
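For example, a Python function that uses the standard `logging` library needs no code changes for Lambda to capture its output as structured JSON. A minimal sketch (the handler name and event fields are illustrative):

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # With the JSON log format configured, Lambda captures each of these
    # calls as a structured JSON object -- no code changes needed.
    logger.info("Processing order %s", event.get("orderId"))
    logger.warning("Inventory low for item %s", event.get("itemId"))
    return {"statusCode": 200}
```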

**Note**  
Using JSON log formatting adds additional metadata and encodes log messages as JSON objects containing a series of key value pairs. Because of this, the size of your function's log messages can increase.

## Supported runtimes and logging methods
<a name="monitoring-cloudwatchlogs-logformat-supported"></a>

 Lambda currently supports the option to output JSON structured application logs for the following runtimes. 


| Language | Supported versions | 
| --- | --- | 
| Java | All Java runtimes except Java 8 on Amazon Linux 1 | 
| .NET | .NET 8 and later | 
| Node.js | Node.js 16 and later | 
| Python | Python 3.8 and later | 
| Rust | n/a | 

For Lambda to send your function's application logs to CloudWatch in structured JSON format, your function must use the following built-in logging tools to output logs:
+ **Java**: The `LambdaLogger` logger or Log4j2. For more information, see [Log and monitor Java Lambda functions](java-logging.md).
+ **.NET**: The `ILambdaLogger` instance on the context object. For more information, see [Log and monitor C# Lambda functions](csharp-logging.md).
+ **Node.js**: The console methods `console.trace`, `console.debug`, `console.log`, `console.info`, `console.error`, and `console.warn`. For more information, see [Log and monitor Node.js Lambda functions](nodejs-logging.md).
+ **Python**: The standard Python `logging` library. For more information, see [Log and monitor Python Lambda functions](python-logging.md).
+ **Rust**: The `tracing` crate. For more information, see [Log and monitor Rust Lambda functions](rust-logging.md).

For other managed Lambda runtimes, Lambda currently only natively supports capturing system logs in structured JSON format. However, you can still capture application logs in structured JSON format in any runtime by using logging tools such as Powertools for AWS Lambda that output JSON formatted log outputs.
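The approach such tools take can be sketched with a small custom formatter, shown here in Python (illustrative only, not Powertools itself): every log record is serialized as a JSON object before it reaches `stdout`, so it arrives in CloudWatch already structured.

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line (illustrative sketch)."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order received")  # emitted as one JSON line
```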

## Default log formats
<a name="monitoring-cloudwatchlogs-format-default"></a>

Currently, the default log format for all Lambda runtimes is plain text. For Lambda Managed Instances, the log format is always JSON and cannot be changed.

If you’re already using logging libraries like Powertools for AWS Lambda to generate your function logs in JSON structured format, you don’t need to change your code if you select JSON log formatting. Lambda doesn’t double-encode any logs that are already JSON encoded, so your function’s application logs will continue to be captured as before.

## JSON format for system logs
<a name="monitoring-cloudwatchlogs-JSON-system"></a>

When you configure your function's log format as JSON, each system log item (platform event) is captured as a JSON object that contains key value pairs with the following keys:
+ `"time"` - the time the log message was generated
+ `"type"` - the type of event being logged
+ `"record"` - the contents of the log output

The format of the `"record"` value varies according to the type of event being logged. For more information see [Telemetry API `Event` object types](telemetry-schema-reference.md#telemetry-api-events). For more information about the log levels assigned to system log events, see [System log level event mapping](monitoring-cloudwatchlogs-log-level.md#monitoring-cloudwatchlogs-log-level-mapping).

For comparison, the following two examples show the same log output in both plain text and structured JSON formats. Note that in most cases, system log events contain more information when output in JSON format than when output in plain text.

**Example plain text:**  

```
2024-03-13 18:56:24.046000 fbe8c1   INIT_START  Runtime Version: python:3.12.v18  Runtime Version ARN: arn:aws:lambda:eu-west-1::runtime:edb5a058bfa782cb9cedc6d534ac8b8c193bc28e9a9879d9f5ebaaf619cd0fc0
```

**Example structured JSON:**  

```
{
  "time": "2024-03-13T18:56:24.046Z",
  "type": "platform.initStart",
  "record": {
    "initializationType": "on-demand",
    "phase": "init",
    "runtimeVersion": "python:3.12.v18",
    "runtimeVersionArn": "arn:aws:lambda:eu-west-1::runtime:edb5a058bfa782cb9cedc6d534ac8b8c193bc28e9a9879d9f5ebaaf619cd0fc0"
  }
}
```

**Note**  
The Lambda [Telemetry API](telemetry-api.md) always emits platform events such as `START` and `REPORT` in JSON format. Configuring the format of the system logs Lambda sends to CloudWatch doesn’t affect Lambda Telemetry API behavior.

## JSON format for application logs
<a name="monitoring-cloudwatchlogs-JSON-application"></a>

When you configure your function's log format as JSON, application log outputs written using supported logging libraries and methods are captured as a JSON object that contains key value pairs with the following keys.
+ `"timestamp"` - the time the log message was generated
+ `"level"` - the log level assigned to the message
+ `"message"` - the contents of the log message
+ `"requestId"` (Python, .NET, and Node.js) or `"AWSrequestId"` (Java) - the unique request ID for the function invocation

Depending on the runtime and logging method that your function uses, this JSON object may also contain additional key value pairs. For example, in Node.js, if your function uses `console` methods to log error objects using multiple arguments, the JSON object will contain extra key value pairs with the keys `errorMessage`, `errorType`, and `stackTrace`. To learn more about JSON formatted logs in different Lambda runtimes, see [Log and monitor Python Lambda functions](python-logging.md), [Log and monitor Node.js Lambda functions](nodejs-logging.md), and [Log and monitor Java Lambda functions](java-logging.md).

**Note**  
The key Lambda uses for the timestamp value is different for system logs and application logs. For system logs, Lambda uses the key `"time"` to maintain consistency with Telemetry API. For application logs, Lambda follows the conventions of the supported runtimes and uses `"timestamp"`.

For comparison, the following two examples show the same log output in both plain text and structured JSON formats.

**Example plain text:**  

```
2024-10-27T19:17:45.586Z 79b4f56e-95b1-4643-9700-2807f4e68189 INFO some log message
```

**Example structured JSON:**  

```
{
    "timestamp":"2024-10-27T19:17:45.586Z",
    "level":"INFO",
    "message":"some log message",
    "requestId":"79b4f56e-95b1-4643-9700-2807f4e68189"
}
```

## Setting your function's log format
<a name="monitoring-cloudwatchlogs-set-format"></a>

To configure the log format for your function, you can use the Lambda console or the AWS Command Line Interface (AWS CLI). You can also configure a function’s log format using the [CreateFunction](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html) and [UpdateFunctionConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateFunctionConfiguration.html) Lambda API commands, the AWS Serverless Application Model (AWS SAM) [AWS::Serverless::Function](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html) resource, and the CloudFormation [AWS::Lambda::Function](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html) resource.

Changing your function’s log format doesn’t affect existing logs stored in CloudWatch Logs. Only new logs will use the updated format.

If you change your function's log format to JSON and don't set a log level, then Lambda automatically sets your function's application log level and system log level to INFO. This means that Lambda sends only log outputs of level INFO and lower to CloudWatch Logs. To learn more about application and system log-level filtering, see [Log-level filtering](monitoring-cloudwatchlogs-log-level.md).

**Note**  
For Python runtimes, when your function's log format is set to plain text, the default log-level setting is WARN. This means that Lambda only sends log outputs of level WARN and lower to CloudWatch Logs. Changing your function's log format to JSON changes this default behavior. To learn more about logging in Python, see [Log and monitor Python Lambda functions](python-logging.md).

For Node.js functions that emit embedded metric format (EMF) logs, changing your function's log format to JSON could result in CloudWatch being unable to recognize your metrics.

**Important**  
If your function uses Powertools for AWS Lambda (TypeScript) or the open-sourced EMF client libraries to emit EMF logs, update your [Powertools](https://github.com/aws-powertools/powertools-lambda-typescript) and [EMF](https://www.npmjs.com/package/aws-embedded-metrics) libraries to the latest versions to ensure that CloudWatch can continue to parse your logs correctly. If you switch to the JSON log format, we also recommend that you carry out testing to ensure compatibility with your function's embedded metrics. For further advice about Node.js functions that emit EMF logs, see [Using embedded metric format (EMF) client libraries with structured JSON logs](nodejs-logging.md#nodejs-logging-advanced-emf).

**To configure a function’s log format (console)**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. On the function configuration page, choose **Monitoring and operations tools**.

1. In the **Logging configuration** pane, choose **Edit**.

1. Under **Log content**, for **Log format** select either **Text** or **JSON**.

1. Choose **Save**.

**To change the log format of an existing function (AWS CLI)**
+ To change the log format of an existing function, use the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command. Set the `LogFormat` option in `LoggingConfig` to either `JSON` or `Text`.

  ```
  aws lambda update-function-configuration \
    --function-name myFunction \
    --logging-config LogFormat=JSON
  ```

**To set log format when you create a function (AWS CLI)**
+ To configure log format when you create a new function, use the `--logging-config` option in the [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) command. Set `LogFormat` to either `JSON` or `Text`. The following example command creates a Node.js function that outputs logs in structured JSON.

  If you don’t specify a log format when you create a function, Lambda will use the default log format for the runtime version you select. For information about default logging formats, see [Default log formats](#monitoring-cloudwatchlogs-format-default).

  ```
  aws lambda create-function \
    --function-name myFunction \
    --runtime nodejs24.x \
    --handler index.handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/LambdaRole \
    --logging-config LogFormat=JSON
  ```

# Log-level filtering
<a name="monitoring-cloudwatchlogs-log-level"></a>

Lambda can filter your function's logs so that only logs of a certain detail level or lower are sent to CloudWatch Logs. You can configure log-level filtering separately for your function's system logs (the logs that Lambda generates) and application logs (the logs that your function code generates).

For [Supported runtimes and logging methods](monitoring-cloudwatchlogs-logformat.md#monitoring-cloudwatchlogs-logformat-supported), you don't need to make any changes to your function code for Lambda to filter your function's application logs.

For all other runtimes and logging methods, your function code must output log events to `stdout` or `stderr` as JSON formatted objects that contain a key value pair with the key `"level"`. For example, Lambda interprets the following output to `stdout` as a DEBUG level log.

```
print('{"level": "debug", "msg": "my debug log", "timestamp": "2024-11-02T16:51:31.587199Z"}')
```

If the `"level"` value is invalid or missing, Lambda will assign the log output the level INFO. For Lambda to use the timestamp field, you must specify the time in valid [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) timestamp format. If you don't supply a valid timestamp, Lambda will assign the log the level INFO and add a timestamp for you.

When naming the timestamp key, follow the conventions of the runtime you are using. Lambda supports most common naming conventions used by the managed runtimes.
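A small helper can produce log lines in this shape. The following sketch (the helper name and `msg` key are illustrative) emits one JSON object per log event, with the `"level"` key Lambda uses for filtering and an RFC 3339 timestamp:

```python
import datetime
import json

def log(level, msg, **extra):
    # Emit a single JSON line to stdout in the shape Lambda's log-level
    # filtering expects: a "level" key plus an RFC 3339 timestamp.
    entry = {
        "level": level,
        "msg": msg,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    entry.update(extra)
    line = json.dumps(entry)
    print(line)
    return line

log("debug", "my debug log")  # interpreted by Lambda as a DEBUG level log
```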

**Note**  
To use log-level filtering, your function must be configured to use the JSON log format. The default log format for all Lambda managed runtimes is currently plain text. To learn how to configure your function's log format to JSON, see [Setting your function's log format](monitoring-cloudwatchlogs-logformat.md#monitoring-cloudwatchlogs-set-format).

For application logs (the logs generated by your function code), you can choose between the following log levels.


| Log level | Standard usage | 
| --- | --- | 
| TRACE (most detail) | The most fine-grained information used to trace the path of your code's execution | 
| DEBUG | Detailed information for system debugging | 
| INFO | Messages that record the normal operation of your function | 
| WARN | Messages about potential errors that may lead to unexpected behavior if unaddressed | 
| ERROR | Messages about problems that prevent the code from performing as expected | 
| FATAL (least detail) | Messages about serious errors that cause the application to stop functioning | 

When you select a log level, Lambda sends logs at that level and lower to CloudWatch Logs. For example, if you set a function’s application log level to WARN, Lambda doesn’t send log outputs at the INFO and DEBUG levels. The default application log level for log filtering is INFO.

When Lambda filters your function’s application logs, log messages with no level will be assigned the log level INFO.
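The filtering rules above can be sketched as follows (illustrative only, not Lambda's actual implementation): a record is forwarded only if its level is at least as severe as the configured level, and a missing or unrecognized level is treated as INFO.

```python
# Severity order for application log levels, least to most severe.
SEVERITY = {"TRACE": 0, "DEBUG": 1, "INFO": 2, "WARN": 3, "ERROR": 4, "FATAL": 5}

def is_forwarded(record_level, configured_level="INFO"):
    # Records with a missing or unrecognized level are treated as INFO.
    level = SEVERITY.get(str(record_level).upper(), SEVERITY["INFO"])
    return level >= SEVERITY[configured_level]
```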

For system logs (the logs generated by the Lambda service), you can choose between the following log levels.


| Log level | Usage | 
| --- | --- | 
| DEBUG (most detail) | Detailed information for system debugging | 
| INFO | Messages that record the normal operation of your function | 
| WARN (least detail) | Messages about potential errors that may lead to unexpected behavior if unaddressed | 

When you select a log level, Lambda sends logs at that level and lower. For example, if you set a function’s system log level to INFO, Lambda doesn’t send log outputs at the DEBUG level.

By default, Lambda sets the system log level to INFO. With this setting, Lambda automatically sends `"start"` and `"report"` log messages to CloudWatch. To receive more or less detailed system logs, change the log level to DEBUG or WARN. To see a list of the log levels that Lambda maps different system log events to, see [System log level event mapping](#monitoring-cloudwatchlogs-log-level-mapping).

## Configuring log-level filtering
<a name="monitoring-cloudwatchlogs-log-level-setting"></a>

To configure application and system log-level filtering for your function, you can use the Lambda console or the AWS Command Line Interface (AWS CLI). You can also configure a function’s log level using the [CreateFunction](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html) and [UpdateFunctionConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateFunctionConfiguration.html) Lambda API commands, the AWS Serverless Application Model (AWS SAM) [AWS::Serverless::Function](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html) resource, and the CloudFormation [AWS::Lambda::Function](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html) resource.

Note that if you set your function's log level in your code, this setting takes precedence over any other log level settings you configure. For example, if you use the Python `logging` `setLevel()` method to set your function's logging level to INFO, this setting takes precedence over a setting of WARN that you configure using the Lambda console.
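For example, in Python (assuming an application log level of WARN configured in the console):

```python
import logging

logger = logging.getLogger()
# Setting the level in code takes precedence over the log level configured
# in the Lambda console, so INFO-level records are still emitted even if
# the console setting is WARN.
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("emitted despite a WARN setting in the console")
    return "ok"
```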

**To configure an existing function’s application or system log level (console)**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. On the function configuration page, choose **Monitoring and operations tools**.

1. In the **Logging configuration** pane, choose **Edit**.

1. Under **Log content**, for **Log format** ensure **JSON** is selected.

1. Using the radio buttons, select your desired **Application log level** and **System log level** for your function.

1. Choose **Save**.

**To configure an existing function’s application or system log level (AWS CLI)**
+ To change the application or system log level of an existing function, use the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command. Use `--logging-config` to set `SystemLogLevel` to one of `DEBUG`, `INFO`, or `WARN`. Set `ApplicationLogLevel` to one of `DEBUG`, `INFO`, `WARN`, `ERROR`, or `FATAL`. 

  ```
  aws lambda update-function-configuration \
    --function-name myFunction \
    --logging-config LogFormat=JSON,ApplicationLogLevel=ERROR,SystemLogLevel=WARN
  ```

**To configure log-level filtering when you create a function**
+ To configure log-level filtering when you create a new function, use `--logging-config` to set the `SystemLogLevel` and `ApplicationLogLevel` keys in the [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) command. Set `SystemLogLevel` to one of `DEBUG`, `INFO`, or `WARN`. Set `ApplicationLogLevel` to one of `DEBUG`, `INFO`, `WARN`, `ERROR`, or `FATAL`.

  ```
  aws lambda create-function \
    --function-name myFunction \
    --runtime nodejs24.x \
    --handler index.handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/LambdaRole \
    --logging-config LogFormat=JSON,ApplicationLogLevel=ERROR,SystemLogLevel=WARN
  ```

## System log level event mapping
<a name="monitoring-cloudwatchlogs-log-level-mapping"></a>

For system level log events generated by Lambda, the following table defines the log level assigned to each event. To learn more about the events listed in the table, see [Lambda Telemetry API `Event` schema reference](telemetry-schema-reference.md).


| Event name | Condition | Assigned log level | 
| --- | --- | --- | 
| initStart | runtimeVersion is set | INFO | 
| initStart | runtimeVersion is not set | DEBUG | 
| initRuntimeDone | status=success | DEBUG | 
| initRuntimeDone | status≠success | WARN | 
| initReport | initializationType≠on-demand | INFO | 
| initReport | initializationType=on-demand | DEBUG | 
| initReport | status≠success | WARN | 
| restoreStart | runtimeVersion is set | INFO | 
| restoreStart | runtimeVersion is not set | DEBUG | 
| restoreRuntimeDone | status=success | DEBUG | 
| restoreRuntimeDone | status≠success | WARN | 
| restoreReport | status=success | INFO | 
| restoreReport | status≠success | WARN | 
| start | - | INFO | 
| runtimeDone | status=success | DEBUG | 
| runtimeDone | status≠success | WARN | 
| report | status=success | INFO | 
| report | status≠success | WARN | 
| extension | state=success | INFO | 
| extension | state≠success | WARN | 
| logSubscription | - | INFO | 
| telemetrySubscription | - | INFO | 
| logsDropped | - | WARN | 

**Note**  
The Lambda [Telemetry API](telemetry-api.md) always emits the complete set of platform events. Configuring the level of the system logs Lambda sends to CloudWatch doesn’t affect Lambda Telemetry API behavior.

## Application log-level filtering with custom runtimes
<a name="monitoring-cloudwatchlogs-log-level-custom"></a>

When you configure application log-level filtering for your function, behind the scenes Lambda sets the application log level in the runtime using the `AWS_LAMBDA_LOG_LEVEL` environment variable. Lambda also sets your function's log format using the `AWS_LAMBDA_LOG_FORMAT` environment variable. You can use these variables to integrate Lambda advanced logging controls into a [custom runtime](runtimes-custom.md).

To be able to configure logging settings for a function that uses a custom runtime with the Lambda console, AWS CLI, and Lambda APIs, configure your custom runtime to check the value of these environment variables. You can then configure your runtime's loggers in accordance with the log format and log levels you select.
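A custom runtime written in Python might apply these variables as follows (a sketch; the helper name and the mapping of `TRACE` and `FATAL` onto the nearest Python levels are illustrative):

```python
import logging
import os

# Map Lambda's log-level names onto Python logging levels. TRACE and FATAL
# have no direct Python equivalent, so this sketch approximates them.
_LEVELS = {
    "TRACE": logging.DEBUG,
    "DEBUG": logging.DEBUG,
    "INFO": logging.INFO,
    "WARN": logging.WARNING,
    "ERROR": logging.ERROR,
    "FATAL": logging.CRITICAL,
}

def configure_runtime_logging():
    """Read the environment variables Lambda sets for advanced logging
    controls and apply them to the runtime's root logger."""
    level_name = os.environ.get("AWS_LAMBDA_LOG_LEVEL", "INFO").upper()
    log_format = os.environ.get("AWS_LAMBDA_LOG_FORMAT", "Text")

    logger = logging.getLogger()
    logger.setLevel(_LEVELS.get(level_name, logging.INFO))
    return logger.level, log_format
```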

# Sending Lambda function logs to CloudWatch Logs
<a name="monitoring-cloudwatchlogs"></a>

By default, Lambda automatically captures logs for all function invocations and sends them to CloudWatch Logs, provided your function's execution role has the necessary permissions. These logs are, by default, stored in a log group named `/aws/lambda/<function-name>`. To enhance debugging, you can insert custom logging statements into your code, which Lambda will seamlessly integrate with CloudWatch Logs. If needed, you can configure your function to send logs to a different group using the Lambda console, AWS CLI, or Lambda API. See [Configuring CloudWatch log groups](monitoring-cloudwatchlogs-loggroups.md) to learn more.

You can view logs for Lambda functions using the Lambda console, the CloudWatch console, the AWS Command Line Interface (AWS CLI), or the CloudWatch API. For more information, see [Viewing CloudWatch logs for Lambda functions](monitoring-cloudwatchlogs-view.md).

**Note**  
It may take 5 to 10 minutes for logs to show up after a function invocation.

## Required IAM permissions
<a name="monitoring-cloudwatchlogs-prereqs"></a>

Your [execution role](lambda-intro-execution-role.md) needs the following permissions to upload logs to CloudWatch Logs:
+ `logs:CreateLogGroup`
+ `logs:CreateLogStream`
+ `logs:PutLogEvents`

To learn more, see [Using identity-based policies (IAM policies) for CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-identity-based-access-control-cwl.html) in the *Amazon CloudWatch User Guide*.

You can add these CloudWatch Logs permissions using the `AWSLambdaBasicExecutionRole` AWS managed policy provided by Lambda. To add this policy to your role, run the following command:

```
aws iam attach-role-policy --role-name your-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
```

For more information, see [Working with AWS managed policies in the execution role](permissions-managed-policies.md).

## Pricing
<a name="monitoring-cloudwatchlogs-pricing"></a>

There is no additional charge for using Lambda logs; however, standard CloudWatch Logs charges apply. For more information, see [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

# Configuring CloudWatch log groups
<a name="monitoring-cloudwatchlogs-loggroups"></a>

By default, CloudWatch automatically creates a log group named `/aws/lambda/<function name>` for your function when it's first invoked. To configure your function to send logs to an existing log group, or to create a new log group for your function, you can use the Lambda console or the AWS CLI. You can also configure custom log groups using the [CreateFunction](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html) and [UpdateFunctionConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateFunctionConfiguration.html) Lambda API commands and the AWS Serverless Application Model (AWS SAM) [AWS::Serverless::Function](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html) resource.

You can configure multiple Lambda functions to send logs to the same CloudWatch log group. For example, you could use a single log group to store logs for all of the Lambda functions that make up a particular application. When you use a custom log group for a Lambda function, the log streams Lambda creates include the function name and function version. This ensures that the mapping between log messages and functions is preserved, even if you use the same log group for multiple functions.

The log stream naming format for custom log groups follows this convention:

```
YYYY/MM/DD/<function_name>[<function_version>][<execution_environment_GUID>]
```

Note that when configuring a custom log group, the name you select for your log group must follow the [CloudWatch Logs naming rules](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogGroup.html). Additionally, custom log group names must not begin with the string `aws/`. If you create a custom log group beginning with `aws/`, Lambda won't be able to create the log group. As a result, your function's logs won't be sent to CloudWatch.
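A quick validity check for a candidate log group name might look like this (a sketch; the character pattern and length limit are a simplified reading of the CloudWatch Logs naming rules):

```python
import re

def is_valid_custom_log_group(name):
    # Lambda custom log groups must not begin with "aws/".
    if name.startswith("aws/"):
        return False
    # Simplified CloudWatch Logs pattern: letters, digits, and . - _ / #,
    # between 1 and 512 characters.
    return re.fullmatch(r"[\.\-_/#A-Za-z0-9]{1,512}", name) is not None
```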

**To change a function’s log group (console)**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. On the function configuration page, choose **Monitoring and operations tools**.

1. In the **Logging configuration** pane, choose **Edit**.

1. In the **Logging group** pane, for **CloudWatch log group**, choose **Custom**.

1. Under **Custom log group**, enter the name of the CloudWatch log group you want your function to send logs to. If you enter the name of an existing log group, then your function will use that group. If no log group exists with the name that you enter, then Lambda will create a new log group for your function with that name.

**To change a function's log group (AWS CLI)**
+ To change the log group of an existing function, use the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command.

  ```
  aws lambda update-function-configuration \
    --function-name myFunction \
    --logging-config LogGroup=myLogGroup
  ```

**To specify a custom log group when you create a function (AWS CLI)**
+ To specify a custom log group when you create a new Lambda function using the AWS CLI, use the `--logging-config` option. The following example command creates a Node.js Lambda function that sends logs to a log group named `myLogGroup`.

  ```
  aws lambda create-function \
    --function-name myFunction \
    --runtime nodejs24.x \
    --handler index.handler \
    --zip-file fileb://function.zip \
    --role arn:aws:iam::123456789012:role/LambdaRole \
    --logging-config LogGroup=myLogGroup
  ```

## Execution role permissions
<a name="monitoring-cloudwatchlogs-configure-permissions"></a>

For your function to send logs to CloudWatch Logs, it must have the [logs:PutLogEvents](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html) permission. When you configure your function's log group using the Lambda console, Lambda will add this permission to the role under the following conditions:
+ The service destination is set to CloudWatch Logs
+ Your function's execution role doesn't have permissions to upload logs to CloudWatch Logs (the default destination)

**Note**  
Lambda does not add any Put permission for Amazon S3 or Firehose log destinations.

When Lambda adds this permission, it gives the function permission to send logs to any CloudWatch Logs log group.

To prevent Lambda from automatically updating the function's execution role and edit it manually instead, expand **Permissions** and uncheck **Add required permissions**.

When you configure your function's log group using the AWS CLI, Lambda won't automatically add the `logs:PutLogEvents` permission. Add the permission to your function's execution role if it doesn't already have it. This permission is included in the [AWSLambdaBasicExecutionRole](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole$jsonEditor) managed policy.
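To scope the permission more narrowly than the automatically added policy, you can attach an inline policy to the execution role yourself. The following is a sketch of such a policy statement; the Region, account ID, and log group name are placeholders, and `logs:CreateLogStream` is typically also required so the function can create its log streams:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:us-east-2:123456789012:log-group:myLogGroup:*"
        }
    ]
}
```

Unlike the automatically added permission, a policy like this limits the function to writing logs to a single log group.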

## CloudWatch logging for Lambda Managed Instances
<a name="monitoring-cloudwatchlogs-lmi"></a>

When using [Lambda Managed Instances](lambda-managed-instances.md), there are additional considerations for sending logs to CloudWatch Logs:

### VPC networking requirements
<a name="monitoring-cloudwatchlogs-lmi-networking"></a>

Lambda Managed Instances run on customer-owned EC2 instances within your VPC. To send logs to CloudWatch Logs and traces to X-Ray, you must ensure that these AWS APIs are routable from your VPC. You have several options:
+ **AWS PrivateLink (recommended)**: Use [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) to create VPC endpoints for CloudWatch Logs and X-Ray services. This allows your instances to access these services privately without requiring an internet gateway or NAT gateway. For more information, see [Using CloudWatch Logs with interface VPC endpoints](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html).
+ **NAT Gateway**: Configure a NAT gateway to allow outbound internet access from your private subnets.
+ **Internet Gateway**: For public subnets, ensure your VPC has an internet gateway configured.

If CloudWatch Logs or X-Ray APIs are not routable from your VPC, your function logs and traces will not be delivered.

### Concurrent invocations and log attribution
<a name="monitoring-cloudwatchlogs-lmi-concurrent"></a>

Lambda Managed Instances execution environments can process multiple invocations concurrently. When multiple invocations run simultaneously, their log entries are interleaved in the same log stream. To effectively filter and analyze logs from concurrent invocations, you should ensure each log entry includes the AWS request ID.

We recommend one of the following approaches:
+ **Use default Lambda runtime loggers (recommended)**: The default logging libraries provided by Lambda managed runtimes automatically include the request ID in each log entry.
+ **Implement structured JSON logging**: If you're building a [custom runtime](runtimes-custom.md) or need custom logging, implement JSON-formatted logs that include the request ID in each entry. Lambda Managed Instances only support the JSON log format. Include the `requestId` field in your JSON logs to enable filtering by invocation:

  ```
  {
    "timestamp": "2025-01-15T10:30:00.000Z",
    "level": "INFO",
    "requestId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "message": "Processing request"
  }
  ```

With request ID attribution, you can filter CloudWatch Logs log entries for a specific invocation using CloudWatch Logs Insights queries. For example:

```
fields @timestamp, @message
| filter requestId = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
| sort @timestamp asc
```

For more information about Lambda Managed Instances logging requirements, see [Understanding the Lambda Managed Instances execution environment](lambda-managed-instances-execution-environment.md).

# Viewing CloudWatch logs for Lambda functions
<a name="monitoring-cloudwatchlogs-view"></a>

You can view Amazon CloudWatch logs for your Lambda function using the Lambda console, the CloudWatch console, or the AWS Command Line Interface (AWS CLI). Follow the instructions in the following sections to access your function's logs.

## Stream function logs with CloudWatch Logs Live Tail
<a name="monitoring-live-tail"></a>

Amazon CloudWatch Logs Live Tail helps you quickly troubleshoot your functions by displaying a streaming list of new log events directly in the Lambda console. You can view and filter ingested logs from your Lambda functions in real time, helping you to detect and resolve issues quickly.

**Note**  
Live Tail sessions incur costs by session usage time, per minute. For more information about pricing, see [Amazon CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/).

### Comparing Live Tail and --log-type Tail
<a name="live-tail-logtype"></a>

There are several differences between CloudWatch Logs Live Tail and the [LogType: Tail](https://docs.aws.amazon.com/lambda/latest/api/API_Invoke.html#lambda-Invoke-request-LogType) option in the Lambda API (`--log-type Tail` in the AWS CLI):
+ `--log-type Tail` returns only the first 4 KB of the invocation logs. Live Tail does not share this limit, and can receive up to 500 log events per second.
+ `--log-type Tail` captures and sends the logs with the response, which can impact the function's response latency. Live Tail does not affect function response latency.
+ `--log-type Tail` supports synchronous invocations only. Live Tail works for both synchronous and asynchronous invocations.

**Note**  
[Lambda Managed Instances](lambda-managed-instances.md) does not support the `--log-type Tail` option. Use CloudWatch Logs Live Tail or query CloudWatch Logs directly to view logs for Managed Instances functions.

### Permissions
<a name="live-tail-permissions"></a>

The following permissions are required to start and stop CloudWatch Logs Live Tail sessions:
+ `logs:DescribeLogGroups`
+ `logs:StartLiveTail`
+ `logs:StopLiveTail`

### Start a Live Tail session in the Lambda console
<a name="live-tail-console"></a>

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the name of the function.

1. Choose the **Test** tab.

1. In the **Test event** pane, choose **CloudWatch Logs Live Tail**.

1. For **Select log groups**, the function's log group is selected by default. You can select up to five log groups at a time.

1. (Optional) To display only log events that contain certain words or other strings, enter the word or string in the **Add filter pattern** box. The filters field is case-sensitive. You can include multiple terms and pattern operators in this field, including regular expressions (regex). For more information about pattern syntax, see [Filter pattern syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) in the *Amazon CloudWatch Logs User Guide*.

1. Choose **Start**. Matching log events begin appearing in the window.

1. To stop the Live Tail session, choose **Stop**.
**Note**  
The Live Tail session automatically stops after 15 minutes of inactivity or when the Lambda console session times out.

## Access function logs using the console
<a name="monitoring-cloudwatchlogs-console"></a>

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Select a function.

1. Choose the **Monitor** tab.

1. Choose **View CloudWatch logs** to open the CloudWatch console.

1. Scroll down and choose the **Log stream** for the function invocations you want to look at.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/log-stream.png)

Each instance of a Lambda function has a dedicated log stream. If a function scales up, each concurrent instance has its own log stream. Each time Lambda creates a new execution environment in response to an invocation, it generates a new log stream. The naming convention for log streams is:

```
YYYY/MM/DD/[<function version>]<execution environment GUID>
```

A single execution environment writes to the same log stream during its lifetime. The log stream contains messages from that execution environment and also any output from your Lambda function’s code. Every message is timestamped, including your custom logs. Even if your function does not log any output from your code, there are three minimal log statements generated per invocation (START, END and REPORT):

![\[monitoring observability figure 3\]](http://docs.aws.amazon.com/lambda/latest/dg/images/monitoring-observability-figure-3.png)


These logs show:
+  **RequestId** – this is a unique ID generated per request. If the Lambda function retries a request, this ID does not change and appears in the logs for each subsequent retry.
+  **Start/End** – these bookmark a single invocation, so every log line between these belongs to the same invocation.
+  **Duration** – the total invocation time for the handler function, excluding `INIT` code.
+  **Billed Duration** – applies rounding logic for billing purposes.
+  **Memory Size** – the amount of memory allocated to the function.
+  **Max Memory Used** – the maximum amount of memory used during the invocation.
+  **Init Duration** – the time taken to run the `INIT` section of code, outside of the main handler.
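For illustration, a REPORT line can be split into these fields programmatically. The following Python sketch is an illustrative parser, not an official API; it assumes the tab-separated layout shown in the sample outputs later in this section:

```python
def parse_report_line(line):
    """Parse a Lambda REPORT log line into a dict of field names to values."""
    fields = {}
    for part in line.removeprefix("REPORT ").split("\t"):
        part = part.strip()
        if not part:
            continue
        # Each segment looks like "Billed Duration: 80 ms".
        key, _, value = part.partition(": ")
        fields[key] = value
    return fields

report = ("REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8\t"
          "Duration: 79.67 ms\tBilled Duration: 80 ms\t"
          "Memory Size: 128 MB\tMax Memory Used: 73 MB")
print(parse_report_line(report)["Billed Duration"])  # 80 ms
```

A parser like this can feed custom dashboards or cost reports, although CloudWatch Logs Insights already exposes these values as `@duration`, `@billedDuration`, `@memorySize`, and `@maxMemoryUsed`.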

## Access logs with the AWS CLI
<a name="monitoring-cloudwatchlogs-cli"></a>

The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in your command line shell. To complete the steps in this section, you must have the [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

You can use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) to retrieve logs for an invocation using the `--log-type` command option. The response contains a `LogResult` field that contains up to 4 KB of base64-encoded logs from the invocation.

**Example retrieve the logs**  
The following example shows how to retrieve the base64-encoded logs from the `LogResult` field for a function named `my-function`.  

```
aws lambda invoke --function-name my-function out --log-type Tail
```
You should see the following output:  

```
{
    "StatusCode": 200,
    "LogResult": "U1RBUlQgUmVxdWVzdElkOiA4N2QwNDRiOC1mMTU0LTExZTgtOGNkYS0yOTc0YzVlNGZiMjEgVmVyc2lvb...",
    "ExecutedVersion": "$LATEST"
}
```

**Example decode the logs**  
In the same command prompt, use the `base64` utility to decode the logs. The following example shows how to retrieve base64-encoded logs for `my-function`.  

```
aws lambda invoke --function-name my-function out --log-type Tail \
--query 'LogResult' --output text --cli-binary-format raw-in-base64-out | base64 --decode
```
The **cli-binary-format** option is required if you're using AWS CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [AWS CLI supported global command line options](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *AWS Command Line Interface User Guide for Version 2*.  
You should see the following output:  

```
START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST
"AWS_SESSION_TOKEN": "AgoJb3JpZ2luX2VjELj...", "_X_AMZN_TRACE_ID": "Root=1-5d02e5ca-f5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0"",ask/lib:/opt/lib",
END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8
REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8  Duration: 79.67 ms      Billed Duration: 80 ms         Memory Size: 128 MB     Max Memory Used: 73 MB
```
The `base64` utility is available on Linux, macOS, and [Ubuntu on Windows](https://docs.microsoft.com/en-us/windows/wsl/install-win10). macOS users may need to use `base64 -D`.
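If you're scripting in Python instead of the shell, the same Base64 decoding might look like the following sketch (the sample value is generated in place rather than fetched from a real invocation):

```python
import base64

def decode_log_result(log_result):
    """Decode the Base64-encoded LogResult field returned by Invoke."""
    return base64.b64decode(log_result).decode("utf-8")

# Round-trip demonstration with a sample log line:
sample = base64.b64encode(b"START RequestId: 57f231fb Version: $LATEST\n").decode()
print(decode_log_result(sample))
```

In a real script, you would pass the `LogResult` string from the `Invoke` response to `decode_log_result` instead of the generated sample.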



**Example get-logs.sh script**  
In the same command prompt, use the following script to download the last five log events. The script uses `sed` to remove quotes from the output file, and sleeps for 15 seconds to allow time for the logs to become available. The output includes the response from Lambda and the output from the `get-log-events` command.   
Copy the contents of the following code sample and save it in your Lambda project directory as `get-logs.sh`.  
The **cli-binary-format** option is required if you're using AWS CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [AWS CLI supported global command line options](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *AWS Command Line Interface User Guide for Version 2*.  

```
#!/bin/bash
aws lambda invoke --function-name my-function --cli-binary-format raw-in-base64-out --payload '{"key": "value"}' out
sed -i'' -e 's/"//g' out
sleep 15
aws logs get-log-events --log-group-name /aws/lambda/my-function --log-stream-name stream1 --limit 5
```

**Example macOS and Linux (only)**  
In the same command prompt, macOS and Linux users may need to run the following command to ensure the script is executable.  

```
chmod 755 get-logs.sh
```

**Example retrieve the last five log events**  
In the same command prompt, run the following script to get the last five log events.  

```
./get-logs.sh
```
You should see the following output:  

```
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{
    "events": [
        {
            "timestamp": 1559763003171,
            "message": "START RequestId: 4ce9340a-b765-490f-ad8a-02ab3415e2bf Version: $LATEST\n",
            "ingestionTime": 1559763003309
        },
        {
            "timestamp": 1559763003173,
            "message": "2019-06-05T19:30:03.173Z\t4ce9340a-b765-490f-ad8a-02ab3415e2bf\tINFO\tENVIRONMENT VARIABLES\r{\r  \"AWS_LAMBDA_FUNCTION_VERSION\": \"$LATEST\",\r ...",
            "ingestionTime": 1559763018353
        },
        {
            "timestamp": 1559763003173,
            "message": "2019-06-05T19:30:03.173Z\t4ce9340a-b765-490f-ad8a-02ab3415e2bf\tINFO\tEVENT\r{\r  \"key\": \"value\"\r}\n",
            "ingestionTime": 1559763018353
        },
        {
            "timestamp": 1559763003218,
            "message": "END RequestId: 4ce9340a-b765-490f-ad8a-02ab3415e2bf\n",
            "ingestionTime": 1559763018353
        },
        {
            "timestamp": 1559763003218,
            "message": "REPORT RequestId: 4ce9340a-b765-490f-ad8a-02ab3415e2bf\tDuration: 26.73 ms\tBilled Duration: 27 ms \tMemory Size: 128 MB\tMax Memory Used: 75 MB\t\n",
            "ingestionTime": 1559763018353
        }
    ],
    "nextForwardToken": "f/34783877304859518393868359594929986069206639495374241795",
    "nextBackwardToken": "b/34783877303811383369537420289090800615709599058929582080"
}
```
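Because the `get-log-events` response is JSON, you can post-process it in a script. As a sketch, this hypothetical Python helper (standard library only) extracts just the message strings:

```python
import json

def extract_messages(response_json):
    """Pull the raw log messages out of a get-log-events JSON response."""
    response = json.loads(response_json)
    return [event["message"] for event in response["events"]]

sample = ('{"events": [{"timestamp": 1559763003171, '
          '"message": "START RequestId: 4ce9340a\\n", '
          '"ingestionTime": 1559763003309}], '
          '"nextForwardToken": "f/123", "nextBackwardToken": "b/456"}')
print(extract_messages(sample))
```

You could pipe the output of `get-logs.sh` into a script like this to strip the JSON envelope and keep only the log text.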

## Parsing logs and structured logging
<a name="querying-logs"></a>

With CloudWatch Logs Insights, you can search and analyze log data using a specialized [query syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html). It performs queries over multiple log groups and provides powerful filtering using [glob](https://en.wikipedia.org/wiki/Glob_(programming)) and [regular expressions](https://en.wikipedia.org/wiki/Regular_expression) pattern matching.

You can take advantage of these capabilities by implementing structured logging in your Lambda functions. Structured logging organizes your logs into a pre-defined format, making them easier to query. Using log levels is an important first step in generating filter-friendly logs that separate informational messages from warnings or errors. For example, consider the following Node.js code:

```
exports.handler = async (event) => {
    console.log("console.log - Application is fine")
    console.info("console.info - This is the same as console.log")
    console.warn("console.warn - Application provides a warning")
    console.error("console.error - An error occurred")
}
```

The resulting CloudWatch log file contains a separate field specifying the log level:

![\[monitoring observability figure 10\]](http://docs.aws.amazon.com/lambda/latest/dg/images/monitoring-observability-figure-10.png)


A CloudWatch Logs Insights query can then filter on log level. For example, to query for errors only, you can use the following query:

```
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
```

### JSON structured logging
<a name="querying-logs-json"></a>

JSON is commonly used to provide structure for application logs. In the following example, the logs have been converted to JSON to output three distinct values:

![\[monitoring observability figure 11\]](http://docs.aws.amazon.com/lambda/latest/dg/images/monitoring-observability-figure-11.png)


The CloudWatch Logs Insights feature automatically discovers values in JSON output and parses the messages as fields, without the need for custom glob or regular expression patterns. Using JSON-structured logs, the following query finds invocations where the uploaded file was larger than 1 MB, the upload time was more than 1 second, and the invocation was not a cold start:

```
fields @message
| filter @message like /INFO/
| filter uploadedBytes > 1000000
| filter uploadTimeMS > 1000
| filter invocation != 1
```

This query might produce the following result:

![\[monitoring observability figure 12\]](http://docs.aws.amazon.com/lambda/latest/dg/images/monitoring-observability-figure-12.png)


The discovered fields in JSON are automatically populated in the *Discovered fields* menu on the right side. Standard fields emitted by the Lambda service are prefixed with `@`, and you can query on these fields in the same way. Lambda logs always include the fields `@timestamp`, `@logStream`, `@message`, `@requestId`, `@duration`, `@billedDuration`, `@type`, `@maxMemoryUsed`, and `@memorySize`. If X-Ray is enabled for a function, logs also include `@xrayTraceId` and `@xraySegmentId`.

When an AWS event source such as Amazon S3, Amazon SQS, or Amazon EventBridge invokes your function, the entire event is provided to the function as a JSON object input. By logging this event in the first line of the function, you can then query on any of the nested fields using CloudWatch Logs Insights.
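As a minimal sketch of this pattern (Python shown for illustration; the equivalent works in any runtime), the handler's first statement can serialize the full event:

```python
import json

def handler(event, context=None):
    # Log the entire incoming event as JSON so that nested fields become
    # queryable in CloudWatch Logs Insights.
    print(json.dumps(event))
    # ... process the event ...
    return {"statusCode": 200}
```

After this line runs, a query such as `filter Records.0.s3.bucket.name = "my-bucket"` can match on any nested field of the logged event.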

### Useful Insights queries
<a name="useful-logs-queries"></a>

The following table shows example Insights queries that can be useful for monitoring Lambda functions.


| Description | Example query syntax | 
| --- | --- | 
|  The last 100 errors  |  

```
 fields Timestamp, LogLevel, Message
 \| filter LogLevel == "ERR"
 \| sort @timestamp desc
 \| limit 100
```  | 
|  The top 100 highest billed invocations  |  

```
filter @type = "REPORT"
\| fields @requestId, @billedDuration
\| sort by @billedDuration desc
\| limit 100
```  | 
|  Percentage of cold starts in total invocations  |  

```
filter @type = "REPORT"
\| stats sum(strcontains(@message, "Init Duration"))/count(*) * 100 as
  coldStartPct, avg(@duration)
  by bin(5m)
```  | 
|  Percentile report of Lambda duration  |  

```
filter @type = "REPORT"
\| stats
    avg(@billedDuration) as Average,
    percentile(@billedDuration, 99) as NinetyNinth,
    percentile(@billedDuration, 95) as NinetyFifth,
    percentile(@billedDuration, 90) as Ninetieth
    by bin(30m)
```  | 
|  Percentile report of Lambda memory usage  |  

```
filter @type="REPORT"
\| stats avg(@maxMemoryUsed/1024/1024) as mean_MemoryUsed,
    min(@maxMemoryUsed/1024/1024) as min_MemoryUsed,
    max(@maxMemoryUsed/1024/1024) as max_MemoryUsed,
    percentile(@maxMemoryUsed/1024/1024, 95) as Percentile95
```  | 
|  Invocations using 100% of assigned memory  |  

```
filter @type = "REPORT" and @maxMemoryUsed=@memorySize
\| stats
    count_distinct(@requestId)
    by bin(30m)
```  | 
|  Average memory used across invocations  |  

```
filter @type = "REPORT"
\| stats avg(@maxMemoryUsed / @memorySize) * 100 as avgMemoryUsedPERC,
    avg(@billedDuration) as avgDurationMS
    by bin(5m)
```  | 
|  Visualization of memory statistics  |  

```
filter @type = "REPORT"
\| stats
    max(@maxMemoryUsed / 1024 / 1024) as maxMemMB,
    avg(@maxMemoryUsed / 1024 / 1024) as avgMemMB,
    min(@maxMemoryUsed / 1024 / 1024) as minMemMB,
    (avg(@maxMemoryUsed / 1024 / 1024) / max(@memorySize / 1024 / 1024)) * 100 as avgMemUsedPct,
    avg(@billedDuration) as avgDurationMS
    by bin(30m)
```  | 
|  Invocations where Lambda exited  |  

```
filter @message like /Process exited/
\| stats count() by bin(30m)
```  | 
|  Invocations that timed out  |  

```
filter @message like /Task timed out/
\| stats count() by bin(30m)
```  | 
|  Latency report  |  

```
filter @type = "REPORT"
\| stats avg(@duration), max(@duration), min(@duration)
  by bin(5m)
```  | 
|  Over-provisioned memory  |  

```
filter @type = "REPORT"
\| stats max(@memorySize / 1024 / 1024) as provisionedMemMB,
        min(@maxMemoryUsed / 1024 / 1024) as smallestMemReqMB,
        avg(@maxMemoryUsed / 1024 / 1024) as avgMemUsedMB,
        max(@maxMemoryUsed / 1024 / 1024) as maxMemUsedMB,
        provisionedMemMB - maxMemUsedMB as overProvisionedMB
```  | 

## Log visualization and dashboards
<a name="monitoring-logs-visualization"></a>

For any CloudWatch Logs Insights query, you can export the results to markdown or CSV format. In some cases, it might be more useful to create [visualizations from queries](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_Insights-Visualizing-Log-Data.html), provided there is at least one aggregation function. The `stats` function allows you to define aggregations and grouping.

The previous [JSON structured logging](#querying-logs-json) example filtered on upload size and upload time and excluded first invocations, which resulted in a table of data. For monitoring a production system, it can be more useful to visualize minimum, maximum, and average file sizes to find outliers. To do this, apply the `stats` function with the required aggregations and group on a time value, such as every minute.

For example, consider the following query. This is the same example query from the [JSON structured logging](#querying-logs-json) section, but with additional aggregation functions:

```
fields @message
| filter @message like /INFO/
| filter uploadedBytes > 1000000
| filter uploadTimeMS > 1000
| filter invocation != 1
| stats min(uploadedBytes), avg(uploadedBytes), max(uploadedBytes) by bin (1m)
```

You can view the results in the **Visualization** tab:

![\[monitoring observability figure 14\]](http://docs.aws.amazon.com/lambda/latest/dg/images/monitoring-observability-figure-14.png)


After you have finished building the visualization, you can optionally add the graph to a CloudWatch dashboard. To do this, choose **Add to dashboard** above the visualization. This adds the query as a widget and enables you to select automatic refresh intervals, making it easier to continuously monitor the results:

![\[monitoring observability figure 15\]](http://docs.aws.amazon.com/lambda/latest/dg/images/monitoring-observability-figure-15.png)


# Sending Lambda function logs to Firehose
<a name="logging-with-firehose"></a>

The Lambda console offers the option to send function logs to Firehose. This enables real-time streaming of your logs to various destinations supported by Firehose, including third-party analytics tools and custom endpoints.

**Note**  
You can configure Lambda function logs to be sent to Firehose using the Lambda console, AWS CLI, AWS CloudFormation, and all AWS SDKs.

## Pricing
<a name="logging-firehose-pricing"></a>

For details on pricing, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/#Vended_Logs).

## Required permissions for Firehose log destination
<a name="logging-firehose-permissions"></a>

When using the Lambda console to configure Firehose as your function's log destination, you need:

1. The [required IAM permissions](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html#monitoring-cloudwatchlogs-prereqs) to use CloudWatch Logs with Lambda.

1. To [set up a subscription filter with Firehose](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample). This filter defines which log events are delivered to your Firehose stream.

## Sending Lambda function logs to Firehose
<a name="logging-firehose-setup"></a>

In the Lambda console, you can send function logs directly to Firehose after creating a new function. To do this, complete these steps:

1. Sign in to the AWS Management Console and open the Lambda console.

1. Choose your function's name.

1. Choose the **Configuration** tab.

1. Choose the **Monitoring and operations tools** tab.

1. In the **Logging configuration** section, choose **Edit**.

1. In the **Log content** section, select a log format.

1. In the **Log destination** section, complete the following steps:

   1. Select a destination service.

   1. Choose to **Create a new log group** or use an **Existing log group**.
**Note**  
If choosing an existing log group for a Firehose destination, ensure the log group you choose is a `Delivery` log group type.

   1. Choose a Firehose stream.

   1. The CloudWatch `Delivery` log group will appear.

1. Choose **Save**.

**Note**  
If the IAM role provided in the console doesn't have the required permissions, then the destination setup will fail. To fix this, see [Required permissions for Firehose log destination](#logging-firehose-permissions) to provide the required permissions.

## Cross-Account Logging
<a name="cross-account-logging-firehose"></a>

You can configure Lambda to send logs to a Firehose stream in a different AWS account. This requires setting up a destination and configuring the appropriate permissions in both accounts.

For detailed instructions on setting up cross-account logging, including required IAM roles and policies, see [Setting up a new cross-account subscription](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CrossAccountSubscriptions.html) in the CloudWatch Logs documentation.

# Sending Lambda function logs to Amazon S3
<a name="logging-with-s3"></a>

You can configure your Lambda function to send logs directly to Amazon S3 using the Lambda console. This feature provides a cost-effective solution for long-term log storage and enables powerful analysis options using services like Athena.

**Note**  
You can configure Lambda function logs to be sent to Amazon S3 using the Lambda console, AWS CLI, AWS CloudFormation, and all AWS SDKs.

## Pricing
<a name="logging-s3-pricing"></a>

For details on pricing, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/#Vended_Logs).

## Required permissions for Amazon S3 log destination
<a name="logging-s3-permissions"></a>

When using the Lambda console to configure Amazon S3 as your function's log destination, you need:

1. The [required IAM permissions](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html#monitoring-cloudwatchlogs-prereqs) to use CloudWatch Logs with Lambda.

1. To [set up a CloudWatch Logs subscription filter to send Lambda function logs to Amazon S3](#using-cwl-subscription-filter-lambda-s3). This filter defines which log events are delivered to your Amazon S3 bucket.

## Set up a CloudWatch Logs subscription filter to send Lambda function logs to Amazon S3
<a name="using-cwl-subscription-filter-lambda-s3"></a>

To send logs from CloudWatch Logs to Amazon S3, you need to create a subscription filter. This filter defines which log events are delivered to your Amazon S3 bucket. Your Amazon S3 bucket must be in the same Region as your log group.

### To create a subscription filter for Amazon S3
<a name="create-subscription-filter-s3"></a>

1. Create an Amazon Simple Storage Service (Amazon S3) bucket. We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, skip to step 2.

   Run the following command, replacing the placeholder Region with the Region you want to use:

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-bucket2 --create-bucket-configuration LocationConstraint=region
   ```
**Note**  
`amzn-s3-demo-bucket2` is an example Amazon S3 bucket name. It is *reserved*. For this procedure to work, you must replace it with your own unique Amazon S3 bucket name.

   The following is example output:

   ```
   {
       "Location": "/amzn-s3-demo-bucket2"
   }
   ```

1. Create the IAM role that grants CloudWatch Logs permission to put data into your Amazon S3 bucket. This policy includes an `aws:SourceArn` global condition context key to help prevent the confused deputy security issue. For more information, see [Confused deputy prevention](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions-confused-deputy.html).

   1. Use a text editor to create a trust policy in a file `~/TrustPolicyForCWL.json` as follows:

      ```
      {
          "Statement": {
              "Effect": "Allow",
              "Principal": { "Service": "logs.amazonaws.com" },
              "Condition": { 
                  "StringLike": {
                      "aws:SourceArn": "arn:aws:logs:region:123456789012:*"
                  } 
               },
              "Action": "sts:AssumeRole"
          } 
      }
      ```

   1. Use the `create-role` command to create the IAM role, specifying the trust policy file. Note the returned `Role.Arn` value, as you will need it in a later step:

      ```
      aws iam create-role \
       --role-name CWLtoS3Role \
       --assume-role-policy-document file://~/TrustPolicyForCWL.json
      {
          "Role": {
              "AssumeRolePolicyDocument": {
                  "Statement": {
                      "Action": "sts:AssumeRole",
                      "Effect": "Allow",
                      "Principal": {
                        "Service": "logs.amazonaws.com"
                      },
                      "Condition": { 
                          "StringLike": {
                              "aws:SourceArn": "arn:aws:logs:region:123456789012:*"
                          } 
                      }
                  }
              },
              "RoleId": "AAOIIAH450GAB4HC5F431",
              "CreateDate": "2015-05-29T13:46:29.431Z",
              "RoleName": "CWLtoS3Role",
              "Path": "/",
              "Arn": "arn:aws:iam::123456789012:role/CWLtoS3Role"
          }
      }
      ```
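
   If you need the role ARN again later, you can retrieve it at any point with `get-role`. This is an optional check, using the role name from this procedure.

   ```
   # Prints only the role ARN, which you pass later as --role-arn
   aws iam get-role --role-name CWLtoS3Role --query "Role.Arn" --output text
   ```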

1. Create a permissions policy to define what actions CloudWatch Logs can do on your account. First, use a text editor to create a permissions policy in a file `~/PermissionsForCWL.json`:

   ```
   {
     "Statement": [
       {
         "Effect": "Allow",
         "Action": ["s3:PutObject"],
         "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket2/*"]
       }
     ]
   }
   ```

   Associate the permissions policy with the role using the following `put-role-policy` command:

   ```
   aws iam put-role-policy --role-name CWLtoS3Role --policy-name Permissions-Policy-For-S3 --policy-document file://~/PermissionsForCWL.json
   ```

1. Create a `Delivery` log group or use an existing `Delivery` log group.

   ```
   aws logs create-log-group --log-group-name my-logs --log-group-class DELIVERY --region REGION_NAME
   ```
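
   Optionally, confirm that the log group was created with the `DELIVERY` log group class:

   ```
   # The returned logGroupClass value should be DELIVERY
   aws logs describe-log-groups --log-group-name-prefix my-logs --query "logGroups[].logGroupClass" --region REGION_NAME
   ```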

1. Use the `put-subscription-filter` command to set up the destination:

   ```
   aws logs put-subscription-filter \
   --log-group-name my-logs \
   --filter-name my-lambda-delivery \
   --filter-pattern "" \
   --destination-arn arn:aws:s3:::amzn-s3-demo-bucket2 \
   --role-arn arn:aws:iam::123456789012:role/CWLtoS3Role \
   --region REGION_NAME
   ```
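
   To confirm that the subscription filter was created, you can list the filters on the log group:

   ```
   # Shows the filter name, destination ARN, and role ARN you configured
   aws logs describe-subscription-filters --log-group-name my-logs --region REGION_NAME
   ```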

## Sending Lambda function logs to Amazon S3
<a name="logging-s3-setup"></a>

In the Lambda console, you can send function logs directly to Amazon S3 after creating a new function. To do this, complete these steps:

1. Sign in to the AWS Management Console and open the Lambda console.

1. Choose your function's name.

1. Choose the **Configuration** tab.

1. Choose the **Monitoring and operations tools** tab.

1. In the **Logging configuration** section, choose **Edit**.

1. In the **Log content** section, select a log format.

1. In the **Log destination** section, complete the following steps:

   1. Select a destination service.

   1. Choose to **Create a new log group** or use an **Existing log group**.
**Note**  
If you choose an existing log group for an Amazon S3 destination, ensure that the log group uses the `Delivery` log group class.

   1. Choose an Amazon S3 bucket to be the destination for your function logs.

   1. The CloudWatch `Delivery` log group appears.

1. Choose **Save**.

**Note**  
If the IAM role provided in the console doesn't have the required permissions, then the destination setup will fail. To fix this, refer to [Required permissions for Amazon S3 log destination](#logging-s3-permissions).
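
One way to confirm end-to-end delivery is to invoke your function and then list the bucket's contents. The function name and bucket name below are illustrative placeholders; substitute your own, and note that log delivery is asynchronous, so objects can take a few minutes to appear.

```
# Invoke the function once to generate log events
aws lambda invoke --function-name my-function out.json

# List delivered log objects (may take a few minutes to appear)
aws s3 ls s3://amzn-s3-demo-bucket2/ --recursive
```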

## Cross-account logging
<a name="cross-account-logging-s3"></a>

You can configure Lambda to send logs to an Amazon S3 bucket in a different AWS account. This requires setting up a destination and configuring appropriate permissions in both accounts.

For detailed instructions on setting up cross-account logging, including required IAM roles and policies, see [Setting up a new cross-account subscription](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CrossAccountSubscriptions.html) in the CloudWatch Logs documentation.