

# Configuring AWS Lambda functions
<a name="lambda-functions"></a>

Learn how to configure the core capabilities and options for your Lambda function using the Lambda API or console.

**[.zip file archives](configuration-function-zip.md)**  
Create a Lambda function deployment package when you want to include dependencies, custom runtime layers, or any files beyond your function code. The deployment package is a .zip file archive containing your function code and dependencies.

**[Container images](images-create.md)**  
Use container images to package your function code and dependencies when you need more control over the build process, or if your function requires custom runtime configurations. You can build, test, and deploy Lambda functions as container images using tools like Docker CLI.

**[Memory](configuration-memory.md)**  
Learn how and when to increase function memory.

**[Ephemeral storage](configuration-ephemeral-storage.md)**  
Learn how and when to increase your function's temporary storage capacity.

**[Timeout](configuration-timeout.md)**  
Learn how and when to increase your function's timeout value.

**[Environment variables](configuration-envvars.md)**  
You can make your function code portable and keep secrets out of your code by storing them in your function's configuration as environment variables.

**[Outbound networking](configuration-vpc.md)**  
You can use your Lambda function with AWS resources in an Amazon VPC. Connecting your function to a VPC lets you access resources in a private subnet, such as relational databases and caches.

**[Inbound networking](configuration-vpc-endpoints.md)**  
You can use an interface VPC endpoint to invoke your Lambda functions without crossing the public internet.

**[File system](configuration-filesystem.md)**  
You can use your Lambda function to mount an Amazon EFS file system to a local directory. A file system allows your function code to access and modify shared resources safely and at high concurrency.

**[Aliases](configuration-aliases.md)**  
By using an alias, you can let clients invoke a specific Lambda function version without having to update the clients themselves.

**[Versions](configuration-versions.md)**  
By publishing a version of your function, you can store your code and configuration as a separate resource that cannot be changed.

**[Tags](configuration-tags.md)**  
Use tags to enable attribute-based access control (ABAC), to organize your Lambda functions, and to filter and generate reports on your functions using the AWS Cost Explorer or AWS Billing and Cost Management services.

**[Response streaming](configuration-response-streaming.md)**  
You can configure your Lambda function URLs to stream response payloads back to clients. Response streaming can benefit latency-sensitive applications by improving time to first byte (TTFB) performance, because you can send partial responses back to the client as they become available. You can also use response streaming to build functions that return larger payloads.

**[Metadata endpoint](configuration-metadata-endpoint.md)**  
Use the Lambda metadata endpoint to discover which Availability Zone your function is running in, enabling you to optimize latency by routing to same-AZ resources and to implement AZ-aware resilience patterns.

# Deploying Lambda functions as .zip file archives
<a name="configuration-function-zip"></a>

When you create a Lambda function, you package your function code into a deployment package. Lambda supports two types of deployment packages: container images and .zip file archives. The workflow to create a function depends on the deployment package type. To configure a function defined as a container image, see [Create a Lambda function using a container image](images-create.md).

You can use the Lambda console and the Lambda API to create a function defined with a .zip file archive. You can also upload an updated .zip file to change the function code. 

**Note**  
You cannot change the [deployment package type](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html#lambda-CreateFunction-request-PackageType) (.zip or container image) for an existing function. For example, you cannot convert a container image function to use a .zip file archive. You must create a new function.

**Topics**
+ [Creating the function](#configuration-function-create)
+ [Using the console code editor](#configuration-functions-console-update)
+ [Updating function code](#configuration-function-update)
+ [Changing the runtime](#configuration-function-runtime)
+ [Changing the architecture](#configuration-function-arch)
+ [Using the Lambda API](#configuration-function-api)
+ [Downloading your function code](#configuration-function-download)
+ [CloudFormation](#configuration-function-cloudformation)
+ [Encrypting Lambda .zip deployment packages](encrypt-zip-package.md)

## Creating the function
<a name="configuration-function-create"></a>

When you create a function defined with a .zip file archive, you choose a code template, the language version, and the execution role for the function. You add your function code after Lambda creates the function.

**To create the function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose **Create function**.

1. Choose **Author from scratch** or **Use a blueprint** to create your function. 

1. Under **Basic information**, do the following:

   1. For **Function name**, enter the function name. Function names are limited to 64 characters in length.

   1. For **Runtime**, choose the language version to use for your function.

   1. (Optional) For **Architecture**, choose the instruction set architecture to use for your function. The default architecture is x86\_64. When you build the deployment package for your function, make sure that it is compatible with this [instruction set architecture](foundation-arch.md).

1. (Optional) Under **Permissions**, expand **Change default execution role**. You can create a new **Execution role** or use an existing role.

1. (Optional) Expand **Advanced settings**. You can choose a **Code signing configuration** for the function. You can also configure an Amazon Virtual Private Cloud (Amazon VPC) for the function to access.

1. Choose **Create function**.

Lambda creates the new function. You can now use the console to add the function code and configure other function parameters and features. For code deployment instructions, see the handler page for the runtime your function uses. 

------
#### [ Node.js ]

[Deploy Node.js Lambda functions with .zip file archives](nodejs-package.md) 

------
#### [ Python ]

 [Working with .zip file archives for Python Lambda functions](python-package.md) 

------
#### [ Ruby ]

 [Deploy Ruby Lambda functions with .zip file archives](ruby-package.md) 

------
#### [ Java ]

 [Deploy Java Lambda functions with .zip or JAR file archives](java-package.md) 

------
#### [ Go ]

 [Deploy Go Lambda functions with .zip file archives](golang-package.md) 

------
#### [ C\# ]

 [Build and deploy C\# Lambda functions with .zip file archives](csharp-package.md) 

------
#### [ PowerShell ]

 [Deploy PowerShell Lambda functions with .zip file archives](powershell-package.md) 

------

## Using the console code editor
<a name="configuration-functions-console-update"></a>

The console creates a Lambda function with a single source file. For scripting languages, you can edit this file and add more files using the built-in code editor. To save your changes, choose **Save**. Then, to run your code, choose **Test**.

When you save your function code, the Lambda console creates a .zip file archive deployment package. When you develop your function code outside of the console (using an IDE), you need to [create a deployment package](nodejs-package.md) to upload your code to the Lambda function.
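When building a deployment package outside of the console, the handler file must sit at the root of the archive, not inside a nested folder. The following is a minimal Python sketch of this packaging step (the function name and paths are illustrative, not part of any AWS tooling):

```python
import zipfile
from pathlib import Path

def build_deployment_package(source_dir: str, zip_path: str) -> None:
    """Zip the contents of source_dir so that the handler file sits at the
    archive root, as Lambda expects for .zip deployment packages."""
    source = Path(source_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as archive:
        for path in sorted(source.rglob("*")):
            if path.is_file():
                # Using a path relative to source_dir keeps index.js (or
                # app.py) at the top level of the archive, not nested.
                archive.write(path, arcname=path.relative_to(source))
```

A common mistake is zipping the parent folder itself, which nests the handler one level too deep and causes a "module not found" error at invocation time.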

## Updating function code
<a name="configuration-function-update"></a>

For scripting languages (Node.js, Python, and Ruby), you can edit your function code in the embedded code editor. If the code is larger than 3 MB, if you need to add libraries, or if your function uses a language that the editor doesn't support (Java, Go, and C\#), you must upload your function code as a .zip archive. If the .zip file archive is smaller than 50 MB, you can upload it from your local machine. If the file is larger than 50 MB, upload it to the function from an Amazon S3 bucket.

**To upload function code as a .zip archive**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the function to update and choose the **Code** tab.

1. Under **Code source**, choose **Upload from**.

1. Choose **.zip file**, and then choose **Upload**. 

   1. In the file chooser, select your .zip file archive, choose **Open**, and then choose **Save**.

1. (Alternative to step 4) Choose **Amazon S3 location**.

   1. In the text box, enter the S3 link URL of the .zip file archive, then choose **Save**.

## Changing the runtime
<a name="configuration-function-runtime"></a>

If you update the function configuration to use a new version of the same runtime, you may need to update your function code to be compatible with that version. If you change the function to a different runtime, you **must** provide new function code that is compatible with the new runtime and architecture. For instructions on how to create a deployment package for the function code, see the handler page for the runtime that the function uses.

The Node.js 20, Python 3.12, Java 21, .NET 8, Ruby 3.3, and later runtimes are based on the Amazon Linux 2023 minimal container image. Earlier runtimes use Amazon Linux 2. AL2023 provides several advantages over Amazon Linux 2, including a smaller deployment footprint and updated versions of libraries such as `glibc`. For more information, see [Introducing the Amazon Linux 2023 runtime for AWS Lambda](https://aws.amazon.com/blogs/compute/introducing-the-amazon-linux-2023-runtime-for-aws-lambda/) on the AWS Compute Blog.

**To change the runtime**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the function to update and choose the **Code** tab.

1. Scroll down to the **Runtime settings** section, which is under the code editor.

1. Choose **Edit**.

   1. For **Runtime**, select the runtime identifier.

   1. For **Handler**, specify the file name and handler for your function.

   1. For **Architecture**, choose the instruction set architecture to use for your function.

1. Choose **Save**.

## Changing the architecture
<a name="configuration-function-arch"></a>

Before you can change the instruction set architecture, you need to ensure that your function's code is compatible with the target architecture. 

If you use Node.js, Python, or Ruby and you edit your function code in the embedded editor, the existing code may run without modification.

However, if you provide your function code using a .zip file archive deployment package, you must prepare a new .zip file archive that is compiled and built correctly for the target runtime and instruction-set architecture. For instructions, see the handler page for your function runtime.

**To change the instruction set architecture**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the function to update and choose the **Code** tab.

1. Under **Runtime settings**, choose **Edit**.

1. For **Architecture**, choose the instruction set architecture to use for your function.

1. Choose **Save**.

## Using the Lambda API
<a name="configuration-function-api"></a>

To create and configure a function that uses a .zip file archive, use the following API operations: 
+ [CreateFunction](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html)
+ [UpdateFunctionCode](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateFunctionCode.html)
+ [UpdateFunctionConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateFunctionConfiguration.html)
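A typical sequence of these operations can be sketched with the AWS CLI as follows. The function name, role ARN, and .zip file name are placeholders, and the commands require valid AWS credentials:

```shell
# Create the function from a local .zip deployment package
aws lambda create-function \
  --function-name myFunction \
  --runtime nodejs22.x \
  --handler index.handler \
  --role arn:aws:iam::111122223333:role/service-role/my-lambda-role \
  --zip-file fileb://myFunction.zip

# Later, upload a new version of the code
aws lambda update-function-code \
  --function-name myFunction \
  --zip-file fileb://myFunction.zip

# Adjust configuration settings independently of the code
aws lambda update-function-configuration \
  --function-name myFunction \
  --timeout 30 \
  --memory-size 256
```

Note that code and configuration are updated through separate operations, so a code upload does not disturb settings such as memory or timeout.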

## Downloading your function code
<a name="configuration-function-download"></a>

You can download the current unpublished (`$LATEST`) version of your function code .zip via the Lambda console. To do this, first ensure that you have the following IAM permissions:
+ `iam:GetPolicy`
+ `iam:GetPolicyVersion`
+ `iam:GetRole`
+ `iam:GetRolePolicy`
+ `iam:ListAttachedRolePolicies`
+ `iam:ListRolePolicies`
+ `iam:ListRoles`

**To download the function code .zip**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the function you want to download the function code .zip for.

1. In the **Function overview**, choose the **Download** button, then choose **Download function code .zip**.

   1. Alternatively, choose **Download AWS SAM file** to generate and download a SAM template based on your function's configuration. You can also choose **Download both** to download both the .zip and the SAM template.
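If you prefer the AWS CLI, you can fetch the same short-lived presigned S3 URL that the console uses and download the archive with it (the function name is a placeholder, and the command requires valid AWS credentials):

```shell
# Retrieve the presigned URL for the $LATEST code package
url=$(aws lambda get-function \
  --function-name myFunction \
  --query 'Code.Location' \
  --output text)

# The URL expires after a short time, so download promptly
curl -o myFunction.zip "$url"
```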

## CloudFormation
<a name="configuration-function-cloudformation"></a>

You can use CloudFormation to create a Lambda function that uses a .zip file archive. In your CloudFormation template, the `AWS::Lambda::Function` resource specifies the Lambda function. For descriptions of the properties in the `AWS::Lambda::Function` resource, see [AWS::Lambda::Function](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html) in the *AWS CloudFormation User Guide*.

In the `AWS::Lambda::Function` resource, set the following properties to create a function defined as a .zip file archive:
+ AWS::Lambda::Function
  + PackageType – Set to `Zip`.
  + Code – Enter the Amazon S3 bucket name and .zip file name in the `S3Bucket` and `S3Key` fields. For Node.js or Python, you can provide the inline source code of your Lambda function.
  + Runtime – Set the runtime value.
  + Architectures – Set the architecture value to `arm64` to use the AWS Graviton2 processor. By default, the architecture value is `x86_64`.
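Putting these properties together, a minimal template fragment might look like the following. The bucket name, key, role ARN, and runtime are placeholder values:

```yaml
Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: myFunction
      PackageType: Zip
      Runtime: nodejs22.x
      Handler: index.handler
      Role: arn:aws:iam::111122223333:role/service-role/my-lambda-role
      Architectures:
        - arm64
      Code:
        S3Bucket: amzn-s3-demo-bucket
        S3Key: myFileName.zip
```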

# Encrypting Lambda .zip deployment packages
<a name="encrypt-zip-package"></a>

Lambda always provides server-side encryption at rest for .zip deployment packages and function configuration details with an AWS KMS key. By default, Lambda uses an [AWS owned key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-cmk). If this default behavior suits your workflow, you don't need to set up anything else. AWS doesn't charge you to use this key.

If you prefer, you can provide an AWS KMS customer managed key instead. You might do this to have control over rotation of the KMS key or to meet the requirements of your organization for managing KMS keys. When you use a customer managed key, only users in your account with access to the KMS key can view or manage the function's code or configuration.

Customer managed keys incur standard AWS KMS charges. For more information, see [AWS Key Management Service pricing](https://aws.amazon.com/kms/pricing/).

## Create a customer managed key
<a name="create-key"></a>

You can create a symmetric customer managed key by using the AWS Management Console or the AWS KMS APIs.

**To create a symmetric customer managed key**

Follow the steps for [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.

### Permissions
<a name="enable-zip-permissions"></a>

**Key policy**

[Key policies](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) control access to your customer managed key. Every customer managed key must have exactly one key policy, which contains statements that determine who can use the key and how they can use it. For more information, see [How to change a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-how-to) in the *AWS Key Management Service Developer Guide*.

When you use a customer managed key to encrypt a .zip deployment package, Lambda doesn't add a [grant](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html) to the key. Instead, your AWS KMS key policy must allow Lambda to call the following AWS KMS API operations on your behalf:
+ [kms:GenerateDataKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html)
+ [kms:Decrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html)

The following example key policy allows all Lambda functions in account 111122223333 to call the required AWS KMS operations for the specified customer managed key:

**Example AWS KMS key policy**  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-id",
            "Condition": {
                "StringLike": {
                "kms:EncryptionContext:aws:lambda:FunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:*"
                }
            }
        }
    ]
}
```

For more information about [troubleshooting key access](https://docs.aws.amazon.com/kms/latest/developerguide/policy-evaluation.html#example-no-iam), see the *AWS Key Management Service Developer Guide*.

**Principal permissions**

When you use a customer managed key to encrypt a .zip deployment package, only [principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/intro-structure.html) with access to that key can access the .zip deployment package. For example, principals who don't have access to the customer managed key can't download the .zip package using the presigned S3 URL that's included in the [GetFunction](https://docs.aws.amazon.com/lambda/latest/api/API_GetFunction.html) response. An `AccessDeniedException` is returned in the `Code` section of the response.

**Example AWS KMS AccessDeniedException**  

```
{
    "Code": {
        "RepositoryType": "S3",
        "Error": {
            "ErrorCode": "AccessDeniedException",
            "Message": "KMS access is denied. Check your KMS permissions. KMS Exception: AccessDeniedException KMS Message: User: arn:aws:sts::111122223333:assumed-role/LambdaTestRole/session is not authorized to perform: kms:Decrypt on resource: arn:aws:kms:us-east-1:111122223333:key/key-id with an explicit deny in a resource-based policy"
        },
        "SourceKMSKeyArn": "arn:aws:kms:us-east-1:111122223333:key/key-id"
    },
	...
```

For more information about permissions for AWS KMS keys, see [Authentication and access control for AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html).

## Using a customer managed key for your .zip deployment package
<a name="enable-zip-custom-encryption"></a>

Use the following API parameters to configure customer managed keys for .zip deployment packages:
+ [SourceKMSKeyArn](https://docs.aws.amazon.com/lambda/latest/api/API_FunctionCode.html#lambda-Type-FunctionCode-SourceKMSKeyArn): Encrypts the source .zip deployment package (the file that you upload).
+ [KMSKeyArn](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html#lambda-CreateFunction-request-KMSKeyArn): Encrypts [environment variables](configuration-envvars-encryption.md) and [Lambda SnapStart](snapstart.md) snapshots.

When `SourceKMSKeyArn` and `KMSKeyArn` are both specified, Lambda uses the `KMSKeyArn` key to encrypt the unzipped version of the package that Lambda uses to invoke the function. When `SourceKMSKeyArn` is specified but `KMSKeyArn` is not, Lambda uses an [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) to encrypt the unzipped version of the package.

------
#### [ Lambda console ]

**To add customer managed key encryption when you create a function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose **Create function**.

1. Choose **Author from scratch** or **Container image**. 

1. Under **Basic information**, do the following:

   1. For **Function name**, enter the function name.

   1. For **Runtime**, choose the language version to use for your function.

1. Expand **Advanced settings**, and then select **Enable encryption with an AWS KMS customer managed key**.

1. Choose a customer managed key.

1. Choose **Create function**.

To remove customer managed key encryption, or to use a different key, you must upload the .zip deployment package again.

**To add customer managed key encryption to an existing function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the name of a function.

1. In the **Code source** pane, choose **Upload from**.

1. Choose **.zip file** or **Amazon S3 location**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/upload-zip.png)

1. Upload the file or enter the Amazon S3 location.

1. Choose **Enable encryption with an AWS KMS customer managed key**.

1. Choose a customer managed key.

1. Choose **Save**.

------
#### [ AWS CLI ]

**To add customer managed key encryption when you create a function**

In the following [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) example:
+ `--code`: Specifies the local path to the .zip deployment package (`ZipFile`) and the customer managed key to encrypt it (`SourceKMSKeyArn`).
+ `--kms-key-arn`: Specifies the customer managed key to encrypt the environment variables and the unzipped version of the deployment package.

```
aws lambda create-function \
  --function-name myFunction \
  --runtime nodejs24.x \
  --handler index.handler \
  --role arn:aws:iam::111122223333:role/service-role/my-lambda-role \
  --code ZipFile=fileb://myFunction.zip,SourceKMSKeyArn=arn:aws:kms:us-east-1:111122223333:key/key-id \
  --kms-key-arn arn:aws:kms:us-east-1:111122223333:key/key2-id
```

In the following [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) example:
+ `--code`: Specifies the location of the .zip file in an Amazon S3 bucket (`S3Bucket`, `S3Key`, `S3ObjectVersion`) and the customer managed key to encrypt it (`SourceKMSKeyArn`).
+ `--kms-key-arn`: Specifies the customer managed key to encrypt the environment variables and the unzipped version of the deployment package.

```
aws lambda create-function \
  --function-name myFunction \
  --runtime nodejs24.x --handler index.handler \
  --role arn:aws:iam::111122223333:role/service-role/my-lambda-role \
  --code S3Bucket=amzn-s3-demo-bucket,S3Key=myFileName.zip,S3ObjectVersion=myObjectVersion,SourceKMSKeyArn=arn:aws:kms:us-east-1:111122223333:key/key-id \
  --kms-key-arn arn:aws:kms:us-east-1:111122223333:key/key2-id
```

**To add customer managed key encryption to an existing function**

In the following [update-function-code](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-code.html) example:
+ `--zip-file`: Specifies the local path to the .zip deployment package.
+ `--source-kms-key-arn`: Specifies the customer managed key to encrypt the zipped version of the deployment package. Lambda uses an AWS owned key to encrypt the unzipped package for function invocations. If you want to use a customer managed key to encrypt the unzipped version of the package, run the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command with the `--kms-key-arn` option.

```
aws lambda update-function-code \
  --function-name myFunction \
  --zip-file fileb://myFunction.zip \
  --source-kms-key-arn arn:aws:kms:us-east-1:111122223333:key/key-id
```

In the following [update-function-code](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-code.html) example:
+ `--s3-bucket`: Specifies the location of the .zip file in an Amazon S3 bucket.
+ `--s3-key`: Specifies the Amazon S3 key of the deployment package.
+ `--s3-object-version`: For versioned objects, the version of the deployment package object to use.
+ `--source-kms-key-arn`: Specifies the customer managed key to encrypt the zipped version of the deployment package. Lambda uses an AWS owned key to encrypt the unzipped package for function invocations. If you want to use a customer managed key to encrypt the unzipped version of the package, run the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command with the `--kms-key-arn` option.

```
aws lambda update-function-code \
  --function-name myFunction \
  --s3-bucket amzn-s3-demo-bucket \
  --s3-key myFileName.zip \
  --s3-object-version myObjectVersion \
  --source-kms-key-arn arn:aws:kms:us-east-1:111122223333:key/key-id
```

**To remove customer managed key encryption from an existing function**

In the following [update-function-code](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-code.html) example, `--zip-file` specifies the local path to the .zip deployment package. When you run this command without the `--source-kms-key-arn` option, Lambda uses an AWS owned key to encrypt the zipped version of the deployment package.

```
aws lambda update-function-code \
  --function-name myFunction \
  --zip-file fileb://myFunction.zip
```

------

# Create a Lambda function using a container image
<a name="images-create"></a>

Your AWS Lambda function's code consists of scripts or compiled programs and their dependencies. You use a *deployment package* to deploy your function code to Lambda. Lambda supports two types of deployment packages: container images and .zip file archives. 

There are three ways to build a container image for a Lambda function:
+ [Using an AWS base image for Lambda](#runtimes-images-lp)

  The [AWS base images](#runtimes-images-lp) are preloaded with a language runtime, a runtime interface client to manage the interaction between Lambda and your function code, and a runtime interface emulator for local testing.
+ [Using an AWS OS-only base image](#runtimes-images-provided)

  [AWS OS-only base images](https://gallery.ecr.aws/lambda/provided) contain an Amazon Linux distribution and the [runtime interface emulator](https://github.com/aws/aws-lambda-runtime-interface-emulator/). These images are commonly used to create container images for compiled languages, such as [Go](go-image.md#go-image-provided) and [Rust](lambda-rust.md), and for a language or language version that Lambda doesn't provide a base image for, such as Node.js 19. You can also use OS-only base images to implement a [custom runtime](runtimes-custom.md). To make the image compatible with Lambda, you must include a [runtime interface client](#images-ric) for your language in the image.
+ [Using a non-AWS base image](#images-types)

  You can use an alternative base image from another container registry, such as Alpine Linux or Debian. You can also use a custom image created by your organization. To make the image compatible with Lambda, you must include a [runtime interface client](#images-ric) for your language in the image.

**Tip**  
To reduce the time it takes for Lambda container functions to become active, see [Use multi-stage builds](https://docs.docker.com/build/building/multi-stage/) in the Docker documentation. To build efficient container images, follow the [Best practices for writing Dockerfiles](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/).

To create a Lambda function from a container image, build your image locally and upload it to an Amazon Elastic Container Registry (Amazon ECR) repository. If you're using a container image provided by an [AWS Marketplace](https://docs.aws.amazon.com/marketplace/latest/userguide/container-based-products.html) seller, you need to clone the image to your private Amazon ECR repository first. Then, specify the repository URI when you create the function. The Amazon ECR repository must be in the same AWS Region as the Lambda function. You can create a function using an image in a different AWS account, as long as the image is in the same Region as the Lambda function. For more information, see [Amazon ECR cross-account permissions](#configuration-images-xaccount-permissions).

**Note**  
Lambda does not support Amazon ECR FIPS endpoints for container images. If your repository URI includes `ecr-fips`, you are using a FIPS endpoint. Example: `111122223333.dkr.ecr-fips.us-east-1.amazonaws.com`.

This page explains the base image types and requirements for creating Lambda-compatible container images.

**Note**  
You cannot change the [deployment package type](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html#lambda-CreateFunction-request-PackageType) (.zip or container image) for an existing function. For example, you cannot convert a container image function to use a .zip file archive. You must create a new function.

**Topics**
+ [Requirements](#images-reqs)
+ [Using an AWS base image for Lambda](#runtimes-images-lp)
+ [Using an AWS OS-only base image](#runtimes-images-provided)
+ [Using a non-AWS base image](#images-types)
+ [Runtime interface clients](#images-ric)
+ [Amazon ECR permissions](#gettingstarted-images-permissions)
+ [Function lifecycle](#images-lifecycle)

## Requirements
<a name="images-reqs"></a>

Install the [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and the [Docker CLI](https://docs.docker.com/get-docker). Additionally, note the following requirements:
+ The container image must implement the [Lambda runtime API](runtimes-api.md). The AWS open-source [runtime interface clients](#images-ric) implement the API. You can add a runtime interface client to your preferred base image to make it compatible with Lambda.
+ The container image must be able to run on a read-only file system. Your function code can access a writable `/tmp` directory with between 512 MB and 10,240 MB of storage, configurable in 1-MB increments.
+ The default Lambda user must be able to read all the files required to run your function code. Lambda follows security best practices by defining a default Linux user with least-privileged permissions. This means that you don't need to specify a [USER](https://docs.docker.com/reference/dockerfile/#user) in your Dockerfile. Verify that your application code does not rely on files that other Linux users are restricted from running.
+ Lambda supports only Linux-based container images.
+ Lambda provides multi-architecture base images. However, the image you build for your function must target only one of the architectures. Lambda does not support functions that use multi-architecture container images.
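Because the file system is read-only, any scratch files your code creates must go under `/tmp`. A minimal handler sketch illustrating this (the file name and returned shape are illustrative, not a Lambda convention):

```python
import os

# /tmp is the only writable path inside the Lambda execution environment
CACHE_PATH = "/tmp/lambda-demo-cache.txt"

def handler(event, context):
    # Write scratch data on the first invocation; warm starts of the same
    # execution environment can reuse it.
    if not os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, "w") as f:
            f.write("cached")
    with open(CACHE_PATH) as f:
        return {"cache": f.read()}
```

Keep in mind that `/tmp` is per execution environment and is not shared across concurrent instances of your function.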

## Using an AWS base image for Lambda
<a name="runtimes-images-lp"></a>

You can use one of the [AWS base images](https://gallery.ecr.aws/lambda/) for Lambda to build the container image for your function code. The base images are preloaded with a language runtime and other components required to run a container image on Lambda. You add your function code and dependencies to the base image and then package it as a container image.
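For example, a minimal Dockerfile for a Python function built on an AWS base image follows the sketch below (`app.py`, `requirements.txt`, and the `app.handler` handler string are placeholder names for your own code):

```dockerfile
# Start from an AWS base image, which includes the runtime and
# runtime interface client preinstalled
FROM public.ecr.aws/lambda/python:3.12

# Install dependencies into the task root defined by the base image
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy function code into the task root
COPY app.py ${LAMBDA_TASK_ROOT}

# Set the handler as "file.function"
CMD [ "app.handler" ]
```

The `LAMBDA_TASK_ROOT` environment variable is defined by the base image and points to the directory where Lambda expects your function code.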

AWS periodically provides updates to the AWS base images for Lambda. If your Dockerfile includes the image name in the `FROM` instruction, your Docker client pulls the latest version of the image from the [Amazon ECR repository](https://gallery.ecr.aws/lambda/). To use the updated base image, you must rebuild your container image and [update the function code](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-code.html).

The Node.js 20, Python 3.12, Java 21, .NET 8, Ruby 3.3, and later base images are based on the [Amazon Linux 2023 minimal container image](https://docs.aws.amazon.com/linux/al2023/ug/minimal-container.html). Earlier base images use Amazon Linux 2. AL2023 provides several advantages over Amazon Linux 2, including a smaller deployment footprint and updated versions of libraries such as `glibc`.

AL2023-based images use `microdnf` (symlinked as `dnf`) as the package manager instead of `yum`, which is the default package manager in Amazon Linux 2. `microdnf` is a standalone implementation of `dnf`. For a list of packages that are included in AL2023-based images, refer to the **Minimal Container** columns in [Comparing packages installed on Amazon Linux 2023 Container Images](https://docs.aws.amazon.com/linux/al2023/ug/al2023-container-image-types.html). For more information about the differences between AL2023 and Amazon Linux 2, see [Introducing the Amazon Linux 2023 runtime for AWS Lambda](https://aws.amazon.com/blogs/compute/introducing-the-amazon-linux-2023-runtime-for-aws-lambda/) on the AWS Compute Blog.

**Note**  
To run AL2023-based images locally, including with AWS Serverless Application Model (AWS SAM), you must use Docker version 20.10.10 or later.

To build a container image using an AWS base image, choose the instructions for your preferred language:
+ [Node.js](nodejs-image.md#nodejs-image-instructions)
+ [TypeScript](typescript-image.md#base-image-typescript) (uses a Node.js base image)
+ [Python](python-image.md#python-image-instructions)
+ [Java](java-image.md#java-image-instructions) 
+ [Go](go-image.md#go-image-provided)
+ [.NET](csharp-image.md#csharp-image-instructions)
+ [Ruby](ruby-image.md#ruby-image-instructions)

## Using an AWS OS-only base image
<a name="runtimes-images-provided"></a>

[AWS OS-only base images](https://gallery.ecr.aws/lambda/provided) contain an Amazon Linux distribution and the [runtime interface emulator](https://github.com/aws/aws-lambda-runtime-interface-emulator/). These images are commonly used to create container images for compiled languages, such as [Go](go-image.md#go-image-provided) and [Rust](lambda-rust.md), and for a language or language version that Lambda doesn't provide a base image for, such as Node.js 19. You can also use OS-only base images to implement a [custom runtime](runtimes-custom.md). To make the image compatible with Lambda, you must include a [runtime interface client](#images-ric) for your language in the image.


| Tags | Runtime | Operating system | Dockerfile | Deprecation | 
| --- | --- | --- | --- | --- | 
| al2023 | OS-only Runtime | Amazon Linux 2023 | [Dockerfile for OS-only Runtime on GitHub](https://github.com/aws/aws-lambda-base-images/blob/provided.al2023/Dockerfile.provided.al2023) |   Jun 30, 2029   | 
| al2 | OS-only Runtime | Amazon Linux 2 | [Dockerfile for OS-only Runtime on GitHub](https://github.com/aws/aws-lambda-base-images/blob/provided.al2/Dockerfile.provided.al2) |   Jul 31, 2026   | 

Amazon Elastic Container Registry Public Gallery: [gallery.ecr.aws/lambda/provided](https://gallery.ecr.aws/lambda/provided)

## Using a non-AWS base image
<a name="images-types"></a>

Lambda supports any image that conforms to one of the following image manifest formats:
+ Docker image manifest V2, schema 2 (used with Docker version 1.10 and newer)
+ Open Container Initiative (OCI) Specifications (v1.0.0 and up)

Lambda supports a maximum uncompressed image size of 10 GB, including all layers.

**Note**  
To make the image compatible with Lambda, you must include a [runtime interface client](#images-ric) for your language in the image.
For optimal performance, keep your image manifest size under 25,400 bytes. To reduce image manifest size, minimize the number of layers in your image and reduce annotations.

## Runtime interface clients
<a name="images-ric"></a>

If you use an [OS-only base image](#runtimes-images-provided) or an alternative base image, you must include a runtime interface client in your image. The runtime interface client implements the [Lambda runtime API](runtimes-api.md), which manages the interaction between Lambda and your function code. AWS provides open-source runtime interface clients for the following languages:
+  [Node.js](nodejs-image.md#nodejs-image-clients) 
+  [Python](python-image.md#python-image-clients) 
+  [Java](java-image.md#java-image-clients) 
+  [.NET](csharp-image.md#csharp-image-clients) 
+  [Go](go-image.md#go-image-clients) 
+  [Ruby](ruby-image.md#ruby-image-clients) 
+  [Rust](lambda-rust.md)

If you're using a language that doesn't have an AWS-provided runtime interface client, you must create your own.
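At its core, a runtime interface client is a loop over the runtime API: fetch the next invocation, run the handler, and post the result. The following is a minimal, illustrative Python sketch of one iteration (the `handle_one_event` name and the echo handler are hypothetical; a real client adds error reporting and looping, and Lambda supplies the API host in the `AWS_LAMBDA_RUNTIME_API` environment variable):

```python
import json
import urllib.request


def handle_one_event(runtime_api: str) -> str:
    """Fetch one invocation from the Lambda runtime API and post a response."""
    base = f"http://{runtime_api}/2018-06-01/runtime/invocation"

    # Long-poll for the next invocation event
    with urllib.request.urlopen(f"{base}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # Run your handler logic (here: echo the event back)
    result = {"echo": event}

    # Post the handler result for this request ID
    req = urllib.request.Request(
        f"{base}/{request_id}/response",
        data=json.dumps(result).encode(),
        method="POST",
    )
    urllib.request.urlopen(req).close()
    return request_id
```

In a container image, an entrypoint script would run this loop forever; the AWS-provided clients linked above do this for you.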

## Amazon ECR permissions
<a name="gettingstarted-images-permissions"></a>

Before you create a Lambda function from a container image, you must build the image locally and upload it to an Amazon ECR repository. When you create the function, specify the Amazon ECR repository URI.

Make sure that the permissions for the user or role that creates the function include `GetRepositoryPolicy`, `SetRepositoryPolicy`, `BatchGetImage`, and `GetDownloadUrlForLayer`.

For example, use the IAM console to create a role with the following policy:


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecr:SetRepositoryPolicy",
        "ecr:GetRepositoryPolicy",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": "arn:aws:ecr:us-east-1:111122223333:repository/hello-world"
    }
  ]
}
```


### Amazon ECR repository policies
<a name="configuration-images-permissions"></a>

For a function in the same account as the container image in Amazon ECR, you can add `ecr:BatchGetImage` and `ecr:GetDownloadUrlForLayer` permissions to your Amazon ECR repository policy. The following example shows the minimum policy statement:

```
{
  "Sid": "LambdaECRImageRetrievalPolicy",
  "Effect": "Allow",
  "Principal": {
    "Service": "lambda.amazonaws.com"
  },
  "Action": [
    "ecr:BatchGetImage",
    "ecr:GetDownloadUrlForLayer"
  ]
}
```

For more information about Amazon ECR repository permissions, see [Private repository policies](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-policies.html) in the *Amazon Elastic Container Registry User Guide*.

If the Amazon ECR repository does not include these permissions, Lambda attempts to add them automatically. Lambda can add permissions only if the principal calling Lambda has `ecr:GetRepositoryPolicy` and `ecr:SetRepositoryPolicy` permissions.

To view or edit your Amazon ECR repository permissions, follow the directions in [Setting a private repository policy statement](https://docs.aws.amazon.com/AmazonECR/latest/userguide/set-repository-policy.html) in the *Amazon Elastic Container Registry User Guide*.

#### Amazon ECR cross-account permissions
<a name="configuration-images-xaccount-permissions"></a>

A different AWS account in the same Region can create a function that uses a container image owned by your account. In the following example, your [Amazon ECR repository permissions policy](https://docs.aws.amazon.com/AmazonECR/latest/userguide/set-repository-policy.html) needs the following statements to grant access to account number 123456789012.
+ **CrossAccountPermission** – Allows account 123456789012 to create and update Lambda functions that use images from this ECR repository.
+ **LambdaECRImageCrossAccountRetrievalPolicy** – Lambda will eventually set a function's state to `Inactive` if it is not invoked for an extended period. This statement is required so that Lambda can retrieve the container image for optimization and caching on behalf of the function owned by 123456789012.

**Example — Add cross-account permission to your repository**    

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CrossAccountPermission",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
      },
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/example-lambda-repository"
    },
    {
      "Sid": "LambdaECRImageCrossAccountRetrievalPolicy",
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Condition": {
        "ArnLike": {
          "aws:sourceARN": "arn:aws:lambda:us-east-1:123456789012:function:*"
        }
      },
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/example-lambda-repository"
    }
  ]
}
```

To give access to multiple accounts, add the account IDs to the `Principal` list in the `CrossAccountPermission` statement and to the `Condition` evaluation list in the `LambdaECRImageCrossAccountRetrievalPolicy` statement.

If you are working with multiple accounts in an AWS Organization, we recommend that you enumerate each account ID in the ECR permissions policy. This approach aligns with the AWS security best practice of setting narrow permissions in IAM policies.

In addition to Lambda permissions, the user or role that creates the function must also have `BatchGetImage` and `GetDownloadUrlForLayer` permissions.

## Function lifecycle
<a name="images-lifecycle"></a>

After you upload a new or updated container image, Lambda optimizes the image before the function can process invocations. The optimization process can take a few seconds. The function remains in the `Pending` state until the process completes, when the state transitions to `Active`. You can't invoke the function until it reaches the `Active` state. 

If a function is not invoked for multiple weeks, Lambda reclaims its optimized version, and the function transitions to the `Inactive` state. To reactivate the function, you must invoke it. Lambda rejects the first invocation and the function enters the `Pending` state until Lambda re-optimizes the image. The function then returns to the `Active` state.

Lambda periodically fetches the associated container image from the Amazon ECR repository. If the corresponding container image no longer exists on Amazon ECR or permissions are revoked, the function enters the `Failed` state, and Lambda returns a failure for any function invocations.

You can use the Lambda API to get information about a function's state. For more information, see [Lambda function states](functions-states.md).

# Configure Lambda function memory
<a name="configuration-memory"></a>

*Memory* is the amount of memory available to your Lambda function at runtime. Lambda allocates CPU power in proportion to the amount of memory configured, so the **Memory** setting controls both the memory and the CPU power allocated to your function. You can configure memory between 128 MB and 10,240 MB in 1-MB increments. At 1,769 MB, a function has the equivalent of one vCPU (one vCPU-second of credits per second).
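Because CPU scales linearly with memory, you can estimate the vCPU share that a given memory setting provides. A minimal sketch of that arithmetic:

```python
def approx_vcpus(memory_mb: int) -> float:
    """Estimate the vCPU share for a Lambda memory setting.

    Lambda allocates CPU in proportion to memory; 1,769 MB
    corresponds to the equivalent of one vCPU.
    """
    if not 128 <= memory_mb <= 10240:
        raise ValueError("memory must be between 128 MB and 10,240 MB")
    return memory_mb / 1769


print(round(approx_vcpus(1769), 2))   # 1.0
print(round(approx_vcpus(10240), 2))  # 5.79
```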

This page describes how and when to update the memory setting for a Lambda function.

**Topics**
+ [Determining the appropriate memory setting for a Lambda function](#configuration-memory-use-cases)
+ [Configuring function memory (console)](#configuration-memory-console)
+ [Configuring function memory (AWS CLI)](#configuration-memory-cli)
+ [Configuring function memory (AWS SAM)](#configuration-memory-sam)
+ [Accepting function memory recommendations (console)](#configuration-memory-optimization-accept)

## Determining the appropriate memory setting for a Lambda function
<a name="configuration-memory-use-cases"></a>

Memory is the principal lever for controlling the performance of a function. The default setting, 128 MB, is the lowest possible setting. We recommend 128 MB only for simple Lambda functions, such as those that transform and route events to other AWS services. A higher memory allocation can improve performance for functions that use imported libraries, [Lambda layers](chapter-layers.md), Amazon Simple Storage Service (Amazon S3), or Amazon Elastic File System (Amazon EFS). Adding more memory proportionally increases the CPU available, increasing the overall computational power. If a function is CPU-bound, network-bound, or memory-bound, increasing the memory setting can dramatically improve its performance.

To find the right memory configuration, monitor your functions with Amazon CloudWatch and set alarms if memory consumption approaches the configured maximum. This can help identify memory-bound functions. For CPU-bound and IO-bound functions, monitoring duration can also provide insight. In these cases, increasing the memory can help resolve compute or network bottlenecks.
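For example, the `REPORT` line that Lambda writes to CloudWatch Logs after each invocation includes both the configured memory and the maximum memory used. A sketch of parsing it to compute headroom (the sample log line below is illustrative):

```python
import re

# Matches the memory fields of a standard Lambda REPORT log line
REPORT = re.compile(
    r"Memory Size: (?P<configured>\d+) MB\s+Max Memory Used: (?P<used>\d+) MB"
)


def memory_headroom(report_line: str) -> float:
    """Return the fraction of configured memory left unused."""
    m = REPORT.search(report_line)
    if not m:
        raise ValueError("not a Lambda REPORT line")
    configured = int(m.group("configured"))
    used = int(m.group("used"))
    return (configured - used) / configured


line = ("REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 "
        "Duration: 12.34 ms Billed Duration: 13 ms "
        "Memory Size: 128 MB Max Memory Used: 120 MB")
print(memory_headroom(line))  # 0.0625 -> only ~6% headroom; consider more memory
```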

You can also consider using the open source [AWS Lambda Power Tuning](https://github.com/alexcasalboni/aws-lambda-power-tuning) tool. This tool uses AWS Step Functions to run multiple concurrent versions of a Lambda function at different memory allocations and measure the performance. The input function runs in your AWS account, performing live HTTP calls and SDK interaction, to measure likely performance in a live production scenario. You can also implement a CI/CD process to use this tool to automatically measure the performance of new functions that you deploy.

## Configuring function memory (console)
<a name="configuration-memory-console"></a>

You can configure the memory of your function in the Lambda console.

**To update the memory of a function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose the **Configuration** tab and then choose **General configuration**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/configuration-tab.png)

1. Under **General configuration**, choose **Edit**.

1. For **Memory**, set a value from 128 MB to 10,240 MB.

1. Choose **Save**.

## Configuring function memory (AWS CLI)
<a name="configuration-memory-cli"></a>

You can use the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command to configure the memory of your function.

**Example**  

```
aws lambda update-function-configuration \
  --function-name my-function \
  --memory-size 1024
```

## Configuring function memory (AWS SAM)
<a name="configuration-memory-sam"></a>

You can use the [AWS Serverless Application Model](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started.html) to configure memory for your function. Update the [MemorySize](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html#sam-function-memorysize) property in your `template.yaml` file and then run [sam deploy](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html).

**Example template.yaml**  

```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An AWS Serverless Application Model template describing your function.
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Description: ''
      MemorySize: 1024
      # Other function properties...
```

## Accepting function memory recommendations (console)
<a name="configuration-memory-optimization-accept"></a>

If you have administrator permissions in AWS Identity and Access Management (IAM), you can opt in to receive Lambda function memory setting recommendations from AWS Compute Optimizer. For instructions on opting in to memory recommendations for your account or organization, see [Opting in your account](https://docs.aws.amazon.com/compute-optimizer/latest/ug/getting-started.html#account-opt-in) in the *AWS Compute Optimizer User Guide*.

**Note**  
Compute Optimizer supports only functions that use x86\_64 architecture.

When you've opted in and your [Lambda function meets Compute Optimizer requirements](https://docs.aws.amazon.com/compute-optimizer/latest/ug/requirements.html#requirements-lambda-functions), you can view and accept function memory recommendations from Compute Optimizer in the Lambda console in **General configuration**.

# Configure ephemeral storage for Lambda functions
<a name="configuration-ephemeral-storage"></a>

Lambda provides ephemeral storage for functions in the `/tmp` directory. This storage is temporary and unique to each execution environment. You can control the amount of ephemeral storage allocated to your function using the **Ephemeral storage** setting. You can configure ephemeral storage between 512 MB and 10,240 MB, in 1-MB increments. All data stored in `/tmp` is encrypted at rest with a key managed by AWS.

This page describes common use cases and how to update the ephemeral storage for a Lambda function.

**Topics**
+ [Common use cases for increased ephemeral storage](#configuration-ephemeral-storage-use-cases)
+ [Configuring ephemeral storage (console)](#configuration-ephemeral-storage-console)
+ [Configuring ephemeral storage (AWS CLI)](#configuration-ephemeral-storage-cli)
+ [Configuring ephemeral storage (AWS SAM)](#configuration-ephemeral-storage-sam)

## Common use cases for increased ephemeral storage
<a name="configuration-ephemeral-storage-use-cases"></a>

Here are several common use cases that benefit from increased ephemeral storage:
+ **Extract-transform-load (ETL) jobs:** Increase ephemeral storage when your code performs intermediate computation or downloads other resources to complete processing. More temporary space enables more complex ETL jobs to run in Lambda functions.
+ **Machine learning (ML) inference:** Many inference tasks rely on large reference data files, including libraries and models. With more ephemeral storage, you can download larger models from Amazon Simple Storage Service (Amazon S3) to `/tmp` and use them in your processing.
+ **Data processing:** For workloads that download objects from Amazon S3 in response to S3 events, more `/tmp` space makes it possible to handle larger objects without using in-memory processing. Workloads that create PDFs or process media also benefit from more ephemeral storage.
+ **Graphics processing:** Image processing is a common use case for Lambda-based applications. For workloads that process large TIFF files or satellite images, more ephemeral storage makes it easier to use libraries and perform the computation in Lambda.
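In function code, these workloads all follow the same pattern: write intermediate data under `/tmp`. A minimal sketch that checks free ephemeral space before spooling (the `min_free_mb` threshold is an arbitrary example value):

```python
import os
import shutil
import tempfile


def spool_to_tmp(data: bytes, min_free_mb: int = 100) -> str:
    """Write intermediate data to ephemeral storage, checking free space first."""
    tmp_dir = os.environ.get("TMPDIR", "/tmp")
    free_mb = shutil.disk_usage(tmp_dir).free // (1024 * 1024)
    if free_mb < min_free_mb:
        raise OSError(f"only {free_mb} MB free in {tmp_dir}")
    # delete=False so the file survives until the function cleans it up
    with tempfile.NamedTemporaryFile(dir=tmp_dir, delete=False) as f:
        f.write(data)
        return f.name
```

Remember that `/tmp` is per execution environment: files may persist between warm invocations but are not shared across concurrent executions.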

## Configuring ephemeral storage (console)
<a name="configuration-ephemeral-storage-console"></a>

You can configure ephemeral storage in the Lambda console.

**To modify ephemeral storage for a function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose the **Configuration** tab and then choose **General configuration**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/configuration-tab.png)

1. Under **General configuration**, choose **Edit**.

1. For **Ephemeral storage**, set a value between 512 MB and 10,240 MB, in 1-MB increments.

1. Choose **Save**.

## Configuring ephemeral storage (AWS CLI)
<a name="configuration-ephemeral-storage-cli"></a>

You can use the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command to configure ephemeral storage.

**Example**  

```
aws lambda update-function-configuration \
  --function-name my-function \
  --ephemeral-storage '{"Size": 1024}'
```

## Configuring ephemeral storage (AWS SAM)
<a name="configuration-ephemeral-storage-sam"></a>

You can use the [AWS Serverless Application Model](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started.html) to configure ephemeral storage for your function. Update the [EphemeralStorage](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html#sam-function-ephemeralstorage) property in your `template.yaml` file and then run [sam deploy](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html).

**Example template.yaml**  

```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An AWS Serverless Application Model template describing your function.
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Description: ''
      MemorySize: 128
      Timeout: 120
      Handler: index.handler
      Runtime: nodejs22.x
      Architectures:
        - x86_64
      EphemeralStorage:
        Size: 10240
      # Other function properties...
```

# Selecting and configuring an instruction set architecture for your Lambda function
<a name="foundation-arch"></a>

The *instruction set architecture* of a Lambda function determines the type of computer processor that Lambda uses to run the function. Lambda provides a choice of instruction set architectures:
+ arm64 – 64-bit ARM architecture, for the AWS Graviton2 processor.
+ x86\_64 – 64-bit x86 architecture, for x86-based processors.

**Note**  
The arm64 architecture is available in most AWS Regions. For more information, see [AWS Lambda Pricing](https://aws.amazon.com//lambda/pricing/#aws-element-9ccd9262-b656-4d9c-8a72-34ee6b662135). In the memory prices table, choose the **Arm Price** tab, and then open the **Region** dropdown list to see which AWS Regions support arm64 with Lambda.  
For an example of how to create a function with arm64 architecture, see [AWS Lambda Functions Powered by AWS Graviton2 Processor](https://aws.amazon.com/blogs/aws/aws-lambda-functions-powered-by-aws-graviton2-processor-run-your-functions-on-arm-and-get-up-to-34-better-price-performance/).

**Topics**
+ [Advantages of using arm64 architecture](#foundation-arch-adv)
+ [Requirements for migration to arm64 architecture](#foundation-arch-consider)
+ [Function code compatibility with arm64 architecture](#foundation-arch-considerations)
+ [How to migrate to arm64 architecture](#foundation-arch-steps)
+ [Configuring the instruction set architecture](#foundation-arch-config)

## Advantages of using arm64 architecture
<a name="foundation-arch-adv"></a>

Lambda functions that use arm64 architecture (AWS Graviton2 processor) can achieve significantly better price and performance than the equivalent function running on x86\_64 architecture. Consider using arm64 for compute-intensive applications such as high-performance computing, video encoding, and simulation workloads.

The Graviton2 CPU uses the Neoverse N1 core and supports Armv8.2 (including CRC and crypto extensions) plus several other architectural extensions.

Graviton2 reduces memory read time by providing a larger L2 cache per vCPU, which improves the latency performance of web and mobile backends, microservices, and data processing systems. Graviton2 also provides improved encryption performance and supports instruction sets that improve the latency of CPU-based machine learning inference.

For more information about AWS Graviton2, see [AWS Graviton Processor](https://aws.amazon.com/ec2/graviton).

## Requirements for migration to arm64 architecture
<a name="foundation-arch-consider"></a>

To ensure a smooth migration when you select a Lambda function to move to arm64 architecture, make sure that your function meets the following requirements:
+ The deployment package contains only open-source components and source code that you control, so that you can make any necessary updates for the migration.
+ If the function code includes third-party dependencies, each library or package provides an arm64 version.

## Function code compatibility with arm64 architecture
<a name="foundation-arch-considerations"></a>

Your Lambda function code must be compatible with the instruction set architecture of the function. Before you migrate a function to arm64 architecture, note the following points about the current function code:
+ If you added your function code using the embedded code editor, your code probably runs on either architecture without modification.
+ If you uploaded your function code, you must upload new code that is compatible with your target architecture.
+ If your function uses layers, you must [check each layer](adding-layers.md#finding-layer-information) to ensure that it is compatible with the new architecture. If a layer is not compatible, edit the function to replace the current layer version with a compatible layer version.
+ If your function uses Lambda extensions, you must check each extension to ensure that it is compatible with the new architecture.
+ If your function uses a container image deployment package type, you must create a new container image that is compatible with the architecture of the function.

## How to migrate to arm64 architecture
<a name="foundation-arch-steps"></a>

To migrate a Lambda function to the arm64 architecture, we recommend following these steps:

1. Build the list of dependencies for your application or workload. Common dependencies include:
   + All the libraries and packages that the function uses.
   + The tools that you use to build, deploy, and test the function, such as compilers, test suites, continuous integration and continuous delivery (CI/CD) pipelines, provisioning tools, and scripts.
   + The Lambda extensions and third-party tools that you use to monitor the function in production.

1. For each of the dependencies, check the version, and then check whether arm64 versions are available.

1. Build an environment to migrate your application.

1. Bootstrap the application.

1. Test and debug the application.

1. Test the performance of the arm64 function. Compare the performance with the x86\_64 version.

1. Update your infrastructure pipeline to support arm64 Lambda functions.

1. Stage your deployment to production.

   For example, use [alias routing configuration](configuring-alias-routing.md) to split traffic between the x86 and arm64 versions of the function, and compare the performance and latency.

For more information about how to create a code environment for arm64 architecture, including language-specific information for Java, Go, .NET, and Python, see the [Getting started with AWS Graviton](https://github.com/aws/aws-graviton-getting-started) GitHub repository.

## Configuring the instruction set architecture
<a name="foundation-arch-config"></a>

You can configure the instruction set architecture for new and existing Lambda functions using the Lambda console, AWS SDKs, AWS Command Line Interface (AWS CLI), or CloudFormation. Follow these steps to change the instruction set architecture for an existing Lambda function from the console.

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the name of the function that you want to configure the instruction set architecture for.

1. On the main **Code** tab, for the **Runtime settings** section, choose **Edit**.

1. Under **Architecture**, choose the instruction set architecture you want your function to use.

1. Choose **Save**.

# Configure Lambda function timeout
<a name="configuration-timeout"></a>

Lambda runs your code for a set amount of time before timing out. *Timeout* is the maximum amount of time in seconds that a Lambda function can run. The default value for this setting is 3 seconds, but you can adjust this in increments of 1 second up to a maximum value of 900 seconds (15 minutes).

This page describes how and when to update the timeout setting for a Lambda function.

**Topics**
+ [Determining the appropriate timeout value for a Lambda function](#configuration-timeout-use-cases)
+ [Configuring timeout (console)](#configuration-timeout-console)
+ [Configuring timeout (AWS CLI)](#configuration-timeout-cli)
+ [Configuring timeout (AWS SAM)](#configuration-timeout-sam)

## Determining the appropriate timeout value for a Lambda function
<a name="configuration-timeout-use-cases"></a>

If the timeout value is close to the average duration of a function, there is a higher risk that the function will time out unexpectedly. The duration of a function can vary based on the amount of data transfer and processing, and the latency of any services the function interacts with. Some common causes of timeout include:
+ Downloads from Amazon Simple Storage Service (Amazon S3) are larger or take longer than average.
+ A function makes a request to another service, which takes longer to respond.
+ The parameters provided to a function require more computational complexity in the function, which causes the invocation to take longer.

When testing your application, ensure that your tests accurately reflect the size and quantity of data and realistic parameter values. Tests often use small samples for convenience, but you should use datasets at the upper bounds of what is reasonably expected for your workload.
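Within your handler, you can also defend against timeouts by checking the remaining invocation time; the Python runtime's context object exposes `get_remaining_time_in_millis()`. A sketch, where `process_item` is a hypothetical work function and the 5-second buffer is an example value:

```python
def handler(event, context):
    """Process items, stopping cleanly before the function times out."""
    buffer_ms = 5000  # reserve time to return a partial result cleanly
    processed = []
    for item in event["items"]:
        if context.get_remaining_time_in_millis() < buffer_ms:
            break  # return partial progress instead of timing out
        processed.append(process_item(item))
    return {
        "processed": processed,
        "remaining": event["items"][len(processed):],
    }


def process_item(item):
    # placeholder for real work
    return item
```

The caller (or a Step Functions workflow) can re-invoke the function with the `remaining` items to finish the job.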

## Configuring timeout (console)
<a name="configuration-timeout-console"></a>

You can configure function timeout in the Lambda console.

**To modify the timeout for a function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose the **Configuration** tab and then choose **General configuration**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/configuration-tab.png)

1. Under **General configuration**, choose **Edit**.

1. For **Timeout**, set a value between 1 and 900 seconds (15 minutes).

1. Choose **Save**.

## Configuring timeout (AWS CLI)
<a name="configuration-timeout-cli"></a>

You can use the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command to configure the timeout value, in seconds. The following example command increases the function timeout to 120 seconds (2 minutes).

**Example**  

```
aws lambda update-function-configuration \
  --function-name my-function \
  --timeout 120
```

## Configuring timeout (AWS SAM)
<a name="configuration-timeout-sam"></a>

You can use the [AWS Serverless Application Model](https://docs.aws.amazon.com//serverless-application-model/latest/developerguide/serverless-getting-started.html) to configure the timeout value for your function. Update the [Timeout](https://docs.aws.amazon.com//serverless-application-model/latest/developerguide/sam-resource-function.html#sam-function-timeout) property in your `template.yaml` file and then run [sam deploy](https://docs.aws.amazon.com//serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html).

**Example template.yaml**  

```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An AWS Serverless Application Model template describing your function.
Resources:
  my-function:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Description: ''
      MemorySize: 128
      Timeout: 120
      # Other function properties...
```

# Working with Lambda environment variables
<a name="configuration-envvars"></a>

You can use environment variables to adjust your function's behavior without updating code. An environment variable is a pair of strings that is stored in a function's version-specific configuration. The Lambda runtime makes environment variables available to your code and sets additional environment variables that contain information about the function and invocation request.

**Note**  
To increase security, we recommend that you use AWS Secrets Manager instead of environment variables to store database credentials and other sensitive information like API keys or authorization tokens. For more information, see [Use Secrets Manager secrets in Lambda functions](with-secrets-manager.md).

Environment variables are not evaluated before the function invocation. Any value you define is considered a literal string and not expanded. Perform the variable evaluation in your function code.
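For example, a value that references another variable arrives in your code as the literal string, and your code must expand it. The following Python sketch illustrates this; `BASE_BUCKET` and `BUCKET_PATH` are hypothetical variable names, and setting them through `os.environ` here only simulates the function configuration:

```python
import os

# Simulate values defined in the function configuration. Lambda would
# deliver BUCKET_PATH exactly as written, without expanding $BASE_BUCKET.
os.environ["BASE_BUCKET"] = "amzn-s3-demo-bucket"
os.environ["BUCKET_PATH"] = "$BASE_BUCKET/uploads"

raw = os.environ["BUCKET_PATH"]        # the literal, unexpanded string
expanded = os.path.expandvars(raw)     # expansion performed by your code

print(raw)       # $BASE_BUCKET/uploads
print(expanded)  # amzn-s3-demo-bucket/uploads
```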

## Creating Lambda environment variables
<a name="create-environment-variables"></a>

You can configure environment variables in Lambda using the Lambda console, the AWS Command Line Interface (AWS CLI), AWS Serverless Application Model (AWS SAM), or using an AWS SDK.

------
#### [ Console ]

You define environment variables on the unpublished version of your function. When you publish a version, the environment variables are locked for that version along with other [version-specific configuration settings](configuration-versions.md).

You create an environment variable for your function by defining a key and a value. Your function uses the name of the key to retrieve the value of the environment variable.

**To set environment variables in the Lambda console**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose the **Configuration** tab, then choose **Environment variables**.

1. Under **Environment variables**, choose **Edit**.

1. Choose **Add environment variable**.

1. Enter a key and value.

**Requirements**
   + Keys start with a letter and are at least two characters.
   + Keys only contain letters, numbers, and the underscore character (`_`).
   + Keys aren't [reserved by Lambda](#configuration-envvars-runtime).
   + The total size of all environment variables doesn't exceed 4 KB.

1. Choose **Save**.
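To sanity-check the 4 KB limit before saving, you can total the UTF-8 bytes of all keys and values. This Python sketch is an approximation of the service-side check, not its exact accounting:

```python
def env_vars_size_bytes(variables):
    """Approximate the total size of an environment variable set as
    the UTF-8 byte length of every key and value."""
    return sum(len(k.encode("utf-8")) + len(v.encode("utf-8"))
               for k, v in variables.items())

proposed = {"BUCKET": "amzn-s3-demo-bucket", "KEY": "file.txt"}
print(env_vars_size_bytes(proposed) <= 4096)  # True for this small set
```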

**To generate a list of environment variables in the console code editor**

You can generate a list of environment variables in the Lambda code editor. This is a quick way to reference your environment variables while you code.

1. Choose the **Code** tab.

1. Scroll down to the **ENVIRONMENT VARIABLES** section of the code editor. Existing environment variables are listed here:  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/env-var.png)

1. To create new environment variables, choose the plus sign (![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/add-plus.png)):  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/create-env-var.png)

Environment variables remain encrypted when listed in the console code editor. If you enabled encryption helpers for encryption in transit, then those settings remain unchanged. For more information, see [Securing Lambda environment variables](configuration-envvars-encryption.md).

The environment variables list is read-only and is available only on the Lambda console. This file is not included when you download the function's .zip file archive, and you can't add environment variables by uploading this file.

------
#### [ AWS CLI ]

The following example sets two environment variables on a function named `my-function`.

```
aws lambda update-function-configuration \
  --function-name my-function \
  --environment "Variables={BUCKET=amzn-s3-demo-bucket,KEY=file.txt}"
```

When you apply environment variables with the `update-function-configuration` command, the entire contents of the `Variables` structure is replaced. To retain existing environment variables when you add a new one, include all existing values in your request.

To get the current configuration, use the `get-function-configuration` command.

```
aws lambda get-function-configuration \
  --function-name my-function
```

You should see the following output:

```
{
    "FunctionName": "my-function",
    "FunctionArn": "arn:aws:lambda:us-east-2:111122223333:function:my-function",
    "Runtime": "nodejs24.x",
    "Role": "arn:aws:iam::111122223333:role/lambda-role",
    "Environment": {
        "Variables": {
            "BUCKET": "amzn-s3-demo-bucket",
            "KEY": "file.txt"
        }
    },
    "RevisionId": "0894d3c1-2a3d-4d48-bf7f-abade99f3c15",
    ...
}
```

You can pass the revision ID from the output of `get-function-configuration` as a parameter to `update-function-configuration`. This ensures that the values don't change between when you read the configuration and when you update it.
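The read-merge-write pattern can be sketched in Python. The dictionary below stands in for the `Variables` structure returned by `get-function-configuration`, and the boto3 calls are shown as comments because they require AWS credentials and an existing function:

```python
# Stand-in for the live configuration:
# current = boto3.client("lambda").get_function_configuration(
#     FunctionName="my-function")["Environment"]["Variables"]
current = {"BUCKET": "amzn-s3-demo-bucket", "KEY": "file.txt"}

# Merge the new variable in. Sending only {"LOG_LEVEL": "DEBUG"} to
# update-function-configuration would erase BUCKET and KEY.
merged = {**current, "LOG_LEVEL": "DEBUG"}

# boto3.client("lambda").update_function_configuration(
#     FunctionName="my-function", Environment={"Variables": merged})
print(merged)
```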

To configure a function's encryption key, set the `KMSKeyARN` option.

```
aws lambda update-function-configuration \
  --function-name my-function \
  --kms-key-arn arn:aws:kms:us-east-2:111122223333:key/055efbb4-xmpl-4336-ba9c-538c7d31f599
```

------
#### [ AWS SAM ]

You can use the [AWS Serverless Application Model](https://docs.aws.amazon.com//serverless-application-model/latest/developerguide/serverless-getting-started.html) to configure environment variables for your function. Update the [Environment](https://docs.aws.amazon.com//serverless-application-model/latest/developerguide/sam-resource-function.html#sam-function-environment) and [Variables](https://docs.aws.amazon.com//AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-environment.html#cfn-lambda-function-environment-variables) properties in your `template.yaml` file and then run [sam deploy](https://docs.aws.amazon.com//serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html).

**Example template.yaml**  

```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An AWS Serverless Application Model template describing your function.
Resources:
  my-function:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Description: ''
      MemorySize: 128
      Timeout: 120
      Handler: index.handler
      Runtime: nodejs24.x
      Architectures:
        - x86_64
      EphemeralStorage:
        Size: 10240
      Environment:
        Variables:
          BUCKET: amzn-s3-demo-bucket
          KEY: file.txt
      # Other function properties...
```

------
#### [ AWS SDKs ]

To manage environment variables using an AWS SDK, use the following API operations.
+ [UpdateFunctionConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateFunctionConfiguration.html)
+ [GetFunctionConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_GetFunctionConfiguration.html)
+ [CreateFunction](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html)

To learn more, refer to the [AWS SDK documentation](https://aws.amazon.com/developer/tools/) for your preferred programming language.

------

## Example scenario for environment variables
<a name="configuration-envvars-example"></a>

You can use environment variables to customize function behavior in your test environment and production environment. For example, you can create two functions with the same code but different configurations. One function connects to a test database, and the other connects to a production database. In this situation, you use environment variables to pass the hostname and other connection details for the database to the function. 

The following example shows how to define the database host and database name as environment variables.

![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/console-env.png)


If you want your test environment to generate more debug information than the production environment, you could set an environment variable to configure your test environment to use more verbose logging or more detailed tracing.

For example, in your test environment, you could set an environment variable with the key `LOG_LEVEL` and a value indicating a log level of debug or trace. In your Lambda function's code, you can then use this environment variable to set the log level.

The following code examples in Python and Node.js illustrate how you can achieve this. These examples assume your environment variable has a value of `DEBUG` in Python or `debug` in Node.js.

------
#### [ Python ]

**Example Python code to set log level**  

```
import os
import logging

# Initialize the logger
logger = logging.getLogger()

# Get the log level from the environment variable and default to INFO if not set
log_level = os.environ.get('LOG_LEVEL', 'INFO')

# Set the log level
logger.setLevel(log_level)

def lambda_handler(event, context):
    # Produce some example log outputs
    logger.debug('This is a log with detailed debug information - shown only in test environment')
    logger.info('This is a log with standard information - shown in production and test environments')
```

------
#### [ Node.js (ES module format) ]

**Example Node.js code to set log level**  
This example uses the `winston` logging library. Use npm to add this library to your function's deployment package. For more information, see [Creating a .zip deployment package with dependencies](nodejs-package.md#nodejs-package-create-dependencies).  

```
import winston from 'winston';

// Initialize the logger using the log level from environment variables, defaulting to INFO if not set
const logger = winston.createLogger({
   level: process.env.LOG_LEVEL || 'info',
   format: winston.format.json(),
   transports: [new winston.transports.Console()]
});

export const handler = async (event) => {
   // Produce some example log outputs
   logger.debug('This is a log with detailed debug information - shown only in test environment');
   logger.info('This is a log with standard information - shown in production and test environment');
   
};
```

------

## Retrieving Lambda environment variables
<a name="retrieve-environment-variables"></a>

To retrieve environment variables in your function code, use the standard method for your programming language.

------
#### [ Node.js ]

```
let region = process.env.AWS_REGION
```

------
#### [ Python ]

```
import os

region = os.environ['AWS_REGION']
```

**Note**  
If the environment variable might not be set, use `os.environ.get`, which returns `None` instead of raising a `KeyError`:  

```
region = os.environ.get('AWS_REGION')
```

------
#### [ Ruby ]

```
region = ENV["AWS_REGION"]
```

------
#### [ Java ]

```
String region = System.getenv("AWS_REGION");
```

------
#### [ Go ]

```
var region = os.Getenv("AWS_REGION")
```

------
#### [ C# ]

```
string region = Environment.GetEnvironmentVariable("AWS_REGION");
```

------
#### [ PowerShell ]

```
$region = $env:AWS_REGION
```

------

Lambda stores environment variables securely by encrypting them at rest. You can [configure Lambda to use a different encryption key](configuration-envvars-encryption.md), encrypt environment variable values on the client side, or set environment variables in a CloudFormation template with AWS Secrets Manager.

## Defined runtime environment variables
<a name="configuration-envvars-runtime"></a>

Lambda [runtimes](lambda-runtimes.md) set several environment variables during initialization. Most of the environment variables provide information about the function or runtime. The keys for these environment variables are *reserved* and cannot be set in your function configuration.

**Reserved environment variables**
+ `_HANDLER` – The handler location configured on the function.
+ `_X_AMZN_TRACE_ID` – The [X-Ray tracing header](services-xray.md). This environment variable changes with each invocation.
  + This environment variable is not defined for OS-only runtimes (the `provided` runtime family). You can set `_X_AMZN_TRACE_ID` for custom runtimes using the `Lambda-Runtime-Trace-Id` response header from the [Next invocation](runtimes-api.md#runtimes-api-next).
  + For Java runtime versions 17 and later, this environment variable is not used. Instead, Lambda stores tracing information in the `com.amazonaws.xray.traceHeader` system property.
+ `AWS_DEFAULT_REGION` – The default AWS Region where the Lambda function is executed.
+ `AWS_REGION` – The AWS Region where the Lambda function is executed. If defined, this value overrides the `AWS_DEFAULT_REGION`.
  + For more information about using the AWS Region environment variables with AWS SDKs, see [AWS Region](https://docs.aws.amazon.com/sdkref/latest/guide/feature-region.html#feature-region-sdk-compat) in the *AWS SDKs and Tools Reference Guide*.
+ `AWS_EXECUTION_ENV` – The [runtime identifier](lambda-runtimes.md), prefixed by `AWS_Lambda_` (for example, `AWS_Lambda_java8`). This environment variable is not defined for OS-only runtimes (the `provided` runtime family).
+ `AWS_LAMBDA_FUNCTION_NAME` – The name of the function.
+ `AWS_LAMBDA_FUNCTION_MEMORY_SIZE` – The amount of memory available to the function in MB.
+ `AWS_LAMBDA_FUNCTION_VERSION` – The version of the function being executed.
+ `AWS_LAMBDA_INITIALIZATION_TYPE` – The initialization type of the function, which is `on-demand`, `provisioned-concurrency`, `snap-start`, or `lambda-managed-instances`. For information, see [Configuring provisioned concurrency](provisioned-concurrency.md), [Improving startup performance with Lambda SnapStart](snapstart.md), or [Lambda Managed Instances](lambda-managed-instances.md).
+ `AWS_LAMBDA_LOG_GROUP_NAME`, `AWS_LAMBDA_LOG_STREAM_NAME` – The name of the Amazon CloudWatch Logs group and stream for the function. The `AWS_LAMBDA_LOG_GROUP_NAME` and `AWS_LAMBDA_LOG_STREAM_NAME` [environment variables](#configuration-envvars-runtime) are not available in Lambda SnapStart functions.
+ `AWS_ACCESS_KEY`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` – The access keys obtained from the function's [execution role](lambda-intro-execution-role.md).
+ `AWS_LAMBDA_RUNTIME_API` – ([Custom runtime](runtimes-custom.md)) The host and port of the [runtime API](runtimes-api.md).
+ `LAMBDA_TASK_ROOT` – The path to your Lambda function code.
+ `LAMBDA_RUNTIME_DIR` – The path to runtime libraries.
+ `AWS_LAMBDA_MAX_CONCURRENCY` – (Lambda Managed Instances only) The maximum number of concurrent invocations Lambda will send to one execution environment.
+ `AWS_LAMBDA_METADATA_API` – The [metadata endpoint](configuration-metadata-endpoint.md) server address in the format `{ipv4_address}:{port}` (for example, `169.254.100.1:9001`).
+ `AWS_LAMBDA_METADATA_TOKEN` – A unique authentication token for the current execution environment used to authenticate requests to the [metadata endpoint](configuration-metadata-endpoint.md). Lambda generates this token automatically at initialization.
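In function code, reserved variables are read like any other environment variable. A minimal Python sketch follows; the fallback values exist only so the snippet runs outside of Lambda, where the runtime hasn't set these variables:

```python
import os

def lambda_handler(event, context):
    # Reserved variables are set by the runtime; the defaults here are
    # placeholders for running this sketch locally.
    return {
        "function": os.environ.get("AWS_LAMBDA_FUNCTION_NAME", "local-test"),
        "memory_mb": os.environ.get("AWS_LAMBDA_FUNCTION_MEMORY_SIZE", "128"),
        "version": os.environ.get("AWS_LAMBDA_FUNCTION_VERSION", "$LATEST"),
    }

print(lambda_handler({}, None))
```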

The following additional environment variables aren't reserved and can be extended in your function configuration.

**Unreserved environment variables**
+ `LANG` – The locale of the runtime (`en_US.UTF-8`).
+ `PATH` – The execution path (`/usr/local/bin:/usr/bin/:/bin:/opt/bin`).
+ `LD_LIBRARY_PATH` – The system library path (`/var/lang/lib:/lib64:/usr/lib64:$LAMBDA_RUNTIME_DIR:$LAMBDA_RUNTIME_DIR/lib:$LAMBDA_TASK_ROOT:$LAMBDA_TASK_ROOT/lib:/opt/lib`).
+ `NODE_PATH` – ([Node.js](lambda-nodejs.md)) The Node.js library path (`/opt/nodejs/node12/node_modules/:/opt/nodejs/node_modules:$LAMBDA_RUNTIME_DIR/node_modules`).
+ `NODE_OPTIONS` – ([Node.js](lambda-nodejs.md)) For Node.js runtimes, you can use `NODE_OPTIONS` to re-enable experimental features that Lambda disables by default.
+ `PYTHONPATH` – ([Python](lambda-python.md)) The Python library path (`$LAMBDA_RUNTIME_DIR`).
+ `GEM_PATH` – ([Ruby](lambda-ruby.md)) The Ruby library path (`$LAMBDA_TASK_ROOT/vendor/bundle/ruby/3.3.0:/opt/ruby/gems/3.3.0`).
+ `AWS_XRAY_CONTEXT_MISSING` – For X-Ray tracing, Lambda sets this to `LOG_ERROR` to avoid throwing runtime errors from the X-Ray SDK.
+ `AWS_XRAY_DAEMON_ADDRESS` – For X-Ray tracing, the IP address and port of the X-Ray daemon.
+ `AWS_LAMBDA_DOTNET_PREJIT` – ([.NET](lambda-csharp.md)) Set this variable to enable or disable .NET specific runtime optimizations. Values include `always`, `never`, and `provisioned-concurrency`. For more information, see [Configuring provisioned concurrency for a function](provisioned-concurrency.md).
+ `TZ` – The environment's time zone (`:UTC`). The execution environment uses NTP to synchronize the system clock.

The sample values shown reflect the latest runtimes. The presence of specific variables or their values can vary on earlier runtimes.

# Securing Lambda environment variables
<a name="configuration-envvars-encryption"></a>

To secure your environment variables, you can use server-side encryption to protect your data at rest and client-side encryption to protect your data in transit.

**Note**  
To increase database security, we recommend that you use AWS Secrets Manager instead of environment variables to store database credentials. For more information, see [Use Secrets Manager secrets in Lambda functions](with-secrets-manager.md).

**Security at rest**  
Lambda always provides server-side encryption at rest with an AWS KMS key. By default, Lambda uses an AWS managed key. If this default behavior suits your workflow, you don't need to set up anything else. Lambda creates the AWS managed key in your account and manages the permissions for you. AWS doesn't charge you to use this key.

If you prefer, you can provide an AWS KMS customer managed key instead. You might do this to have control over rotation of the KMS key or to meet the requirements of your organization for managing KMS keys. When you use a customer managed key, only users in your account with access to the KMS key can view or manage environment variables on the function.

Customer managed keys incur standard AWS KMS charges. For more information, see [AWS Key Management Service pricing](https://aws.amazon.com/kms/pricing/).

**Security in transit**  
For additional security, you can enable helpers for encryption in transit, which ensures that your environment variables are encrypted client-side for protection in transit.

**To configure encryption for your environment variables**

1. Use the AWS Key Management Service (AWS KMS) to create any customer managed keys for Lambda to use for server-side and client-side encryption. For more information, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.

1. Using the Lambda console, navigate to the **Edit environment variables** page.

   1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

   1. Choose a function.

   1. Choose **Configuration**, then choose **Environment variables** from the left navigation bar.

   1. In the **Environment variables** section, choose **Edit**.

   1. Expand **Encryption configuration**.

1. (Optional) Enable console encryption helpers to use client-side encryption to protect your data in transit.

   1. Under **Encryption in transit**, choose **Enable helpers for encryption in transit**.

   1. For each environment variable that you want to enable console encryption helpers for, choose **Encrypt** next to the environment variable.

   1. Under **AWS KMS key to encrypt in transit**, choose a customer managed key that you created at the beginning of this procedure.

   1. Choose **Execution role policy** and copy the policy. This policy grants permission to your function's execution role to decrypt the environment variables.

      Save this policy to use in the last step of this procedure.

   1. Add code to your function that decrypts the environment variables. To see an example, choose **Decrypt secrets snippet**.

1. (Optional) Specify your customer managed key for encryption at rest.

   1. Choose **Use a customer master key**.

   1. Choose a customer managed key that you created at the beginning of this procedure.

1. Choose **Save**.

1. Set up permissions.

   If you're using a customer managed key with server-side encryption, grant permissions to any users or roles that you want to be able to view or manage environment variables on the function. For more information, see [Managing permissions to your server-side encryption KMS key](#managing-permissions-to-your-server-side-encryption-key).

   If you're enabling client-side encryption for security in transit, your function needs permission to call the `kms:Decrypt` API operation. Add the policy that you saved previously in this procedure to the function's [execution role](lambda-intro-execution-role.md).
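A minimal sketch of the decryption step in Python follows. It assumes a hypothetical environment variable named `SECRET_VALUE` that holds the base64-encoded ciphertext (the assignment below only simulates it); the boto3 KMS call is commented out because it requires credentials and the key from this procedure, and the exact encryption context may differ from the console's generated snippet:

```python
import base64
import os

# Simulate the client-side encrypted value stored in the function
# configuration: base64-encoded ciphertext under a hypothetical key name.
os.environ["SECRET_VALUE"] = base64.b64encode(b"ciphertext").decode()
ciphertext_blob = base64.b64decode(os.environ["SECRET_VALUE"])

# Decrypt with AWS KMS inside the handler (commented out; requires
# credentials and the customer managed key chosen in this procedure):
# import boto3
# plaintext = boto3.client("kms").decrypt(
#     CiphertextBlob=ciphertext_blob,
#     EncryptionContext={
#         "LambdaFunctionName": os.environ["AWS_LAMBDA_FUNCTION_NAME"]},
# )["Plaintext"].decode()
```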

## Managing permissions to your server-side encryption KMS key
<a name="managing-permissions-to-your-server-side-encryption-key"></a>

No AWS KMS permissions are required for your user or the function's execution role to use the default encryption key. To use a customer managed key, you need permission to use the key. Lambda uses your permissions to create a grant on the key. This allows Lambda to use it for encryption.
+ `kms:ListAliases` – To view keys in the Lambda console.
+ `kms:CreateGrant`, `kms:Encrypt` – To configure a customer managed key on a function.
+ `kms:Decrypt` – To view and manage environment variables that are encrypted with a customer managed key.

You can get these permissions from your AWS account or from a key's resource-based permissions policy. `ListAliases` is provided by the [managed policies for Lambda](access-control-identity-based.md). Key policies grant the remaining permissions to users in the **Key users** group.

Users without `Decrypt` permissions can still manage functions, but they can't view environment variables or manage them in the Lambda console. To prevent a user from viewing environment variables, add a statement to the user's permissions that denies access to the default key, a customer managed key, or all keys.

**Example IAM policy – Deny access by key ARN**    
****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Deny",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:us-east-2:111122223333:key/3be10e2d-xmpl-4be4-bc9d-0405a71945cc"
        }
    ]
}
```

For details on managing key permissions, see [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*.

# Giving Lambda functions access to resources in an Amazon VPC
<a name="configuration-vpc"></a>

With Amazon Virtual Private Cloud (Amazon VPC), you can create private networks in your AWS account to host resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Relational Database Service (Amazon RDS) instances, and Amazon ElastiCache instances. You can give your Lambda function access to resources hosted in an Amazon VPC by attaching your function to the VPC through the private subnets that contain the resources. Follow the instructions in the following sections to attach a Lambda function to an Amazon VPC using the Lambda console, the AWS Command Line Interface (AWS CLI), or AWS SAM.

**Note**  
Every Lambda function runs inside a VPC that is owned and managed by the Lambda service. These VPCs are maintained automatically by Lambda and are not visible to customers. Configuring your function to access other AWS resources in an Amazon VPC has no effect on the Lambda-managed VPC your function runs inside.

**Topics**
+ [Required IAM permissions](#configuration-vpc-permissions)
+ [Attaching Lambda functions to an Amazon VPC in your AWS account](#configuration-vpc-attaching)
+ [Internet access when attached to a VPC](#configuration-vpc-internet-access)
+ [IPv6 support](#configuration-vpc-ipv6)
+ [Best practices for using Lambda with Amazon VPCs](#configuration-vpc-best-practice)
+ [Understanding Hyperplane Elastic Network Interfaces (ENIs)](#configuration-vpc-enis)
+ [Using IAM condition keys for VPC settings](#vpc-conditions)
+ [VPC tutorials](#vpc-tutorials)

## Required IAM permissions
<a name="configuration-vpc-permissions"></a>

To attach a Lambda function to an Amazon VPC in your AWS account, Lambda needs permissions to create and manage the network interfaces it uses to give your function access to the resources in the VPC.

The network interfaces that Lambda creates are known as Hyperplane Elastic Network Interfaces, or Hyperplane ENIs. To learn more about these network interfaces, see [Understanding Hyperplane Elastic Network Interfaces (ENIs)](#configuration-vpc-enis).

You can give your function the permissions it needs by attaching the AWS managed policy [AWSLambdaVPCAccessExecutionRole](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaVPCAccessExecutionRole.html) to your function's execution role. When you create a new function in the Lambda console and attach it to a VPC, Lambda automatically adds this permissions policy for you.

If you prefer to create your own IAM permissions policy, make sure to add all of the following permissions and allow them on all resources (`"Resource": "*"`):
+ ec2:CreateNetworkInterface
+ ec2:DescribeNetworkInterfaces
+ ec2:DescribeSubnets
+ ec2:DeleteNetworkInterface
+ ec2:AssignPrivateIpAddresses
+ ec2:UnassignPrivateIpAddresses

Note that your function's role only needs these permissions to create the network interfaces, not to invoke your function. You can still invoke your function successfully when it’s attached to an Amazon VPC, even if you remove these permissions from your function’s execution role. 

To attach your function to a VPC, Lambda also needs to verify network resources using your IAM user role. Ensure that your user role has the following IAM permissions:
+ **ec2:DescribeSecurityGroups**
+ **ec2:DescribeSubnets**
+ **ec2:DescribeVpcs**
+ **ec2:GetSecurityGroupsForVpc**

**Note**  
The Amazon EC2 permissions that you grant to your function's execution role are used by the Lambda service to attach your function to a VPC. However, you're also implicitly granting these permissions to your function's code. This means that your function code is able to make these Amazon EC2 API calls. For advice on following security best practices, see [Security best practices](#configuration-vpc-best-practice-security).

## Attaching Lambda functions to an Amazon VPC in your AWS account
<a name="configuration-vpc-attaching"></a>

Attach your function to an Amazon VPC in your AWS account by using the Lambda console, the AWS CLI or AWS SAM. If you're using the AWS CLI or AWS SAM, or attaching an existing function to a VPC using the Lambda console, make sure that your function's execution role has the necessary permissions listed in the previous section.

Lambda functions can't connect directly to a VPC with [dedicated instance tenancy](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html). To connect to resources in a dedicated VPC, [peer it to a second VPC with default tenancy](https://aws.amazon.com/premiumsupport/knowledge-center/lambda-dedicated-vpc/).

------
#### [ Lambda console ]

**To attach a function to an Amazon VPC when you create it**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console and choose **Create function**.

1. Under **Basic information**, for **Function name**, enter a name for your function.

1. Configure VPC settings for the function by doing the following:

   1. Expand **Advanced settings**.

   1. Select **Enable VPC**, and then select the VPC you want to attach the function to.

   1. (Optional) To allow [outbound IPv6 traffic](#configuration-vpc-ipv6), select **Allow IPv6 traffic for dual-stack subnets**.

   1. Choose the subnets and security groups to create the network interface for. If you selected **Allow IPv6 traffic for dual-stack subnets**, all selected subnets must have an IPv4 CIDR block and an IPv6 CIDR block.
**Note**  
To access private resources, connect your function to private subnets. If your function needs internet access, see [Enable internet access for VPC-connected Lambda functions](configuration-vpc-internet.md). Connecting a function to a public subnet doesn't give it internet access or a public IP address. 

1. Choose **Create function**.

**To attach an existing function to an Amazon VPC**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console and select your function.

1. Choose the **Configuration** tab, then choose **VPC**.

1. Choose **Edit**.

1. Under **VPC**, select the Amazon VPC you want to attach your function to.

1. (Optional) To allow [outbound IPv6 traffic](#configuration-vpc-ipv6), select **Allow IPv6 traffic for dual-stack subnets**. 

1. Choose the subnets and security groups to create the network interface for. If you selected **Allow IPv6 traffic for dual-stack subnets**, all selected subnets must have an IPv4 CIDR block and an IPv6 CIDR block.
**Note**  
To access private resources, connect your function to private subnets. If your function needs internet access, see [Enable internet access for VPC-connected Lambda functions](configuration-vpc-internet.md). Connecting a function to a public subnet doesn't give it internet access or a public IP address. 

1. Choose **Save**.

------
#### [ AWS CLI ]

**To attach a function to an Amazon VPC when you create it**
+ To create a Lambda function and attach it to a VPC, run the following CLI `create-function` command.

  ```
  aws lambda create-function --function-name my-function \
  --runtime nodejs24.x --handler index.js --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-role \
  --vpc-config Ipv6AllowedForDualStack=true,SubnetIds=subnet-071f712345678e7c8,subnet-07fd123456788a036,SecurityGroupIds=sg-085912345678492fb
  ```

  Specify your own subnets and security groups and set `Ipv6AllowedForDualStack` to `true` or `false` according to your use case.

**To attach an existing function to an Amazon VPC**
+ To attach an existing function to a VPC, run the following CLI `update-function-configuration` command.

  ```
  aws lambda update-function-configuration --function-name my-function \
  --vpc-config Ipv6AllowedForDualStack=true,SubnetIds=subnet-071f712345678e7c8,subnet-07fd123456788a036,SecurityGroupIds=sg-085912345678492fb
  ```

**To unattach your function from a VPC**
+ To unattach your function from a VPC, run the following `update-function-configuration` CLI command with an empty list of VPC subnets and security groups.

  ```
  aws lambda update-function-configuration --function-name my-function \
  --vpc-config SubnetIds=[],SecurityGroupIds=[]
  ```

------
#### [ AWS SAM ]

**To attach your function to a VPC**
+ To attach a Lambda function to an Amazon VPC, add the `VpcConfig` property to your function definition, as shown in the following example template. For more information about this property, see [AWS::Lambda::Function VpcConfig](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-vpcconfig.html) in the *CloudFormation User Guide* (AWS SAM passes the `VpcConfig` property directly to the `VpcConfig` property of a CloudFormation `AWS::Lambda::Function` resource).

  ```
  AWSTemplateFormatVersion: '2010-09-09'
  Transform: AWS::Serverless-2016-10-31
  
  Resources:
    MyFunction:
      Type: AWS::Serverless::Function
      Properties:
        CodeUri: ./lambda_function/
        Handler: lambda_function.handler
        Runtime: python3.12
        VpcConfig:
          SecurityGroupIds:
            - !Ref MySecurityGroup
          SubnetIds:
            - !Ref MySubnet1
            - !Ref MySubnet2
        Policies:
          - AWSLambdaVPCAccessExecutionRole
  
    MySecurityGroup:
      Type: AWS::EC2::SecurityGroup
      Properties:
        GroupDescription: Security group for Lambda function
        VpcId: !Ref MyVPC
  
    MySubnet1:
      Type: AWS::EC2::Subnet
      Properties:
        VpcId: !Ref MyVPC
        CidrBlock: 10.0.1.0/24
  
    MySubnet2:
      Type: AWS::EC2::Subnet
      Properties:
        VpcId: !Ref MyVPC
        CidrBlock: 10.0.2.0/24
  
    MyVPC:
      Type: AWS::EC2::VPC
      Properties:
        CidrBlock: 10.0.0.0/16
  ```

  For more information about configuring your VPC in AWS SAM, see [AWS::EC2::VPC](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-vpc.html) in the *CloudFormation User Guide*.

------

## Internet access when attached to a VPC
<a name="configuration-vpc-internet-access"></a>

By default, Lambda functions have access to the public internet. When you attach your function to a VPC, it can only access resources available within that VPC. To give your function access to the internet, you also need to configure the VPC to have internet access. To learn more, see [Enable internet access for VPC-connected Lambda functions](configuration-vpc-internet.md).

## IPv6 support
<a name="configuration-vpc-ipv6"></a>

Your function can connect to resources in dual-stack VPC subnets over IPv6. This option is turned off by default. To allow outbound IPv6 traffic, use the console or the `--vpc-config Ipv6AllowedForDualStack=true` option with the [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) or [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command.

**Note**  
To allow outbound IPv6 traffic in a VPC, all of the subnets that are connected to the function must be dual-stack subnets. Lambda doesn't support outbound IPv6 connections for IPv6-only subnets in a VPC or outbound IPv6 connections for functions that are not connected to a VPC.
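
Before you enable the option, you can confirm that your subnets are dual-stack. The following sketch assumes subnet descriptions shaped like the EC2 `DescribeSubnets` response, which reports the IPv4 CIDR in `CidrBlock` and IPv6 associations in `Ipv6CidrBlockAssociationSet`:

```python
def dual_stack_ready(subnets):
    """Return True if every subnet has both an IPv4 CIDR block and at
    least one IPv6 CIDR association, as required when enabling
    Ipv6AllowedForDualStack=true."""
    for subnet in subnets:
        has_ipv4 = bool(subnet.get("CidrBlock"))
        has_ipv6 = any(
            assoc.get("Ipv6CidrBlock")
            for assoc in subnet.get("Ipv6CidrBlockAssociationSet", [])
        )
        if not (has_ipv4 and has_ipv6):
            return False
    return True
```

A subnet with only an IPv4 CIDR block fails the check, matching the requirement that all connected subnets be dual-stack.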

You can update your function code to explicitly connect to subnet resources over IPv6. The following Python example opens a socket and connects to an IPv6 server.

**Example — Connect to IPv6 server**  

```
import socket

def connect_to_server(event, context):
    server_address = event['host']
    server_port = event['port']
    message = event['message']
    return run_connect_to_server(server_address, server_port, message)

def run_connect_to_server(server_address, server_port, message):
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM, 0)
    try:
        # Send data
        sock.connect((server_address, int(server_port), 0, 0))
        sock.sendall(message.encode())
        BUFF_SIZE = 4096
        data = b''
        while True:
            segment = sock.recv(BUFF_SIZE)
            data += segment
            # Either 0 or end of data
            if len(segment) < BUFF_SIZE:
                break
        return data
    finally:
        sock.close()
```

## Best practices for using Lambda with Amazon VPCs
<a name="configuration-vpc-best-practice"></a>

To ensure that your Lambda VPC configuration follows best practices, use the guidance in the following sections.

### Security best practices
<a name="configuration-vpc-best-practice-security"></a>

To attach your Lambda function to a VPC, you need to give your function’s execution role a number of Amazon EC2 permissions. These permissions are required to create the network interfaces your function uses to access the resources in the VPC. However, these permissions are also implicitly granted to your function’s code. This means that your function code has permission to make these Amazon EC2 API calls.

To follow the principle of least-privilege access, add a deny policy like the following example to your function’s execution role. This policy prevents your function code from making calls to the Amazon EC2 APIs, while still allowing the Lambda service to manage VPC resources on your behalf. The policy uses the `lambda:SourceFunctionArn` condition key, which only applies to API calls made by your function code during execution. For more information, see [Using source function ARN to control function access behavior](permissions-source-function-arn.md).

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [ 
                 "ec2:CreateNetworkInterface",
                 "ec2:DeleteNetworkInterface",
                 "ec2:DescribeNetworkInterfaces",
                 "ec2:DescribeSubnets",
                 "ec2:DetachNetworkInterface",
                 "ec2:AssignPrivateIpAddresses",
                 "ec2:UnassignPrivateIpAddresses"
            ],
            "Resource": [ "*" ],
            "Condition": {
                "ArnEquals": {
                    "lambda:SourceFunctionArn": [
                        "arn:aws:lambda:us-west-2:123456789012:function:my_function"
                    ]
                }
            }
        }
    ]
}
```

------

AWS provides *[security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)* and *[network Access Control Lists (ACLs)](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html)* to increase security in your VPC. Security groups control inbound and outbound traffic for your resources, and network ACLs control inbound and outbound traffic for your subnets. Security groups provide enough access control for most subnets. You can use network ACLs if you want an additional layer of security for your VPC. For general guidelines on security best practices when using Amazon VPCs, see [Security best practices for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-best-practices.html) in the *Amazon Virtual Private Cloud User Guide*.

### Performance best practices
<a name="configuration-vpc-best-practice-performance"></a>

When you attach your function to a VPC, Lambda checks to see if there is an available network resource (Hyperplane ENI) it can use to connect to. Hyperplane ENIs are associated with a particular combination of security groups and VPC subnets. If you’ve already attached one function to a VPC, specifying the same subnets and security groups when you attach another function means that Lambda can share the network resources and avoid the need to create a new Hyperplane ENI. For more information about Hyperplane ENIs and their lifecycle, see [Understanding Hyperplane Elastic Network Interfaces (ENIs)](#configuration-vpc-enis).
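
This sharing behavior can be modeled as a key on the set of subnets and security groups. The following illustrative sketch (a simplified model, not Lambda's actual implementation) counts how many distinct combinations, and therefore roughly how many Hyperplane ENIs, a group of VPC configurations requires:

```python
def eni_sharing_key(subnet_ids, security_group_ids):
    """Functions whose VPC configs map to the same key can share a
    Hyperplane ENI; the order of the IDs doesn't matter."""
    return (frozenset(subnet_ids), frozenset(security_group_ids))

def distinct_eni_combinations(vpc_configs):
    """Count the unique subnet/security-group combinations across a set of
    VPC configurations -- a rough proxy for how many ENIs Lambda creates."""
    return len({
        eni_sharing_key(cfg["SubnetIds"], cfg["SecurityGroupIds"])
        for cfg in vpc_configs
    })
```

Two functions that list the same subnets and security groups in a different order still map to one combination, which is why reusing an existing subnet and security group pairing avoids creating a new ENI.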

## Understanding Hyperplane Elastic Network Interfaces (ENIs)
<a name="configuration-vpc-enis"></a>

A Hyperplane ENI is a managed resource that acts as a network interface between your Lambda function and the resources you want your function to connect to. The Lambda service creates and manages these ENIs automatically when you attach your function to a VPC.

Hyperplane ENIs are not directly visible to you, and you don’t need to configure or manage them. However, knowing how they work can help you to understand your function’s behavior when you attach it to a VPC.

The first time you attach a function to a VPC using a particular subnet and security group combination, Lambda creates a Hyperplane ENI. Other functions in your account that use the same subnet and security group combination can also use this ENI. Wherever possible, Lambda reuses existing ENIs to optimize resource utilization and minimize the creation of new ENIs. Each Hyperplane ENI supports up to 65,000 connections/ports. If the number of connections exceeds this limit, Lambda scales the number of ENIs automatically based on network traffic and concurrency requirements.

For new functions, while Lambda is creating a Hyperplane ENI, your function remains in the Pending state and you can’t invoke it. Your function transitions to the Active state only when the Hyperplane ENI is ready, which can take several minutes. For existing functions, you can’t perform additional operations that target the function, such as creating versions or updating the function’s code, but you can continue to invoke previous versions of the function.
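
One way to handle the Pending state is to poll the function's state until it becomes Active before invoking it. The following sketch accepts any state-returning callable (for example, one wrapping the `GetFunctionConfiguration` API), so the polling logic is independent of the AWS SDK and the sleep behavior can be injected:

```python
import time

def wait_until_active(get_state, timeout=600, interval=5, sleep=time.sleep):
    """Poll get_state() until it reports 'Active', the function fails,
    or the timeout elapses. Returns True when the function is Active."""
    waited = 0
    while waited <= timeout:
        state = get_state()
        if state == "Active":
            return True
        if state == "Failed":
            raise RuntimeError("function entered the Failed state")
        sleep(interval)
        waited += interval
    return False
```

If you use the AWS SDK for Python, you can instead use the built-in `function_active_v2` waiter on the Lambda client rather than hand-rolling this loop.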

As part of managing the ENI lifecycle, Lambda may delete and recreate ENIs to load balance network traffic across ENIs or to address issues found in ENI health-checks. Additionally, if a Lambda function remains idle for 14 days, Lambda reclaims any unused Hyperplane ENIs and sets the function state to `Inactive`. The next invocation attempt will fail, and the function re-enters the Pending state until Lambda completes the creation or allocation of a Hyperplane ENI. We recommend that your design doesn't rely on the persistence of ENIs.

When you update a function to remove its VPC configuration, Lambda requires up to 20 minutes to delete the attached Hyperplane ENI. Lambda only deletes the ENI if no other function (or published function version) is using that Hyperplane ENI. 

Lambda relies on permissions in the function [execution role](lambda-intro-execution-role.md) to delete the Hyperplane ENI. If you delete the execution role before Lambda deletes the Hyperplane ENI, Lambda won't be able to delete the Hyperplane ENI. You can manually perform the deletion.

## Using IAM condition keys for VPC settings
<a name="vpc-conditions"></a>

You can use Lambda-specific condition keys for VPC settings to provide additional permission controls for your Lambda functions. For example, you can require that all functions in your organization are connected to a VPC. You can also specify the subnets and security groups that the function's users can and can't use.

Lambda supports the following condition keys in IAM policies:
+ **lambda:VpcIds** – Allow or deny one or more VPCs.
+ **lambda:SubnetIds** – Allow or deny one or more subnets.
+ **lambda:SecurityGroupIds** – Allow or deny one or more security groups.

The Lambda API operations [CreateFunction](https://docs.aws.amazon.com/lambda/latest/api/API_CreateFunction.html) and [UpdateFunctionConfiguration](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateFunctionConfiguration.html) support these condition keys. For more information about using condition keys in IAM policies, see [IAM JSON Policy Elements: Condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in the *IAM User Guide*.

**Tip**  
If your function already includes a VPC configuration from a previous API request, you can send an `UpdateFunctionConfiguration` request without the VPC configuration.

### Example policies with condition keys for VPC settings
<a name="vpc-condition-examples"></a>

The following examples demonstrate how to use condition keys for VPC settings. After you create a policy statement with the desired restrictions, append the policy statement for the target user or role.

#### Ensure that users deploy only VPC-connected functions
<a name="vpc-condition-example-1"></a>

To ensure that all users deploy only VPC-connected functions, you can deny function create and update operations that don't include a valid VPC ID. 

Note that VPC ID is not an input parameter to the `CreateFunction` or `UpdateFunctionConfiguration` request. Lambda retrieves the VPC ID value based on the subnet and security group parameters.

------
#### [ JSON ]


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceVPCFunction",
      "Action": [
          "lambda:CreateFunction",
          "lambda:UpdateFunctionConfiguration"
       ],
      "Effect": "Deny",
      "Resource": "*",
      "Condition": {
        "Null": {
           "lambda:VpcIds": "true"
        }
      }
    }
  ]
}
```

------

#### Deny users access to specific VPCs, subnets, or security groups
<a name="vpc-condition-example-2"></a>

To deny users access to specific VPCs, use `StringEquals` to check the value of the `lambda:VpcIds` condition. The following example denies users access to `vpc-1` and `vpc-2`.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceOutOfVPC",
            "Action": [
                "lambda:CreateFunction",
                "lambda:UpdateFunctionConfiguration"
            ],
            "Effect": "Deny",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "lambda:VpcIds": [
                        "vpc-1",
                        "vpc-2"
                    ]
                }
            }
        }
    ]
}
```

------

To deny users access to specific subnets, use `StringEquals` to check the value of the `lambda:SubnetIds` condition. The following example denies users access to `subnet-1` and `subnet-2`.

```
{
      "Sid": "EnforceOutOfSubnet",
      "Action": [
          "lambda:CreateFunction",
          "lambda:UpdateFunctionConfiguration"
       ],
      "Effect": "Deny",
      "Resource": "*",
      "Condition": {
        "ForAnyValue:StringEquals": {
            "lambda:SubnetIds": ["subnet-1", "subnet-2"]
        }
      }
    }
```

To deny users access to specific security groups, use `StringEquals` to check the value of the `lambda:SecurityGroupIds` condition. The following example denies users access to `sg-1` and `sg-2`.

```
{
      "Sid": "EnforceOutOfSecurityGroups",
      "Action": [
          "lambda:CreateFunction",
          "lambda:UpdateFunctionConfiguration"
       ],
      "Effect": "Deny",
      "Resource": "*",
      "Condition": {
        "ForAnyValue:StringEquals": {
            "lambda:SecurityGroupIds": ["sg-1", "sg-2"]
        }
      }
    }
```

#### Allow users to create and update functions with specific VPC settings
<a name="vpc-condition-example-3"></a>

To allow users to access specific VPCs, use `StringEquals` to check the value of the `lambda:VpcIds` condition. The following example allows users to access `vpc-1` and `vpc-2`.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceStayInSpecificVpc",
            "Action": [
                "lambda:CreateFunction",
                "lambda:UpdateFunctionConfiguration"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "lambda:VpcIds": [
                        "vpc-1",
                        "vpc-2"
                    ]
                }
            }
        }
    ]
}
```

------

To allow users to access specific subnets, use `StringEquals` to check the value of the `lambda:SubnetIds` condition. The following example allows users to access `subnet-1` and `subnet-2`.

```
{
      "Sid": "EnforceStayInSpecificSubnets",
      "Action": [
          "lambda:CreateFunction",
          "lambda:UpdateFunctionConfiguration"
       ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringEquals": {
            "lambda:SubnetIds": ["subnet-1", "subnet-2"]
        }
      }
    }
```

To allow users to access specific security groups, use `StringEquals` to check the value of the `lambda:SecurityGroupIds` condition. The following example allows users to access `sg-1` and `sg-2`.

```
{
      "Sid": "EnforceStayInSpecificSecurityGroup",
      "Action": [
          "lambda:CreateFunction",
          "lambda:UpdateFunctionConfiguration"
       ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringEquals": {
            "lambda:SecurityGroupIds": ["sg-1", "sg-2"]
        }
      }
    }
```

## VPC tutorials
<a name="vpc-tutorials"></a>

In the following tutorials, you connect a Lambda function to resources in your VPC.
+ [Tutorial: Using a Lambda function to access Amazon RDS in an Amazon VPC](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-lambda-tutorial.html)
+ [Tutorial: Configuring a Lambda function to access Amazon ElastiCache in an Amazon VPC](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/LambdaRedis.html)

# Giving Lambda functions access to a resource in an Amazon VPC in another account
<a name="configuration-vpc-cross-account"></a>

You can give your AWS Lambda function access to a resource in an Amazon VPC managed by another account, without exposing either VPC to the internet. This access pattern lets you share data with other organizations using AWS, with a greater level of security and performance than sharing it over the internet. To access these resources, configure your Lambda function to use an Amazon VPC peering connection.

**Warning**  
When you allow access between accounts or VPCs, check that your plan meets the security requirements of the respective organizations that manage these accounts. Following the instructions in this document will affect the security posture of your resources.

In this tutorial, you connect two accounts with a peering connection using IPv4. You configure a Lambda function that is not already connected to an Amazon VPC, and you configure DNS resolution so that your function can connect to resources that don't provide static IP addresses. To adapt these instructions to other peering scenarios, consult the [VPC Peering Guide](https://docs.aws.amazon.com//vpc/latest/peering/what-is-vpc-peering.html).

## Prerequisites
<a name="w2aac35c61b9"></a>

To give a Lambda function access to a resource in another account, you must have:
+ A Lambda function, configured to authenticate with and then read from your resource.
+ A resource in another account, such as an Amazon RDS cluster, available through Amazon VPC.
+ Credentials for your Lambda function's account and your resource's account. If you are not authorized to use your resource's account, contact an authorized user to prepare that account.
+ Permission to create and update a VPC (and supporting Amazon VPC resources) to associate with your Lambda function.
+ Permission to update the execution role and VPC configuration for your Lambda function.
+ Permission to create a VPC peering connection in your Lambda function's account.
+ Permission to accept a VPC peering connection in your resource's account.
+ Permission to update the configuration of your resource's VPC (and supporting Amazon VPC resources).
+ Permission to invoke your Lambda function.

## Create an Amazon VPC in your function's account
<a name="w2aac35c61c11"></a>

Create an Amazon VPC, subnets, route tables, and a security group in your Lambda function's account. 

**To create a VPC, subnets, and other VPC resources using the console**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. On the dashboard, choose **Create VPC**.

1. For **IPv4 CIDR block**, provide a private CIDR block. Your CIDR block must not overlap with blocks used in your resource's VPC. Don't choose a block that your resource's VPC uses to assign IP addresses, or a block that already appears in the route tables of your resource's VPC. For more information about defining appropriate CIDR blocks, see [VPC CIDR blocks](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-cidr-blocks.html).

1. Choose **Customize AZs**.

1. Select the same AZs as your resource.

1. For **Number of public subnets**, choose **0**.

1. For **VPC endpoints**, choose **None**.

1. Choose **Create VPC**.
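
The CIDR overlap requirement above can be checked programmatically with Python's standard `ipaddress` module. This is a minimal sketch that tests a candidate block against the blocks already in use; it doesn't replace reviewing your resource VPC's route tables:

```python
import ipaddress

def cidrs_overlap(candidate, existing_blocks):
    """Return the first existing CIDR block that overlaps the candidate,
    or None if the candidate is safe to use for the new VPC."""
    new_net = ipaddress.ip_network(candidate)
    for block in existing_blocks:
        if new_net.overlaps(ipaddress.ip_network(block)):
            return block
    return None
```

For example, `cidrs_overlap("10.0.0.0/16", ["10.0.1.0/24"])` reports the conflict, while a candidate in an unused private range returns `None`.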

## Grant VPC permissions to your function's execution role
<a name="w2aac35c61c13"></a>

Attach [AWSLambdaVPCAccessExecutionRole](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaVPCAccessExecutionRole.html) to your function’s execution role to allow it to connect to VPCs. 

**To grant VPC permissions to your function's execution role**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the name of your function.

1. Choose **Configuration**.

1. Choose **Permissions**.

1. Under **Role name**, choose the execution role.

1. In the **Permissions policies** section, choose **Add permissions**.

1. In the dropdown list, choose **Attach policies**.

1. In the search box, enter `AWSLambdaVPCAccessExecutionRole`.

1. To the left of the policy name, choose the checkbox.

1. Choose **Add permissions**.

**To attach your function to your Amazon VPC**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the name of your function.

1. Choose the **Configuration** tab, then choose **VPC**.

1. Choose **Edit**.

1. Under **VPC**, select your VPC.

1. Under **Subnets**, choose your subnets.

1. Under **Security groups**, choose the default security group for your VPC.

1. Choose **Save**.

## Create a VPC peering connection request
<a name="w2aac35c61c17"></a>

Create a VPC peering connection request from your function's VPC (the requester VPC) to your resource's VPC (the accepter VPC).

**To request a VPC peering connection from your function's VPC**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Peering connections**.

1. Choose **Create peering connection**.

1. For **VPC ID (Requester)**, select your function's VPC.

1. For **Account ID**, enter the ID of your resource's account.

1. For **VPC ID (Accepter)**, enter the ID of your resource's VPC.

1. Choose **Create peering connection**.

## Prepare your resource's account
<a name="w2aac35c61c19"></a>

To create your peering connection and prepare your resource's VPC to use the connection, log in to your resource's account with a role that holds the permissions listed in the prerequisites. The steps to log in may be different based on how the account is secured. For more information about how to sign in to an AWS account, see the [AWS Sign-in User Guide](https://docs.aws.amazon.com//signin/latest/userguide/what-is-sign-in.html). In your resource's account, perform the following procedures.

**To accept the VPC peering connection request**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Peering connections**.

1. Select the pending VPC peering connection (the status is `pending-acceptance`).

1. Choose **Actions**.

1. From the dropdown list, choose **Accept request**.

1. When prompted for confirmation, choose **Accept request**.

1. Choose **Modify my route tables now** to add a route to the main route table for your VPC so that you can send and receive traffic across the peering connection.

Inspect the route tables for the resource's VPC. The route generated by Amazon VPC might not establish connectivity, based on how your resource's VPC is set up. Check for conflicts between the new route and existing configuration for the VPC. For more information about troubleshooting, see [Troubleshoot a VPC peering connection](https://docs.aws.amazon.com/vpc/latest/peering/troubleshoot-vpc-peering-connections.html) in the *Amazon Virtual Private Cloud VPC Peering Guide*.

**To update the security group for your resource**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Security groups**.

1. Select the security group for your resource.

1. Choose **Actions**.

1. From the dropdown list, choose **Edit inbound rules**.

1. Choose **Add rule**.

1. For **Source**, enter your function's account ID and security group ID, separated by a forward slash (for example, `111122223333/sg-1a2b3c4d`).

1. Choose **Edit outbound rules**.

1. Check whether outbound traffic is restricted. Default VPC settings allow all outbound traffic. If outbound traffic is restricted, continue to the next step.

1. Choose **Add rule**.

1. For **Destination**, enter your function's account ID and security group ID, separated by a forward slash (for example, `111122223333/sg-1a2b3c4d`).

1. Choose **Save rules**.

**To enable DNS resolution for your peering connection**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Peering connections**.

1. Select your peering connection.

1. Choose **Actions**.

1. Choose **Edit DNS settings**.

1. Under **Accepter DNS resolution**, select **Allow requester VPC to resolve DNS of accepter VPC hosts to private IP**.

1. Choose **Save changes**.

## Update VPC configuration in your function's account
<a name="w2aac35c61c21"></a>

Log in to your function's account, then update the VPC configuration.

**To add a route for your VPC peering connection**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Route tables**.

1. Select the check box next to the name of the route table for the subnet you associated with your function.

1. Choose **Actions**.

1. Choose **Edit routes**.

1. Choose **Add route**.

1. For **Destination**, enter the CIDR block for your resource's VPC.

1. For **Target**, select your VPC peering connection.

1. Choose **Save changes**.

For more information about considerations you may encounter while updating your route tables, consult [Update your route tables for a VPC peering connection](https://docs.aws.amazon.com//vpc/latest/peering/vpc-peering-routing.html).

**To update the security group for your Lambda function**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Security groups**.

1. Select the security group for your Lambda function.

1. Choose **Actions**.

1. Choose **Edit inbound rules**.

1. Choose **Add rule**.

1. For **Source**, enter your resource's account ID and security group ID, separated by a forward slash (for example, `111122223333/sg-1a2b3c4d`).

1. Choose **Save rules**.

**To enable DNS resolution for your peering connection**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Peering connections**.

1. Select your peering connection.

1. Choose **Actions**.

1. Choose **Edit DNS settings**.

1. Under **Requester DNS resolution**, select **Allow accepter VPC to resolve DNS of requester VPC hosts to private IP**.

1. Choose **Save changes**.

## Test your function
<a name="w2aac35c61c23"></a>

**To create a test event and inspect your function's output**

1. In the **Code source** pane, choose **Test**.

1. Select **Create new event**.

1. In the **Event JSON** panel, replace the default values with an input appropriate for your Lambda function.

1. Choose **Invoke**.

1. In the **Execution results** tab, confirm that **Response** contains your expected output.

Additionally, you can check your function's logs to verify the logs are as you expect.

**To view your function's invocation records in CloudWatch Logs**

1. Choose the **Monitor** tab.

1. Choose **View CloudWatch logs**.

1. In the **Log streams** tab, choose the log stream for your function's invocation.

1. Confirm your logs are as you expect.

# Enable internet access for VPC-connected Lambda functions
<a name="configuration-vpc-internet"></a>

By default, Lambda functions run in a Lambda-managed VPC that has internet access. To access resources in a VPC in your account, you can add a VPC configuration to a function. This restricts the function to resources within that VPC, unless the VPC has internet access. This page explains how to provide internet access to VPC-connected Lambda functions.

## I don't have a VPC yet
<a name="new-vpc"></a>

### Create the VPC
<a name="create-vpc-internet"></a>

The **Create VPC workflow** creates all VPC resources required for a Lambda function to access the public internet from a private subnet, including subnets, NAT gateway, internet gateway, and route table entries.

**To create the VPC**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. On the dashboard, choose **Create VPC**.

1. For **Resources to create**, choose **VPC and more**.

1. **Configure the VPC**

   1. For **Name tag auto-generation**, enter a name for the VPC.

   1. For **IPv4 CIDR block**, you can keep the default suggestion, or alternatively you can enter the CIDR block required by your application or network.

   1. If your application communicates by using IPv6 addresses, choose **IPv6 CIDR block**, **Amazon-provided IPv6 CIDR block**.

1. **Configure the subnets**

   1. For **Number of Availability Zones**, choose **2**. We recommend at least two AZs for high availability.

   1. For **Number of public subnets**, choose **2**.

   1. For **Number of private subnets**, choose **2**.

   1. You can keep the default CIDR block for the public subnet, or alternatively you can expand **Customize subnet CIDR blocks** and enter a CIDR block. For more information, see [Subnet CIDR blocks](https://docs.aws.amazon.com/vpc/latest/userguide/subnet-sizing.html).

1. For **NAT gateways**, choose **1 per AZ** to improve resiliency.

1. For **Egress only internet gateway**, choose **Yes** if you opted to include an IPv6 CIDR block.

1. For **VPC endpoints**, keep the default (**S3 Gateway**). There is no cost for this option. For more information, see [Types of VPC endpoints for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3).

1. For **DNS options**, keep the default settings.

1. Choose **Create VPC**.

### Configure the Lambda function
<a name="vpc-function-internet-create"></a>

**To configure a VPC when you create a function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose **Create function**.

1. Under **Basic information**, for **Function name**, enter a name for your function.

1. Expand **Advanced settings**.

1. Select **Enable VPC**, and then choose a VPC.

1. (Optional) To allow [outbound IPv6 traffic](configuration-vpc.md#configuration-vpc-ipv6), select **Allow IPv6 traffic for dual-stack subnets**.

1. For **Subnets**, select all private subnets. The private subnets can access the internet through the NAT gateway. Connecting a function to a public subnet doesn't give it internet access.
**Note**  
If you selected **Allow IPv6 traffic for dual-stack subnets**, all selected subnets must have an IPv4 CIDR block and an IPv6 CIDR block.

1. For **Security groups**, select a security group that allows outbound traffic.

1. Choose **Create function**.

Lambda automatically creates an execution role with the [AWSLambdaVPCAccessExecutionRole](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaVPCAccessExecutionRole.html) AWS managed policy. The permissions in this policy are required only to create elastic network interfaces for the VPC configuration, not to invoke your function. To apply least-privilege permissions, you can remove the **AWSLambdaVPCAccessExecutionRole** policy from your execution role after you create the function and VPC configuration. For more information, see [Required IAM permissions](configuration-vpc.md#configuration-vpc-permissions).

**To configure a VPC for an existing function**

To add a VPC configuration to an existing function, the function's execution role must have [ permission to create and manage elastic network interfaces](configuration-vpc.md#configuration-vpc-permissions). The [AWSLambdaVPCAccessExecutionRole](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaVPCAccessExecutionRole.html) AWS managed policy includes the required permissions. To apply least-privilege permissions, you can remove the **AWSLambdaVPCAccessExecutionRole** policy from your execution role after you create the VPC configuration.

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose the **Configuration** tab, and then choose **VPC**.

1. Under **VPC**, choose **Edit**.

1. Select the VPC.

1. (Optional) To allow [outbound IPv6 traffic](configuration-vpc.md#configuration-vpc-ipv6), select **Allow IPv6 traffic for dual-stack subnets**.

1. For **Subnets**, select all private subnets. The private subnets can access the internet through the NAT gateway. Connecting a function to a public subnet doesn't give it internet access.
**Note**  
If you selected **Allow IPv6 traffic for dual-stack subnets**, all selected subnets must have an IPv4 CIDR block and an IPv6 CIDR block.

1. For **Security groups**, select a security group that allows outbound traffic.

1. Choose **Save**.
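The console steps above can also be performed through the Lambda API. The sketch below only builds the request parameters for an `UpdateFunctionConfiguration` call (the function name, subnet ID, and security group ID are placeholders); with the AWS SDK for Python, you would pass this dictionary to `lambda_client.update_function_configuration(...)`:

```python
def vpc_update_params(function_name, subnet_ids, security_group_ids, allow_ipv6=False):
    """Build request parameters for attaching a VPC configuration to an
    existing function. All names and IDs here are placeholders."""
    return {
        "FunctionName": function_name,
        "VpcConfig": {
            "SubnetIds": list(subnet_ids),                  # private subnets only
            "SecurityGroupIds": list(security_group_ids),   # must allow outbound traffic
            "Ipv6AllowedForDualStack": allow_ipv6,          # dual-stack opt-in
        },
    }

params = vpc_update_params("my-function", ["subnet-1a2b3c4d"], ["sg-1a2b3c4d"])
```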

### Test the function
<a name="vpc-function-internet-test"></a>

Use the following sample code to confirm that your VPC-connected function can reach the public internet. If successful, the code returns a `200` status code. If unsuccessful, the function times out.

------
#### [ Node.js ]

1. In the **Code source** pane on the Lambda console, paste the following code into the **index.mjs** file. The function makes an HTTP GET request to a public endpoint and returns the HTTP response code to test if the function has access to the public internet.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/code-source-nodejs.png)  
**Example — HTTP request with async/await**  

   ```
   // Public endpoint used to verify outbound internet access
   const url = "https://aws.amazon.com/";
   
   export const handler = async (event) => {
       try {
           // If the function has a route to the internet, this request succeeds
           const res = await fetch(url);
           console.info("status", res.status);
           return res.status;
       }
       catch (e) {
           console.error(e);
           return 500;
       }
   };
   ```

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

1. Choose the **Test** tab.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-tab.png)

1. Choose **Test**.

1. The function returns a `200` status code. This means that the function has outbound internet access.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-successful-200.png)

   If the function can't reach the public internet, you get an error message like this:

   ```
   {
     "errorMessage": "2024-04-11T17:22:20.857Z abe12jlc-640a-8157-0249-9be825c2y110 Task timed out after 3.01 seconds"
   }
   ```

------
#### [ Python ]

1. In the **Code source** pane on the Lambda console, paste the following code into the **lambda\_function.py** file. The function makes an HTTP GET request to a public endpoint and returns the HTTP response code to test if the function has access to the public internet.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/code-source-python.png)

   ```
   import urllib.request
   
   def lambda_handler(event, context):
       try:
           # If the function has a route to the internet, this request succeeds
           response = urllib.request.urlopen('https://aws.amazon.com')
           status_code = response.getcode()
           print('Response Code:', status_code)
           return status_code
       except Exception as e:
           print('Error:', e)
           raise
   ```

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

1. Choose the **Test** tab.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-tab.png)

1. Choose **Test**.

1. The function returns a `200` status code. This means that the function has outbound internet access.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-successful-200.png)

   If the function can't reach the public internet, you get an error message like this:

   ```
   {
     "errorMessage": "2024-04-11T17:22:20.857Z abe12jlc-640a-8157-0249-9be825c2y110 Task timed out after 3.01 seconds"
   }
   ```

------

## I already have a VPC
<a name="existing-vpc"></a>

If you already have a VPC but you need to configure public internet access for a Lambda function, follow these steps. This procedure assumes that your VPC has at least two subnets. If you don't have two subnets, see [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the *Amazon VPC User Guide*.

### Verify the route table configuration
<a name="vpc-internet-routes"></a>

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose the **VPC ID**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/vpc-id.png)

1. Scroll down to the **Resource map** section. Note the route table mappings. Open each route table that is mapped to a subnet.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/route-table-associations.png)

1. Scroll down to the **Routes** tab. Review the routes to determine whether your VPC has both of the following routes. Each requirement must be satisfied by a separate route table.
   + Internet-bound traffic (`0.0.0.0/0` for IPv4, `::/0` for IPv6) is routed to an internet gateway (`igw-xxxxxxxxxx`). This means that the subnet associated with the route table is a public subnet.
**Note**  
If your subnet doesn't have an IPv6 CIDR block, you will only see the IPv4 route (`0.0.0.0/0`).  
**Example public subnet route table**    
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/routes-public.png)
   + Internet-bound traffic for IPv4 (`0.0.0.0/0`) is routed to a NAT gateway (`nat-xxxxxxxxxx`) that is associated with a public subnet. This means that the subnet is a private subnet that can access the internet through the NAT gateway.
**Note**  
If your subnet has an IPv6 CIDR block, the route table must also route internet-bound IPv6 traffic (`::/0`) to an egress-only internet gateway (`eigw-xxxxxxxxxx`). If your subnet doesn't have an IPv6 CIDR block, you will only see the IPv4 route (`0.0.0.0/0`).  
**Example private subnet route table**    
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/routes-private.png)

1. Repeat the previous step until you have reviewed each route table associated with a subnet in your VPC and confirmed that you have a route table with an internet gateway and a route table with a NAT gateway.

   If you don't have two route tables, one with a route to an internet gateway and one with a route to a NAT gateway, follow these steps to create the missing resources and route table entries.
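The checks in this section can be automated. The sketch below classifies a route table as public or private from data shaped like the EC2 `DescribeRouteTables` response (the field names mirror that API; the sample routes are hypothetical):

```python
def classify_route_table(routes):
    """Return 'public' if internet-bound IPv4 traffic goes to an internet
    gateway, 'private' if it goes to a NAT gateway, else 'isolated'."""
    for route in routes:
        if route.get("DestinationCidrBlock") != "0.0.0.0/0":
            continue  # skip local and non-default routes
        target = route.get("GatewayId", "") or route.get("NatGatewayId", "")
        if target.startswith("igw-"):
            return "public"
        if target.startswith("nat-"):
            return "private"
    return "isolated"

public_routes = [{"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0123456789"}]
private_routes = [{"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0123456789"}]
```

A subnet associated with an `isolated` table has no internet path at all and needs one of the gateways described below.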

### Create a route table
<a name="create-route-table"></a>

Follow these steps to create a route table and associate it with a subnet.

**To create a custom route table using the Amazon VPC console**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Route tables**.

1. Choose **Create route table**.

1. (Optional) For **Name**, enter a name for your route table. 

1. For **VPC**, choose your VPC. 

1. (Optional) To add a tag, choose **Add new tag** and enter the tag key and tag value.

1. Choose **Create route table**.

1. On the **Subnet associations** tab, choose **Edit subnet associations**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/route-table-subnet.png)

1. Select the check box for the subnet to associate with the route table.

1. Choose **Save associations**.

### Create an internet gateway
<a name="create-igw"></a>

Follow these steps to create an internet gateway, attach it to your VPC, and add it to your public subnet's route table.

**To create an internet gateway**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Internet gateways**.

1. Choose **Create internet gateway**.

1. (Optional) Enter a name for your internet gateway.

1. (Optional) To add a tag, choose **Add new tag** and enter the tag key and value.

1. Choose **Create internet gateway**.

1. Choose **Attach to a VPC** from the banner at the top of the screen, select an available VPC, and then choose **Attach internet gateway**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/igw-attach-vpc.png)

1. Choose the **VPC ID**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/igw-subnet-1.png)

1. Choose the **VPC ID** again to open the VPC details page.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/igw-your-vpcs.png)

1. Scroll down to the **Resource map** section and then choose a subnet. The subnet details are displayed in a new tab.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/vpc-subnets.png)

1. Choose the link under **Route table**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/subnet-route-table.png)

1. Choose the **Route table ID** to open the route table details page.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/route-table-id.png)

1. Under **Routes**, choose **Edit routes**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/edit-routes.png)

1. Choose **Add route**, and then enter `0.0.0.0/0` in the **Destination** box.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/create-route-1.png)

1. For **Target**, select **Internet gateway**, and then choose the internet gateway that you created earlier. If your subnet has an IPv6 CIDR block, you must also add a route for `::/0` to the same internet gateway.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/create-route-2.png)

1. Choose **Save changes**.

### Create a NAT gateway
<a name="create-nat-gateway"></a>

Follow these steps to create a NAT gateway, associate it with a public subnet, and then add it to your private subnet's route table.

**To create a NAT gateway and associate it with a public subnet**

1. In the navigation pane, choose **NAT gateways**.

1. Choose **Create NAT gateway**.

1. (Optional) Enter a name for your NAT gateway.

1. For **Subnet**, select a public subnet in your VPC. (A public subnet is a subnet that has a direct route to an internet gateway in its route table.)
**Note**  
NAT gateways are associated with a public subnet, but the route table entry is in the private subnet.

1. For **Elastic IP allocation ID**, select an Elastic IP address, or choose **Allocate Elastic IP**.

1. Choose **Create NAT gateway**.

**To add a route to the NAT gateway in the private subnet's route table**

1. In the navigation pane, choose **Subnets**.

1. Select a private subnet in your VPC. (A private subnet is a subnet that doesn't have a route to an internet gateway in its route table.)

1. Choose the link under **Route table**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/subnet-route-table.png)

1. Choose the **Route table ID** to open the route table details page.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/route-table-id.png)

1. Scroll down and choose the **Routes** tab, and then choose **Edit routes**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/route-table-edit-routes.png)

1. Choose **Add route**, and then enter `0.0.0.0/0` in the **Destination** box.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/create-route-1.png)

1. For **Target**, select **NAT gateway**, and then choose the NAT gateway that you created earlier.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/create-route-nat.png)

1. Choose **Save changes**.

### Create an egress-only internet gateway (IPv6 only)
<a name="create-egress-gateway"></a>

Follow these steps to create an egress-only internet gateway and add it to your private subnet's route table.

**To create an egress-only internet gateway**

1. In the navigation pane, choose **Egress-only internet gateways**.

1. Choose **Create egress only internet gateway**.

1. (Optional) Enter a name.

1. Select the VPC in which to create the egress-only internet gateway. 

1. Choose **Create egress only internet gateway**.

1. Choose the link under **Attached VPC ID**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/eigw-details.png)

1. Choose the link under **VPC ID** to open the VPC details page.

1. Scroll down to the **Resource map** section and then choose a private subnet. (A private subnet is a subnet that doesn't have a route to an internet gateway in its route table.) The subnet details are displayed in a new tab.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/vpc-subnet-private.png)

1. Choose the link under **Route table**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/private-subnet-route-table.png)

1. Choose the **Route table ID** to open the route table details page.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/route-table-id.png)

1. Under **Routes**, choose **Edit routes**.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/edit-routes.png)

1. Choose **Add route**, and then enter `::/0` in the **Destination** box.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/create-route-1.png)

1. For **Target**, select **Egress Only Internet Gateway**, and then choose the gateway that you created earlier.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/eigw-route.png)

1. Choose **Save changes**.

### Configure the Lambda function
<a name="vpc-function-internet-create-existing"></a>

**To configure a VPC when you create a function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose **Create function**.

1. Under **Basic information**, for **Function name**, enter a name for your function.

1. Expand **Advanced settings**.

1. Select **Enable VPC**, and then choose a VPC.

1. (Optional) To allow [outbound IPv6 traffic](configuration-vpc.md#configuration-vpc-ipv6), select **Allow IPv6 traffic for dual-stack subnets**.

1. For **Subnets**, select all private subnets. The private subnets can access the internet through the NAT gateway. Connecting a function to a public subnet doesn't give it internet access.
**Note**  
If you selected **Allow IPv6 traffic for dual-stack subnets**, all selected subnets must have an IPv4 CIDR block and an IPv6 CIDR block.

1. For **Security groups**, select a security group that allows outbound traffic.

1. Choose **Create function**.

Lambda automatically creates an execution role with the [AWSLambdaVPCAccessExecutionRole](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaVPCAccessExecutionRole.html) AWS managed policy. The permissions in this policy are required only to create elastic network interfaces for the VPC configuration, not to invoke your function. To apply least-privilege permissions, you can remove the **AWSLambdaVPCAccessExecutionRole** policy from your execution role after you create the function and VPC configuration. For more information, see [Required IAM permissions](configuration-vpc.md#configuration-vpc-permissions).

**To configure a VPC for an existing function**

To add a VPC configuration to an existing function, the function's execution role must have [ permission to create and manage elastic network interfaces](configuration-vpc.md#configuration-vpc-permissions). The [AWSLambdaVPCAccessExecutionRole](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSLambdaVPCAccessExecutionRole.html) AWS managed policy includes the required permissions. To apply least-privilege permissions, you can remove the **AWSLambdaVPCAccessExecutionRole** policy from your execution role after you create the VPC configuration.

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose the **Configuration** tab, and then choose **VPC**.

1. Under **VPC**, choose **Edit**.

1. Select the VPC.

1. (Optional) To allow [outbound IPv6 traffic](configuration-vpc.md#configuration-vpc-ipv6), select **Allow IPv6 traffic for dual-stack subnets**.

1. For **Subnets**, select all private subnets. The private subnets can access the internet through the NAT gateway. Connecting a function to a public subnet doesn't give it internet access.
**Note**  
If you selected **Allow IPv6 traffic for dual-stack subnets**, all selected subnets must have an IPv4 CIDR block and an IPv6 CIDR block.

1. For **Security groups**, select a security group that allows outbound traffic.

1. Choose **Save**.

### Test the function
<a name="vpc-function-internet-test-existing"></a>

Use the following sample code to confirm that your VPC-connected function can reach the public internet. If successful, the code returns a `200` status code. If unsuccessful, the function times out.

------
#### [ Node.js ]

1. In the **Code source** pane on the Lambda console, paste the following code into the **index.mjs** file. The function makes an HTTP GET request to a public endpoint and returns the HTTP response code to test if the function has access to the public internet.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/code-source-nodejs.png)  
**Example — HTTP request with async/await**  

   ```
   // Public endpoint used to verify outbound internet access
   const url = "https://aws.amazon.com/";
   
   export const handler = async (event) => {
       try {
           // If the function has a route to the internet, this request succeeds
           const res = await fetch(url);
           console.info("status", res.status);
           return res.status;
       }
       catch (e) {
           console.error(e);
           return 500;
       }
   };
   ```

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

1. Choose the **Test** tab.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-tab.png)

1. Choose **Test**.

1. The function returns a `200` status code. This means that the function has outbound internet access.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-successful-200.png)

   If the function can't reach the public internet, you get an error message like this:

   ```
   {
     "errorMessage": "2024-04-11T17:22:20.857Z abe12jlc-640a-8157-0249-9be825c2y110 Task timed out after 3.01 seconds"
   }
   ```

------
#### [ Python ]

1. In the **Code source** pane on the Lambda console, paste the following code into the **lambda\_function.py** file. The function makes an HTTP GET request to a public endpoint and returns the HTTP response code to test if the function has access to the public internet.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/code-source-python.png)

   ```
   import urllib.request
   
   def lambda_handler(event, context):
       try:
           # If the function has a route to the internet, this request succeeds
           response = urllib.request.urlopen('https://aws.amazon.com')
           status_code = response.getcode()
           print('Response Code:', status_code)
           return status_code
       except Exception as e:
           print('Error:', e)
           raise
   ```

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

1. Choose the **Test** tab.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-tab.png)

1. Choose **Test**.

1. The function returns a `200` status code. This means that the function has outbound internet access.  
![\[\]](http://docs.aws.amazon.com/lambda/latest/dg/images/test-successful-200.png)

   If the function can't reach the public internet, you get an error message like this:

   ```
   {
     "errorMessage": "2024-04-11T17:22:20.857Z abe12jlc-640a-8157-0249-9be825c2y110 Task timed out after 3.01 seconds"
   }
   ```

------

# Connecting inbound interface VPC endpoints for Lambda
<a name="configuration-vpc-endpoints"></a>

If you use Amazon Virtual Private Cloud (Amazon VPC) to host your AWS resources, you can establish a connection between your VPC and Lambda. You can use this connection to invoke your Lambda function without crossing the public internet.

To establish a private connection between your VPC and Lambda, create an [interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html). Interface endpoints are powered by [AWS PrivateLink](https://aws.amazon.com/privatelink), which enables you to privately access Lambda APIs without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC don't need public IP addresses to communicate with Lambda APIs. Traffic between your VPC and Lambda does not leave the AWS network.

Each interface endpoint is represented by one or more [elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) in your subnets. A network interface provides a private IP address that serves as an entry point for traffic to Lambda.

**Topics**
+ [Considerations for Lambda interface endpoints](#vpc-endpoint-considerations)
+ [Creating an interface endpoint for Lambda](#vpc-endpoint-create)
+ [Creating an interface endpoint policy for Lambda](#vpc-endpoint-policy)

## Considerations for Lambda interface endpoints
<a name="vpc-endpoint-considerations"></a>

Before you set up an interface endpoint for Lambda, be sure to review [Interface endpoint properties and limitations](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#vpce-interface-limitations) in the *Amazon VPC User Guide*.

You can call any of the Lambda API operations from your VPC. For example, you can invoke the Lambda function by calling the `Invoke` API from within your VPC. For the full list of Lambda APIs, see [Actions](https://docs.aws.amazon.com/lambda/latest/dg/API_Operations.html) in the Lambda API reference.

`use1-az3` is a limited-capacity Availability Zone for Lambda functions that connect to a VPC. Avoid using subnets in this Availability Zone with your Lambda functions, because doing so can reduce zonal redundancy in the event of an outage.
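One way to honor this guidance in deployment tooling is to filter candidate subnets by AZ ID before configuring a function. The sketch below works on data shaped like the EC2 `DescribeSubnets` response (the sample subnet records are hypothetical):

```python
LIMITED_CAPACITY_AZ_IDS = {"use1-az3"}  # from the guidance above

def usable_subnets(subnets):
    """Drop subnets located in limited-capacity Availability Zones."""
    return [s["SubnetId"] for s in subnets
            if s["AvailabilityZoneId"] not in LIMITED_CAPACITY_AZ_IDS]

subnets = [
    {"SubnetId": "subnet-aaaa", "AvailabilityZoneId": "use1-az1"},
    {"SubnetId": "subnet-bbbb", "AvailabilityZoneId": "use1-az3"},
]
```

Filtering on `AvailabilityZoneId` rather than the AZ name matters because AZ names (such as `us-east-1a`) map to different physical zones in different accounts.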

### Keep-alive for persistent connections
<a name="vpc-endpoint-considerations-keepalive"></a>

Lambda purges idle connections over time, so you must use a keep-alive directive to maintain persistent connections. Attempting to reuse an idle connection when invoking a function results in a connection error. To maintain your persistent connection, use the keep-alive directive associated with your runtime. For an example, see [Reusing Connections with Keep-Alive in Node.js](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-reusing-connections.html) in the *AWS SDK for JavaScript Developer Guide*.

### Billing considerations
<a name="vpc-endpoint-considerations-billing"></a>

There is no additional cost to access a Lambda function through an interface endpoint. For more Lambda pricing information, see [AWS Lambda Pricing](https://aws.amazon.com/lambda/pricing/).

Standard pricing for AWS PrivateLink applies to interface endpoints for Lambda. Your AWS account is billed for every hour an interface endpoint is provisioned in each Availability Zone and for data processed through the interface endpoint. For more interface endpoint pricing information, see [AWS PrivateLink pricing](https://aws.amazon.com/privatelink/pricing/).

### VPC peering considerations
<a name="vpc-endpoint-considerations-peering"></a>

You can connect other VPCs to the VPC with interface endpoints using [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html). VPC peering is a networking connection between two VPCs. You can establish a VPC peering connection between your own two VPCs, or with a VPC in another AWS account. The VPCs can also be in two different AWS Regions.

Traffic between peered VPCs stays on the AWS network and doesn't traverse the public internet. After the VPCs are peered, resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Relational Database Service (Amazon RDS) instances, or VPC-enabled Lambda functions in both VPCs can access the Lambda API through interface endpoints created in one of the VPCs.

## Creating an interface endpoint for Lambda
<a name="vpc-endpoint-create"></a>

You can create an interface endpoint for Lambda using either the Amazon VPC console or the AWS Command Line Interface (AWS CLI). For more information, see [Creating an interface endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#create-interface-endpoint) in the *Amazon VPC User Guide*.

**To create an interface endpoint for Lambda (console)**

1. Open the [Endpoints page](https://console.aws.amazon.com/vpc/home?#Endpoints) of the Amazon VPC console.

1. Choose **Create Endpoint**.

1. For **Service category**, verify that **AWS services** is selected.

1. For **Service Name**, choose **com.amazonaws.*region*.lambda**. Verify that the **Type** is **Interface**.

1. Choose a VPC and subnets.

1. To enable private DNS for the interface endpoint, select the **Enable DNS Name** check box. We recommend that you enable private DNS names for your VPC endpoints for AWS services. This ensures that requests that use the public service endpoints, such as requests made through an AWS SDK, resolve to your VPC endpoint.

1. For **Security group**, choose one or more security groups.

1. Choose **Create endpoint**.

To use the private DNS option, you must set the `enableDnsHostnames` and `enableDnsSupport` attributes of your VPC. For more information, see [Viewing and updating DNS support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating) in the *Amazon VPC User Guide*. If you enable private DNS for the interface endpoint, you can make API requests to Lambda using its default DNS name for the Region, for example, `lambda.us-east-1.amazonaws.com`. For a list of service endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) in the *AWS General Reference*.

For more information, see [Accessing a service through an interface endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html#access-service-though-endpoint) in the *Amazon VPC User Guide*.

For information about creating and configuring an endpoint using CloudFormation, see the [AWS::EC2::VPCEndpoint](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-vpcendpoint.html) resource in the *AWS CloudFormation User Guide*.

**To create an interface endpoint for Lambda (AWS CLI)**  
Use the [create-vpc-endpoint](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/create-vpc-endpoint.html) command and specify the VPC ID, VPC endpoint type (interface), service name, subnets that will use the endpoint, and security groups to associate with the endpoint's network interfaces. For example:

```
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-ec43eb89 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.lambda \
  --subnet-ids subnet-abababab \
  --security-group-ids sg-1a2b3c4d
```

## Creating an interface endpoint policy for Lambda
<a name="vpc-endpoint-policy"></a>

To control who can use your interface endpoint and which Lambda functions the user can access, you can attach an endpoint policy to your endpoint. The policy specifies the following information:
+ The principal that can perform actions.
+ The actions that the principal can perform.
+ The resources on which the principal can perform actions.

For more information, see [Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html) in the *Amazon VPC User Guide*.

**Example: Interface endpoint policy for Lambda actions**  
The following is an example of an endpoint policy for Lambda. When attached to an endpoint, this policy allows user `MyUser` to invoke the function `my-function`.

**Note**  
You need to include both the qualified and the unqualified function ARN in the resource.

```
{
   "Statement":[
      {
         "Principal":
         { 
             "AWS": "arn:aws:iam::111122223333:user/MyUser" 
         },
         "Effect":"Allow",
         "Action":[
            "lambda:InvokeFunction"
         ],
         "Resource": [
               "arn:aws:lambda:us-east-2:123456789012:function:my-function",
               "arn:aws:lambda:us-east-2:123456789012:function:my-function:*"
            ]
      }
   ]
}
```
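As the note above says, the `Resource` element needs both the unqualified function ARN and a match for qualified ARNs. A small helper (hypothetical, for illustration) can generate both forms from one base ARN:

```python
def endpoint_policy_resources(function_arn):
    """Return the unqualified function ARN plus a wildcard that matches
    every qualified ARN (versions and aliases) of the same function."""
    return [function_arn, function_arn + ":*"]

resources = endpoint_policy_resources(
    "arn:aws:lambda:us-east-2:123456789012:function:my-function")
```

Without the `:*` entry, invocations that target a version or alias (for example, `my-function:1`) would be denied by the endpoint policy.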

# Configuring file system access for Lambda functions
<a name="configuration-filesystem"></a>

You can configure a Lambda function to mount a file system to a local directory. Lambda supports the following file system types:
+ **[Amazon Elastic File System (Amazon EFS)](configuration-filesystem-efs.md)** – Serverless file system that scales automatically with your workloads.
+ **[Amazon S3 Files](configuration-filesystem-s3files.md)** – Serverless file system for mounting your Amazon S3 bucket. Amazon S3 Files provides access to your Amazon S3 objects as files using standard file system operations such as read and write on the local mount path.

**Note**  
A Lambda function can use either Amazon EFS or Amazon S3 Files, but not both. If your function is already configured with one file system type, you must remove it before configuring the other.

# Configuring Amazon EFS file system access
<a name="configuration-filesystem-efs"></a>

You can configure a function to mount an Amazon Elastic File System (Amazon EFS) file system to a local directory. Amazon EFS is a serverless file system that scales automatically with your workloads. With Amazon EFS, your function code can access and modify shared resources safely and at high concurrency.

**Topics**
+ [Execution role and user permissions](#configuration-filesystem-efs-permissions)
+ [Configuring a file system and access point](#configuration-filesystem-efs-setup)
+ [Connecting to a file system (console)](#configuration-filesystem-efs-config)
+ [Supported Regions](#configuration-filesystem-efs-regions)

## Execution role and user permissions
<a name="configuration-filesystem-efs-permissions"></a>

If the file system doesn't have a user-configured AWS Identity and Access Management (IAM) policy, EFS uses a default policy that grants full access to any client that can connect to the file system using a file system mount target. If the file system has a user-configured IAM policy, your function's execution role must have the correct `elasticfilesystem` permissions.

**Execution role permissions**
+ **elasticfilesystem:ClientMount**
+ **elasticfilesystem:ClientWrite** – Required for read-write access. Not needed for read-only connections.

These permissions are included in the **AmazonElasticFileSystemClientReadWriteAccess** managed policy. Additionally, your execution role must have the [permissions required to connect to the file system's VPC](configuration-vpc.md#configuration-vpc-permissions).
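If you want tighter scoping than the managed policy provides, a minimal inline policy for the execution role might look like the following sketch. The Region, account ID, and file system ID in the `Resource` ARN are placeholders to replace with your own.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EfsClientAccess",
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-2:123456789012:file-system/fs-01234567"
        }
    ]
}
```

For a read-only connection, omit `elasticfilesystem:ClientWrite` from the `Action` list.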

When you configure a file system, Lambda uses your permissions to verify mount targets. To configure a function to connect to a file system, your user needs the following permissions:

**User permissions**
+ **elasticfilesystem:DescribeMountTargets**

## Configuring a file system and access point
<a name="configuration-filesystem-efs-setup"></a>

Create a file system in Amazon EFS with a mount target in every Availability Zone that your function connects to. For performance and resilience, use at least two Availability Zones. For example, in a simple configuration you could have a VPC with two private subnets in separate Availability Zones. The function connects to both subnets and a mount target is available in each. Ensure that NFS traffic (port 2049) is allowed by the security groups used by the function and mount targets.

**Note**  
When you create a file system, you choose a performance mode that can't be changed later. **General purpose** mode has lower latency, and **Max I/O** mode supports a higher maximum throughput and IOPS. For help choosing, see [Amazon EFS performance](https://docs.aws.amazon.com/efs/latest/ug/performance.html) in the *Amazon Elastic File System User Guide*.

An access point connects each instance of the function to the right mount target for the Availability Zone it connects to. For best performance, create an access point with a non-root path, and limit the number of files that you create in each directory. The following example creates a directory named `my-function` on the file system and sets the owner ID to 1001 with standard directory permissions (755).

**Example access point configuration**  
+ **Name** – `files`
+ **User ID** – `1001`
+ **Group ID** – `1001`
+ **Path** – `/my-function`
+ **Permissions** – `755`
+ **Owner user ID** – `1001`
+ **Group user ID** – `1001`

When a function uses the access point, it is given user ID 1001 and has full access to the directory.

For more information, see the following topics in the *Amazon Elastic File System User Guide*:
+ [Creating resources for Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/creating-using.html)
+ [Working with users, groups, and permissions](https://docs.aws.amazon.com/efs/latest/ug/accessing-fs-nfs-permissions.html)
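The example access point above corresponds to the parameters of the Amazon EFS `CreateAccessPoint` API. The following sketch builds the request shape as a plain dictionary rather than a live SDK call, since creating the access point requires AWS credentials; the file system ID is a placeholder:

```python
def access_point_request(file_system_id):
    """Build a CreateAccessPoint-style request mirroring the example
    configuration: UID/GID 1001, root path /my-function, mode 755."""
    return {
        "FileSystemId": file_system_id,
        "PosixUser": {"Uid": 1001, "Gid": 1001},
        "RootDirectory": {
            "Path": "/my-function",
            "CreationInfo": {
                "OwnerUid": 1001,
                "OwnerGid": 1001,
                "Permissions": "755",
            },
        },
    }

request = access_point_request("fs-01234567")  # placeholder file system ID
```

With an SDK such as boto3, you could pass these keys to the EFS client's `create_access_point` call.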

## Connecting to a file system (console)
<a name="configuration-filesystem-efs-config"></a>

A function connects to a file system over the local network in a VPC. The subnets that your function connects to can be the same subnets that contain mount points for your file system, or subnets in the same Availability Zone that can route NFS traffic (port 2049) to the file system.

**Note**  
If your function is not already connected to a VPC, see [Giving Lambda functions access to resources in an Amazon VPC](configuration-vpc.md).

**To configure EFS file system access**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose **Configuration** and then choose **File systems**.

1. Under **File system**, choose **Add file system**.

1. Select **EFS**.

1. Configure the following properties:
   + **EFS file system** – The access point for a file system in the same VPC.
   + **Local mount path** – The location where the file system is mounted on the Lambda function, starting with `/mnt/`.

**Pricing**  
Amazon EFS charges for storage and throughput, with rates that vary by storage class. For details, see [Amazon EFS pricing](https://aws.amazon.com/efs/pricing).  
Lambda charges for data transfer between VPCs. This only applies if your function's VPC is peered to another VPC with a file system. The rates are the same as for Amazon EC2 data transfer between VPCs in the same Region. For details, see [Lambda pricing](https://aws.amazon.com/lambda/pricing).

## Supported Regions
<a name="configuration-filesystem-efs-regions"></a>

Amazon EFS for Lambda is available in all [commercial Regions](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#region) except Asia Pacific (New Zealand), Asia Pacific (Taipei), Asia Pacific (Malaysia), Asia Pacific (Thailand), and Canada West (Calgary).

# Configuring Amazon S3 Files access
<a name="configuration-filesystem-s3files"></a>

Amazon S3 Files delivers a shared file system that connects any AWS compute resource directly with your data in Amazon S3. Amazon S3 Files provides access to your Amazon S3 objects as files using standard file system operations such as read and write on the local mount path. Learn more about [Amazon S3 Files](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-files.html).

**Topics**
+ [Prerequisites and setup](#configuration-filesystem-s3files-setup)
+ [Execution role and user permissions](#configuration-filesystem-s3files-permissions)
+ [Connecting to a file system (console)](#configuration-filesystem-s3files-config)

## Prerequisites and setup
<a name="configuration-filesystem-s3files-setup"></a>

Before you set up Amazon S3 Files with your Lambda function, make sure you have the following:
+ An Amazon S3 file system and mount targets in the available state, in the same account and AWS Region as your Lambda function.
+ A Lambda function in the same VPC as the mount target. You must have a mount target in each subnet where your function is deployed.
+ Security groups that allow NFS traffic (port 2049) between your Lambda function and the mount targets. [Learn more about configuring security groups](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-files-prereq-policies.html#s3-files-prereq-security-groups).

For more information, see the following topics in the *Amazon S3 User Guide*:
+ [Getting started with Amazon S3 Files](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-files-getting-started.html)
+ [Amazon S3 Files prerequisites](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-files-prereq-policies.html)
+ [Amazon S3 Files best practices](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-files-best-practices.html)

## Execution role and user permissions
<a name="configuration-filesystem-s3files-permissions"></a>

Your function's execution role must have the following permissions to access an Amazon S3 Files file system:

**Execution role permissions**
+ **s3files:ClientMount** – Required to mount the file system.
+ **s3files:ClientWrite** – Required for read-write access. Not needed for read-only connections.

These permissions are included in the [AmazonS3FilesClientReadWriteAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesClientReadWriteAccess.html) managed policy. Additionally, your execution role must have the [permissions required to connect to the file system's VPC](configuration-vpc.md#configuration-vpc-permissions).

**Note**  
Amazon S3 Files optimizes throughput by reading directly from Amazon S3. Direct reads from Amazon S3 are supported only for functions configured with 512 MB or more of memory.

Your function also needs the following permissions to read directly from Amazon S3:
+ **s3:GetObject**
+ **s3:GetObjectVersion**

For more information about required permissions, see [IAM permissions for Amazon S3 Files](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-files-prereq-policies.html#s3-files-prereq-iam) in the *Amazon S3 User Guide*.

When you configure a file system in the console, Lambda uses your permissions to verify mount targets and access points. To configure a function to connect to a file system, your user needs the following permissions:

**User permissions**
+ **s3files:ListFileSystems**
+ **s3files:ListAccessPoints**
+ **s3files:GetFileSystem**
+ **s3files:GetAccessPoint**
+ **s3files:CreateAccessPoint** – Needed if attaching the file system to the function from the console.

The following example policy grants your function's execution role permissions to mount an Amazon S3 file system with read-write access and read directly from Amazon S3.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3FilesLambdaAccess",
            "Effect": "Allow",
            "Action": [
                "s3files:ClientMount",
                "s3files:ClientWrite"
            ],
            "Resource": "*"
        },
        {
            "Sid": "S3DirectRead",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::bucket-name/*"
        },
        {
            "Sid": "S3FilesConsoleSetup",
            "Effect": "Allow",
            "Action": [
                "s3files:ListFileSystems",
                "s3files:ListAccessPoints",
                "s3files:GetFileSystem",
                "s3files:GetAccessPoint",
                "s3files:CreateAccessPoint"
            ],
            "Resource": "*"
        }
    ]
}
```

## Connecting to a file system (console)
<a name="configuration-filesystem-s3files-config"></a>

A function connects to a file system over the local network in a VPC. The subnets that your function connects to can be the same subnets that contain mount points for your file system, or subnets in the same Availability Zone that can route NFS traffic (port 2049) to the file system.

**Note**  
If your function is not already connected to a VPC, see [Giving Lambda functions access to resources in an Amazon VPC](configuration-vpc.md).

**To configure S3 Files access**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose **Configuration**, then choose **File systems**.

1. Choose **Add file system** (or **Edit** to modify an existing configuration).

1. Select **S3 Files**.

1. Configure the following properties:
   + **S3 file system** – Choose a file system from the dropdown.
   + **Access point** (optional) – Choose an access point. If the file system has no access points, Lambda automatically creates one when you save (UID/GID 1000:1000, root directory `/lambda`, permissions 755). If access points exist, you must select one.
   + **Local mount path** – The location where the file system is mounted on the Lambda function, starting with `/mnt/`.

1. Choose **Save**.

Your file system will be attached the next time you invoke your Lambda function.

# Create an alias for a Lambda function
<a name="configuration-aliases"></a>

You can create aliases for your Lambda function. A Lambda alias is a pointer to a function version that you can update. The function's users can access the function version using the alias Amazon Resource Name (ARN). When you deploy a new version, you can update the alias to use the new version, or split traffic between two versions.

------
#### [ Console ]

**To create an alias using the console**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose **Aliases** and then choose **Create alias**.

1. On the **Create alias** page, do the following:

   1. Enter a **Name** for the alias.

   1. (Optional) Enter a **Description** for the alias.

   1. For **Version**, choose a function version that you want the alias to point to.

   1. (Optional) To configure routing on the alias, expand **Weighted alias**. For more information, see [Implement Lambda canary deployments using a weighted alias](configuring-alias-routing.md).

   1. Choose **Save**.

------
#### [ AWS CLI ]

To create an alias using the AWS Command Line Interface (AWS CLI), use the [create-alias](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-alias.html) command.

```
aws lambda create-alias \
  --function-name my-function \
  --name alias-name \
  --function-version version-number \
  --description " "
```

To change an alias to point to a new version of the function, use the [update-alias](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-alias.html) command.

```
aws lambda update-alias \
  --function-name my-function \
  --name alias-name \
  --function-version version-number
```

To delete an alias, use the [delete-alias](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/delete-alias.html) command.

```
aws lambda delete-alias \
  --function-name my-function \
  --name alias-name
```

 The AWS CLI commands in the preceding steps correspond to the following Lambda API operations:
+ [CreateAlias](https://docs.aws.amazon.com/lambda/latest/api/API_CreateAlias.html)
+ [UpdateAlias](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateAlias.html)
+ [DeleteAlias](https://docs.aws.amazon.com/lambda/latest/api/API_DeleteAlias.html)

------

# Using Lambda aliases in event sources and permissions policies
<a name="using-aliases"></a>

Each alias has a unique ARN. An alias can point only to a function version, not to another alias. You can update an alias to point to a new version of the function.

Event sources such as Amazon Simple Storage Service (Amazon S3) invoke your Lambda function. These event sources maintain a mapping that identifies the function to invoke when events occur. If you specify a Lambda function alias in the mapping configuration, you don't need to update the mapping when the function version changes. For more information, see [How Lambda processes records from stream and queue-based event sources](invocation-eventsourcemapping.md).

In a resource policy, you can grant permissions for event sources to use your Lambda function. If you specify an alias ARN in the policy, you don't need to update the policy when the function version changes.

## Resource policies
<a name="versioning-permissions-alias"></a>

You can use a [resource-based policy](access-control-resource-based.md) to give a service, resource, or account access to your function. The scope of that permission depends on whether you apply it to an alias, a version, or the entire function. For example, if you use an alias name (such as `helloworld:PROD`), the permission allows you to invoke the `helloworld` function using the alias ARN (`helloworld:PROD`).

If you attempt to invoke the function without an alias or a specific version, then you get a permission error. This permission error still occurs even if you attempt to directly invoke the function version associated with the alias.

For example, the following AWS CLI command grants Amazon S3 permissions to invoke the PROD alias of the `helloworld` function when Amazon S3 is acting on behalf of `amzn-s3-demo-bucket`.

```
aws lambda add-permission \
  --function-name helloworld \
  --qualifier PROD \
  --statement-id 1 \
  --principal s3.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-arn arn:aws:s3:::amzn-s3-demo-bucket \
  --source-account 123456789012
```
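When this command succeeds, Lambda appends a statement to the function's resource-based policy. The statement produced by the command above has roughly the following shape (approximate — retrieve the actual policy with `aws lambda get-policy --function-name helloworld --qualifier PROD`):

```
{
  "Sid": "1",
  "Effect": "Allow",
  "Principal": {
    "Service": "s3.amazonaws.com"
  },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-2:123456789012:function:helloworld:PROD",
  "Condition": {
    "StringEquals": {
      "AWS:SourceAccount": "123456789012"
    },
    "ArnLike": {
      "AWS:SourceArn": "arn:aws:s3:::amzn-s3-demo-bucket"
    }
  }
}
```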

For more information about using resource names in policies, see [Fine-tuning the Resources and Conditions sections of policies](lambda-api-permissions-ref.md).

# Implement Lambda canary deployments using a weighted alias
<a name="configuring-alias-routing"></a>

You can use a weighted alias to split traffic between two different [versions](configuration-versions.md) of the same function. With this approach, you can test new versions of your functions with a small percentage of traffic and quickly roll back if necessary. This is known as a [canary deployment](https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/canary-deployments.html). Canary deployments differ from blue/green deployments by exposing the new version to only a portion of requests rather than switching all traffic at once.

You can point an alias to a maximum of two Lambda function versions. The versions must meet the following criteria:
+ Both versions must have the same [execution role](lambda-intro-execution-role.md).
+ Both versions must have the same [dead-letter queue](invocation-async-retain-records.md#invocation-dlq) configuration, or no dead-letter queue configuration.
+ Both versions must be published. The alias cannot point to `$LATEST`.

**Note**  
Lambda uses a simple probabilistic model to distribute the traffic between the two function versions. At low traffic levels, you might see a high variance between the configured and actual percentage of traffic on each version. If your function uses provisioned concurrency, you can avoid [spillover invocations](monitoring-metrics-types.md#invocation-metrics) by configuring a higher number of provisioned concurrency instances during the time that alias routing is active. 
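The low-traffic variance described in the note can be illustrated with a quick simulation. This is illustrative only — Lambda's internal routing implementation is not published:

```python
import random

def simulate(weight, invocations, seed):
    """Route each invocation to the additional version with probability
    `weight` and return the fraction it actually received."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(invocations) if rng.random() < weight)
    return hits / invocations

# With a 5% configured weight, a small sample can land noticeably
# off-target, while a large sample converges on the configured split.
low_traffic  = simulate(0.05, 100, seed=42)
high_traffic = simulate(0.05, 1_000_000, seed=42)
```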

## Create a weighted alias
<a name="create-weighted-alias"></a>

------
#### [ Console ]

**To configure routing on an alias using the console**
**Note**  
Verify that the function has at least two published versions. To create additional versions, follow the instructions in [Creating function versions](configuration-versions.md#configuration-versions-config).

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose **Aliases** and then choose **Create alias**.

1. On the **Create alias** page, do the following:

   1. Enter a **Name** for the alias.

   1. (Optional) Enter a **Description** for the alias.

   1. For **Version**, choose the first function version that you want the alias to point to.

   1. Expand **Weighted alias**.

   1. For **Additional version**, choose the second function version that you want the alias to point to.

   1. For **Weight (%)**, enter a weight value for the function. *Weight* is the percentage of traffic that is assigned to that version when the alias is invoked. The first version receives the residual weight. For example, if you assign 10 percent to **Additional version**, the first version is assigned 90 percent automatically.

   1. Choose **Save**.

------
#### [ AWS CLI ]

Use the [create-alias](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-alias.html) and [update-alias](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-alias.html) AWS CLI commands to configure the traffic weights between two function versions. When you create or update the alias, you specify the traffic weight in the `routing-config` parameter.

The following example creates a Lambda function alias named **routing-alias** that points to version 1 of the function. Version 2 of the function receives 3 percent of the traffic. The remaining 97 percent of traffic is routed to version 1.

```
aws lambda create-alias \
  --name routing-alias \
  --function-name my-function \
  --function-version 1  \
  --routing-config AdditionalVersionWeights={"2"=0.03}
```

Use the `update-alias` command to increase the percentage of incoming traffic to version 2. In the following example, you increase the traffic to 5 percent.

```
aws lambda update-alias \
  --name routing-alias \
  --function-name my-function \
  --routing-config AdditionalVersionWeights={"2"=0.05}
```

To route all traffic to version 2, use the `update-alias` command to change the `function-version` property to point the alias to version 2. The command also resets the routing configuration.

```
aws lambda update-alias \
  --name routing-alias \
  --function-name my-function  \
  --function-version 2 \
  --routing-config AdditionalVersionWeights={}
```

 The AWS CLI commands in the preceding steps correspond to the following Lambda API operations:
+ [CreateAlias](https://docs.aws.amazon.com/lambda/latest/api/API_CreateAlias.html)
+ [UpdateAlias](https://docs.aws.amazon.com/lambda/latest/api/API_UpdateAlias.html)

------

## Determining which version was invoked
<a name="determining-routing-version"></a>

When you configure traffic weights between two function versions, there are two ways to determine the Lambda function version that has been invoked:
+ **CloudWatch Logs** – Lambda automatically emits a `START` log entry that contains the invoked version ID for every function invocation. Example:

  `START RequestId: 1d2b1c3d-59ed-4f8b-a7b4-1e541f60235f Version: 2`

  For alias invocations, Lambda uses the `ExecutedVersion` dimension to filter the metric data by the invoked version. For more information, see [Viewing metrics for Lambda functions](monitoring-metrics-view.md).
+ **Response payload (synchronous invocations)** – Responses to synchronous function invocations include an `x-amz-executed-version` header to indicate which function version has been invoked.
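When processing logs programmatically, the invoked version can be pulled out of the `START` line with a regular expression. This is a sketch; the exact log format can vary between runtimes:

```python
import re

START_PATTERN = re.compile(
    r"^START RequestId: (?P<request_id>\S+) Version: (?P<version>\S+)"
)

def invoked_version(log_line):
    """Return the version recorded in a START log entry,
    or None if the line is not a START entry."""
    match = START_PATTERN.match(log_line)
    return match.group("version") if match else None

line = "START RequestId: 1d2b1c3d-59ed-4f8b-a7b4-1e541f60235f Version: 2"
```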

## Create a rolling deployment with weighted aliases
<a name="lambda-rolling-deployments"></a>

Use AWS CodeDeploy and AWS Serverless Application Model (AWS SAM) to create a rolling deployment that automatically detects changes to your function code, deploys a new version of your function, and gradually increases the amount of traffic flowing to the new version. The amount of traffic and the rate of increase are parameters that you can configure.

In a rolling deployment, AWS SAM performs these tasks:
+ Configures your Lambda function and creates an alias. The weighted alias routing configuration is the underlying capability that implements the rolling deployment.
+ Creates a CodeDeploy application and deployment group. The deployment group manages the rolling deployment and the rollback, if needed.
+ Detects when you create a new version of your Lambda function.
+ Triggers CodeDeploy to start the deployment of the new version.

### Example AWS SAM template
<a name="sam-template"></a>

The following example shows an [AWS SAM template](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-basics.html) for a simple rolling deployment. 

```
AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: A sample SAM template for deploying Lambda functions

Resources:
# Details about the myDateTimeFunction Lambda function
  myDateTimeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: myDateTimeFunction.handler
      Runtime: nodejs24.x
# Creates an alias named "live" for the function, and automatically publishes when you update the function.
      AutoPublishAlias: live
      DeploymentPreference:
# Specifies the deployment configuration
          Type: Linear10PercentEvery2Minutes
```

This template defines a Lambda function named `myDateTimeFunction` with the following properties. 

**AutoPublishAlias**  
The `AutoPublishAlias` property creates an alias named `live`. In addition, the AWS SAM framework automatically detects when you save new code for the function. The framework then publishes a new function version and updates the `live` alias to point to the new version.

**DeploymentPreference**  
The `DeploymentPreference` property determines the rate at which the CodeDeploy application shifts traffic from the original version of the Lambda function to the new version. The value `Linear10PercentEvery2Minutes` shifts an additional ten percent of the traffic to the new version every two minutes.   
For a list of the predefined deployment configurations, see [Deployment configurations](https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html). 
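Under one straightforward reading of `Linear10PercentEvery2Minutes` — 10 percent shifted at the start of each two-minute interval — the resulting schedule can be tabulated as follows. This is a sketch of the arithmetic, not CodeDeploy's implementation:

```python
def linear_shift_schedule(step_percent, interval_minutes):
    """Return (elapsed_minutes, percent_on_new_version) pairs for a
    linear traffic-shifting configuration."""
    schedule = []
    percent = 0
    minute = 0
    while percent < 100:
        percent = min(percent + step_percent, 100)
        schedule.append((minute, percent))
        minute += interval_minutes
    return schedule

# Ten increments of 10%, two minutes apart: the new version reaches
# 100% of traffic 18 minutes after the first increment.
schedule = linear_shift_schedule(10, 2)
```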

For more information on how to create rolling deployments with CodeDeploy and AWS SAM, see the following:
+ [Tutorial: Deploy an updated Lambda function with CodeDeploy and the AWS Serverless Application Model](https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorial-lambda-sam.html)
+ [Deploying serverless applications gradually with AWS SAM](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html)

# Manage Lambda function versions
<a name="configuration-versions"></a>

You can use versions to manage the deployment of your functions. For example, you can publish a new version of a function for beta testing without affecting users of the stable production version. Lambda creates a new version of your function each time that you publish the function. The new version is a copy of the unpublished version of the function. The unpublished version is named `$LATEST`.

Importantly, any time you deploy your function code, you overwrite the current code in `$LATEST`. To save the current iteration of `$LATEST`, create a new function version. If `$LATEST` is identical to a previously published version, you won't be able to create a new version until you deploy changes to `$LATEST`. These changes can include updating the code, or modifying the function configuration settings.

After you publish a function version, its code, runtime, architecture, memory, layers, and most other configuration settings are immutable. This means that you can't change these settings without publishing a new version from `$LATEST`. You can configure the following items for a published function version:
+ [Triggers](lambda-services.md#lambda-invocation-trigger)
+ [Destinations](invocation-async-retain-records.md#create-destination)
+ [Provisioned concurrency](provisioned-concurrency.md)
+ [Asynchronous invocation](invocation-async.md)
+ [Database connections and proxies](services-rds.md#rds-configuration)

**Note**  
When using [runtime management controls](runtimes-update.md#runtime-management-controls) with **Auto** mode, the runtime version used by the function version is updated automatically. When using **Function update** or **Manual** mode, the runtime version is not updated. For more information, see [Understanding how Lambda manages runtime version updates](runtimes-update.md).

**Topics**
+ [Creating function versions](#configuration-versions-config)
+ [Using versions](#versioning-versions-using)
+ [Granting permissions](#versioning-permissions)

## Creating function versions
<a name="configuration-versions-config"></a>

You can change the function code and settings only on the unpublished version of a function. When you publish a version, Lambda locks the code and most of the settings to maintain a consistent experience for users of that version.

You can create a function version using the Lambda console.

**To create a new function version**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose a function and then choose the **Versions** tab.

1. On the versions configuration page, choose **Publish new version**.

1. (Optional) Enter a version description.

1. Choose **Publish**.

Alternatively, you can publish a version of a function using the [PublishVersion](https://docs.aws.amazon.com/lambda/latest/api/API_PublishVersion.html) API operation.

The following AWS CLI command publishes a new version of a function. The response returns configuration information about the new version, including the version number and the function ARN with the version suffix.

```
aws lambda publish-version --function-name my-function
```

You should see the following output:

```
{
  "FunctionName": "my-function",
  "FunctionArn": "arn:aws:lambda:us-east-2:123456789012:function:my-function:1",
  "Version": "1",
  "Role": "arn:aws:iam::123456789012:role/lambda-role",
  "Handler": "function.handler",
  "Runtime": "nodejs24.x",
  ...
}
```

**Note**  
Lambda assigns monotonically increasing sequence numbers for versioning. Lambda never reuses version numbers, even after you delete and recreate a function.

## Using versions
<a name="versioning-versions-using"></a>

You can reference your Lambda function using either a qualified ARN or an unqualified ARN.
+ **Qualified ARN** – The function ARN with a version suffix. The following example refers to version 42 of the `helloworld` function.

  ```
  arn:aws:lambda:aws-region:acct-id:function:helloworld:42
  ```
+ **Unqualified ARN** – The function ARN without a version suffix.

  ```
  arn:aws:lambda:aws-region:acct-id:function:helloworld
  ```

You can use a qualified or an unqualified ARN in all relevant API operations. However, you can't use an unqualified ARN to create an alias.
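Distinguishing the two forms programmatically can be done by counting the colon-separated segments. The helper below is hypothetical, not part of any AWS SDK:

```python
def parse_function_arn(arn):
    """Split a Lambda function ARN into its base (unqualified) ARN and
    optional qualifier (a version number or alias name)."""
    parts = arn.split(":")
    # arn:aws:lambda:region:acct-id:function:name[:qualifier]
    # -> 7 segments unqualified, 8 segments qualified
    if len(parts) == 8:
        return ":".join(parts[:7]), parts[7]
    return arn, None

base, qualifier = parse_function_arn(
    "arn:aws:lambda:us-east-2:123456789012:function:helloworld:42"
)
```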

If you decide not to publish function versions, you can invoke the function using either the qualified or unqualified ARN in your [event source mapping](invocation-eventsourcemapping.md). When you invoke a function using an unqualified ARN, Lambda implicitly invokes `$LATEST`. 

The qualified ARN for each Lambda function version is unique. After you publish a version, you can't change the ARN or the function code.

Lambda publishes a new function version only if the code has never been published, or if the code has changed from the last published version. If there is no change, the function version remains at the last published version.

When you publish a version, Lambda creates an immutable snapshot of your function's code and configuration. Not all configuration changes trigger the publication of a new version. The following configuration changes qualify a function for version publication:
+ Function code
+ Environment variables
+ Runtime
+ Handler
+ Layers
+ Memory size
+ Timeout
+ VPC configuration
+ Dead Letter Queue (DLQ) configuration
+ IAM role
+ Description
+ Architecture (x86\_64 or arm64)
+ Ephemeral storage size
+ Package type
+ Logging configuration
+ File system configuration
+ SnapStart
+ Tracing configuration

Operational settings such as [reserved concurrency](configuration-concurrency.md) don't trigger the publication of a new version when changed.

## Granting permissions
<a name="versioning-permissions"></a>

You can use a [resource-based policy](access-control-resource-based.md) or an [identity-based policy](access-control-identity-based.md) to grant access to your function. The scope of the permission depends on whether you apply the policy to a function or to one version of a function. For more information about function resource names in policies, see [Fine-tuning the Resources and Conditions sections of policies](lambda-api-permissions-ref.md). 

You can simplify the management of event sources and AWS Identity and Access Management (IAM) policies by using function aliases. For more information, see [Create an alias for a Lambda function](configuration-aliases.md).

# Using tags on Lambda functions
<a name="configuration-tags"></a>

You can tag functions to organize and manage your resources. Tags are free-form key-value pairs associated with your resources that are supported across AWS services. For more information about use cases for tags, see [Common tagging strategies](https://docs.aws.amazon.com//tag-editor/latest/userguide/best-practices-and-strats.html#tag-strategies) in the *Tagging AWS Resources and Tag Editor Guide*. 

Tags apply at the function level, not to versions or aliases. Tags are not part of the version-specific configuration that AWS Lambda creates a snapshot of when you publish a version. You can use the Lambda API to view and update tags. You can also view and update tags while managing a specific function in the Lambda console.

**Topics**
+ [Permissions required for working with tags](#fxn-tags-required-permissions)
+ [Using tags with the Lambda console](#using-tags-with-the-console)
+ [Using tags with the AWS CLI](#configuration-tags-cli)

## Permissions required for working with tags
<a name="fxn-tags-required-permissions"></a>

To allow an AWS Identity and Access Management (IAM) identity (user, group, or role) to read or set tags on a resource, grant it the corresponding permissions:
+ **lambda:ListTags** – When a resource has tags, grant this permission to anyone who needs to call `ListTags` on it. For tagged functions, this permission is also necessary for `GetFunction`.
+ **lambda:TagResource** – Grant this permission to anyone who needs to call `TagResource` or add tags when creating a function.

Optionally, consider granting the **lambda:UntagResource** permission as well to allow `UntagResource` calls to the resource.

For more information, see [Identity-based IAM policies for Lambda](access-control-identity-based.md).

## Using tags with the Lambda console
<a name="using-tags-with-the-console"></a>

You can use the Lambda console to create functions that have tags, add tags to existing functions, and filter functions by tags that you add.

**To add tags when you create a function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose **Create function**.

1. Choose **Author from scratch** or **Container image**. 

1. Under **Basic information**, set up your function. For more information about configuring functions, see [Configuring AWS Lambda functions](lambda-functions.md). 

1. Expand **Advanced settings**, and then select **Enable tags**.

1. Choose **Add new tag**, and then enter a **Key** and an optional **Value**. To add more tags, repeat this step.

1. Choose **Create function**.

**To add tags to an existing function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the name of a function.

1. Choose **Configuration**, and then choose **Tags**.

1. Under **Tags**, choose **Manage tags**.

1. Choose **Add new tag**, and then enter a **Key** and an optional **Value**. To add more tags, repeat this step.

1. Choose **Save**.

**To filter functions with tags**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the search box to see a list of function properties and tag keys.

1. Choose a tag key to see a list of values that are in use in the current AWS Region.

1. Select **Use: "tag-name"** to see all functions tagged with this key, or choose an **Operator** to further filter by value.

1. Select your tag value to filter by a combination of tag key and value.

The search bar also supports searching for tag keys. Enter `tag` to see only a list of tag keys, or enter the name of a key to find it in the list.

## Using tags with the AWS CLI
<a name="configuration-tags-cli"></a>

You can add and remove tags on existing Lambda resources, including functions, using the AWS CLI and the Lambda API. You can also add tags when creating a function, which lets you keep a resource tagged throughout its lifecycle.

### Updating tags with the Lambda tag APIs
<a name="tags-fxn-api-config"></a>

You can add and remove tags for supported Lambda resources through the [TagResource](https://docs.aws.amazon.com/lambda/latest/api/API_TagResource.html) and [UntagResource](https://docs.aws.amazon.com/lambda/latest/api/API_UntagResource.html) API operations.

You can call these operations using the AWS CLI. To add tags to an existing resource, use the `tag-resource` command. This example adds two tags, one with the key *Department* and one with the key *CostCenter*.

```
aws lambda tag-resource \
--resource arn:aws:lambda:us-east-2:123456789012:resource-type:my-resource \
--tags Department=Marketing,CostCenter=1234ABCD
```

To remove tags, use the `untag-resource` command. This example removes the tag with the key *Department*.

```
aws lambda untag-resource --resource arn:aws:lambda:us-east-1:123456789012:resource-type:resource-identifier \
--tag-keys Department
```

### Adding tags when creating a function
<a name="creating-tags-when-you-create-a-function-cli"></a>

To create a new Lambda function with tags, use the [CreateFunction](https://docs.aws.amazon.com//lambda/latest/api/API_CreateFunction.html) API operation and specify the `Tags` parameter. You can call this operation with the `create-function` CLI command and the `--tags` option. Before using the `--tags` option with `create-function`, ensure that your role has permission to tag resources in addition to the usual permissions needed for this operation. For more information about permissions for tagging, see [Permissions required for working with tags](#fxn-tags-required-permissions). This example adds two tags, one with the key *Department* and one with the key *CostCenter*.

```
aws lambda create-function --function-name my-function \
--handler index.js --runtime nodejs24.x \
--role arn:aws:iam::123456789012:role/lambda-role \
--tags Department=Marketing,CostCenter=1234ABCD
```

### Viewing tags on a function
<a name="viewing-tags-on-a-function-cli"></a>

To view the tags that are applied to a specific Lambda resource, use the `ListTags` API operation. For more information, see [ListTags](https://docs.aws.amazon.com/lambda/latest/api/API_ListTags.html).

You can call this operation with the `list-tags` AWS CLI command by providing an ARN (Amazon Resource Name).

```
aws lambda list-tags --resource arn:aws:lambda:us-east-1:123456789012:resource-type:resource-identifier
```

You can view the tags that are applied to a specific resource with the [GetFunction](https://docs.aws.amazon.com/lambda/latest/api/API_GetFunction.html) API operation. Comparable functionality is not available for other resource types.

You can call this operation with the `get-function` CLI command:

```
aws lambda get-function --function-name my-function
```

### Filtering resources by tag
<a name="tags-fxn-filtering"></a>

You can use the AWS Resource Groups Tagging API [GetResources](https://docs.aws.amazon.com/resourcegroupstagging/latest/APIReference/API_GetResources.html) API operation to filter your resources by tags. The `GetResources` operation accepts up to 10 filters, each containing a tag key and up to 10 tag values. You can also provide `GetResources` with a resource type filter to limit results to specific resource types.

You can call this operation using the `get-resources` AWS CLI command. For examples of using `get-resources`, see [get-resources](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/resourcegroupstaggingapi/get-resources.html#examples) in the *AWS CLI Command Reference*. 
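
Conceptually, the filter semantics of `GetResources` amount to: a resource matches when, for every filter, it carries the filter's tag key and, if values are given, one of the filter's values. The following is a local sketch of that matching rule; it runs against plain objects and makes no AWS API calls:

```javascript
// Illustrative sketch of GetResources tag-filter semantics; runs locally
// against plain objects and makes no AWS API calls.
function matchesTagFilters(resourceTags, filters) {
  return filters.every(({ Key, Values = [] }) => {
    if (!(Key in resourceTags)) return false;
    // An empty Values list matches any value for the key.
    return Values.length === 0 || Values.includes(resourceTags[Key]);
  });
}

const tags = { Department: 'Marketing', CostCenter: '1234ABCD' };
```

For example, `matchesTagFilters(tags, [{ Key: 'Department', Values: ['Marketing'] }])` matches, while a filter on a key the resource doesn't carry does not.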

# Response streaming for Lambda functions
<a name="configuration-response-streaming"></a>

Lambda functions can natively stream response payloads back to clients through [Lambda function URLs](urls-configuration.md) or by using the [InvokeWithResponseStream](https://docs.aws.amazon.com/lambda/latest/api/API_InvokeWithResponseStream.html) API (via the AWS SDK or direct API calls). Your Lambda function can also stream response payloads through the [Amazon API Gateway proxy integration](https://docs.aws.amazon.com/apigateway/latest/developerguide/response-transfer-mode-lambda.html), which uses the [InvokeWithResponseStream](https://docs.aws.amazon.com/lambda/latest/api/API_InvokeWithResponseStream.html) API to invoke your function. Response streaming can benefit latency-sensitive applications by improving time to first byte (TTFB) performance, because you can send partial responses back to the client as they become available. Additionally, response streaming functions can return payloads up to 200 MB, compared to the 6 MB maximum for buffered responses. Streaming a response also means that your function doesn’t need to fit the entire response in memory. For very large responses, this can reduce the amount of memory you need to configure for your function.

**Note**  
Lambda response streaming is not yet available in all AWS Regions. See Builder Center's [AWS Capabilities by Region](https://builder.aws.com/build/capabilities) for feature availability by Region.

The speed at which Lambda streams your responses depends on the response size. The streaming rate for the first 6 MB of your function’s response is uncapped. For responses larger than 6 MB, the remainder of the response is subject to a bandwidth cap. For more information on streaming bandwidth, see [Bandwidth limits for response streaming](#config-rs-bandwidth-cap).

Response streaming incurs additional cost, and streamed responses are not interrupted or stopped when the invoking client's connection is broken. You are billed for the full function duration, so exercise caution when configuring long function timeouts.

Lambda supports response streaming on Node.js managed runtimes. For other languages, including Python, you can [use a custom runtime with a custom Runtime API integration](runtimes-custom.md#runtimes-custom-response-streaming) to stream responses or use the [Lambda Web Adapter](https://github.com/awslabs/aws-lambda-web-adapter).

**Note**  
When testing your function through the Lambda console, you'll always see responses as buffered.

**Topics**
+ [Bandwidth limits for response streaming](#config-rs-bandwidth-cap)
+ [VPC compatibility with response streaming](#config-rs-vpc-compatibility)
+ [Writing response streaming-enabled Lambda functions](config-rs-write-functions.md)
+ [Invoking a response streaming enabled function using Lambda function URLs](config-rs-invoke-furls.md)
+ [Tutorial: Creating a response streaming Lambda function with a function URL](response-streaming-tutorial.md)

## Bandwidth limits for response streaming
<a name="config-rs-bandwidth-cap"></a>

The first 6 MB of your function’s response payload has uncapped bandwidth. After this initial burst, Lambda streams your response at a maximum rate of 2 MBps. If your function responses never exceed 6 MB, then this bandwidth limit never applies. 

**Note**  
Bandwidth limits only apply to your function’s response payload, and not to network access by your function.

The actual rate during the uncapped phase varies depending on a number of factors, including your function’s processing speed. You can normally expect a rate higher than 2 MBps for the first 6 MB of your function’s response. If your function is streaming a response to a destination outside of AWS, the streaming rate also depends on the speed of the external internet connection. 
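
As a back-of-the-envelope illustration, the bandwidth cap imposes a lower bound on streaming time equal to the portion of the payload beyond 6 MB divided by 2 MBps. The sketch below treats the uncapped first 6 MB as instantaneous, which it isn't in practice:

```javascript
const MB = 1024 * 1024;
const UNCAPPED_BYTES = 6 * MB;    // first 6 MB is not rate-limited
const CAP_BYTES_PER_SEC = 2 * MB; // cap applied beyond the first 6 MB

// Lower bound on streaming time imposed by the bandwidth cap alone.
function minSecondsForCap(payloadBytes) {
  const cappedBytes = Math.max(0, payloadBytes - UNCAPPED_BYTES);
  return cappedBytes / CAP_BYTES_PER_SEC;
}
```

For a 6 MB payload the bound is 0 seconds (the cap never applies); for a 10 MB payload, the remaining 4 MB takes at least 2 seconds.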

## VPC compatibility with response streaming
<a name="config-rs-vpc-compatibility"></a>

When using Lambda functions in a VPC environment, there are important considerations for response streaming:
+ Lambda function URLs do not support response streaming within a VPC environment.
+ You can use response streaming within a VPC by invoking your Lambda function through the AWS SDK using the `InvokeWithResponseStream` API. This requires setting up the appropriate VPC endpoints for Lambda.
+ For VPC environments, you'll need to create an interface VPC endpoint for Lambda to enable communication between your resources in the VPC and the Lambda service.

A typical architecture for response streaming in a VPC might include:

```
Client in VPC -> Interface VPC endpoint for Lambda -> Lambda function -> Response streaming back through the same path
```

# Writing response streaming-enabled Lambda functions
<a name="config-rs-write-functions"></a>

Writing the handler for response streaming functions differs from typical handler patterns. When writing streaming functions, be sure to do the following:
+ Wrap your function with the `awslambda.streamifyResponse()` decorator. The `awslambda` global object is provided by Lambda's Node.js runtime environment.
+ End the stream gracefully to ensure that all data processing is complete.

## Configuring a handler function to stream responses
<a name="config-rs-write-functions-handler"></a>

To indicate to the runtime that Lambda should stream your function's responses, you must wrap your function with the `streamifyResponse()` decorator. This tells the runtime to use the streaming logic path and enables your function to stream responses.

The `streamifyResponse()` decorator accepts a function that takes the following parameters:
+ `event` – Provides information about the function URL's invocation event, such as the HTTP method, query parameters, and the request body.
+ `responseStream` – Provides a writable stream.
+ `context` – Provides methods and properties with information about the invocation, function, and execution environment.

The `responseStream` object is a [Node.js writable stream](https://nodesource.com/blog/understanding-streams-in-nodejs/). As with any such stream, use the `pipeline()` method to write to it wherever possible.

**Note**  
The `awslambda` global object is automatically provided by Lambda's Node.js runtime and no import is required.

**Example response streaming-enabled handler**  

```
import { pipeline } from 'node:stream/promises';
import { Readable } from 'node:stream';

export const echo = awslambda.streamifyResponse(async (event, responseStream, _context) => {
  // As an example, convert event to a readable stream.
  const requestStream = Readable.from(Buffer.from(JSON.stringify(event)));

  await pipeline(requestStream, responseStream);
});
```

While `responseStream` offers the `write()` method to write to the stream, we recommend that you use [`pipeline()`](https://nodejs.org/api/stream.html#streampipelinesource-transforms-destination-callback) wherever possible. Using `pipeline()` ensures that the writable stream is not overwhelmed by a faster readable stream.
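
The behavior of `pipeline()` can be observed locally, outside of Lambda. The following sketch pipes a readable source into a `PassThrough` stream standing in for `responseStream`, and consumes the output concurrently:

```javascript
import { pipeline } from 'node:stream/promises';
import { Readable, PassThrough } from 'node:stream';

// Local stand-in for Lambda's response stream; no awslambda global needed.
const source = Readable.from(['Hello ', 'from ', 'a ', 'stream']);
const sink = new PassThrough();

// Run the pipeline and consume the sink concurrently. pipeline() manages
// backpressure between the source and the destination.
const done = pipeline(source, sink);
let received = '';
for await (const chunk of sink) {
  received += chunk;
}
await done;
```

After the pipeline resolves, `received` contains the concatenated chunks and the destination stream has been ended automatically.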

## Ending the stream
<a name="config-rs-write-functions-end"></a>

Make sure that you properly end the stream before the handler returns. The `pipeline()` method handles this automatically.

For other use cases, call the `responseStream.end()` method to properly end a stream. This method signals that no more data should be written to the stream. This method isn't required if you write to the stream with `pipeline()` or `pipe()`.

Starting with Node.js 24, Lambda no longer waits for unresolved promises to complete after your handler returns or the response stream ends. If your function depends on additional asynchronous operations, such as timers or fetches, you should `await` them in your handler.

**Example ending a stream with pipeline()**  

```
import { pipeline } from 'node:stream/promises';
import { Readable } from 'node:stream';

export const handler = awslambda.streamifyResponse(async (event, responseStream, _context) => {
  // Convert the event to a readable stream to use as the pipeline source.
  const requestStream = Readable.from(Buffer.from(JSON.stringify(event)));
  await pipeline(requestStream, responseStream);
});
```

**Example ending a stream without pipeline()**  

```
export const handler = awslambda.streamifyResponse(async (event, responseStream, _context) => {
  responseStream.write("Hello ");
  responseStream.write("world ");
  responseStream.write("from ");
  responseStream.write("Lambda!");
  responseStream.end();
});
```
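
Because Node.js 24 runtimes don't wait for unresolved promises after the handler returns, await any background work before ending the stream. The following local sketch uses a `PassThrough` stream in place of Lambda's response stream:

```javascript
import { PassThrough } from 'node:stream';
import { setTimeout as sleep } from 'node:timers/promises';

// Local stand-in for Lambda's responseStream; no awslambda global is needed here.
const responseStream = new PassThrough();

const handler = async (event, stream) => {
  // Kick off background work (a timer here; it could be a fetch).
  const pending = sleep(20).then(() => stream.write('deferred'));
  stream.write('immediate ');
  // Node.js 24 runtimes don't wait for unresolved promises after the
  // handler returns, so await the background work before ending the stream.
  await pending;
  stream.end();
};

const invocation = handler({}, responseStream);
let received = '';
for await (const chunk of responseStream) {
  received += chunk;
}
await invocation;
```

Without the `await pending` line, the `deferred` write could be lost because the stream ends before the background work completes.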

# Invoking a response streaming enabled function using Lambda function URLs
<a name="config-rs-invoke-furls"></a>

**Note**  
Your Lambda function can now stream response payloads through the [Amazon API Gateway proxy integration](https://docs.aws.amazon.com/apigateway/latest/developerguide/response-transfer-mode-lambda.html).

You can invoke response streaming enabled functions by changing the invoke mode of your function's URL. The invoke mode determines which API operation Lambda uses to invoke your function. The available invoke modes are:
+ `BUFFERED` – This is the default option. Lambda invokes your function using the `Invoke` API operation. Invocation results are available when the payload is complete. The maximum payload size is 6 MB.
+ `RESPONSE_STREAM` – Enables your function to stream payload results as they become available. Lambda invokes your function using the `InvokeWithResponseStream` API operation. The maximum response payload size is 200 MB.

You can still invoke your function without response streaming by directly calling the `Invoke` API operation. However, Lambda streams all response payloads for invocations that come through the function's URL until you change the invoke mode to `BUFFERED`.

------
#### [ Console ]

**To set the invoke mode of a function URL (console)**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the name of the function that you want to set the invoke mode for.

1. Choose the **Configuration** tab, and then choose **Function URL**.

1. Choose **Edit**, then choose **Additional settings**.

1. Under **Invoke mode**, choose your desired invoke mode.

1. Choose **Save**.

------
#### [ AWS CLI ]

**To set the invoke mode of a function's URL (AWS CLI)**

```
aws lambda update-function-url-config \
  --function-name my-function \
  --invoke-mode RESPONSE_STREAM
```

------
#### [ CloudFormation ]

**To set the invoke mode of a function's URL (CloudFormation)**

```
MyFunctionUrl:
  Type: AWS::Lambda::Url
  Properties:
    AuthType: AWS_IAM
    InvokeMode: RESPONSE_STREAM
```

------

For more information about configuring function URLs, see [Lambda function URLs](urls-configuration.md).

# Tutorial: Creating a response streaming Lambda function with a function URL
<a name="response-streaming-tutorial"></a>

In this tutorial, you create a Lambda function defined as a .zip file archive with a function URL endpoint that returns a response stream. For more information about configuring function URLs, see [Function URLs](urls-configuration.md).

## Prerequisites
<a name="response-streaming-prepare"></a>

This tutorial assumes that you have some knowledge of basic Lambda operations and the Lambda console. If you haven't already, follow the instructions in [Create a Lambda function with the console](getting-started.md#getting-started-create-function) to create your first Lambda function.

To complete the following steps, you need the [AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). Commands and the expected output are listed in separate blocks:

```
aws --version
```

You should see the following output:

```
aws-cli/2.13.27 Python/3.11.6 Linux/4.14.328-248.540.amzn2.x86_64 exe/x86_64.amzn.2
```

For long commands, an escape character (`\`) is used to split a command over multiple lines.

On Linux and macOS, use your preferred shell and package manager.

**Note**  
In Windows, some Bash CLI commands that you commonly use with Lambda (such as `zip`) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10). Example CLI commands in this guide use Linux formatting. Commands that include inline JSON documents must be reformatted if you are using the Windows CLI. 

## Create an execution role
<a name="response-streaming-create-iam-role"></a>

Create the [execution role](lambda-intro-execution-role.md) that gives your Lambda function permission to access AWS resources.

**To create an execution role**

1. Open the [Roles page](https://console.aws.amazon.com/iam/home#/roles) of the AWS Identity and Access Management (IAM) console.

1. Choose **Create role**.

1. Create a role with the following properties:
   + **Trusted entity type** – **AWS service**
   + **Use case** – **Lambda**
   + **Permissions** – **AWSLambdaBasicExecutionRole**
   + **Role name** – **response-streaming-role**

The **AWSLambdaBasicExecutionRole** policy has the permissions that the function needs to write logs to Amazon CloudWatch Logs. After you create the role, note its Amazon Resource Name (ARN). You'll need it in the next step.

## Create a response streaming function (AWS CLI)
<a name="response-streaming-tutorial-create-function-cli"></a>

Create a response streaming Lambda function with a function URL endpoint using the AWS Command Line Interface (AWS CLI).

**To create a function that can stream responses**

1. Copy the following code example into a file named `index.js`. This function streams a series of messages at one-second intervals.

   ```
   exports.handler = awslambda.streamifyResponse(
     async (event, responseStream, _context) => {
       // Metadata is a JSON serializable JS object. Its shape is not defined here.
       const metadata = {
         statusCode: 200,
         headers: {
           "Content-Type": "application/json",
           "CustomHeader": "outerspace"
         }
       };

       // Assign to the responseStream parameter to prevent accidental reuse of the non-wrapped stream.
       responseStream = awslambda.HttpResponseStream.from(responseStream, metadata);

       responseStream.write("Streaming with Helper \n");
       await new Promise(r => setTimeout(r, 1000));
       responseStream.write("Hello 0 \n");
       await new Promise(r => setTimeout(r, 1000));
       responseStream.write("Hello 1 \n");
       await new Promise(r => setTimeout(r, 1000));
       responseStream.write("Hello 2 \n");
       await new Promise(r => setTimeout(r, 1000));
       responseStream.end();
       await responseStream.finished();
     }
   );
   ```

1. Create a deployment package.

   ```
   zip function.zip index.js
   ```

1. Create a Lambda function with the `create-function` command. Replace the value of `--role` with the role ARN from the previous step. This command sets the function timeout to 10 seconds, which gives the function enough time to stream all of its responses.

   ```
   aws lambda create-function \
     --function-name my-streaming-function \
     --runtime nodejs24.x \
     --zip-file fileb://function.zip \
     --handler index.handler \
     --timeout 10 \
     --role arn:aws:iam::123456789012:role/response-streaming-role
   ```

**To create a function URL**

1. Add a resource-based policy to your function that grants `lambda:InvokeFunctionUrl` and `lambda:InvokeFunction` permissions. Each statement must be added in a separate command. Replace the value of `--principal` with your AWS account ID.

   ```
   aws lambda add-permission \
     --function-name my-streaming-function \
     --action lambda:InvokeFunctionUrl \
     --statement-id UrlPolicyInvokeURL \
     --principal 123456789012 \
     --function-url-auth-type AWS_IAM
   ```

   ```
   aws lambda add-permission \
       --function-name my-streaming-function \
       --action lambda:InvokeFunction \
       --statement-id UrlPolicyInvokeFunction \
       --principal 123456789012
   ```

1. Create a URL endpoint for the function with the `create-function-url-config` command.

   ```
   aws lambda create-function-url-config \
     --function-name my-streaming-function \
     --auth-type AWS_IAM \
     --invoke-mode RESPONSE_STREAM
   ```
**Note**  
If you get an error about `--invoke-mode`, you might need to upgrade to a [newer version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

## Test the function URL endpoint
<a name="response-streaming-tutorial-test"></a>

Test your integration by invoking your function. You can open your function's URL in a browser, or you can use curl.

```
curl --request GET "https://abcdefghijklm7nop7qrs740abcd.lambda-url.us-east-1.on.aws/" --user "AKIAIOSFODNN7EXAMPLE" --aws-sigv4 "aws:amz:us-east-1:lambda" --no-buffer
```

The function URL uses the `AWS_IAM` authentication type. This means that you need to sign requests with both your [AWS access key and secret key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). In the previous command, replace `AKIAIOSFODNN7EXAMPLE` with your AWS access key ID. Enter your AWS secret key when prompted. If you don't have your AWS secret key, you can [use temporary AWS credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html) instead.

You should see a response like this:

```
Streaming with Helper 
Hello 0 
Hello 1
Hello 2
```

## Clean up your resources
<a name="cleanup"></a>

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

**To delete the execution role**

1. Open the [Roles page](https://console.aws.amazon.com/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the Lambda function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

# Using the Lambda metadata endpoint
<a name="configuration-metadata-endpoint"></a>

The Lambda metadata endpoint lets your functions discover which Availability Zone (AZ) they are running in, enabling you to optimize latency by routing to same-AZ resources like Amazon ElastiCache and Amazon RDS endpoints, and to implement AZ-aware resilience patterns.

The endpoint returns metadata in a simple JSON format through a localhost HTTP API within the execution environment and is accessible to both runtimes and extensions.

**Topics**
+ [Getting started](#metadata-endpoint-getting-started)
+ [Understanding Availability Zone IDs](#metadata-endpoint-az-ids)
+ [API reference](#metadata-endpoint-api-reference)

## Getting started
<a name="metadata-endpoint-getting-started"></a>

[Powertools for AWS Lambda](https://docs.aws.amazon.com/powertools/) provides a utility for accessing the Lambda metadata endpoint in Python, TypeScript, Java, and .NET. The utility caches the response after the first call and handles SnapStart cache invalidation automatically.

Use the Powertools for AWS Lambda metadata utility, or call the metadata endpoint directly.

------
#### [ Python ]

Install the Powertools package:

```
pip install "aws-lambda-powertools"
```

Use the metadata utility in your handler:

**Example Retrieving AZ ID with Powertools (Python)**  

```
from aws_lambda_powertools.utilities.lambda_metadata import get_lambda_metadata

def handler(event, context):
    metadata = get_lambda_metadata()
    az_id = metadata.availability_zone_id  # e.g., "use1-az1"

    return {"az_id": az_id}
```

------
#### [ TypeScript ]

Install the Powertools package:

```
npm install @aws-lambda-powertools/commons
```

Use the metadata utility in your handler:

**Example Retrieving AZ ID with Powertools (TypeScript)**  

```
import { getMetadata } from '@aws-lambda-powertools/commons/utils/metadata';

const metadata = await getMetadata();

export const handler = async () => {
  const { AvailabilityZoneID: azId } = metadata;
  return azId;
};
```

------
#### [ Java ]

Add the Powertools dependency to your `pom.xml`:

```
<dependencies>
    <dependency>
        <groupId>software.amazon.lambda</groupId>
        <artifactId>powertools-lambda-metadata</artifactId>
        <version>2.10.0</version>
    </dependency>
</dependencies>
```

Use the metadata client in your handler:

**Example Retrieving AZ ID with Powertools (Java)**  

```
import software.amazon.lambda.powertools.metadata.LambdaMetadata;
import software.amazon.lambda.powertools.metadata.LambdaMetadataClient;

public class App implements RequestHandler<Object, String> {

    @Override
    public String handleRequest(Object input, Context context) {
        LambdaMetadata metadata = LambdaMetadataClient.get();
        String azId = metadata.getAvailabilityZoneId(); // e.g., "use1-az1"

        return "{\"azId\": \"" + azId + "\"}";
    }
}
```

------
#### [ .NET ]

Install the Powertools package:

```
dotnet add package AWS.Lambda.Powertools.Metadata
```

Use the metadata class in your handler:

**Example Retrieving AZ ID with Powertools (.NET)**  

```
using AWS.Lambda.Powertools.Metadata;

public class Function
{
    public string Handler(object input, ILambdaContext context)
    {
        var azId = LambdaMetadata.AvailabilityZoneId;
        return $"Running in AZ: {azId}";
    }
}
```

------
#### [ All Runtimes ]

All Lambda runtimes support the metadata endpoint, including custom runtimes and container images. Use the following example to access the metadata API directly from your function using the environment variables that Lambda automatically sets in the execution environment.

**Example Accessing the metadata endpoint directly**  

```
# Variables are automatically set by Lambda
METADATA_ENDPOINT="http://${AWS_LAMBDA_METADATA_API}/2026-01-15/metadata/execution-environment"

# Make the request
RESPONSE=$(curl -s -H "Authorization: Bearer ${AWS_LAMBDA_METADATA_TOKEN}" "$METADATA_ENDPOINT")

# Parse the AZ ID
AZ_ID=$(echo "$RESPONSE" | jq -r '.AvailabilityZoneID')

echo "Function is running in AZ ID: $AZ_ID"
```

------

## Understanding Availability Zone IDs
<a name="metadata-endpoint-az-ids"></a>

AZ IDs (for example, `use1-az1`) always refer to the same physical location across all AWS accounts, while AZ names (for example, `us-east-1a`) may map to different physical infrastructure in each AWS account in certain Regions. For more information, see [AZ IDs for cross-account consistency](https://docs.aws.amazon.com/global-infrastructure/latest/regions/az-ids.html).

**Converting AZ ID to AZ name:**

To convert an AZ ID to an AZ name, use the Amazon EC2 [DescribeAvailabilityZones](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAvailabilityZones.html) API. To use this API, add the `ec2:DescribeAvailabilityZones` permission to your function's execution role.
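
For example, given a `DescribeAvailabilityZones` response, you can build a lookup from AZ ID to your account's AZ name. The `ZoneId` and `ZoneName` field names match the EC2 API; the sample values below are illustrative, since name-to-ID mappings vary by account:

```javascript
// Hypothetical DescribeAvailabilityZones response; ZoneId/ZoneName match
// the EC2 API field names, but the sample mapping is made up.
const response = {
  AvailabilityZones: [
    { ZoneId: 'use1-az1', ZoneName: 'us-east-1a' },
    { ZoneId: 'use1-az2', ZoneName: 'us-east-1b' },
  ],
};

// Build a lookup table from AZ ID to this account's AZ name.
const azNameById = new Map(
  response.AvailabilityZones.map((az) => [az.ZoneId, az.ZoneName])
);

const azName = azNameById.get('use1-az1');
```

In this sample, `azName` resolves to `us-east-1a`; in your account the same AZ ID may map to a different name.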

## API reference
<a name="metadata-endpoint-api-reference"></a>

### Environment variables
<a name="metadata-endpoint-env-vars"></a>

Lambda automatically sets the following environment variables in every execution environment:
+ `AWS_LAMBDA_METADATA_API` – The metadata server address in the format `{ipv4_address}:{port}` (for example, `169.254.100.1:9001`).
+ `AWS_LAMBDA_METADATA_TOKEN` – A unique authentication token for the current execution environment. Lambda generates this token automatically at initialization. Include it in all metadata API requests.

### Endpoint
<a name="metadata-endpoint-url"></a>

`GET http://${AWS_LAMBDA_METADATA_API}/2026-01-15/metadata/execution-environment`

### Request
<a name="metadata-endpoint-request"></a>

**Required headers:**
+ `Authorization` – The token value from the `AWS_LAMBDA_METADATA_TOKEN` environment variable with the Bearer scheme: `Bearer <token>`. This token-based authentication provides defense-in-depth protection against Server-Side Request Forgery (SSRF) vulnerabilities. Each execution environment receives a unique, randomly generated token at initialization.

### Response
<a name="metadata-endpoint-response"></a>

**Status:** `200 OK`

**Content-Type:** `application/json`

**Cache-Control:** `private, max-age=43200, immutable`

The response is immutable within an execution environment. Clients should cache the response and respect the `Cache-Control` TTL. For SnapStart functions, the TTL is reduced during initialization so that clients refresh metadata after restore when the execution environment may be in a different AZ. If you use Powertools, caching and SnapStart invalidation are handled automatically.

**Body:**

```
{
  "AvailabilityZoneID": "use1-az1"
}
```

The `AvailabilityZoneID` field contains the unique identifier for the Availability Zone where the execution environment is running.

**Note**  
Additional fields may be added to the response in future updates. Clients should ignore unknown fields and not fail if new fields appear.
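For runtimes where you call the endpoint directly rather than through Powertools, a minimal client might cache the response for the server-advertised TTL, as the `Cache-Control` guidance above recommends. The following Python sketch does this with only the standard library; the helper names are illustrative, and it assumes the `AWS_LAMBDA_METADATA_API` and `AWS_LAMBDA_METADATA_TOKEN` environment variables documented above.

```python
import json
import os
import re
import time
import urllib.request


def parse_max_age(cache_control: str) -> int:
    """Extract max-age seconds from a Cache-Control header (0 if absent)."""
    m = re.search(r"max-age=(\d+)", cache_control or "")
    return int(m.group(1)) if m else 0


_cache = {"value": None, "expires": 0.0}


def get_az_id() -> str:
    """Fetch the AZ ID, reusing the cached value until the TTL expires."""
    if _cache["value"] is not None and time.time() < _cache["expires"]:
        return _cache["value"]

    url = (
        f"http://{os.environ['AWS_LAMBDA_METADATA_API']}"
        "/2026-01-15/metadata/execution-environment"
    )
    req = urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {os.environ['AWS_LAMBDA_METADATA_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
        ttl = parse_max_age(resp.headers.get("Cache-Control"))

    # Read only the documented field; ignore any fields added in the future.
    _cache["value"] = body["AvailabilityZoneID"]
    _cache["expires"] = time.time() + ttl
    return _cache["value"]
```

Honoring the TTL rather than caching indefinitely matters mainly for SnapStart functions, where the shortened TTL during initialization prompts a refresh after restore.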

### Error responses
<a name="metadata-endpoint-errors"></a>
+ **401 Unauthorized** – The `Authorization` header is missing or contains an invalid token. Verify you are passing `Bearer ${AWS_LAMBDA_METADATA_TOKEN}`.
+ **405 Method Not Allowed** – Request method is not `GET`.
+ **500 Internal Server Error** – Server-side processing error.