

# Lambda Managed Instances
<a name="lambda-managed-instances"></a>

Lambda Managed Instances enables you to run Lambda functions on your current-generation Amazon EC2 instances, including Graviton4, network-optimized instances, and other specialized compute options, without managing instance lifecycles, operating system and language runtime patching, routing, load balancing, or scaling policies. With Lambda Managed Instances, you benefit from EC2 pricing advantages, including EC2 Savings Plans and Reserved Instances.

For a list of supported instance types, go to the [AWS Lambda Pricing](https://aws.amazon.com/lambda/pricing/) page and select your AWS Region.

## Key capabilities
<a name="lambda-managed-instances-key-capabilities"></a>

Lambda Managed Instances provides the following capabilities:
+ **Choose suitable instances** - Select [appropriate instances](https://aws.amazon.com/lambda/pricing/) based on performance and cost requirements, including access to the latest CPUs like Graviton4, configurable memory-CPU ratios, and high-bandwidth networking.
+ **Automatic provisioning** - AWS automatically provisions suitable instances and spins up function execution environments.
+ **Dynamic scaling** - Instances scale dynamically based on your function traffic patterns.
+ **Fully managed experience** - AWS handles infrastructure management, scaling, patching, and routing, with the same extensive event-source integrations you're familiar with.

## When to use Lambda Managed Instances
<a name="lambda-managed-instances-when-to-use"></a>

Consider Lambda Managed Instances for the following use cases:
+ **High-volume, predictable workloads** - Ideal for steady-state workloads without unexpected traffic spikes. By default, Lambda Managed Instances scales to handle traffic that doubles within 5 minutes.
+ **Performance-critical applications** - Access to latest CPUs, varying memory-CPU ratios, and high network throughput
+ **Regulatory requirements** - Granular governance needs with control over VPC and instance placement
+ **Variety of applications** - Event-driven applications, media/data processing, web applications, and legacy workloads migrating to serverless

## How it works
<a name="lambda-managed-instances-how-it-works"></a>

Lambda Managed Instances uses capacity providers as the foundation for running your functions:

1. **Create a capacity provider** - Define where your functions run by specifying a VPC configuration and, optionally, instance requirements and a scaling configuration

1. **Create your function** - Create Lambda functions as usual and attach them to a capacity provider

1. **Publish a function version** - Function versions become active on capacity provider instances once published

When you publish a function version with a capacity provider, Lambda launches Managed Instances in your account. It launches three instances by default for AZ resiliency and starts three execution environments before marking your function version ACTIVE. If you attach a function to an existing capacity provider that is already running other functions, Lambda may not spin up new instances if the available instances already have capacity to accommodate the new function's execution environments.

## Concurrency model
<a name="lambda-managed-instances-concurrency-model"></a>

Lambda Managed Instances supports multi-concurrent invocations, where one execution environment can handle multiple invocations at the same time. This differs from the Lambda (default) compute type, which provides a single-concurrency model where one execution environment runs at most one invocation at a time. Multi-concurrency yields better utilization of your underlying EC2 instances and is especially beneficial for IO-heavy applications like web services or batch jobs. This change in execution model means that thread safety, state management, and context isolation must be handled differently depending on the runtime.
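Because one execution environment can serve many invocations at once, any module-level state your handler touches must be synchronized. The following sketch is illustrative rather than Lambda-specific: the handler and counter are hypothetical, and a thread pool stands in for concurrent invocations arriving at one environment.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Module-level state shared by all concurrent invocations in one
# execution environment (hypothetical example, not a Lambda API).
invocation_count = 0
count_lock = threading.Lock()

def lambda_handler(event, context):
    global invocation_count
    # Without the lock, concurrent increments could interleave and lose updates.
    with count_lock:
        invocation_count += 1
    return {"statusCode": 200}

# Simulate 100 invocations handled concurrently by one environment.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(lambda i: lambda_handler({}, None), range(100)))

print(invocation_count)  # 100 — no lost updates
```

The same concern applies to database connections, caches, and clients stored outside the handler: they are shared across concurrent invocations, not per-invocation.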

## Tenancy and isolation
<a name="lambda-managed-instances-tenancy-isolation"></a>

The Lambda (default) compute type is multi-tenant, using Firecracker microVM technology to provide isolation between execution environments running on shared Lambda fleets. Lambda Managed Instances run in your account, providing the latest EC2 hardware and pricing options. Managed Instances use containers running on EC2 Nitro instances, rather than Firecracker, to provide isolation. Capacity providers serve as the security boundary for Lambda functions, and functions execute in containers within instances.

### Understanding managed instances
<a name="lambda-managed-instances-understanding"></a>

Lambda Managed Instances functions run on EC2 managed instances in your account. These instances are fully managed by Lambda, which means you have restricted permissions on them compared to standard EC2 instances. You can identify Lambda Managed Instances in your account by:
+ The presence of the `Operator` field in EC2 `DescribeInstances` output
+ The `aws:lambda:capacity-provider` tag on the instance

You cannot perform standard EC2 operations directly on these instances, such as terminating them manually. To destroy managed instances, delete the associated capacity provider. Lambda will then terminate the instances as part of the capacity provider deletion process.
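As a sketch of how those two markers could be checked, the following filters a `DescribeInstances`-shaped response for managed instances. The sample response is hypothetical and truncated to the fields relevant here.

```python
# Hypothetical, truncated DescribeInstances response showing the two markers:
# the Operator field and the aws:lambda:capacity-provider tag.
response = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0aaa",
             "Operator": {"Managed": True},
             "Tags": [{"Key": "aws:lambda:capacity-provider", "Value": "my-cp"}]},
            {"InstanceId": "i-0bbb",
             "Tags": [{"Key": "Name", "Value": "web-server"}]},
        ]}
    ]
}

def is_lambda_managed(instance):
    """Return True if the instance carries either managed-instance marker."""
    has_operator = "Operator" in instance
    has_tag = any(t["Key"] == "aws:lambda:capacity-provider"
                  for t in instance.get("Tags", []))
    return has_operator or has_tag

managed = [i["InstanceId"]
           for r in response["Reservations"]
           for i in r["Instances"]
           if is_lambda_managed(i)]
print(managed)  # ['i-0aaa']
```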

## Pricing
<a name="lambda-managed-instances-pricing"></a>

Lambda Managed Instances uses EC2-based pricing with a 15% management fee on top of the EC2 instance cost. This pricing model supports EC2 Savings Plans, Reserved Instances, and any other pricing discounts applied to your EC2 usage. For additional details, see the [AWS Lambda pricing page](https://aws.amazon.com/lambda/pricing/).

**Important:** EC2 pricing discounts only apply to the underlying EC2 compute, not to the management fee.
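To make that interaction concrete, here is an illustrative calculation. The instance rate and discount are made-up numbers, and it assumes the 15% fee is computed on the On-Demand instance rate; check the pricing page for how the fee is actually applied.

```python
on_demand_rate = 0.40      # hypothetical EC2 On-Demand rate, $/hour
savings_discount = 0.30    # hypothetical Savings Plan discount on compute

compute_cost = on_demand_rate * (1 - savings_discount)   # discount applies here
management_fee = on_demand_rate * 0.15                   # fee is not discounted

total_hourly = compute_cost + management_fee
print(f"${total_hourly:.2f}/hour")  # $0.34/hour
```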

## How Lambda Managed Instances differs from the Lambda (default) compute type
<a name="lambda-managed-instances-comparison"></a>

Lambda Managed Instances changes how Lambda processes requests compared to Lambda (default).

**Key differences:**


|  | Lambda (default) | Lambda Managed Instances | 
| --- | --- | --- | 
| Concurrency model | Single concurrency model where one execution environment can support a maximum of one invocation at a time | Multi-concurrent invocations where one execution environment can handle multiple invocations simultaneously, increasing throughput especially for IO-heavy applications | 
| Tenancy and isolation | Multi-tenant, using Firecracker microVM technology to provide isolation between execution environments running on shared Lambda fleets | Run in your account, using EC2 Nitro to provide isolation. Capacity providers serve as the security boundary, with functions executing in containers within instances | 
| Pricing model | Per-request duration pricing | Instance-based pricing with EC2 pricing models, including On-Demand and Reserved Instances, and savings options such as Compute Savings Plans | 
| Scaling behavior | Scales when there is no free execution environment to handle an incoming invocation (cold start). Scales to zero without traffic | Scales asynchronously based on resource utilization signals, without cold starts. Scales down to the configured minimum number of execution environments without traffic | 
| Best suited for | Functions with bursty traffic that can handle some cold-start time, or applications without sustained load that benefit from scale to zero | High volume predictable traffic functions when you want the flexibility, pricing plans, and hardware options of EC2 | 

## Next steps
<a name="lambda-managed-instances-next-steps"></a>
+ Learn about [capacity providers for Lambda Managed Instances](lambda-managed-instances-capacity-providers.md)
+ Understand [scaling for Lambda Managed Instances](lambda-managed-instances-scaling.md)
+ Review runtime-specific guides for [Java](lambda-managed-instances-java-runtime.md), [Node.js](lambda-managed-instances-nodejs-runtime.md), and [Python](lambda-managed-instances-python-runtime.md)
+ Configure [VPC connectivity for your capacity providers](lambda-managed-instances-networking.md)
+ Understand [security and permissions for Lambda Managed Instances](lambda-managed-instances-security.md)

# Getting started with Lambda Managed Instances
<a name="lambda-managed-instances-getting-started"></a>

## Creating a Lambda Managed Instance function (console)
<a name="lambda-managed-instances-getting-started-console"></a>

You can use the Lambda console to create a Managed Instance function that runs on Amazon EC2 instances managed by a capacity provider.

**Important:** Before creating a Managed Instance function, you must first create a capacity provider. These functions require a capacity provider to define the Amazon EC2 infrastructure that will run your functions.

**To create a Lambda Managed Instance function (console)**

1. Open the Lambda console.

1. Choose **Capacity providers** from the left navigation pane.

1. Choose **Create capacity provider**.

1. In the **Capacity provider settings** section, enter a name for your capacity provider.

1. Select a VPC and permissions for your capacity provider. You can use an existing role or create a new one. For information about creating the required operator role, see [Lambda Operator role for Lambda Managed Instances](lambda-managed-instances-operator-role.md).

1. Expand **Advanced settings**.

1. Define your **Instance requirements** by choosing the processor architecture and instance types.

1. Under **Auto scaling**, specify the maximum number of EC2 vCPUs for your capacity provider. You can also choose **Manual instance scaling mode** to set your own scaling value for precise control.

1. Choose **Create capacity provider** to create a new one.

1. Next, choose **Create function**.

1. Select **Author from scratch**.

1. In the **Basic information** pane, provide a **Function name**.

1. For **Runtime**, choose any of the supported runtimes.

1. Choose the **Architecture** for your function (the same one you selected for the capacity provider). The default is **x86\_64**.

1. Under **Permissions**, ensure that you have permission to use the chosen **Execution role**. Otherwise, create a new role.

1. Under **Additional configurations**, choose **Lambda Managed Instances** as the **Compute type**.

1. The **Capacity provider ARN** of the capacity provider you created in the previous steps should be preselected.

1. Choose **Memory size** and **Execution environment memory (GiB) per vCPU ratio**.

1. Choose **Create function**.

Your Lambda Managed Instance function is created and will provision capacity on your specified capacity provider. Function creation typically takes several minutes. Once complete, you can edit your function code and run your first test.

## Creating a Lambda Managed Instance function (AWS CLI)
<a name="lambda-managed-instances-getting-started-cli"></a>

### Prerequisites
<a name="lambda-managed-instances-prerequisites"></a>

Before you begin, make sure you have the following:
+ **AWS CLI** – Install and configure the AWS CLI. For more information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ **IAM permissions** – Your IAM user or role must have permissions to create Lambda functions, create capacity providers, and pass IAM roles. You also need `iam:CreateServiceLinkedRole` if this is the first capacity provider created in the account or if the service-linked role (SLR) was deleted.

### Step 1: Create the required IAM roles
<a name="lambda-managed-instances-step1-iam"></a>

Lambda Managed Instances require two IAM roles: an execution role for your function and an operator role for the capacity provider. The operator role allows Lambda to launch, terminate, and monitor Amazon EC2 instances on your behalf. The function execution role grants the function permissions to access other AWS services and resources.

**To create the Lambda execution role**

1. Create a trust policy document that allows Lambda to assume the role:

   ```
   cat > lambda-trust-policy.json << 'EOF'
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Service": "lambda.amazonaws.com"
         },
         "Action": "sts:AssumeRole"
       }
     ]
   }
   EOF
   ```

1. Create the execution role:

   ```
   aws iam create-role \
     --role-name MyLambdaExecutionRole \
     --assume-role-policy-document file://lambda-trust-policy.json
   ```

1. Attach the basic execution policy:

   ```
   aws iam attach-role-policy \
     --role-name MyLambdaExecutionRole \
     --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
   ```

**To create the capacity provider operator role**

1. Create a trust policy document that allows Lambda to assume the operator role:

   ```
   cat > operator-trust-policy.json << 'EOF'
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Service": "lambda.amazonaws.com"
         },
         "Action": "sts:AssumeRole"
       }
     ]
   }
   EOF
   ```

1. Create the operator role:

   ```
   aws iam create-role \
     --role-name MyCapacityProviderOperatorRole \
     --assume-role-policy-document file://operator-trust-policy.json
   ```

1. Attach the required EC2 permissions policy:

   ```
   aws iam attach-role-policy \
     --role-name MyCapacityProviderOperatorRole \
     --policy-arn arn:aws:iam::aws:policy/AWSLambdaManagedEC2ResourceOperator
   ```

### Step 2: Set up VPC resources
<a name="lambda-managed-instances-step2-vpc"></a>

Lambda Managed Instances run in your VPC and require a subnet and security group.

**To create VPC resources**

1. Create a VPC:

   ```
   VPC_ID=$(aws ec2 create-vpc \
     --cidr-block 10.0.0.0/16 \
     --query 'Vpc.VpcId' \
     --output text)
   ```

1. Create a subnet:

   ```
   SUBNET_ID=$(aws ec2 create-subnet \
     --vpc-id $VPC_ID \
     --cidr-block 10.0.1.0/24 \
     --query 'Subnet.SubnetId' \
     --output text)
   ```

1. Create a security group:

   ```
   SECURITY_GROUP_ID=$(aws ec2 create-security-group \
     --group-name my-capacity-provider-sg \
     --description "Security group for Lambda Managed Instances" \
     --vpc-id $VPC_ID \
     --query 'GroupId' \
     --output text)
   ```

**Note:** Your Lambda Managed Instances functions require VPC network configuration (such as a NAT gateway or VPC endpoints) to access resources outside the VPC and to transmit telemetry data to CloudWatch Logs and X-Ray. For configuration details, see [Networking for Lambda Managed Instances](lambda-managed-instances-networking.md).

### Step 3: Create a capacity provider
<a name="lambda-managed-instances-step3-capacity-provider"></a>

A capacity provider manages the EC2 instances that run your Lambda functions.

**To create a capacity provider**

```
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

aws lambda create-capacity-provider \
  --capacity-provider-name my-capacity-provider \
  --vpc-config SubnetIds=[$SUBNET_ID],SecurityGroupIds=[$SECURITY_GROUP_ID] \
  --permissions-config CapacityProviderOperatorRoleArn=arn:aws:iam::${ACCOUNT_ID}:role/MyCapacityProviderOperatorRole \
  --instance-requirements Architectures=[x86_64] \
  --capacity-provider-scaling-config MaxVCpuCount=30
```

This command creates a capacity provider with the following configuration:
+ **VPC configuration** – Specifies the subnet and security group for the EC2 instances
+ **Permissions** – Defines the IAM role that Lambda uses to manage EC2 instances
+ **Instance requirements** – Specifies the x86\_64 architecture
+ **Scaling configuration** – Sets a maximum of 30 vCPUs for the capacity provider

### Step 4: Create a Lambda function with inline code
<a name="lambda-managed-instances-step4-function"></a>

**To create a function with inline code**

1. First, create a simple Python function and package it inline:

   ```
   # Create a temporary directory for the function code
   mkdir -p /tmp/my-lambda-function
   cd /tmp/my-lambda-function
   
   # Create a simple Python handler
   cat > lambda_function.py << 'EOF'
   import json
   
   def lambda_handler(event, context):
       return {
           'statusCode': 200,
           'body': json.dumps({
               'message': 'Hello from Lambda Managed Instances!',
               'event': event
           })
       }
   EOF
   
   # Create a ZIP file
   zip function.zip lambda_function.py
   ```

1. Create the Lambda function using the inline ZIP file:

   ```
   ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
   REGION=$(aws configure get region)
   
   aws lambda create-function \
     --function-name my-managed-instance-function \
     --package-type Zip \
     --runtime python3.13 \
     --handler lambda_function.lambda_handler \
     --zip-file fileb:///tmp/my-lambda-function/function.zip \
     --role arn:aws:iam::${ACCOUNT_ID}:role/MyLambdaExecutionRole \
     --architectures x86_64 \
     --memory-size 2048 \
     --ephemeral-storage Size=512 \
     --capacity-provider-config LambdaManagedInstancesCapacityProviderConfig={CapacityProviderArn=arn:aws:lambda:${REGION}:${ACCOUNT_ID}:capacity-provider:my-capacity-provider}
   ```

   The function is created with:
   + **Runtime** – Python 3.13
   + **Handler** – The `lambda_handler` function in `lambda_function.py`
   + **Memory** – 2048 MB
   + **Ephemeral storage** – 512 MB
   + **Capacity provider** – Links to the capacity provider you created

### Step 5: Publish a function version
<a name="lambda-managed-instances-step5-publish"></a>

To run your function on Lambda Managed Instances, you must publish a version.

**To publish a function version**

```
aws lambda publish-version \
  --function-name my-managed-instance-function
```

This command publishes version 1 of your function and deploys it to the capacity provider.

### Step 6: Invoke your function
<a name="lambda-managed-instances-step6-invoke"></a>

After publishing, you can invoke your function.

**To invoke your function**

```
aws lambda invoke \
  --function-name my-managed-instance-function:1 \
  --payload '{"name": "World"}' \
  response.json

# View the response
cat response.json
```

The function runs on the EC2 instances managed by your capacity provider and returns a response.
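You can also sanity-check the handler logic locally before or after invoking it in the cloud. This repeats the handler from Step 4 and calls it directly with the same test payload:

```python
import json

# Same handler as in Step 4 (lambda_function.py).
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': 'Hello from Lambda Managed Instances!',
            'event': event
        })
    }

# Call the handler directly with the payload used in the invoke command.
result = lambda_handler({"name": "World"}, None)
body = json.loads(result["body"])
print(result["statusCode"], body["message"])
# 200 Hello from Lambda Managed Instances!
```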

### Clean up
<a name="lambda-managed-instances-cleanup"></a>

To avoid incurring charges, delete the resources you created:

1. Delete the function:

   ```
   aws lambda delete-function --function-name my-managed-instance-function
   ```

1. Delete the capacity provider:

   ```
   aws lambda delete-capacity-provider --capacity-provider-name my-capacity-provider
   ```

1. Delete the VPC resources:

   ```
   aws ec2 delete-security-group --group-id $SECURITY_GROUP_ID
   aws ec2 delete-subnet --subnet-id $SUBNET_ID
   aws ec2 delete-vpc --vpc-id $VPC_ID
   ```

1. Delete the IAM roles:

   ```
   aws iam detach-role-policy \
     --role-name MyLambdaExecutionRole \
     --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
   aws iam detach-role-policy \
     --role-name MyCapacityProviderOperatorRole \
     --policy-arn arn:aws:iam::aws:policy/AWSLambdaManagedEC2ResourceOperator
   
   aws iam delete-role --role-name MyLambdaExecutionRole
   aws iam delete-role --role-name MyCapacityProviderOperatorRole
   ```

# Core concepts
<a name="lambda-managed-instances-core-concepts"></a>

Lambda Managed Instances introduces several core concepts that differ from traditional Lambda functions. Understanding these concepts is essential for effectively deploying and managing your functions on EC2 infrastructure.

**Capacity providers** form the foundation of Lambda Managed Instances. A capacity provider defines the compute infrastructure where your functions execute, including VPC configuration, instance requirements, and scaling policies. Capacity providers also serve as the security boundary for your functions, meaning all functions assigned to the same capacity provider must be mutually trusted.

**Scaling behavior** differs significantly from traditional Lambda functions. Instead of scaling on-demand when invocations arrive, Managed Instances scale asynchronously based on CPU resource utilization. This approach eliminates cold starts but requires planning for traffic growth. If your traffic more than doubles within 5 minutes, you may experience throttles as Lambda scales up capacity to meet demand.

**Security and permissions** require careful consideration. You need operator role permissions to allow Lambda to manage EC2 resources in your capacity providers. Additionally, users need the `lambda:PassCapacityProvider` permission to assign functions to capacity providers, acting as a security gate to control which functions can run on specific infrastructure.

**Multi-concurrent execution** is a key characteristic of Managed Instances. Each execution environment can handle multiple invocations simultaneously, maximizing resource utilization for IO-heavy applications. This differs from traditional Lambda where each environment processes one request at a time. This execution model requires attention to thread safety, state management, and context isolation depending on your runtime.

The following sections provide detailed information about each core concept.

# Capacity providers
<a name="lambda-managed-instances-capacity-providers"></a>

A capacity provider is the foundation for running Lambda Managed Instances. It acts as the security boundary for your functions and defines the compute resources that Lambda will provision and manage on your behalf.

When you create a capacity provider, you specify:
+ **VPC configuration** - The subnets and security groups where instances will run
+ **Permissions** - IAM roles for Lambda to manage EC2 resources
+ **Instance requirements** (optional) - Architecture and [instance type](https://aws.amazon.com/lambda/pricing/) preferences
+ **Scaling configuration** (optional) - How Lambda scales your instances

## Understanding capacity providers as security boundary
<a name="lambda-managed-instances-capacity-provider-security-boundary"></a>

Capacity providers serve as the security boundary for Lambda functions within your VPC, replacing Firecracker-based isolation. Functions execute in containers within instances, but containers do not provide strong security isolation between functions, unlike Firecracker microVMs.

**Key security concepts:**
+ **Capacity Provider:** The security boundary that defines trust levels for Lambda functions
+ **Container Isolation:** Containers are NOT a security provider - do not rely on them for security between untrusted workloads
+ **Trust Separation:** Separate workloads that are not mutually trusted by using different capacity providers

## Creating a capacity provider
<a name="lambda-managed-instances-creating-capacity-provider"></a>

You can create a capacity provider using the AWS CLI, AWS Management Console, or AWS SDKs.

**Using AWS CLI:**

```
aws lambda create-capacity-provider \
  --capacity-provider-name my-capacity-provider \
  --vpc-config SubnetIds=[subnet-12345,subnet-67890,subnet-11111],SecurityGroupIds=[sg-12345] \
  --permissions-config CapacityProviderOperatorRoleArn=arn:aws:iam::123456789012:role/MyOperatorRole \
  --instance-requirements Architectures=[x86_64] \
  --capacity-provider-scaling-config ScalingMode=Auto
```

### Required parameters
<a name="lambda-managed-instances-capacity-provider-required-params"></a>

**CapacityProviderName**
+ A unique name for your capacity provider
+ Must be unique within your AWS account

**VpcConfig**
+ **SubnetIds** (required): At least one subnet, maximum of 16. Use subnets across multiple Availability Zones for resiliency
+ **SecurityGroupIds** (optional): Security groups for your instances. Defaults to the VPC default security group if not specified

**PermissionsConfig**
+ **CapacityProviderOperatorRoleArn** (required): IAM role that allows Lambda to manage EC2 resources in your capacity provider

### Optional parameters
<a name="lambda-managed-instances-capacity-provider-optional-params"></a>

**InstanceRequirements**

Specify the architecture and [instance types](https://aws.amazon.com/lambda/pricing/) for your capacity provider:
+ **Architectures**: Choose `x86_64` or `arm64`. Default is `x86_64`
+ **AllowedInstanceTypes**: Specify allowed instance types. Example: `m5.8xlarge`
+ **ExcludedInstanceTypes**: Specify excluded instance types using wildcards. You can specify only one of `AllowedInstanceTypes` or `ExcludedInstanceTypes`

By default, Lambda chooses optimal instance types for your workload. We recommend letting Lambda Managed Instances choose instance types for you, as restricting the number of possible instance types may result in lower availability.

**CapacityProviderScalingConfig**

Configure how Lambda scales your instances:
+ **ScalingMode**: Set to `Auto` for automatic scaling or `Manual` for manual control. Default is `Auto`
+ **MaxVCpuCount**: Maximum number of vCPUs for the capacity provider. Default is 400.
+ **ScalingPolicies**: Define target tracking scaling policies for CPU and memory utilization

**KmsKeyArn**

Specify an AWS KMS key for Amazon EBS encryption. Defaults to an AWS managed key if not specified.

**Tags**

Add tags to organize and manage your capacity providers.

## Managing capacity providers
<a name="lambda-managed-instances-managing-capacity-providers"></a>

### Updating a capacity provider
<a name="lambda-managed-instances-updating-capacity-provider"></a>

You can update certain properties of a capacity provider using the `UpdateCapacityProvider` API.

```
aws lambda update-capacity-provider \
  --capacity-provider-name my-capacity-provider \
  --capacity-provider-scaling-config ScalingMode=Auto
```

### Deleting a capacity provider
<a name="lambda-managed-instances-deleting-capacity-provider"></a>

You can delete a capacity provider when it's no longer needed using the `DeleteCapacityProvider` API.

```
aws lambda delete-capacity-provider \
  --capacity-provider-name my-capacity-provider
```

**Note:** You cannot delete a capacity provider that has function versions attached to it.

### Viewing capacity provider details
<a name="lambda-managed-instances-viewing-capacity-provider"></a>

Retrieve information about a capacity provider using the `GetCapacityProvider` API.

```
aws lambda get-capacity-provider \
  --capacity-provider-name my-capacity-provider
```

## Capacity provider states
<a name="lambda-managed-instances-capacity-provider-states"></a>

A capacity provider can be in one of the following states:
+ **Pending**: The capacity provider is being created
+ **Active**: The capacity provider is ready to use
+ **Failed**: The capacity provider creation failed
+ **Deleting**: The capacity provider is being deleted

## Quotas
<a name="lambda-managed-instances-capacity-provider-quotas"></a>
+ **Maximum capacity providers per account**: 1,000
+ **Maximum function versions per capacity provider**: 100 (cannot be increased)

## Best practices
<a name="lambda-managed-instances-capacity-provider-best-practices"></a>

1. **Separate by trust level**: Create different capacity providers for workloads with different security requirements

1. **Use descriptive names**: Name capacity providers to clearly indicate their intended use and trust level (e.g., `production-trusted`, `dev-sandbox`)

1. **Use multiple Availability Zones**: Specify subnets across multiple AZs for high availability

1. **Let Lambda choose instance types**: Unless you have specific hardware requirements, allow Lambda to select optimal instance types for availability

1. **Monitor usage**: Use AWS CloudTrail to monitor capacity provider assignments and access patterns

## Next steps
<a name="lambda-managed-instances-capacity-provider-next-steps"></a>
+ Learn about [scaling Lambda Managed Instances](lambda-managed-instances-scaling.md)
+ Understand [security and permissions for Lambda Managed Instances](lambda-managed-instances-security.md)
+ Configure [VPC connectivity for your capacity providers](lambda-managed-instances-networking.md)
+ Review runtime-specific guides for [Java](lambda-managed-instances-java-runtime.md), [Node.js](lambda-managed-instances-nodejs-runtime.md), and [Python](lambda-managed-instances-python-runtime.md)

# Scaling Lambda Managed Instances
<a name="lambda-managed-instances-scaling"></a>

Lambda Managed Instances does not scale synchronously when invocations arrive, so there are no cold starts. Instead, it scales asynchronously using resource consumption signals. Managed Instances currently scales based on CPU resource utilization and multi-concurrency saturation.

**Key differences:**
+ **Lambda (default):** Scales when there is no free execution environment to handle an incoming invocation (cold start)
+ **Lambda Managed Instances:** Scales asynchronously based on CPU resource utilization and multi-concurrency saturation of execution environments

If your traffic more than doubles within 5 minutes, you may see throttles as Lambda scales up instances and execution environments to meet demand.

## The scaling lifecycle
<a name="lambda-managed-instances-scaling-lifecycle"></a>

Lambda Managed Instances uses a distributed architecture to manage scaling:

**Components:**
+ **Managed Instances** - Run in your account in the subnets you provide
+ **Router and Scaler** - Shared Lambda components that route invocations and manage scaling
+ **Lambda Agent** - Runs on each Managed Instance to manage execution environment lifecycle and monitor resource consumption

**How it works:**

1. When you publish a function version with a capacity provider, Lambda launches Managed Instances in your account. It launches three by default for AZ resiliency and starts three execution environments before marking your function version ACTIVE.

1. Each Managed Instance can run execution environments for multiple functions mapped to the same capacity provider.

1. As traffic flows into your application, execution environments consume resources. The Lambda Agent notifies the Scaler, which decides whether to scale up execution environments or launch additional Managed Instances.

1. If the Router attempts to send an invocation to an execution environment with high resource consumption, the Lambda Agent on that instance notifies the Router to retry on another execution environment.

1. As traffic decreases, the Lambda Agent notifies the Scaler, which scales down execution environments and scales in Managed Instances.

## Adjusting scaling behavior
<a name="lambda-managed-instances-adjusting-scaling"></a>

You can customize the scaling behavior of Managed Instances through four controls:

### Function level controls
<a name="lambda-managed-instances-function-level-controls"></a>

#### 1. Function memory and vCPUs
<a name="lambda-managed-instances-function-memory-vcpus"></a>

Choose the memory size and vCPU allocation for your function. The smallest supported function size is 2 GB and 1 vCPU.

**Considerations:**
+ Pick a memory and vCPU setting that supports multiple concurrent executions of your function
+ You cannot configure a function with less than 1 vCPU because functions running on Managed Instances are expected to support multi-concurrent workloads
+ You cannot choose less than 2 GB because this matches the 2:1 memory-to-vCPU ratio of c instances, which have the lowest ratio
+ For Python applications, you may need to choose a higher memory-to-vCPU ratio, such as 4:1 or 8:1, because the Python runtime handles concurrent requests in separate processes
+ If your function runs CPU-intensive operations or performs little I/O, choose more than one vCPU
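As a rough illustration of these sizing rules, the following sketch (the helper name `min_memory_gb` is hypothetical, not a Lambda API) computes the smallest memory allocation for a given vCPU count and ratio:

```python
def min_memory_gb(vcpus: int, memory_per_vcpu_gb: int = 2) -> int:
    """Smallest memory allocation for a vCPU count at a given memory:vCPU ratio.

    2 GB per vCPU is the smallest supported ratio; Python workloads may
    warrant passing 4 or 8 instead.
    """
    if vcpus < 1:
        raise ValueError("Managed Instances functions require at least 1 vCPU")
    return vcpus * memory_per_vcpu_gb
```

For example, a 2-vCPU function needs at least 4 GB of memory, while a Python function sized at an 8:1 ratio with 2 vCPUs would use 16 GB.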

#### 2. Maximum concurrency
<a name="lambda-managed-instances-maximum-concurrency"></a>

Set the maximum concurrency per execution environment.

**Default behavior:** Lambda chooses sensible defaults that balance resource consumption and throughput for a wide variety of applications.

**Adjustment guidelines:**
+ **Increase concurrency:** If your function invocations use very little CPU, you can increase maximum concurrency, up to 64 concurrent requests per vCPU
+ **Decrease concurrency:** If your application consumes a large amount of memory and very little CPU, you can reduce your maximum concurrency

**Important:** Since Lambda Managed Instances are meant for multi-concurrent applications, execution environments with very low concurrency may experience throttles when scaling.

### Capacity provider level controls
<a name="lambda-managed-instances-capacity-provider-level-controls"></a>

#### 3. Target resource utilization
<a name="lambda-managed-instances-target-resource-utilization"></a>

Choose your own target for CPU utilization.

**Default behavior:** Lambda maintains enough headroom for your traffic to double within 5 minutes without throttles.

**Optimization options:**
+ If your workload is very steady or if your application is not sensitive to throttles, you may set the target to a high level to achieve higher utilization and lower costs
+ If you want to maintain headroom for bursts of traffic, you can set resource targets to a low level, which will require more capacity
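To put the tradeoff in numbers, here is a back-of-the-envelope sketch (the helper name and the capacity figures are illustrative assumptions, not Lambda APIs): a lower utilization target keeps more instances running as headroom.

```python
import math

def instances_needed(concurrent_requests: int,
                     per_instance_capacity: int,
                     target_utilization: float) -> int:
    # Capacity usable on each instance once the utilization target is applied
    usable = per_instance_capacity * target_utilization
    return math.ceil(concurrent_requests / usable)

# A high target runs fewer instances at higher utilization (lower cost),
# while a low target reserves headroom for bursts (more capacity).
high_target = instances_needed(1000, 128, 0.8)  # 10 instances
low_target = instances_needed(1000, 128, 0.4)   # 20 instances
```

Halving the utilization target in this sketch doubles the capacity kept running, which is the cost side of the headroom tradeoff described above.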

#### 4. Instance type selection
<a name="lambda-managed-instances-instance-type-selection"></a>

Set allowed or excluded instance types.

**Default behavior:** Lambda chooses the best instance types for your workload. We recommend letting Lambda Managed Instances choose instance types for you, as restricting the number of possible instance types may result in lower availability.

**Custom configuration:**
+ **Specific hardware requirements:** Set allowed instance types to a list of compatible instances. For example, if your application requires high network bandwidth, you can allow several network-optimized (n) instance types
+ **Cost optimization:** For testing or development environments, you might choose smaller instance types, such as m7a.large

## Next steps
<a name="lambda-managed-instances-scaling-next-steps"></a>
+ Learn about [capacity providers for Lambda Managed Instances](lambda-managed-instances-capacity-providers.md)
+ Review runtime-specific guides for handling multi-concurrency
+ Configure [VPC connectivity for your capacity providers](lambda-managed-instances-networking.md)
+ Monitor scaling metrics to optimize scaling behavior

# Security and permissions
<a name="lambda-managed-instances-security"></a>

Lambda Managed Instances use **capacity providers as trust boundaries**. Functions execute in containers within these instances, but containers do not provide security isolation between workloads. All functions assigned to the same capacity provider must be mutually trusted.

## Key Security Concepts
<a name="lambda-managed-instances-key-security-concepts"></a>
+ **Capacity Provider**: The security boundary that defines trust levels for Lambda functions
+ **Container Isolation**: Containers are not a security boundary - do not rely on them for security between untrusted workloads
+ **Trust Separation**: Separate workloads that are not mutually trusted by using different capacity providers

## Required Permissions
<a name="lambda-managed-instances-required-permissions"></a>

### PassCapacityProvider Action
<a name="lambda-managed-instances-pass-capacity-provider"></a>

Users need the `lambda:PassCapacityProvider` permission to assign functions to capacity providers. This permission acts as a security gate, ensuring only authorized users can place functions in specific capacity providers.

Account administrators control which functions can use specific capacity providers through the `lambda:PassCapacityProvider` IAM action. This action is required when:
+ Creating functions that use Lambda Managed Instances
+ Updating function configurations to use a capacity provider
+ Deploying functions via infrastructure as code

**Example IAM Policy**

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:PassCapacityProvider",
      "Resource": "arn:aws:lambda:*:*:capacity-provider:trusted-workloads-*"
    }
  ]
}
```

### Service-Linked Role
<a name="lambda-managed-instances-service-linked-role"></a>

AWS Lambda uses the `AWSServiceRoleForLambda` service-linked role to manage the EC2 resources for Lambda Managed Instances in your capacity providers.

## Best Practices
<a name="lambda-managed-instances-security-best-practices"></a>

1. **Separate by Trust Level**: Create different capacity providers for workloads with different security requirements

1. **Use Descriptive Names**: Name capacity providers to clearly indicate their intended use and trust level (e.g., `production-trusted`, `dev-sandbox`)

1. **Apply Least Privilege**: Grant `PassCapacityProvider` permissions only for necessary capacity providers

1. **Monitor Usage**: Use AWS CloudTrail to monitor capacity provider assignments and access patterns

## Next steps
<a name="lambda-managed-instances-security-next-steps"></a>
+ Learn about [capacity providers for Lambda Managed Instances](lambda-managed-instances-capacity-providers.md)
+ Understand [scaling for Lambda Managed Instances](lambda-managed-instances-scaling.md)
+ Configure [VPC connectivity for your capacity providers](lambda-managed-instances-networking.md)
+ Review runtime-specific guides for [Java](lambda-managed-instances-java-runtime.md), [Node.js](lambda-managed-instances-nodejs-runtime.md), and [Python](lambda-managed-instances-python-runtime.md)

# Lambda operator role for Lambda Managed Instances
<a name="lambda-managed-instances-operator-role"></a>

When you use Lambda Managed Instances, Lambda needs permissions to manage compute capacity in your account. The operator role provides these permissions through IAM policies that allow Lambda to manage EC2 instances in the capacity provider.

Lambda assumes the operator role when performing these management operations, similar to how Lambda assumes an execution role when your function runs.

## Creating an operator role
<a name="lambda-managed-instances-creating-operator-role"></a>

You can create an operator role in the IAM console or with the AWS CLI. The role must include:
+ **Permissions policy** – Grants permissions to manage capacity providers and associated resources
+ **Trust policy** – Allows the Lambda service (`lambda.amazonaws.com`) to assume the role

### Permissions policy
<a name="lambda-managed-instances-operator-role-permissions-policy"></a>

The operator role needs permissions to manage capacity providers and the underlying compute resources. At minimum, the role requires the permissions in the [AWSLambdaManagedEC2ResourceOperator](https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAWSLambdaManagedEC2ResourceOperator) managed policy, currently:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:CreateTags",
        "ec2:AttachNetworkInterface"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:network-interface/*",
        "arn:aws:ec2:*:*:volume/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ManagedResourceOperator": "scaler.lambda.amazonaws.com"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeCapacityReservations",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeInstanceTypeOfferings",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:CreateNetworkInterface"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:subnet/*",
        "arn:aws:ec2:*:*:security-group/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:image/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:Owner": "amazon"
        }
      }
    }
  ]
}
```

### Trust policy
<a name="lambda-managed-instances-operator-role-trust-policy"></a>

The trust policy allows Lambda to assume the operator role:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

## Service-Linked Role for Lambda Managed Instances
<a name="lambda-managed-instances-service-linked-role-for-lmi"></a>

To responsibly manage the lifecycle of Lambda Managed Instances, Lambda requires persistent access to terminate managed instances in your account. Lambda uses an AWS Identity and Access Management (IAM) service-linked role (SLR) to perform these operations.

**Automatic creation**: The service-linked role is automatically created the first time you create a capacity provider. The user creating the first capacity provider must have the `iam:CreateServiceLinkedRole` permission for the `lambda.amazonaws.com` principal.

**Permissions**: The service-linked role grants Lambda the following permissions on managed instances:
+ `ec2:TerminateInstances` – To terminate instances at the end of their lifecycle
+ `ec2:DescribeInstances` – To enumerate managed instances

**Deletion**: You can only delete this service-linked role after you have deleted all Lambda Managed Instances capacity providers in your account.

For more information about service-linked roles, see [Using service-linked roles for Lambda](using-service-linked-roles.md).

# Understanding the Lambda Managed Instances execution environment
<a name="lambda-managed-instances-execution-environment"></a>

Lambda Managed Instances provide an alternative deployment model that runs your function code on customer-owned Amazon EC2 instances while Lambda manages the operational aspects. The execution environment for Managed Instances has several important differences from Lambda (default) functions, particularly in how it handles concurrent invocations and manages container lifecycles.

**Note:** For information about the Lambda (default) execution environment, see [Understanding the Lambda execution environment lifecycle](lambda-runtime-environment.md).

## Execution environment lifecycle
<a name="lambda-managed-instances-execution-lifecycle"></a>

The lifecycle of a Lambda Managed Instances function execution environment differs from Lambda (default) in several key ways:

### Init phase
<a name="lambda-managed-instances-init-phase"></a>

During the Init phase, Lambda performs the following steps:
+ Initialize and register all extensions
+ Bootstrap the runtime entry point. The runtime spawns the configured number of runtime workers (the implementation depends on the runtime)
+ Run function initialization code (code outside the handler)
+ Wait for at least one runtime worker to signal readiness by calling `/runtime/invocation/next`

The Init phase is considered complete when extensions have initialized and at least one runtime worker has called `/runtime/invocation/next`. The function is then ready to process invocations.

**Note**  
For Lambda Managed Instances functions, initialization can take up to 15 minutes. The timeout is the maximum of 130 seconds or the configured function timeout (up to 900 seconds).

### Invoke phase
<a name="lambda-managed-instances-invoke-phase"></a>

The Invoke phase for Lambda Managed Instances functions has several unique characteristics:

**Continuous operation.** Unlike Lambda (default), the execution environment remains continuously active, processing invocations as they arrive without freezing between invocations.

**Parallel processing.** Multiple invocations can execute simultaneously within the same execution environment, each handled by a different runtime worker.

**Independent timeouts.** The function's configured timeout applies to each individual invocation. When an invocation times out, Lambda marks that specific invocation as failed but does not interrupt other running invocations or terminate the execution environment.

**Backpressure handling.** If all runtime workers are busy processing invocations, new invocation requests are rejected until a worker becomes available.

## Error handling and recovery
<a name="lambda-managed-instances-error-handling"></a>

Error handling in Lambda Managed Instances function execution environments differs from Lambda (default):

**Invoke timeouts.** When an individual invocation times out, Lambda returns a timeout error for that invocation. However, Lambda Managed Instances does not enforce the timeout—your code will keep running. As a function developer, you are responsible for detecting and handling the timeout. The context object exposes the remaining time for the invocation, with a zero or negative value indicating a timeout. Other concurrent invocations in the execution environment continue processing normally.
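Because Lambda Managed Instances leave timeout handling to your code, a handler that works through a batch can check the remaining time itself. The following Python sketch is illustrative (the event shape, the `TIME_BUFFER_MS` threshold, and the per-item work are assumptions); `context.get_remaining_time_in_millis()` is the standard context method:

```python
TIME_BUFFER_MS = 500  # stop early rather than run past the deadline (illustrative)

def handler(event, context):
    processed = 0
    for item in event.get("items", []):
        remaining = context.get_remaining_time_in_millis()
        if remaining <= 0:
            # Zero or negative: this invocation has already been marked as
            # timed out, so stop doing work for it.
            return {"status": "timed_out", "processed": processed}
        if remaining <= TIME_BUFFER_MS:
            # About to time out: stop cleanly and report partial progress
            return {"status": "partial", "processed": processed}
        processed += 1  # stand-in for real work on `item`
    return {"status": "ok", "processed": processed}
```

Other concurrent invocations in the same execution environment are unaffected by this early return.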

**Runtime worker failures.** If a runtime worker process crashes, the execution environment continues operating with the remaining healthy workers.

**Extension crashes.** If an extension process crashes during initialization or operation, the entire execution environment is marked as unhealthy and is terminated. Lambda creates a new execution environment to replace it.

**No reset/repair.** Unlike Lambda (default), Managed Instances do not attempt to reset and reinitialize the execution environment after errors. Instead, unhealthy containers are terminated and replaced with new ones.

# \$LATEST.PUBLISHED version in Lambda Managed Instances
<a name="lambda-managed-instances-version-publishing"></a>

Lambda Managed Instances functions support the same numbered versioning workflow as Lambda (default). If you prefer not to maintain numbered versions, Lambda Managed Instances introduces a new version type: `$LATEST.PUBLISHED`. This version lets you create or republish the latest published version as needed with updated code or configuration, without managing numbered versions.

**Key difference from \$LATEST:** When you invoke a Lambda Managed Instances function using an unqualified ARN, Lambda implicitly invokes the `$LATEST.PUBLISHED` version rather than the unpublished \$LATEST version.

The following AWS CLI command creates or republishes the `$LATEST.PUBLISHED` version.

```
aws lambda publish-version --function-name my-function --publish-to LATEST_PUBLISHED
```

You should see the following output:

```
{
  "FunctionName": "my-function",
  "FunctionArn": "arn:aws:lambda:us-east-2:123456789012:function:my-function:$LATEST.PUBLISHED",
  "Version": "$LATEST.PUBLISHED",
  "Role": "arn:aws:iam::123456789012:role/lambda-role",
  "Handler": "function.handler",
  "Runtime": "nodejs24.x",
  ...
}
```

**Note**  
If you use AWS CloudFormation or the Lambda console to create a Lambda Managed Instances function, Lambda automatically creates the `$LATEST.PUBLISHED` version.

# Lambda Managed Instances runtimes
<a name="lambda-managed-instances-runtimes"></a>

Lambda processes requests differently when using Lambda Managed Instances. Instead of handling requests sequentially in each execution environment, Lambda Managed Instances process multiple requests concurrently within each execution environment. This change in execution model means that functions using Lambda Managed Instances need to consider thread safety, state management, and context isolation, concerns which do not arise in the Lambda (default) single-concurrency model. In addition, the multi-concurrency implementation varies between runtimes.

## Supported languages
<a name="lambda-managed-instances-supported-runtimes"></a>

Lambda Managed Instances can be used with the following programming languages and runtimes:
+ **Java:** Java 21 and later.
+ **Python:** Python 3.13 and later.
+ **Node.js:** Node.js 22 and later.
+ **.NET:** .NET 8 and later.
+ **Rust:** Supported using the OS-only runtime `provided.al2023` and later.

## Language-specific considerations
<a name="lambda-managed-instances-runtime-considerations"></a>

Each programming language implements multi-concurrency differently. You need to understand how multi-concurrency is implemented in your chosen programming language to apply the appropriate concurrency best practices.

**Java**

Uses a single process with OS threads for concurrency. Multiple threads execute the handler method simultaneously, requiring thread-safe handling of state and shared resources.

**Python**

Uses multiple Python processes where each concurrent request runs in a separate process. This protects against most concurrency issues, though care is required for shared resources such as the `/tmp` directory.

**Node.js**

Uses [worker threads](https://nodejs.org/api/worker_threads.html) with asynchronous execution. Concurrent requests are distributed across worker threads, and each worker thread can also handle concurrent requests asynchronously, requiring safe handling of state and shared resources.

**.NET**

Uses .NET Tasks with asynchronous processing of multiple concurrent requests. Requires safe handling of state and shared resources.

**Rust**

Uses a single process with async tasks powered by [Tokio](https://tokio.rs/). The handler must be `Clone` and `Send`.
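As a concrete instance of the shared-resource caution above (using the Python runtime as the example), writing to `/tmp` with a unique file name per request avoids collisions between concurrent requests; `write_scratch_file` is a hypothetical helper, not a Lambda API:

```python
import os
import uuid

def write_scratch_file(data: bytes, base_dir: str = "/tmp") -> str:
    # A unique name per request prevents the separate worker processes
    # (and concurrent requests) from overwriting each other's files.
    path = os.path.join(base_dir, f"scratch-{uuid.uuid4().hex}.bin")
    with open(path, "wb") as f:
        f.write(data)
    return path  # delete the file when done to avoid exhausting /tmp space
```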

## Next steps
<a name="lambda-managed-instances-runtime-next-steps"></a>

For detailed information about each runtime, see the following topics:
+ [Java runtime for Lambda Managed Instances](lambda-managed-instances-java-runtime.md)
+ [Node.js runtime for Lambda Managed Instances](lambda-managed-instances-nodejs-runtime.md)
+ [Python runtime for Lambda Managed Instances](lambda-managed-instances-python-runtime.md)
+ [.NET runtime for Lambda Managed Instances](lambda-managed-instances-dotnet-runtime.md)
+ [Rust support for Lambda Managed Instances](lambda-managed-instances-rust.md)

# Java runtime for Lambda Managed Instances
<a name="lambda-managed-instances-java-runtime"></a>

For Java runtimes, Lambda Managed Instances use OS threads for concurrency. Lambda loads your handler object once per execution environment during initialization and then creates multiple threads. These threads execute in parallel and require thread-safe handling of state and shared resources. Each thread shares the same handler object and any static fields.

## Concurrency configuration
<a name="lambda-managed-instances-java-concurrency-config"></a>

The maximum number of concurrent requests which Lambda sends to each execution environment is controlled by the `PerExecutionEnvironmentMaxConcurrency` setting in the function configuration. This is an optional setting, and the default value varies depending on the runtime. For Java runtimes, the default is 32 concurrent requests per vCPU, or you can configure your own value. This value also determines the number of threads used by the Java runtime. Lambda automatically adjusts the number of concurrent requests up to the configured maximum based on the capacity of each execution environment to absorb those requests.

## Building functions for multi-concurrency
<a name="lambda-managed-instances-java-building"></a>

You should apply the same thread safety practices when using Lambda Managed Instances as you would in any other multi-threaded environment. Since the handler object is shared across all runtime worker threads, any mutable state must be thread-safe. This includes collections, database connections, and any static objects that are modified during request processing.

AWS SDK clients are thread safe and do not require special handling.

**Example: Database connection pools**

The following code uses a static database connection object which is shared between threads. Depending on the connection library used, this may not be thread safe.

```
public class DBQueryHandler implements RequestHandler<Object, String> {
    // Single connection shared across all threads - NOT SAFE
    private static Connection connection;

    public DBQueryHandler() throws SQLException {
        connection = DriverManager.getConnection(jdbcUrl, username, password);
    }

    @Override
    public String handleRequest(Object input, Context context) {
        try (PreparedStatement stmt = connection.prepareStatement(query);
             ResultSet rs = stmt.executeQuery()) {
            // Multiple threads using the same connection causes issues
            return rs.next() ? rs.getString(1) : "";
        } catch (SQLException e) {
            return "Error: " + e.getMessage();
        }
    }
}
```

A thread-safe approach is to use a connection pool. In the following example, the function handler retrieves a connection from the pool. The connection is only used in the context of a single request.

```
public class DBQueryHandler implements RequestHandler<Object, String> {

    private static HikariDataSource dataSource;

    public DBQueryHandler() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/your_database");
        dataSource = new HikariDataSource(config); // Create pool once per Lambda container
    }

    @Override
    public String handleRequest(Object input, Context context) {
        String query = "SELECT column_name FROM your_table LIMIT 10";
        StringBuilder result = new StringBuilder("Data:\n");

        // try-with-resources automatically calls close() on the connection,
        // which returns it to the HikariCP pool (does NOT close the physical DB connection)
        try (Connection connection = dataSource.getConnection();
             PreparedStatement stmt = connection.prepareStatement(query);
             ResultSet rs = stmt.executeQuery()) {

            while (rs.next()) {
                result.append(rs.getString("column_name")).append("\n");
            }

        } catch (Exception e) {
            context.getLogger().log("Error: " + e.getMessage());
            return "Error";
        }

        return result.toString();
    }
}
```

**Example: Collections**

Standard Java collections are not thread safe:

```
public class Handler implements RequestHandler<Object, String> {
    private static List<String> items = new ArrayList<>();
    private static Map<String, Object> cache = new HashMap<>();

    @Override
    public String handleRequest(Object input, Context context) {
        items.add("list item");  // Not thread-safe
        cache.put("key", input); // Not thread-safe
        return "Success";
    }
}
```

Instead, use thread-safe collections:

```
public class Handler implements RequestHandler<Object, String> {
    private static final List<String> items = 
        Collections.synchronizedList(new ArrayList<>());
    private static final ConcurrentHashMap<String, Object> cache = 
        new ConcurrentHashMap<>();

    @Override
    public String handleRequest(Object input, Context context) {
        items.add("list item");  // Thread-safe
        cache.put("key", input); // Thread-safe
        return "Success";
    }
}
```

## Shared /tmp directory
<a name="lambda-managed-instances-java-shared-tmp"></a>

The `/tmp` directory is shared across all concurrent requests in the execution environment. Concurrent writes to the same file can cause data corruption, for example if another thread overwrites the file. To address this, either implement file locking for shared files or use unique file names per thread or per request to avoid conflicts. Remember to clean up unneeded files to avoid exhausting the available space.

## Logging
<a name="lambda-managed-instances-java-logging"></a>

Log interleaving (log entries from different requests being interleaved in logs) is normal in multi-concurrent systems.

Functions using Lambda Managed Instances always use the structured JSON log format introduced with [advanced logging controls](monitoring-logs.md#monitoring-cloudwatchlogs-advanced). This format includes the `requestId`, allowing log entries to be correlated to a single request. When you use the `LambdaLogger` object from `context.getLogger()` the `requestId` is automatically included in each log entry. For further information, see [Using Lambda advanced logging controls with Java](java-logging.md#java-logging-advanced).

## Request context
<a name="lambda-managed-instances-java-request-context"></a>

The `context` object is bound to the request thread. Using `context.getAwsRequestId()` provides thread-safe access to the request ID for the current request.

Use `context.getXrayTraceId()` to access the X-Ray trace ID. This provides thread-safe access to the trace ID for the current request. Lambda does not support the `_X_AMZN_TRACE_ID` environment variable with Lambda Managed Instances. The X-Ray trace ID is propagated automatically when using the AWS SDK.

Use `com.amazonaws.services.lambda.runtime.Context.getRemainingTimeInMillis()` to detect timeouts. See [Error handling and recovery](lambda-managed-instances-execution-environment.md#lambda-managed-instances-error-handling) for more information.

If you use virtual threads in your program or create threads during initialization, you will need to pass any required request context to these threads.

## Initialization and shutdown
<a name="lambda-managed-instances-java-init-shutdown"></a>

Function initialization occurs once per execution environment. Objects created during initialization are shared across threads.

For Lambda functions with extensions, the execution environment emits a SIGTERM signal during shut down. This signal is used by extensions to trigger clean up tasks, such as flushing buffers. You can subscribe to SIGTERM events to trigger function clean-up tasks, such as closing database connections. To learn more about the execution environment lifecycle, see [Understanding the Lambda execution environment lifecycle](lambda-runtime-environment.md).

## Dependency versions
<a name="lambda-managed-instances-java-dependencies"></a>

Lambda Managed Instances requires the following minimum package versions:
+ AWS SDK for Java 2.0: version 2.34.0 or later
+ AWS X-Ray SDK for Java: version 2.20.0 or later
+ AWS Distro for OpenTelemetry - Instrumentation for Java: version 2.20.0 or later
+ Powertools for AWS Lambda (Java): version 2.8.0 or later

## Powertools for AWS Lambda (Java)
<a name="lambda-managed-instances-java-powertools"></a>

Powertools for AWS Lambda (Java) is compatible with Lambda Managed Instances and provides utilities for logging, tracing, metrics, and more. For more information, see [Powertools for AWS Lambda (Java)](https://github.com/aws-powertools/powertools-lambda-java).

## Next steps
<a name="lambda-managed-instances-java-next-steps"></a>
+ Review [Node.js runtime for Lambda Managed Instances](lambda-managed-instances-nodejs-runtime.md)
+ Review [Python runtime for Lambda Managed Instances](lambda-managed-instances-python-runtime.md)
+ Review [.NET runtime for Lambda Managed Instances](lambda-managed-instances-dotnet-runtime.md)
+ Learn about [scaling Lambda Managed Instances](lambda-managed-instances-scaling.md)

# Node.js runtime for Lambda Managed Instances
<a name="lambda-managed-instances-nodejs-runtime"></a>

For Node.js runtimes, Lambda Managed Instances uses worker threads with `async`/`await`-based execution to handle concurrent requests. Function initialization occurs once per worker thread. Concurrent invocations are handled across two dimensions: worker threads provide parallelism across vCPUs, and asynchronous execution provides concurrency within each thread. Each concurrent request handled by the same worker thread shares the same handler object and global state, requiring safe handling under multiple concurrent requests.

## Maximum concurrency
<a name="lambda-managed-instances-nodejs-max-concurrency"></a>

The maximum number of concurrent requests which Lambda sends to each execution environment is controlled by the `PerExecutionEnvironmentMaxConcurrency` setting in the function configuration. This is an optional setting, and the default value varies depending on the runtime. For Node.js runtimes, the default is 64 concurrent requests per vCPU, or you can configure your own value. Lambda automatically adjusts the number of concurrent requests up to the configured maximum based on the capacity of each execution environment to absorb those requests.

For Node.js, the number of concurrent requests that each execution environment can process is determined by the number of worker threads and the capacity of each worker thread to process concurrent requests asynchronously. The default number of worker threads is determined by the number of vCPUs available, or you can configure the number of worker threads by setting the `AWS_LAMBDA_NODEJS_WORKER_COUNT` environment variable. We recommend using async function handlers since this allows processing multiple requests per worker thread. If your function handler is synchronous, each worker thread can only process a single request at a time.

## Building functions for multi-concurrency
<a name="lambda-managed-instances-nodejs-building"></a>

With an async function handler, each runtime worker processes multiple requests concurrently, and global objects are shared across those concurrent requests. Avoid mutable global state, or use `AsyncLocalStorage` to scope state to a single request.

AWS SDK clients are async safe and do not require special handling.

**Example: Global state**

The following code uses a global object which is mutated inside the function handler. This is not async-safe.

```
let state = {
    currentUser: null,
    requestData: null
};

export const handler = async (event, context) => {
    state.currentUser = event.userId;
    state.requestData = event.data;

    await processData(state.requestData);

    // state.currentUser might now belong to a different request
    return { user: state.currentUser };
};
```

Initializing the `state` object inside the function handler avoids shared global state.

```
export const handler = async (event, context) => {
    let state = {
        currentUser: event.userId,
        requestData: event.data
    };
    
    await processData(state.requestData);

    return { user: state.currentUser };
};
```

**Example: Database connections**

The following code uses a single client object that is shared between multiple invocations. Depending on the connection library used, this may not be concurrency safe.

```
const { Client } = require('pg');

// Single connection created at init time
const client = new Client({
  host: process.env.DB_HOST,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD
});

// Connect once during cold start
client.connect();

exports.handler = async (event) => {
  // Multiple parallel invocations share this single connection = BAD
  // With multi-concurrent Lambda, queries will collide
  const result = await client.query('SELECT * FROM users WHERE id = $1', [event.userId]);
  
  return {
    statusCode: 200,
    body: JSON.stringify(result.rows[0])
  };
};
```

A concurrency-safe approach is to use a connection pool. The pool uses a separate connection for each concurrent database query.

```
const { Pool } = require('pg');

// Connection pool created at init time
const pool = new Pool({
  host: process.env.DB_HOST,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: 20,  // Max connections in pool
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000
});

exports.handler = async (event) => {
  // Pool gives each parallel invocation its own connection
  const result = await pool.query('SELECT * FROM users WHERE id = $1', [event.userId]);
  
  return {
    statusCode: 200,
    body: JSON.stringify(result.rows[0])
  };
};
```

## Node.js 22 callback-based handlers
<a name="lambda-managed-instances-nodejs-callback-handlers"></a>

When using Node.js 22, you cannot use a callback-based function handler with Lambda Managed Instances. Callback-based handlers are only supported for Lambda (default) functions. For Node.js 24 and later runtimes, callback-based function handlers are deprecated for both Lambda (default) and Lambda Managed Instances.

Instead, use an `async` function handler when using Lambda Managed Instances. For more information, see [Define Lambda function handler in Node.js](nodejs-handler.md).

## Shared /tmp directory
<a name="lambda-managed-instances-nodejs-shared-tmp"></a>

The `/tmp` directory is shared across all concurrent requests in the execution environment. Concurrent writes to the same file can cause data corruption, for example if another request overwrites the file. To address this, either implement file locking for shared files or use unique file names per request to avoid conflicts. Remember to clean up unneeded files to avoid exhausting the available space.

## Logging
<a name="lambda-managed-instances-nodejs-logging"></a>

Log interleaving (log entries from different requests being interleaved in logs) is normal in multi-concurrent systems. Functions using Lambda Managed Instances always use the structured JSON log format introduced with [advanced logging controls](monitoring-logs.md#monitoring-cloudwatchlogs-advanced). This format includes the `requestId`, allowing log entries to be correlated to a single request. When you use the `console` logger, the `requestId` is automatically included in each log entry. For further information, see [Using Lambda advanced logging controls with Node.js](nodejs-logging.md#node-js-logging-advanced).

Popular third-party logging libraries, such as [Winston](https://github.com/winstonjs/winston), typically support using the console for log output.

## Request context
<a name="lambda-managed-instances-nodejs-request-context"></a>

Using `context.awsRequestId` provides async-safe access to the request ID for the current request.

Use `context.xRayTraceId` to access the X-Ray trace ID. This provides concurrency-safe access to the trace ID for the current request. Lambda does not support the `_X_AMZN_TRACE_ID` environment variable with Lambda Managed Instances. The X-Ray trace ID is propagated automatically when using the AWS SDK.

Use `context.getRemainingTimeInMillis()` to detect timeouts. See [Error handling and recovery](lambda-managed-instances-execution-environment.md#lambda-managed-instances-error-handling) for more information.

## Initialization and shutdown
<a name="lambda-managed-instances-nodejs-init-shutdown"></a>

Function initialization occurs once per worker thread. You may see repeated log entries if your function emits logs during initialization.

For Lambda functions with extensions, the execution environment emits a SIGTERM signal during shutdown. Extensions use this signal to trigger cleanup tasks, such as flushing buffers. Lambda (default) functions with extensions can also subscribe to the SIGTERM signal using `process.on()`. This is not supported for functions using Lambda Managed Instances because `process.on()` cannot be used with worker threads. To learn more about the execution environment lifecycle, see [Understanding the Lambda execution environment lifecycle](lambda-runtime-environment.md).

## Dependency versions
<a name="lambda-managed-instances-nodejs-dependencies"></a>

Lambda Managed Instances requires the following minimum package versions:
+ AWS SDK for JavaScript v3: version 3.933.0 or later
+ AWS X-Ray SDK for Node.js: version 3.12.0 or later
+ AWS Distro for OpenTelemetry - Instrumentation for JavaScript: version 0.8.0 or later
+ Powertools for AWS Lambda (TypeScript): version 2.29.0 or later

## Powertools for AWS Lambda (TypeScript)
<a name="lambda-managed-instances-nodejs-powertools"></a>

Powertools for AWS Lambda (TypeScript) is compatible with Lambda Managed Instances and provides utilities for logging, tracing, metrics, and more. For more information, see [Powertools for AWS Lambda (TypeScript)](https://github.com/aws-powertools/powertools-lambda-typescript).

## Next steps
<a name="lambda-managed-instances-nodejs-next-steps"></a>
+ Review [Java runtime for Lambda Managed Instances](lambda-managed-instances-java-runtime.md)
+ Review [Python runtime for Lambda Managed Instances](lambda-managed-instances-python-runtime.md)
+ Review [.NET runtime for Lambda Managed Instances](lambda-managed-instances-dotnet-runtime.md)
+ Learn about [scaling Lambda Managed Instances](lambda-managed-instances-scaling.md)

# Python runtime for Lambda Managed Instances
<a name="lambda-managed-instances-python-runtime"></a>

The Lambda runtime uses multiple Python processes to handle concurrent requests. Each concurrent request runs in a separate process with its own memory space and initialization. Each process handles one request at a time, synchronously. Processes don't share memory directly, so global variables, module-level caches, and singleton objects are isolated between concurrent requests.

## Concurrency configuration
<a name="lambda-managed-instances-python-concurrency-config"></a>

The maximum number of concurrent requests which Lambda sends to each execution environment is controlled by the `PerExecutionEnvironmentMaxConcurrency` setting in the function configuration. This is an optional setting, and the default value varies depending on the runtime. For Python runtimes, the default is 16 concurrent requests per vCPU, or you can configure your own value. This value also determines the number of processes used by the Python runtime. Lambda automatically adjusts the number of concurrent requests up to the configured maximum based on the capacity of each execution environment to absorb those requests.

**Important**  
Using process-based concurrency means each runtime worker process performs its own initialization. Total memory usage equals the per-process memory multiplied by the number of concurrent processes. If you load large libraries or data sets and run with high concurrency, you will have a large memory footprint. Depending on your workload, you may need to tune your CPU-to-memory ratio or use a lower concurrency setting to avoid exceeding the available memory. You can use the `MemoryUtilization` metric in CloudWatch to track memory consumption.
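
The memory math in the note above can be made concrete. The figures below are illustrative assumptions, not measurements from a real workload:

```python
# Rough sizing sketch for a Python function under process-based concurrency.
per_process_mb = 180       # assumed init + working set of one worker process
vcpus = 2                  # assumed instance size
concurrency_per_vcpu = 16  # Python default for PerExecutionEnvironmentMaxConcurrency

processes = vcpus * concurrency_per_vcpu
total_mb = per_process_mb * processes

# 32 processes at 180 MB each need roughly 5,760 MB, which may call for a
# lower concurrency setting or a different CPU-to-memory ratio.
print(processes, total_mb)
```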

## Building functions for multi-concurrency
<a name="lambda-managed-instances-python-building"></a>

Due to the process-based multi-concurrency model, Lambda Managed Instances functions using Python runtimes do not access in-memory resources concurrently from multiple invocations. You do not need to apply coding practices for in-memory concurrency safety.

## Shared /tmp directory
<a name="lambda-managed-instances-python-shared-tmp"></a>

The `/tmp` directory is shared across all concurrent requests in the execution environment. Concurrent writes to the same file can cause data corruption, for example if another process overwrites the file. To address this, either implement file locking for shared files or use unique file names per process or per request to avoid conflicts. Remember to clean up unneeded files to avoid exhausting the available space.
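
One way to apply the unique-file-name approach is to key scratch files by the request ID. A minimal sketch; `scratch_path` is an illustrative helper, not part of the Lambda API:

```python
import os
import tempfile

def scratch_path(request_id, suffix=".json"):
    # Keying the file name by request ID keeps concurrent requests, which
    # share /tmp across processes, from writing to the same file.
    return os.path.join(tempfile.gettempdir(), f"scratch-{request_id}{suffix}")

def handler(event, context):
    path = scratch_path(context.aws_request_id)
    try:
        with open(path, "w") as f:
            f.write("intermediate data")
        # ... process the file ...
    finally:
        # Clean up to avoid exhausting the shared /tmp space.
        if os.path.exists(path):
            os.remove(path)
    return {"statusCode": 200}
```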

## Logging
<a name="lambda-managed-instances-python-logging"></a>

Log interleaving (log entries from different requests being interleaved in logs) is normal in multi-concurrent systems.

Functions using Lambda Managed Instances always use the structured JSON log format introduced with [advanced logging controls](monitoring-logs.md#monitoring-cloudwatchlogs-advanced). This format includes the `requestId`, allowing log entries to be correlated to a single request. When you use the `logging` module from the Python standard library in Lambda, the `requestId` is automatically included in each log entry. For further information, see [Using Lambda advanced logging controls with Python](python-logging.md#python-logging-advanced).
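
A standard `logging` setup is all that is needed; no per-request logger wiring is required for the `requestId` to appear in the JSON log entries:

```python
import logging

# With the structured JSON log format, Lambda adds the requestId to each
# entry emitted through the standard logging module automatically.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info("processing order %s", event.get("orderId"))
    # ... business logic ...
    return {"statusCode": 200}
```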

## Request context
<a name="lambda-managed-instances-python-request-context"></a>

Use `context.aws_request_id` to access the request ID for the current request.

With Python runtimes, you can use the `_X_AMZN_TRACE_ID` environment variable to access the X-Ray trace ID when using Lambda Managed Instances. The X-Ray trace ID is propagated automatically when using the AWS SDK.

Use `context.get_remaining_time_in_millis()` to detect timeouts. See [Error handling and recovery](lambda-managed-instances-execution-environment.md#lambda-managed-instances-error-handling) for more information.
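
The two context members above can be combined into a simple timeout guard. A sketch, where `do_work` is a hypothetical placeholder for per-item processing:

```python
def do_work(item):
    # Hypothetical placeholder for per-item processing.
    return item

def handler(event, context):
    SAFETY_MARGIN_MS = 2000  # reserve time for cleanup before the function times out
    processed = 0
    for item in event.get("items", []):
        # get_remaining_time_in_millis() is specific to the current request.
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            break  # stop early rather than being cut off mid-item
        do_work(item)
        processed += 1
    return {"requestId": context.aws_request_id, "processed": processed}
```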

## Initialization and shutdown
<a name="lambda-managed-instances-python-init-shutdown"></a>

Function initialization occurs once per process. You may see repeated log entries if your function emits logs during initialization.

For Lambda functions with extensions, the execution environment emits a SIGTERM signal during shutdown. Extensions use this signal to trigger cleanup tasks, such as flushing buffers. You can also subscribe to SIGTERM events to trigger function cleanup tasks, such as closing database connections. To learn more about the execution environment lifecycle, see [Understanding the Lambda execution environment lifecycle](lambda-runtime-environment.md).
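
A minimal sketch of subscribing to SIGTERM with the standard `signal` module; the connection object here is a stand-in for a real resource created at initialization:

```python
import signal
import sys

db_connection = object()  # stand-in for a real connection created at init time

def on_sigterm(signum, frame):
    # Runs when the execution environment shuts down: flush buffers and
    # close connections here before the process exits.
    print("SIGTERM received, cleaning up")
    # db_connection.close()  # close real resources here
    sys.exit(0)

# Register the handler once, during initialization.
signal.signal(signal.SIGTERM, on_sigterm)
```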

## Dependency versions
<a name="lambda-managed-instances-python-dependencies"></a>

Lambda Managed Instances requires the following minimum package versions:
+ Powertools for AWS Lambda (Python): version 3.23.0 or later

## Powertools for AWS Lambda (Python)
<a name="lambda-managed-instances-python-powertools"></a>

Powertools for AWS Lambda (Python) is compatible with Lambda Managed Instances and provides utilities for logging, tracing, metrics, and more. For more information, see [Powertools for AWS Lambda (Python)](https://github.com/aws-powertools/powertools-lambda-python).

## Next steps
<a name="lambda-managed-instances-python-next-steps"></a>
+ Review [Java runtime for Lambda Managed Instances](lambda-managed-instances-java-runtime.md)
+ Review [Node.js runtime for Lambda Managed Instances](lambda-managed-instances-nodejs-runtime.md)
+ Review [.NET runtime for Lambda Managed Instances](lambda-managed-instances-dotnet-runtime.md)
+ Learn about [scaling Lambda Managed Instances](lambda-managed-instances-scaling.md)

# .NET runtime for Lambda Managed Instances
<a name="lambda-managed-instances-dotnet-runtime"></a>

For .NET runtimes, Lambda Managed Instances uses a single .NET process per execution environment. Multiple concurrent requests are processed using .NET Tasks.

## Concurrency configuration
<a name="lambda-managed-instances-dotnet-concurrency-config"></a>

The maximum number of concurrent requests which Lambda sends to each execution environment is controlled by the `PerExecutionEnvironmentMaxConcurrency` setting in the function configuration. This is an optional setting, and the default value varies depending on the runtime. For .NET runtimes, the default is 32 concurrent requests per vCPU, or you can configure your own value. Lambda automatically adjusts the number of concurrent requests up to the configured maximum based on the capacity of each execution environment to absorb those requests.

## Building functions for multi-concurrency
<a name="lambda-managed-instances-dotnet-building"></a>

You should apply the same concurrency safety practices when using Lambda Managed Instances as you would in any other multi-concurrent environment. Since the handler object is shared across all Tasks, any mutable state must be thread-safe. This includes collections, database connections, and any static objects that are modified during request processing.

AWS SDK clients are thread safe and do not require special handling.

**Example: Database connection pools**

The following code uses a single database connection object, stored on the handler instance, which is shared between concurrent requests. The `SqlConnection` object is not thread safe.

```
public class DBQueryHandler
{
    // Single connection shared across threads - NOT SAFE
    private SqlConnection connection;

    public DBQueryHandler()
    {
        connection = new SqlConnection("your-connection-string-here");
        connection.Open();
    }

    public string Handle(object input, ILambdaContext context)
    {
        using var cmd = connection.CreateCommand();
        cmd.CommandText = "SELECT ..."; // your query

        using var reader = cmd.ExecuteReader();

        ...
    }
}
```

To address this, use a separate connection for each request, drawn from a connection pool. ADO.NET providers like `Microsoft.Data.SqlClient` pool connections automatically: opening a connection draws one from the pool, and disposing it returns it to the pool.

```
public class DBQueryHandler
{
    public DBQueryHandler()
    {
    }

    public string Handle(object input, ILambdaContext context)
    {
        using var connection = new SqlConnection("your-connection-string-here");
        connection.Open();
        using var cmd = connection.CreateCommand();
        cmd.CommandText = "SELECT ..."; // your query

        using var reader = cmd.ExecuteReader();

        ...
    }
}
```

**Example: Collections**

Standard .NET collections are not thread safe:

```
public class Handler
{
    private static List<string> items = new List<string>();
    private static Dictionary<string, object> cache = new Dictionary<string, object>();

    public string FunctionHandler(object input, ILambdaContext context)
    {
        items.Add(context.AwsRequestId);
        cache["key"] = input;

        return "Success";
    }
}
```

Use collections from the `System.Collections.Concurrent` namespace for concurrency safety:

```
public class Handler
{
    private static ConcurrentBag<string> items = new ConcurrentBag<string>();
    private static ConcurrentDictionary<string, object> cache = new ConcurrentDictionary<string, object>();

    public string FunctionHandler(object input, ILambdaContext context)
    {
        items.Add(context.AwsRequestId);
        cache["key"] = input;

        return "Success";
    }
}
```

## Shared /tmp directory
<a name="lambda-managed-instances-dotnet-shared-tmp"></a>

The `/tmp` directory is shared across all concurrent requests in the execution environment. Concurrent writes to the same file can cause data corruption, for example if another request overwrites the file. To address this, either implement file locking for shared files or use unique file names per request to avoid conflicts. Remember to clean up unneeded files to avoid exhausting the available space.

## Logging
<a name="lambda-managed-instances-dotnet-logging"></a>

Log interleaving (log entries from different requests being interleaved in logs) is normal in multi-concurrent systems. Functions using Lambda Managed Instances always use the structured JSON log format introduced with [advanced logging controls](monitoring-logs.md#monitoring-cloudwatchlogs-advanced). This format includes the `requestId`, allowing log entries to be correlated to a single request. When you use the `context.Logger` object to generate logs, the `requestId` is automatically included in each log entry. For further information, see [Using Lambda advanced logging controls with .NET](csharp-logging.md#csharp-logging-advanced).

## Request context
<a name="lambda-managed-instances-dotnet-request-context"></a>

Use the `context.AwsRequestId` property to access the request ID for the current request.

Use the `context.TraceId` property to access the X-Ray trace ID. This provides concurrency-safe access to the trace ID for the current request. Lambda does not support the `_X_AMZN_TRACE_ID` environment variable with Lambda Managed Instances. The X-Ray trace ID is propagated automatically when using the AWS SDK.

Use `ILambdaContext.RemainingTime` to detect timeouts. See [Error handling and recovery](lambda-managed-instances-execution-environment.md#lambda-managed-instances-error-handling) for more information.

## Initialization and shutdown
<a name="lambda-managed-instances-dotnet-init-shutdown"></a>

Function initialization occurs once per execution environment. Objects created during initialization are shared across requests.

For Lambda functions with extensions, the execution environment emits a SIGTERM signal during shutdown. Extensions use this signal to trigger cleanup tasks, such as flushing buffers. You can subscribe to SIGTERM events to trigger function cleanup tasks, such as closing database connections. To learn more about the execution environment lifecycle, see [Understanding the Lambda execution environment lifecycle](lambda-runtime-environment.md).

## Dependency versions
<a name="lambda-managed-instances-dotnet-dependencies"></a>

Lambda Managed Instances requires the following minimum package versions:
+ Amazon.Lambda.Core: version 2.7.1 or later
+ Amazon.Lambda.RuntimeSupport: version 1.14.1 or later
+ OpenTelemetry.Instrumentation.AWSLambda: version 1.14.0 or later
+ AWSXRayRecorder.Core: version 2.16.0 or later
+ AWSSDK.Core: version 4.0.0.32 or later

## Powertools for AWS Lambda (.NET)
<a name="lambda-managed-instances-dotnet-powertools"></a>

[Powertools for AWS Lambda (.NET)](https://docs.aws.amazon.com/powertools/dotnet/) and [AWS Distro for OpenTelemetry - Instrumentation for DotNet](https://github.com/aws-observability/aws-otel-dotnet-instrumentation) currently do not support Lambda Managed Instances.

## Next steps
<a name="lambda-managed-instances-dotnet-next-steps"></a>
+ Review [Java runtime for Lambda Managed Instances](lambda-managed-instances-java-runtime.md)
+ Review [Node.js runtime for Lambda Managed Instances](lambda-managed-instances-nodejs-runtime.md)
+ Review [Python runtime for Lambda Managed Instances](lambda-managed-instances-python-runtime.md)
+ Learn about [scaling Lambda Managed Instances](lambda-managed-instances-scaling.md)

# Rust support for Lambda Managed Instances
<a name="lambda-managed-instances-rust"></a>

## Concurrency configuration
<a name="lambda-managed-instances-rust-concurrency-config"></a>

The maximum number of concurrent requests which Lambda sends to each execution environment is controlled by the `PerExecutionEnvironmentMaxConcurrency` setting in the function configuration. This is an optional setting, and the default value for Rust is 8 concurrent requests per vCPU, or you can configure your own value. This value determines the number of Tokio tasks spawned by the runtime and is static for the lifetime of the execution environment. Each worker handles exactly one in-flight request at a time, with no multiplexing per worker. Lambda automatically adjusts the number of concurrent requests up to the configured maximum based on the capacity of each execution environment to absorb those requests.

## Building functions for multi-concurrency
<a name="lambda-managed-instances-rust-building"></a>

You should apply the same thread safety practices when using Lambda Managed Instances as you would in any other multi-threaded environment. Since the handler object is shared across all worker threads, any mutable state must be thread-safe. This includes collections, database connections, and any static objects that are modified during request processing.

To enable concurrent request handling, add the `concurrency-tokio` feature flag to your `Cargo.toml` file.

```
[dependencies]  
lambda_runtime = { version = "1", features = ["concurrency-tokio"] }
```

The `lambda_runtime::run_concurrent(…)` entry point must be called from within a Tokio runtime, typically provided by the `#[tokio::main]` attribute on your main function. Your handler closure must implement [`Clone`](https://doc.rust-lang.org/std/clone/trait.Clone.html) and [`Send`](https://doc.rust-lang.org/std/marker/trait.Send.html). This allows the framework to share your handler across multiple async tasks safely. If those bounds are not met, your code will not compile.

When you need shared state across invocations (a database pool, a config struct), wrap it in [`Arc`](https://doc.rust-lang.org/std/sync/struct.Arc.html) and clone the `Arc` into each invocation.

All AWS SDK for Rust clients are concurrency-safe and require no special handling.

### Example: AWS SDK client
<a name="lambda-managed-instances-rust-example-sdk"></a>

The following example uses an S3 client to upload an object on each invocation. The client is cloned directly into the closure without `Arc`:

```
let config = aws_config::load_defaults(BehaviorVersion::latest()).await;  
let s3_client = aws_sdk_s3::Client::new(&config);  
  
run_concurrent(service_fn(move |event: LambdaEvent<Request>| {  
    let s3_client = s3_client.clone(); // cheap clone, no Arc needed  
    async move {  
        s3_client.put_object()  
            .bucket(&event.payload.bucket)  
            .key(&event.payload.key)  
            .body(event.payload.body.into_bytes().into())  
            .send()  
            .await?;  
        Ok(Response { message: "uploaded".into() })  
    }  
}))  
.await
```

### Example: Database connection pools
<a name="lambda-managed-instances-rust-example-db"></a>

When your handler needs access to shared state such as a client and configuration, wrap it in [`Arc`](https://doc.rust-lang.org/std/sync/struct.Arc.html) and clone the `Arc` into each invocation:

```
#[derive(Debug)]  
struct AppState {  
    dynamodb_client: DynamoDbClient,  
    table_name: String,  
    cache_ttl: Duration,  
}  
  
let config = aws_config::load_defaults(BehaviorVersion::latest()).await;  
let state = Arc::new(AppState {  
    dynamodb_client: DynamoDbClient::new(&config),  
    table_name: std::env::var("TABLE_NAME").expect("TABLE_NAME must be set"),  
    cache_ttl: Duration::from_secs(300),  
});  
  
run_concurrent(service_fn(move |event: LambdaEvent<Request>| {  
    let state = state.clone();  
    async move { handle(event, state).await }  
}))  
.await
```

## Shared /tmp directory
<a name="lambda-managed-instances-rust-tmp"></a>

The `/tmp` directory is shared across all concurrent invocations in the same execution environment. Use unique file names per invocation (for example, include the request ID) or implement explicit file locking to avoid data corruption.

## Logging
<a name="lambda-managed-instances-rust-logging"></a>

Log interleaving (log entries from different requests being interleaved in logs) is normal in multi-concurrent systems. Functions using Lambda Managed Instances support the structured JSON log format via Lambda's [advanced logging controls](monitoring-logs.md#monitoring-cloudwatchlogs-advanced). This format includes the `requestId`, allowing log entries to be correlated to a single request. For further information, see [Implementing advanced logging with the Tracing crate](rust-logging.md#rust-logging-tracing).

## Request context
<a name="lambda-managed-instances-rust-context"></a>

The `Context` object is passed directly to each handler invocation. Use `event.context.request_id` to access the request ID for the current request.

Use `event.context.xray_trace_id` to access the X-Ray trace ID. Lambda does not support the `_X_AMZN_TRACE_ID` environment variable with Lambda Managed Instances. The X-Ray trace ID is propagated automatically when using the AWS SDK for Rust.

Use `event.context.deadline` to detect timeouts. It contains the invocation deadline in milliseconds.

## Initialization and shutdown
<a name="lambda-managed-instances-rust-lifecycle"></a>

Function initialization occurs once per execution environment. Objects created during initialization are shared across requests.

For Lambda functions with extensions, the execution environment emits a SIGTERM signal during shutdown. Extensions use this signal to trigger cleanup tasks, such as flushing buffers. `lambda_runtime` offers a helper, [`spawn_graceful_shutdown_handler`](https://docs.rs/lambda_runtime/latest/lambda_runtime/fn.spawn_graceful_shutdown_handler.html), to simplify configuring graceful shutdown signal handling. To learn more about the execution environment lifecycle, see [Understanding the Lambda execution environment lifecycle](lambda-runtime-environment.md).

## Dependency versions
<a name="lambda-managed-instances-rust-dependencies"></a>

Lambda Managed Instances requires the following minimum versions:
+ `lambda_runtime`: version 1.1.1 or later, with the `concurrency-tokio` feature enabled
+ Rust: version 1.84.0 or later (the minimum supported Rust version, MSRV)

# Networking for Lambda Managed Instances
<a name="lambda-managed-instances-networking"></a>

When running Lambda Managed Instances functions, you need to configure network connectivity to enable your functions to access resources outside the VPC. This includes AWS services such as Amazon S3 and DynamoDB. The connectivity is also needed for transmitting telemetry data to CloudWatch Logs and X-Ray.

## Connectivity options
<a name="lambda-managed-instances-connectivity-options"></a>

There are three primary approaches for configuring VPC connectivity, each with different trade-offs for cost, security, and complexity.

## Public subnet with an internet gateway
<a name="lambda-managed-instances-public-subnet-igw"></a>

This option uses a public subnet with direct internet access through an internet gateway. You can choose between IPv4 and IPv6 configurations.

### IPv4 with internet gateway
<a name="lambda-managed-instances-ipv4-igw"></a>

**To configure IPv4 connectivity with an internet gateway**

1. Create or use an existing public subnet with an IPv4 CIDR block.

1. Attach an internet gateway to your VPC.

1. Update the route table to route `0.0.0.0/0` traffic to the internet gateway.

1. Ensure resources have public IPv4 addresses or Elastic IP addresses assigned.

1. Configure security groups to allow outbound traffic on the required ports.

This configuration provides bidirectional connectivity, allowing both outbound connections from your functions and inbound connections from the internet.

### IPv6 with internet gateway
<a name="lambda-managed-instances-ipv6-igw"></a>

**To configure IPv6 connectivity with an internet gateway**

1. Enable IPv6 on your VPC.

1. Create or use an existing public subnet with an IPv6 CIDR block assigned.

1. Attach an internet gateway to your VPC (the same internet gateway can handle both IPv4 and IPv6).

1. Update the route table to route `::/0` traffic to the internet gateway.

1. Verify that the AWS services you need to access support IPv6 in your Region.

1. Configure security groups to allow outbound traffic on the required ports.

This configuration provides bidirectional connectivity using IPv6 addressing.

### IPv6 with egress-only internet gateway
<a name="lambda-managed-instances-ipv6-egress-only"></a>

**To configure IPv6 connectivity with an egress-only internet gateway**

1. Enable IPv6 on your VPC.

1. Create or use an existing public subnet with an IPv6 CIDR block assigned.

1. Attach an egress-only internet gateway to your VPC.

1. Update the route table to route `::/0` traffic to the egress-only internet gateway.

1. Verify that the AWS services you need to access support IPv6 in your Region.

1. Configure security groups to allow outbound traffic on the required ports.

This configuration provides outbound-only connectivity, preventing inbound connections from the internet while allowing your functions to initiate outbound connections.

## VPC endpoints
<a name="lambda-managed-instances-vpc-endpoints"></a>

VPC endpoints enable you to privately connect your VPC to supported AWS services without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Traffic between your VPC and the AWS service does not leave the Amazon network.

**To configure VPC endpoints**

1. Open the Amazon VPC console at [console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Endpoints**.

1. Choose **Create endpoint**.

1. For **Service category**, choose **AWS services**.

1. For **Service name**, select the service endpoint you need (for example, `com.amazonaws.region.s3` for Amazon S3).

1. For **VPC**, select your VPC.

1. For **Subnets**, select the subnets where you want to create endpoint network interfaces. For high availability, select subnets in multiple Availability Zones.

1. For **Security groups**, select the security groups to associate with the endpoint network interfaces. The security groups must allow inbound traffic from your function's security group on the required ports.

1. Choose **Create endpoint**.

Repeat these steps for each AWS service that your functions need to access.
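
The console steps above can also be scripted. The following sketch builds the parameters for the EC2 `CreateVpcEndpoint` API; the service name and resource IDs shown in the usage note are illustrative placeholders, so substitute your own values:

```python
def interface_endpoint_params(vpc_id, service_name, subnet_ids, security_group_ids):
    """Parameters for creating an interface VPC endpoint for one AWS service."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": service_name,  # for example, com.amazonaws.us-east-1.logs
        # Subnets in multiple Availability Zones give the endpoint high availability.
        "SubnetIds": subnet_ids,
        # These security groups must allow inbound traffic from the function's
        # security group on the required ports.
        "SecurityGroupIds": security_group_ids,
    }

# Usage (requires boto3 and AWS credentials; IDs below are placeholders):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_vpc_endpoint(**interface_endpoint_params(
#     "vpc-0abc", "com.amazonaws.us-east-1.logs",
#     ["subnet-0aaa", "subnet-0bbb"], ["sg-0ccc"]))
```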

## Private subnet with NAT gateway
<a name="lambda-managed-instances-private-subnet-nat"></a>

This option uses a NAT gateway to provide internet access for resources in private subnets while keeping the resources private.

**To configure a private subnet with NAT gateway**

1. Create a public subnet (if one doesn't already exist) with an IPv4 CIDR block.

1. Attach an internet gateway to your VPC.

1. Create a NAT gateway in the public subnet and assign an Elastic IP address.

1. Update the public subnet route table to add a route: `0.0.0.0/0` → internet gateway.

1. Create or use an existing private subnet with an IPv4 CIDR block.

1. Update the private subnet route table to add a route: `0.0.0.0/0` → NAT gateway.

1. Configure security groups to allow outbound traffic on the required ports.

For high availability, deploy one NAT gateway in each Availability Zone and configure route tables per Availability Zone to use the local NAT gateway. This prevents cross-AZ data transfer charges and improves resilience.
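
The two route-table updates in the steps above are the crux of this design. A sketch of the corresponding EC2 `CreateRoute` parameters, with illustrative resource IDs:

```python
def nat_routes(public_route_table, private_route_table, igw_id, nat_gw_id):
    """Default routes for the public and private subnets of a NAT gateway setup."""
    return [
        # Public subnet: internet-bound traffic goes straight to the internet gateway.
        {"RouteTableId": public_route_table,
         "DestinationCidrBlock": "0.0.0.0/0",
         "GatewayId": igw_id},
        # Private subnet: internet-bound traffic goes to the NAT gateway instead.
        {"RouteTableId": private_route_table,
         "DestinationCidrBlock": "0.0.0.0/0",
         "NatGatewayId": nat_gw_id},
    ]

# Usage (requires boto3 and AWS credentials; IDs below are placeholders):
# import boto3
# ec2 = boto3.client("ec2")
# for route in nat_routes("rtb-pub", "rtb-priv", "igw-0abc", "nat-0abc"):
#     ec2.create_route(**route)
```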

## Choosing a connectivity option
<a name="lambda-managed-instances-choosing-connectivity"></a>

Consider the following factors when choosing a connectivity option:

**Public subnet with internet gateway**
+ Simplest configuration with lowest cost
+ Suitable for development and testing environments
+ Resources can receive inbound connections from the internet (security consideration)
+ Supports both IPv4 and IPv6

**VPC endpoints**
+ Highest security, traffic stays within the AWS network
+ Lower latency compared to internet routing
+ Recommended for production environments with strict security requirements
+ Higher cost per endpoint, per Availability Zone, and per GB processed
+ Requires an endpoint in each Availability Zone for high availability

**Private subnet with NAT gateway**
+ Resources remain private with no inbound internet access
+ Standard enterprise architecture pattern
+ Supports all IPv4 internet traffic
+ Moderate cost with NAT gateway hourly and data processing charges
+ Supports IPv4 only

## Next steps
<a name="lambda-managed-instances-networking-next-steps"></a>
+ Learn about [capacity providers for Lambda Managed Instances](lambda-managed-instances-capacity-providers.md)
+ Understand [scaling for Lambda Managed Instances](lambda-managed-instances-scaling.md)
+ Review runtime-specific guides for [Java](lambda-managed-instances-java-runtime.md), [Node.js](lambda-managed-instances-nodejs-runtime.md), and [Python](lambda-managed-instances-python-runtime.md)
+ Understand [security and permissions for Lambda Managed Instances](lambda-managed-instances-security.md)

# Monitoring Lambda Managed Instances
<a name="lambda-managed-instances-monitoring"></a>

You can monitor Lambda Managed Instances using CloudWatch metrics. Lambda automatically publishes metrics to CloudWatch to help you monitor resource utilization, track costs, and optimize performance.

## Available metrics
<a name="lambda-managed-instances-available-metrics"></a>

Lambda Managed Instances provides metrics at two levels: capacity provider level and execution environment level.

### Capacity provider level metrics
<a name="lambda-managed-instances-capacity-provider-metrics"></a>

Capacity provider level metrics provide visibility into overall resource utilization across your instances. These metrics use the following dimensions:
+ **CapacityProviderName** - The name of your capacity provider
+ **InstanceType** - The EC2 instance type

**Resource utilization metrics:**
+ **CPUUtilization** - The percentage of CPU utilization across instances in the capacity provider
+ **MemoryUtilization** - The percentage of memory utilization across instances in the capacity provider

**Capacity metrics:**
+ **vCPUAvailable** - The number of vCPUs available on instances for allocation
+ **MemoryAvailable** - The amount of memory available on instances for allocation (in bytes)
+ **vCPUAllocated** - The number of vCPUs allocated on instances to execution environments
+ **MemoryAllocated** - The amount of memory allocated on instances to execution environments (in bytes)
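The capacity provider level metrics above can be retrieved programmatically. The following sketch, which assumes the AWS SDK for Python (Boto3), fetches one metric using the `CapacityProviderName` and `InstanceType` dimensions; `boto3` is imported inside the function so the sketch stays importable without the SDK installed.

```python
from datetime import datetime, timedelta, timezone

def get_capacity_metric(capacity_provider, instance_type,
                        metric="CPUUtilization", hours=3, region="us-east-1"):
    """Fetch a capacity provider level metric from CloudWatch. Metrics
    are published at 5-minute intervals, so a 300-second period returns
    one datapoint per publication interval."""
    import boto3  # imported here so the sketch is importable without the SDK
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName=metric,
        Dimensions=[
            {"Name": "CapacityProviderName", "Value": capacity_provider},
            {"Name": "InstanceType", "Value": instance_type},
        ],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    # CloudWatch returns datapoints unordered; sort by timestamp for plotting
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
```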

### Execution environment level metrics
<a name="lambda-managed-instances-execution-environment-metrics"></a>

Execution environment level metrics provide visibility into resource utilization and concurrency for individual functions. These metrics use the following dimensions:
+ **CapacityProviderName** - The name of your capacity provider
+ **FunctionName** - The name of your Lambda function
+ **Resource** - The function version that the metrics apply to. Use this dimension to view metrics for a specific version of a function.

**Note**  
For Lambda Managed Instances (LMI), the `Resource` dimension supports function versions only. The format is `<FunctionName>:<FunctionVersion>`.

**Available execution environment metrics:**
+ **ExecutionEnvironmentConcurrency** - The maximum concurrency over a 5-minute sample period
+ **ExecutionEnvironmentConcurrencyLimit** - The maximum concurrency limit per execution environment
+ **ExecutionEnvironmentCPUUtilization** - The percentage of CPU utilization for the function's execution environments
+ **ExecutionEnvironmentMemoryUtilization** - The percentage of memory utilization for the function's execution environments

## Metric frequency and retention
<a name="lambda-managed-instances-metric-frequency"></a>

Lambda Managed Instances metrics are published at 5-minute intervals and retained for 15 months.

## Viewing metrics in CloudWatch
<a name="lambda-managed-instances-viewing-metrics"></a>

**To view Lambda Managed Instances metrics in the CloudWatch console**

1. Open the CloudWatch console at [console.aws.amazon.com/cloudwatch/](http://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Metrics**.

1. In the **All metrics** tab, choose **AWS/Lambda**.

1. Choose the metric dimension you want to view:
   + For capacity provider level metrics, filter by **CapacityProviderName** and **InstanceType**
   + For execution environment level metrics, filter by **CapacityProviderName**, **FunctionName**, and **Resource**

1. Select the metrics you want to monitor.

## Using metrics to optimize performance
<a name="lambda-managed-instances-using-metrics"></a>

Monitor CPU and memory utilization to understand if your functions are properly sized. High utilization may indicate the need for larger instance types or increased function memory allocation. Track concurrency metrics to understand scaling behavior and identify potential throttling.

Monitor capacity metrics to verify sufficient resources are available for your workloads. The **vCPUAvailable** and **MemoryAvailable** metrics help you understand remaining capacity on your instances.

## Next steps
<a name="lambda-managed-instances-monitoring-next-steps"></a>
+ Learn about [scaling Lambda Managed Instances](lambda-managed-instances-scaling.md)
+ Review runtime-specific guides for [Java](lambda-managed-instances-java-runtime.md), [Node.js](lambda-managed-instances-nodejs-runtime.md), and [Python](lambda-managed-instances-python-runtime.md)
+ Configure [VPC connectivity for your capacity providers](lambda-managed-instances-networking.md)
+ Understand [security and permissions for Lambda Managed Instances](lambda-managed-instances-security.md)

# Lambda Managed Instances quotas
<a name="lambda-managed-instances-quotas"></a>

This page describes the service quotas for AWS Lambda Managed Instances. These quotas are separate from AWS Lambda (default) quotas. Some quotas can be increased upon request.

## Lambda API request quotas
<a name="lambda-managed-instances-api-request-quotas"></a>

These quotas control the rate at which you can make API calls to manage Lambda Managed Instances capacity providers. The read and write API rate limits apply to all capacity provider operations combined, including creating, updating, describing, and deleting capacity providers.


| Resource | Quota | 
| --- | --- | 
| The maximum combined rate (requests per second) for all capacity provider read APIs | 15 requests per second. Cannot be increased. | 
| The maximum combined rate (requests per second) for all capacity provider write APIs | 1 request per second. Cannot be increased. | 

## Lambda Managed Instances resource quotas
<a name="lambda-managed-instances-resource-quotas"></a>

These quotas define the limits for core Lambda Managed Instances resources within your AWS account. They govern the number of capacity providers you can create and the number of function versions that can be associated with each capacity provider.


| Resource | Quota | 
| --- | --- | 
| Capacity providers | 1,000. The maximum number of capacity providers per AWS account. | 
| Function versions per capacity provider | 100. The maximum number of function versions per capacity provider. Cannot be increased. | 

## Event source mapping quotas
<a name="lambda-managed-instances-event-source-quotas"></a>

These quotas control the throughput and configuration limits for processing events from various AWS services on Lambda Managed Instances. The throughput limits ensure predictable performance while the mapping count limits help maintain service stability. Event source mappings on Lambda Managed Instances support Amazon SQS, DynamoDB Streams, Amazon Kinesis Data Streams, Amazon MSK, and self-managed Apache Kafka as event sources.


| Resource | Quota | 
| --- | --- | 
| Standard SQS event source mapping throughput on Lambda Managed Instances | 5 MB per second. Cannot be increased. | 
| Standard Kafka event source mapping throughput on Lambda Managed Instances | 1 MB per second. Cannot be increased. | 
| Standard Kafka event source mappings on Lambda Managed Instances | 100 event source mappings. Cannot be increased. | 
| Kinesis event source mapping throughput on Lambda Managed Instances | 25 MB per second. Can be increased. | 
| DynamoDB event source mapping throughput on Lambda Managed Instances | 10 MB per second. Can be increased. | 
| Invoke request throughput for asynchronous invocations on Lambda Managed Instances | 5 MB per second. Can be increased. | 

## Requesting a quota increase
<a name="lambda-managed-instances-requesting-quota-increase"></a>

For quotas that can be increased, you can request an increase through the Service Quotas console.

**To request a quota increase**

1. Open the Service Quotas console at [console.aws.amazon.com/servicequotas/](http://console.aws.amazon.com/servicequotas/).

1. In the navigation pane, choose **AWS services**.

1. Choose **AWS Lambda**.

1. Select the quota you want to increase.

1. Choose **Request quota increase**.

1. Enter the new quota value and provide a justification for the increase.

1. Choose **Request**.
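You can also submit the same request through the Service Quotas API. The sketch below assumes the AWS SDK for Python (Boto3); the quota code is a placeholder that you would look up first (for example with the `list_service_quotas` operation for service code `lambda`), and `boto3` is imported inside the function so the sketch stays importable without the SDK installed.

```python
def request_lambda_quota_increase(quota_code, desired_value,
                                  region="us-east-1"):
    """Request an increase for an adjustable Lambda quota. quota_code is
    a placeholder; look up the real code for the quota you want with the
    list_service_quotas operation for ServiceCode 'lambda'."""
    import boto3  # imported here so the sketch is importable without the SDK
    sq = boto3.client("service-quotas", region_name=region)
    resp = sq.request_service_quota_increase(
        ServiceCode="lambda",
        QuotaCode=quota_code,
        DesiredValue=desired_value,
    )
    # The request is reviewed asynchronously; poll its status later
    return resp["RequestedQuota"]["Status"]
```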

## Next steps
<a name="lambda-managed-instances-quotas-next-steps"></a>
+ Learn about [capacity providers for Lambda Managed Instances](lambda-managed-instances-capacity-providers.md)
+ Understand [scaling for Lambda Managed Instances](lambda-managed-instances-scaling.md)
+ Review runtime-specific guides for [Java](lambda-managed-instances-java-runtime.md), [Node.js](lambda-managed-instances-nodejs-runtime.md), and [Python](lambda-managed-instances-python-runtime.md)
+ Configure [VPC connectivity for your capacity providers](lambda-managed-instances-networking.md)

# Best practices for Lambda Managed Instances
<a name="lambda-managed-instances-best-practices"></a>

## Capacity provider configuration
<a name="lambda-managed-instances-bp-capacity-provider"></a>

**Separate capacity providers by trust level.** Create different capacity providers for workloads with different security requirements. All functions assigned to the same capacity provider must be mutually trusted, as capacity providers serve as the security boundary.

**Use descriptive names.** Name capacity providers to clearly indicate their intended use and trust level (for example, `production-trusted`, `dev-sandbox`). This helps teams understand the purpose and security posture of each capacity provider.

**Use multiple Availability Zones.** Specify subnets across multiple Availability Zones when creating capacity providers. Lambda launches three instances by default for AZ resiliency, ensuring high availability for your functions.

## Instance type selection
<a name="lambda-managed-instances-bp-instance-types"></a>

**Let Lambda choose instance types.** By default, Lambda chooses the best instance types for your workload. We recommend letting Lambda Managed Instances choose instance types for you, as restricting the number of possible instance types may result in lower availability.

**Specify instance types for specific requirements.** If you have specific hardware requirements, set allowed instance types to a list of compatible instances. For example:
+ For applications requiring high network bandwidth, select network-optimized instance types
+ For testing or development environments with cost constraints, choose smaller instance types such as `m7a.large`

## Function configuration
<a name="lambda-managed-instances-bp-function-config"></a>

**Choose appropriate memory and vCPU settings.** Select memory and vCPU configurations that support multi-concurrent executions of your function. The minimum supported function size is 2 GB and 1 vCPU.
+ For Python applications, choose a higher ratio of memory to vCPUs (such as 4 to 1 or 8 to 1) because Python runs each concurrent request in a separate process with its own memory space
+ For CPU-intensive operations or functions that perform little IO, choose more than one vCPU
+ For IO-heavy applications like web services or batch jobs, multi-concurrency provides the most benefit

**Configure maximum concurrency appropriately.** Lambda chooses sensible defaults for maximum concurrency that balance resource consumption and throughput. Adjust this setting based on your function's resource usage:
+ Increase maximum concurrency (up to 64 per vCPU) if your function invocations use very little CPU
+ Decrease maximum concurrency if your application consumes a large amount of memory and very little CPU

Note that execution environments with very low concurrency may experience throttles and difficulty scaling.
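The sizing guidance above can be expressed as back-of-envelope arithmetic. The following sketch is illustrative only (the 64-per-vCPU cap comes from this guide; the per-invocation memory figure is something you measure for your own workload):

```python
def max_concurrency_estimate(vcpus, memory_mb, per_invoke_memory_mb,
                             per_vcpu_cap=64):
    """Rough upper bound on maximum concurrency for one execution
    environment: capped at 64 concurrent invocations per vCPU, and by
    how many invocations fit in the configured memory."""
    cpu_bound = vcpus * per_vcpu_cap
    memory_bound = memory_mb // per_invoke_memory_mb
    return min(cpu_bound, memory_bound)

# Example: a 1-vCPU, 2048 MB function whose invocations each use
# ~256 MB is memory-bound at 8 concurrent invocations, well under
# the 64-per-vCPU cap.
```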

## Scaling configuration
<a name="lambda-managed-instances-bp-scaling"></a>

**Set appropriate target resource utilization.** By default, Lambda maintains enough headroom for your traffic to double within 5 minutes without throttles. Adjust this based on your workload characteristics:
+ For very steady workloads or applications not sensitive to throttles, set the target to a high level to achieve higher utilization and lower costs
+ For workloads with potential traffic bursts, set resource targets to a low level to maintain additional headroom

**Plan for traffic growth.** If your traffic more than doubles within 5 minutes, you may see throttles as Lambda scales up instances and execution environments. Design your application to handle potential throttling during rapid scale-up periods.

## Security
<a name="lambda-managed-instances-bp-security"></a>

**Apply least privilege for PassCapacityProvider permissions.** Grant `lambda:PassCapacityProvider` permissions only for necessary capacity providers. Use resource-level permissions to restrict which capacity providers users can assign to functions.

**Monitor capacity provider usage.** Use AWS CloudTrail to monitor capacity provider assignments and access patterns. This helps identify unauthorized access attempts and ensures compliance with security policies.

**Separate untrusted workloads.** Do not rely on containers for security isolation between untrusted workloads. Use different capacity providers to separate workloads that are not mutually trusted.

## Cost optimization
<a name="lambda-managed-instances-bp-cost"></a>

**Leverage EC2 pricing options.** Take advantage of EC2 Savings Plans and Reserved Instances to reduce costs. These pricing options apply to the underlying EC2 compute (the 15% management fee is not discounted).

**Optimize for steady-state workloads.** Lambda Managed Instances are best suited for steady-state functions with predictable high-volume traffic. For bursty traffic patterns, Lambda (default) may be more cost-effective.

**Monitor resource utilization.** Track CloudWatch metrics to understand CPU and memory utilization. Adjust function memory allocation and instance type selection based on actual usage patterns to optimize costs.

## Monitoring and observability
<a name="lambda-managed-instances-bp-monitoring"></a>

**Monitor capacity provider metrics.** Track capacity provider level metrics including CPUUtilization, MemoryUtilization, vCPUAvailable, and MemoryAvailable to verify sufficient resources are available for your workloads.

**Monitor execution environment metrics.** Track execution environment level metrics including ExecutionEnvironmentConcurrency and ExecutionEnvironmentConcurrencyLimit to understand scaling behavior and identify potential throttling.

**Set up CloudWatch alarms.** Create CloudWatch alarms for key metrics to proactively identify issues:
+ High CPU or memory utilization
+ Low available capacity
+ Approaching concurrency limits
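The alarm recommendation above can be sketched with the AWS SDK for Python (Boto3). The alarm name format and the 80% threshold are illustrative choices, the alarm action (for example, an SNS topic ARN) is left to the caller, and `boto3` is imported inside the function so the sketch stays importable without the SDK installed.

```python
def create_utilization_alarm(capacity_provider, instance_type,
                             metric="CPUUtilization", threshold=80.0,
                             alarm_actions=None, region="us-east-1"):
    """Alarm when a capacity provider level utilization metric averages
    above the threshold for three consecutive 5-minute periods."""
    import boto3  # imported here so the sketch is importable without the SDK
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    cloudwatch.put_metric_alarm(
        AlarmName=f"{capacity_provider}-high-{metric}",
        Namespace="AWS/Lambda",
        MetricName=metric,
        Dimensions=[
            {"Name": "CapacityProviderName", "Value": capacity_provider},
            {"Name": "InstanceType", "Value": instance_type},
        ],
        Statistic="Average",
        Period=300,            # matches the 5-minute publication interval
        EvaluationPeriods=3,   # sustained, not momentary, high utilization
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=alarm_actions or [],
    )
```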

## Language-specific considerations
<a name="lambda-managed-instances-bp-runtime"></a>

**Follow language-specific best practices.** Each programming language handles multi-concurrency differently. Review the language-specific guides for detailed recommendations:
+ **Java:** Use thread-safe collections, `AtomicInteger`, and `ThreadLocal` for request-specific state
+ **Node.js:** Use InvokeStore for all request-specific state and avoid global variables
+ **Python:** Use unique file names in `/tmp` with request IDs and consider process-based memory isolation
+ **Rust:** Use `run_concurrent` instead of `run`, with the `concurrency-tokio` feature enabled. The handler must be `Clone` and `Send`.

**Test for thread safety and concurrency issues.** Before deploying to production, thoroughly test your functions for thread safety issues, race conditions, and proper state isolation under concurrent load.

## Next steps
<a name="lambda-managed-instances-bp-next-steps"></a>
+ Learn about [capacity providers for Lambda Managed Instances](lambda-managed-instances-capacity-providers.md)
+ Understand [scaling for Lambda Managed Instances](lambda-managed-instances-scaling.md)
+ Review runtime-specific guides for [Java](lambda-managed-instances-java-runtime.md), [Node.js](lambda-managed-instances-nodejs-runtime.md), and [Python](lambda-managed-instances-python-runtime.md)
+ Configure [VPC connectivity for your capacity providers](lambda-managed-instances-networking.md)
+ Monitor Lambda Managed Instances with [CloudWatch metrics](lambda-managed-instances-monitoring.md)

# Troubleshooting Lambda Managed Instances
<a name="lambda-managed-instances-troubleshooting"></a>

## Throttling and scaling issues
<a name="lambda-managed-instances-ts-throttling"></a>

### High error rates during scale-up
<a name="lambda-managed-instances-ts-high-error-rates"></a>

**Problem:** You experience throttling errors (HTTP 429) when traffic increases rapidly.

**Cause:** Lambda Managed Instances scale asynchronously based on CPU resource utilization and multi-concurrency saturation. If your traffic more than doubles within 5 minutes, you may see throttles as Lambda scales up instances and execution environments to meet demand.

**Solution:**
+ **Adjust target resource utilization:** If your workload has predictable traffic patterns, set a lower target resource utilization to maintain additional headroom for traffic bursts.
+ **Pre-warm capacity:** For planned traffic increases, gradually ramp up traffic over a longer period to allow scaling to keep pace.
+ **Monitor scaling metrics:** Track throttle error metrics to understand the reason for throttles and capacity scaling issues.
+ **Review function configuration:** Ensure your function memory and vCPU settings support multi-concurrent executions. Increase function memory or vCPU allocation if needed.

### Slow scale-down
<a name="lambda-managed-instances-ts-slow-scale-down"></a>

**Problem:** Instances take a long time to scale down after traffic decreases.

**Cause:** Lambda Managed Instances scale down gradually to maintain availability and avoid rapid capacity changes that could impact performance.

**Solution:**

This is expected behavior. Lambda scales down instances conservatively to ensure stability. Monitor your CloudWatch metrics to track the number of running instances.

## Concurrency issues
<a name="lambda-managed-instances-ts-concurrency"></a>

### Execution environments with low concurrency experience throttles
<a name="lambda-managed-instances-ts-low-concurrency-throttles"></a>

**Problem:** Your functions experience throttling despite having available capacity.

**Cause:** Execution environments with very low maximum concurrency may have difficulty scaling effectively. Lambda Managed Instances are designed for multi-concurrent applications.

**Solution:**
+ **Increase maximum concurrency:** If your function invocations use very little CPU, increase the maximum concurrency setting up to 64 per vCPU.
+ **Optimize function code:** Review your function code to reduce CPU consumption per invocation, allowing higher concurrency.
+ **Adjust function memory and vCPU:** Ensure your function has sufficient resources to handle multiple concurrent invocations.

### Thread safety issues (Java runtime)
<a name="lambda-managed-instances-ts-thread-safety-java"></a>

**Problem:** Your Java function produces incorrect results or experiences race conditions under load.

**Cause:** Multiple threads execute the handler method simultaneously, and shared state is not thread-safe.

**Solution:**
+ Use `AtomicInteger` or `AtomicLong` for counters instead of primitive types
+ Replace `HashMap` with `ConcurrentHashMap`
+ Use `Collections.synchronizedList()` to wrap `ArrayList`
+ Use `ThreadLocal` for request-specific state
+ Access trace IDs from the Lambda Context object, not environment variables

For detailed guidance, see the [Java runtime for Lambda Managed Instances](lambda-managed-instances-java-runtime.md) documentation.

### State isolation issues (Node.js runtime)
<a name="lambda-managed-instances-ts-state-isolation-nodejs"></a>

**Problem:** Your Node.js function returns data from different requests or experiences data corruption.

**Cause:** Global variables are shared across concurrent invocations on the same worker thread. When async operations yield control, other invocations can modify shared state.

**Solution:**
+ Install and use `@aws/lambda-invoke-store` for all request-specific state
+ Replace global variables with `InvokeStore.set()` and `InvokeStore.get()`
+ Use unique file names in `/tmp` with request IDs
+ Access trace IDs using `InvokeStore.getXRayTraceId()` instead of environment variables

For detailed guidance, see the [Node.js runtime for Lambda Managed Instances](lambda-managed-instances-nodejs-runtime.md) documentation.

### File conflicts (Python runtime)
<a name="lambda-managed-instances-ts-file-conflicts-python"></a>

**Problem:** Your Python function reads incorrect data from files in `/tmp`.

**Cause:** Multiple processes share the `/tmp` directory. Concurrent writes to the same file can cause data corruption.

**Solution:**
+ Use unique file names with request IDs: `/tmp/request_{context.aws_request_id}.txt`
+ Use file locking with `fcntl.flock()` for shared files
+ Clean up temporary files with `os.remove()` after use

For detailed guidance, see the [Python runtime for Lambda Managed Instances](lambda-managed-instances-python-runtime.md) documentation.
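The three remediations above can be sketched together. This is a minimal illustration using the standard Lambda context object's `aws_request_id`; the helper names and file contents are hypothetical.

```python
import fcntl
import os

TMP_DIR = "/tmp"  # the writable scratch path in a Lambda execution environment

def write_scratch(request_id, data, tmp_dir=TMP_DIR):
    """Write request-scoped data to a file name that no other concurrent
    invocation sharing /tmp can collide with."""
    path = os.path.join(tmp_dir, f"request_{request_id}.txt")
    with open(path, "w") as f:
        f.write(data)
    return path

def append_shared(request_id, line, tmp_dir=TMP_DIR):
    """Append to a genuinely shared file under an advisory lock so
    concurrent writers do not interleave partial lines."""
    with open(os.path.join(tmp_dir, "shared.log"), "a") as shared:
        fcntl.flock(shared, fcntl.LOCK_EX)
        shared.write(f"{request_id}: {line}\n")
        fcntl.flock(shared, fcntl.LOCK_UN)

def lambda_handler(event, context):
    path = write_scratch(context.aws_request_id, "per-request data")
    try:
        append_shared(context.aws_request_id, "processed")
    finally:
        os.remove(path)  # clean up so /tmp does not fill up over time
    return {"statusCode": 200}
```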

## Performance issues
<a name="lambda-managed-instances-ts-performance"></a>

### High memory utilization
<a name="lambda-managed-instances-ts-high-memory"></a>

**Problem:** Your functions experience high memory utilization or out-of-memory errors.

**Cause:** Each concurrent request in Python runs in a separate process with its own memory space. Total memory usage equals per-process memory multiplied by concurrent processes.

**Solution:**
+ Monitor the `MemoryUtilization` metric in CloudWatch
+ Reduce the `MaxConcurrency` setting if memory usage approaches the function's memory limit
+ Increase function memory allocation to support higher concurrency
+ Optimize memory usage by loading data on-demand instead of during initialization

### Inconsistent performance
<a name="lambda-managed-instances-ts-inconsistent-performance"></a>

**Problem:** Function performance varies significantly between invocations.

**Cause:** Lambda may select different instance types based on availability, or functions may be running on instances with varying resource availability.

**Solution:**
+ **Specify allowed instance types:** If you have specific performance requirements, configure allowed instance types in your capacity provider to limit the instance types Lambda can select.
+ **Monitor instance-level metrics:** Track `CPUUtilization` and `MemoryUtilization` at the capacity provider level to identify resource constraints.
+ **Review capacity metrics:** Check `vCPUAvailable` and `MemoryAvailable` to ensure sufficient resources are available on your instances.

## Capacity provider issues
<a name="lambda-managed-instances-ts-capacity-provider"></a>

### Function version fails to become ACTIVE
<a name="lambda-managed-instances-ts-function-not-active"></a>

**Problem:** Your function version remains in a pending state after publishing.

**Cause:** Lambda is launching Managed Instances and starting execution environments. This process takes time, especially for the first function version on a new capacity provider.

**Solution:**

Wait for Lambda to complete the initialization process. Lambda launches three instances by default for AZ resiliency and starts three execution environments before marking your function version ACTIVE. This typically takes several minutes.

### Cannot delete capacity provider
<a name="lambda-managed-instances-ts-cannot-delete"></a>

**Problem:** You receive an error when attempting to delete a capacity provider.

**Cause:** You cannot delete a capacity provider that has function versions attached to it.

**Solution:**

1. Identify all function versions attached to the capacity provider by calling the `ListFunctionVersionsByCapacityProvider` API.

1. Delete or update those function versions to remove the capacity provider association.

1. Retry deleting the capacity provider.

### Generic error messages during function publishing
<a name="lambda-managed-instances-ts-generic-errors"></a>

**Problem:** You encounter generic error messages such as "Internal error occurred during publishing" when publishing functions.

**Solution:**
+ **Check IAM permissions:** Ensure you have the `lambda:PassCapacityProvider` permission for the capacity provider you're trying to use.
+ **Verify capacity provider configuration:** Confirm that your capacity provider is in the ACTIVE state using the `GetCapacityProvider` API.
+ **Review VPC configuration:** Ensure the subnets and security groups specified in your capacity provider are correctly configured and accessible.
+ **Check AWS CloudTrail logs:** Review CloudTrail logs for detailed error information about the failed operation.

## Monitoring and observability issues
<a name="lambda-managed-instances-ts-monitoring"></a>

### Missing CloudWatch metrics
<a name="lambda-managed-instances-ts-missing-metrics"></a>

**Problem:** You don't see expected metrics in CloudWatch for your capacity provider or functions.

**Cause:** Metrics are published at 5-minute intervals. New capacity providers or functions may not have metrics available immediately.

**Solution:**

Wait at least 5-10 minutes after publishing a function version before expecting metrics to appear in CloudWatch. Verify you're looking at the correct namespace (`AWS/Lambda`) and dimensions (`CapacityProviderName`, `FunctionName`, or `InstanceType`).

### Cannot find CloudWatch logs
<a name="lambda-managed-instances-ts-no-logs"></a>

**Problem:** Your function executes successfully, but you cannot find logs in CloudWatch Logs.

**Cause:** Lambda Managed Instances run in your VPC and require network connectivity to send logs to CloudWatch Logs. Without proper VPC connectivity configuration, your functions cannot reach the CloudWatch Logs service endpoint.

**Solution:**

Configure VPC connectivity to enable your functions to send logs to CloudWatch Logs. You have three options:

**Option 1: VPC endpoint for CloudWatch Logs (recommended for production)**

1. Open the Amazon VPC console at [console.aws.amazon.com/vpc/](http://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Endpoints**.

1. Choose **Create endpoint**.

1. For **Service category**, choose **AWS services**.

1. For **Service name**, select `com.amazonaws.region.logs` (replace `region` with your AWS Region).

1. For **VPC**, select the VPC used by your capacity provider.

1. For **Subnets**, select the subnets where you want to create endpoint network interfaces. For high availability, select subnets in multiple Availability Zones.

1. For **Security groups**, select security groups that allow inbound HTTPS traffic (port 443) from your function's security group.

1. Enable **Private DNS** for the endpoint.

1. Choose **Create endpoint**.
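The console steps above map to a single API call. The following sketch assumes the AWS SDK for Python (Boto3); the VPC, subnet, and security group IDs are placeholders from your capacity provider configuration, and `boto3` is imported inside the function so the sketch stays importable without the SDK installed.

```python
def create_logs_endpoint(vpc_id, subnet_ids, security_group_ids,
                         region="us-east-1"):
    """Create an interface VPC endpoint for CloudWatch Logs so functions
    in private subnets can deliver logs without internet access."""
    import boto3  # imported here so the sketch is importable without the SDK
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=vpc_id,
        ServiceName=f"com.amazonaws.{region}.logs",
        SubnetIds=subnet_ids,                 # multiple AZs for availability
        SecurityGroupIds=security_group_ids,  # must allow inbound HTTPS (443)
        PrivateDnsEnabled=True,
    )
    return resp["VpcEndpoint"]["VpcEndpointId"]
```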

**Option 2: Public subnet with internet gateway**

If your capacity provider uses public subnets, ensure:

1. An internet gateway is attached to your VPC

1. The route table routes `0.0.0.0/0` traffic to the internet gateway

1. Security groups allow outbound HTTPS traffic on port 443

**Option 3: Private subnet with NAT gateway**

If your capacity provider uses private subnets, ensure:

1. A NAT gateway exists in a public subnet

1. The private subnet route table routes `0.0.0.0/0` traffic to the NAT gateway

1. The public subnet route table routes `0.0.0.0/0` traffic to an internet gateway

1. Security groups allow outbound HTTPS traffic on port 443

For detailed guidance on VPC connectivity options, see [VPC connectivity for Lambda Managed Instances](lambda-managed-instances-networking.md).

### Difficulty correlating logs from concurrent requests
<a name="lambda-managed-instances-ts-log-correlation"></a>

**Problem:** Logs from different requests are interleaved, making it difficult to trace individual requests.

**Cause:** Log interleaving is expected and standard behavior in multi-concurrent systems.

**Solution:**
+ **Use structured logging with JSON format:** Include request ID in all log statements
+ **Java:** Use Log4j with `ThreadContext` to automatically include request ID
+ **Node.js:** Use `console.log()` with JSON formatting and include `InvokeStore.getRequestId()`
+ **Python:** Use the standard logging module with JSON formatting and include `context.aws_request_id`
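For Python, the structured-logging recommendation above can be sketched with the standard library alone. The formatter and field names here are illustrative, not a prescribed schema; the request ID is attached per log call through the `extra` argument.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as one JSON object so interleaved lines from
    concurrent requests can be filtered by request_id."""
    def format(self, record):
        entry = {
            "level": record.levelname,
            "message": record.getMessage(),
            # attached via the `extra` argument on each log call
            "request_id": getattr(record, "request_id", None),
        }
        return json.dumps(entry)

logger = logging.getLogger("handler")
_handler = logging.StreamHandler()
_handler.setFormatter(JsonFormatter())
logger.addHandler(_handler)
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("processing event",
                extra={"request_id": context.aws_request_id})
    return {"statusCode": 200}
```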

For detailed guidance, see the runtime-specific documentation pages.

## Getting additional help
<a name="lambda-managed-instances-ts-getting-help"></a>

If you continue to experience issues after trying these solutions:

1. **Review CloudWatch metrics:** Check capacity provider and execution environment metrics to identify resource constraints or scaling issues.

1. **Check AWS CloudTrail logs:** Review CloudTrail logs for detailed information about API calls and errors.

1. **Contact AWS Support:** If you cannot resolve the issue, contact AWS Support with details about your capacity provider configuration, function configuration, and the specific error messages you're encountering.

## Next steps
<a name="lambda-managed-instances-ts-next-steps"></a>
+ Learn about [capacity providers for Lambda Managed Instances](lambda-managed-instances-capacity-providers.md)
+ Understand [scaling for Lambda Managed Instances](lambda-managed-instances-scaling.md)
+ Review runtime-specific guides for [Java](lambda-managed-instances-java-runtime.md), [Node.js](lambda-managed-instances-nodejs-runtime.md), and [Python](lambda-managed-instances-python-runtime.md)
+ Monitor Lambda Managed Instances with [CloudWatch metrics](lambda-managed-instances-monitoring.md)
+ Review [best practices for Lambda Managed Instances](lambda-managed-instances-best-practices.md)