


# Simplify node lifecycle with managed node groups
<a name="managed-node-groups"></a>

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.

With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, automatically update, or terminate nodes for your cluster with a single operation. Node updates and terminations automatically drain nodes to ensure that your applications stay available.

Every managed node is provisioned as part of an Amazon EC2 Auto Scaling group that’s managed for you by Amazon EKS. Every resource, including the instances and Auto Scaling groups, runs within your AWS account. Each node group runs across multiple Availability Zones that you define.

Managed node groups can also optionally leverage node auto repair, which continuously monitors the health of nodes. It automatically reacts to detected problems and replaces nodes when possible. This helps overall availability of the cluster with minimal manual intervention. For more information, see [Detect node health issues and enable automatic node repair](node-health.md).

You can add a managed node group to new or existing clusters using the Amazon EKS console, `eksctl`, AWS CLI, AWS API, or infrastructure as code tools including AWS CloudFormation. Nodes launched as part of a managed node group are automatically tagged for auto-discovery by the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md). You can use the node group to apply Kubernetes labels to nodes and update them at any time.
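
The label workflow mentioned above can be sketched with the `UpdateNodegroupConfig` API. The cluster name (`my-cluster`), node group name (`my-mng`), and the `environment` label below are placeholder values; the payload shape follows the API’s `addOrUpdateLabels`/`removeLabels` fields.

```
# Write an UpdateNodegroupConfig payload that adds one Kubernetes label.
# "my-cluster", "my-mng", and the "environment" label are placeholder values.
cat >update-labels.json <<'EOF'
{
  "clusterName": "my-cluster",
  "nodegroupName": "my-mng",
  "labels": {
    "addOrUpdateLabels": { "environment": "test" },
    "removeLabels": []
  }
}
EOF
```

With AWS credentials configured and an existing node group, you would then apply the change with `aws eks update-nodegroup-config --cli-input-json file://update-labels.json`.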

There are no additional costs to use Amazon EKS managed node groups; you pay only for the AWS resources that you provision. These include Amazon EC2 instances, Amazon EBS volumes, Amazon EKS cluster hours, and any other AWS infrastructure. There are no minimum fees and no upfront commitments.

To get started with a new Amazon EKS cluster and managed node group, see [Get started with Amazon EKS – AWS Management Console and AWS CLI](getting-started-console.md).

To add a managed node group to an existing cluster, see [Create a managed node group for your cluster](create-managed-node-group.md).

## Managed node groups concepts
<a name="managed-node-group-concepts"></a>
+ Amazon EKS managed node groups create and manage Amazon EC2 instances for you.
+ Every managed node is provisioned as part of an Amazon EC2 Auto Scaling group that’s managed for you by Amazon EKS. Moreover, every resource, including Amazon EC2 instances and Auto Scaling groups, runs within your AWS account.
+ Amazon EKS periodically syncs the managed node group’s scaling configuration to match the actual Auto Scaling group values. If an external actor such as Cluster Autoscaler modifies the Auto Scaling group’s size, `DescribeNodegroup` will eventually reflect those changes. When you initiate a node group update or upgrade without explicitly modifying the scaling configuration, the workflow uses the current Auto Scaling group values rather than the node group’s stored scaling configuration. The stored scaling configuration only takes precedence when you explicitly include it in an `UpdateNodegroupConfig` request.
+ The Auto Scaling group of a managed node group spans every subnet that you specify when you create the group.
+ Amazon EKS tags managed node group resources so that the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) can automatically discover them.
**Important**  
If you are running a stateful application across multiple Availability Zones that is backed by Amazon EBS volumes and using the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md), you should configure multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the `--balance-similar-node-groups` feature.
+ You can use a custom launch template for a greater level of flexibility and customization when deploying managed nodes. For example, you can specify extra `kubelet` arguments and use a custom AMI. For more information, see [Customize managed nodes with launch templates](launch-templates.md). If you don’t use a custom launch template when first creating a managed node group, Amazon EKS auto-generates a launch template for the node group. Don’t manually modify this auto-generated template, or errors might occur.
+ Amazon EKS follows the shared responsibility model for CVEs and security patches on managed node groups. When managed nodes run an Amazon EKS optimized AMI, Amazon EKS is responsible for building and publishing patched versions of the AMI when bugs or issues are reported. However, you’re responsible for deploying these patched AMI versions to your managed node groups. When managed nodes run a custom AMI, you’re responsible for building patched versions of the AMI when bugs or issues are reported and then deploying the AMI. For more information, see [Update a managed node group for your cluster](update-managed-node-group.md).
+ Amazon EKS managed node groups can be launched in both public and private subnets. If you launch a managed node group in a public subnet on or after April 22, 2020, the subnet must have `MapPublicIpOnLaunch` set to `true` for the instances to successfully join a cluster. If the public subnet was created using `eksctl` or the [Amazon EKS vended AWS CloudFormation templates](creating-a-vpc.md) on or after March 26, 2020, then this setting is already set to `true`. If the public subnets were created before March 26, 2020, you must change the setting manually. For more information, see [Modifying the public IPv4 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-public-ip).
+ When deploying a managed node group in private subnets, you must ensure that it can access Amazon ECR for pulling container images. You can do this by connecting a NAT gateway to the route table of the subnet or by adding the following [AWS PrivateLink VPC endpoints](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html#ecr-setting-up-vpc-create):
  + Amazon ECR API endpoint interface – `com.amazonaws.region-code.ecr.api` 
  + Amazon ECR Docker registry API endpoint interface – `com.amazonaws.region-code.ecr.dkr` 
  + Amazon S3 gateway endpoint – `com.amazonaws.region-code.s3` 

  For other commonly-used services and endpoints, see [Deploy private clusters with limited internet access](private-clusters.md).
+ Managed node groups can’t be deployed on [AWS Outposts](eks-outposts.md) or in [AWS Wavelength](https://docs.aws.amazon.com/wavelength/). Managed node groups can be created on [AWS Local Zones](https://aws.amazon.com/about-aws/global-infrastructure/localzones/). For more information, see [Launch low-latency EKS clusters with AWS Local Zones](local-zones.md).
+ You can create multiple managed node groups within a single cluster. For example, you can create one node group with the standard Amazon EKS optimized Amazon Linux AMI for some workloads and another with the GPU variant for workloads that require GPU support.
+ If your managed node group encounters an [Amazon EC2 instance status check](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html) failure, Amazon EKS returns an error code to help you to diagnose the issue. For more information, see [Managed node group error codes](troubleshooting.md#troubleshoot-managed-node-groups).
+ Amazon EKS adds Kubernetes labels to managed node group instances. These Amazon EKS provided labels are prefixed with `eks.amazonaws.com`.
+ Amazon EKS automatically drains nodes using the Kubernetes API during terminations or updates.
+ Pod disruption budgets aren’t respected when terminating a node with `AZRebalance` or reducing the desired node count. These actions try to evict Pods on the node, but if eviction takes more than 15 minutes, the node is terminated regardless of whether all Pods on the node have terminated. To extend the period until the node is terminated, add a lifecycle hook to the Auto Scaling group. For more information, see [Add lifecycle hooks](https://docs.aws.amazon.com/autoscaling/ec2/userguide/adding-lifecycle-hooks.html) in the *Amazon EC2 Auto Scaling User Guide*.
+ To run the drain process correctly after receiving a Spot interruption notification or a capacity rebalance notification, `CapacityRebalance` must be set to `true` on the Auto Scaling group.
+ Updating managed node groups respects the Pod disruption budgets that you set for your Pods. For more information, see [Understand each phase of node updates](managed-node-update-behavior.md).
+ There are no additional costs to use Amazon EKS managed node groups. You only pay for the AWS resources that you provision.
+ If you want to encrypt Amazon EBS volumes for your nodes, you can deploy the nodes using a launch template. To deploy managed nodes with encrypted Amazon EBS volumes without using a launch template, encrypt all new Amazon EBS volumes created in your account. For more information, see [Encryption by default](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default) in the *Amazon EC2 User Guide*.
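
The scaling-configuration behavior described in the list above can be sketched against the same API. A minimal `UpdateNodegroupConfig` payload that explicitly sets the stored scaling configuration might look like the following; the names and sizes are placeholder values.

```
# Explicitly set the node group's stored scaling configuration.
# "my-cluster" and "my-mng" are placeholder names; sizes are examples.
cat >update-scaling.json <<'EOF'
{
  "clusterName": "my-cluster",
  "nodegroupName": "my-mng",
  "scalingConfig": {
    "minSize": 2,
    "maxSize": 6,
    "desiredSize": 3
  }
}
EOF
```

You would apply it with `aws eks update-nodegroup-config --cli-input-json file://update-scaling.json`, and inspect the values that `DescribeNodegroup` reports with `aws eks describe-nodegroup --cluster-name my-cluster --nodegroup-name my-mng --query nodegroup.scalingConfig`.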

## Managed node group capacity types
<a name="managed-node-group-capacity-types"></a>

When creating a managed node group, you can choose either the On-Demand or Spot capacity type. Amazon EKS deploys a managed node group with an Amazon EC2 Auto Scaling group that contains either only On-Demand Instances or only Amazon EC2 Spot Instances. You can schedule Pods for fault-tolerant applications to Spot managed node groups, and fault-intolerant applications to On-Demand node groups within a single Kubernetes cluster. By default, a managed node group deploys On-Demand Amazon EC2 instances.

### On-Demand
<a name="managed-node-group-capacity-types-on-demand"></a>

With On-Demand Instances, you pay for compute capacity by the second, with no long-term commitments.

By default, if you don’t specify a **Capacity Type**, the managed node group is provisioned with On-Demand Instances. A managed node group configures an Amazon EC2 Auto Scaling group on your behalf with the following settings applied:
+ The allocation strategy to provision On-Demand capacity is set to `prioritized`. Managed node groups use the order of instance types passed in the API to determine which instance type to use first when fulfilling On-Demand capacity. For example, you might specify three instance types in the following order: `c5.large`, `c4.large`, and `c3.large`. When your On-Demand Instances are launched, the managed node group fulfills On-Demand capacity by starting with `c5.large`, then `c4.large`, and then `c3.large`. For more information, see [Amazon EC2 Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-purchase-options.html#asg-allocation-strategies) in the *Amazon EC2 Auto Scaling User Guide*.
+ Amazon EKS adds the following Kubernetes label to all nodes in your managed node group that specifies the capacity type: `eks.amazonaws.com/capacityType: ON_DEMAND`. You can use this label to schedule stateful or fault-intolerant applications on On-Demand nodes.
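
As a sketch of how the capacity-type label can be used, the following Deployment manifest pins Pods to On-Demand nodes with a `nodeSelector`. The Deployment name and container image are placeholders, not values from this guide.

```
# Write a Deployment manifest that schedules Pods only onto On-Demand nodes
# by selecting on the Amazon EKS capacity-type label.
cat >ondemand-app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful-api    # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: stateful-api
  template:
    metadata:
      labels:
        app: stateful-api
    spec:
      nodeSelector:
        eks.amazonaws.com/capacityType: ON_DEMAND
      containers:
      - name: app
        image: public.ecr.aws/nginx/nginx:latest    # placeholder image
EOF
```

You would deploy it to the cluster with `kubectl apply -f ondemand-app.yaml`.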

### Spot
<a name="managed-node-group-capacity-types-spot"></a>

Amazon EC2 Spot Instances are spare Amazon EC2 capacity offered at steep discounts compared to On-Demand prices. Amazon EC2 Spot Instances can be interrupted with a two-minute interruption notice when EC2 needs the capacity back. For more information, see [Spot Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html) in the *Amazon EC2 User Guide*. You can configure a managed node group with Amazon EC2 Spot Instances to optimize costs for the compute nodes running in your Amazon EKS cluster.

To use Spot Instances inside a managed node group, create a managed node group by setting the capacity type as `spot`. A managed node group configures an Amazon EC2 Auto Scaling group on your behalf with the following Spot best practices applied:
+ To ensure that your Spot nodes are provisioned in the optimal Spot capacity pools, the allocation strategy is set to one of the following:
  +  `price-capacity-optimized` (PCO) – When creating new node groups in a cluster with Kubernetes version `1.28` or higher, the allocation strategy is set to `price-capacity-optimized`. However, the allocation strategy won’t be changed for node groups already created with `capacity-optimized` before Amazon EKS managed node groups started to support PCO.
  +  `capacity-optimized` (CO) – When creating new node groups in a cluster with Kubernetes version `1.27` or lower, the allocation strategy is set to `capacity-optimized`.

  To increase the number of Spot capacity pools available for allocating capacity from, configure a managed node group to use multiple instance types.
+ Amazon EC2 Spot Capacity Rebalancing is enabled so that Amazon EKS can gracefully drain and rebalance your Spot nodes to minimize application disruption when a Spot node is at elevated risk of interruption. For more information, see [Amazon EC2 Auto Scaling Capacity Rebalancing](https://docs.aws.amazon.com/autoscaling/ec2/userguide/capacity-rebalance.html) in the *Amazon EC2 Auto Scaling User Guide*.
  + When a Spot node receives a rebalance recommendation, Amazon EKS automatically attempts to launch a new replacement Spot node.
  + If a Spot two-minute interruption notice arrives before the replacement Spot node is in a `Ready` state, Amazon EKS starts draining the Spot node that received the rebalance recommendation. Amazon EKS drains the node on a best-effort basis. As a result, there’s no guarantee that Amazon EKS will wait for the replacement node to join the cluster before draining the existing node.
  + When a replacement Spot node is bootstrapped and in the `Ready` state on Kubernetes, Amazon EKS cordons and drains the Spot node that received the rebalance recommendation. Cordoning the Spot node ensures that the service controller doesn’t send any new requests to this Spot node. It also removes it from its list of healthy, active Spot nodes. Draining the Spot node ensures that running Pods are evicted gracefully.
+ Amazon EKS adds the following Kubernetes label to all nodes in your managed node group that specifies the capacity type: `eks.amazonaws.com/capacityType: SPOT`. You can use this label to schedule fault-tolerant applications on Spot nodes.
**Important**  
EC2 issues a [Spot interruption notice](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-instance-termination-notices.html) two minutes prior to terminating your Spot Instance. However, Pods on Spot nodes might not receive the full two-minute window for graceful shutdown. When EC2 issues the notice, there is a delay before Amazon EKS begins evicting Pods. Evictions occur sequentially to protect the Kubernetes API server, so during multiple simultaneous Spot reclamations, some Pods may receive delayed eviction notices. Pods may be forcibly terminated without receiving termination signals, particularly on nodes with high Pod density, during concurrent reclamations, or when using long termination grace periods. For Spot workloads, we recommend designing applications to be interruption-tolerant, using termination grace periods of 30 seconds or less, avoiding long-running preStop hooks, and monitoring Pod eviction metrics to understand actual grace periods in your clusters. For workloads requiring guaranteed graceful termination, we recommend using On-Demand capacity instead.
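
Combining the guidance above, a Spot-targeted workload might select on the `SPOT` capacity-type label and keep its termination grace period within 30 seconds. This is a sketch with placeholder names and a placeholder image.

```
# Write a Deployment manifest for an interruption-tolerant worker that runs
# only on Spot nodes and keeps graceful shutdown within 30 seconds.
cat >spot-worker.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: queue-worker    # placeholder name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: queue-worker
  template:
    metadata:
      labels:
        app: queue-worker
    spec:
      nodeSelector:
        eks.amazonaws.com/capacityType: SPOT
      # Short grace period, per the Spot recommendation above.
      terminationGracePeriodSeconds: 30
      containers:
      - name: worker
        image: public.ecr.aws/docker/library/busybox:latest    # placeholder image
        command: ["sh", "-c", "while true; do sleep 60; done"]
EOF
```

You would deploy it with `kubectl apply -f spot-worker.yaml`.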

When deciding whether to deploy a node group with On-Demand or Spot capacity, you should consider the following conditions:
+ Spot Instances are a good fit for stateless, fault-tolerant, flexible applications. These include batch and machine learning training workloads, big data ETLs such as Apache Spark, queue processing applications, and stateless API endpoints. Because Spot is spare Amazon EC2 capacity, which can change over time, we recommend that you use Spot capacity for interruption-tolerant workloads. More specifically, Spot capacity is suitable for workloads that can tolerate periods where the required capacity isn’t available.
+ We recommend that you use On-Demand for applications that are fault intolerant. This includes cluster management tools such as monitoring and operational tools, deployments that require `StatefulSets`, and stateful applications, such as databases.
+ To maximize the availability of your applications while using Spot Instances, we recommend that you configure a Spot managed node group to use multiple instance types. We recommend applying the following rules when using multiple instance types:
  + Within a managed node group, if you’re using the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md), we recommend using a flexible set of instance types with the same amount of vCPU and memory resources. This is to ensure that the nodes in your cluster scale as expected. For example, if you need four vCPUs and eight GiB memory, use `c3.xlarge`, `c4.xlarge`, `c5.xlarge`, `c5d.xlarge`, `c5a.xlarge`, `c5n.xlarge`, or other similar instance types.
  + To enhance application availability, we recommend deploying multiple Spot managed node groups. For this, each group should use a flexible set of instance types that have the same vCPU and memory resources. For example, if you need 4 vCPUs and 8 GiB memory, we recommend that you create one managed node group with `c3.xlarge`, `c4.xlarge`, `c5.xlarge`, `c5d.xlarge`, `c5a.xlarge`, `c5n.xlarge`, or other similar instance types, and a second managed node group with `m3.xlarge`, `m4.xlarge`, `m5.xlarge`, `m5d.xlarge`, `m5a.xlarge`, `m5n.xlarge` or other similar instance types.
  + When deploying your node group with the Spot capacity type that’s using a custom launch template, use the API to pass multiple instance types. Don’t pass a single instance type through the launch template. For more information about deploying a node group using a launch template, see [Customize managed nodes with launch templates](launch-templates.md).
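
The multiple-node-group recommendation above can be sketched as an `eksctl` config file that defines two diversified Spot managed node groups, each with instance types of similar vCPU and memory. The cluster name, Region, node group names, and sizes are placeholder values.

```
# Two Spot managed node groups, each diversified across similarly sized
# instance types. Cluster name, Region, and sizes are placeholder values.
cat >spot-nodegroups.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: region-code
managedNodeGroups:
- name: spot-c-family
  spot: true
  instanceTypes: ["c3.xlarge", "c4.xlarge", "c5.xlarge", "c5d.xlarge", "c5a.xlarge", "c5n.xlarge"]
  minSize: 1
  maxSize: 10
  desiredCapacity: 2
- name: spot-m-family
  spot: true
  instanceTypes: ["m4.xlarge", "m5.xlarge", "m5d.xlarge", "m5a.xlarge", "m5n.xlarge"]
  minSize: 1
  maxSize: 10
  desiredCapacity: 2
EOF
```

With an existing cluster, you would create both node groups with `eksctl create nodegroup --config-file spot-nodegroups.yaml`.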

# Create a managed node group for your cluster
<a name="create-managed-node-group"></a>

This topic describes how to launch an Amazon EKS managed node group of nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them.

If this is your first time launching an Amazon EKS managed node group, we recommend that you instead follow one of our guides in [Get started with Amazon EKS](getting-started.md). These guides provide walkthroughs for creating an Amazon EKS cluster with nodes.

**Important**  
Amazon EKS nodes are standard Amazon EC2 instances. You’re billed based on the normal Amazon EC2 prices. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/).
You can’t create managed nodes in an AWS Region where you have AWS Outposts or AWS Wavelength enabled. You can create self-managed nodes instead. For more information, see [Create self-managed Amazon Linux nodes](launch-workers.md), [Create self-managed Microsoft Windows nodes](launch-windows-workers.md), and [Create self-managed Bottlerocket nodes](launch-node-bottlerocket.md). You can also create a self-managed Amazon Linux node group on an Outpost. For more information, see [Create Amazon Linux nodes on AWS Outposts](eks-outposts-self-managed-nodes.md).
If you don’t [specify an AMI ID](launch-templates.md#launch-template-custom-ami) for the `bootstrap.sh` file included with Amazon EKS optimized Linux or Bottlerocket, managed node groups enforce a maximum value for `maxPods`. For instances with fewer than 30 vCPUs, the maximum is `110`. For instances with more than 30 vCPUs, the maximum is `250`. This enforcement overrides other `maxPods` configurations, including `maxPodsExpression`. For more information about how `maxPods` is determined and how to customize it, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).

**Prerequisites**
+ An existing Amazon EKS cluster. To deploy one, see [Create an Amazon EKS cluster](create-cluster.md).
+ An existing IAM role for the nodes to use. To create one, see [Amazon EKS node IAM role](create-node-role.md). If this role doesn’t have either of the IAM policies required by the Amazon VPC CNI plugin, the separate IAM role described in the next item is required for the VPC CNI Pods.
+ (Optional, but recommended) The Amazon VPC CNI plugin for Kubernetes add-on configured with its own IAM role that has the necessary IAM policy attached to it. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).
+ Familiarity with the considerations listed in [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md). Depending on the instance type you choose, there may be additional prerequisites for your cluster and VPC.
+ To add a Windows managed node group, you must first enable Windows support for your cluster. For more information, see [Deploy Windows nodes on EKS clusters](windows-support.md).

You can create a managed node group with either of the following:
+  [`eksctl`](#eksctl_create_managed_nodegroup) 
+  [AWS Management Console](#console_create_managed_nodegroup) 

## `eksctl`
<a name="eksctl_create_managed_nodegroup"></a>

 **Create a managed node group with eksctl** 

This procedure requires `eksctl` version `0.215.0` or later. You can check your version with the following command:

```
eksctl version
```

For instructions on how to install or upgrade `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

1. (Optional) If the **AmazonEKS_CNI_Policy** managed IAM policy is attached to your [Amazon EKS node IAM role](create-node-role.md), we recommend assigning it to an IAM role that you associate to the Kubernetes `aws-node` service account instead. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

1. Create a managed node group with or without using a custom launch template. Manually specifying a launch template allows for greater customization of a node group. For example, it can allow deploying a custom AMI or providing arguments to the `bootstrap.sh` script in an Amazon EKS optimized AMI. For a complete list of every available option and default, enter the following command.

   ```
   eksctl create nodegroup --help
   ```

   In the following command, replace *my-cluster* with the name of your cluster and replace *my-mng* with the name of your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
**Important**  
If you don’t use a custom launch template when first creating a managed node group, don’t use one at a later time for the node group. If you didn’t specify a custom launch template, Amazon EKS auto-generates a launch template that we don’t recommend you modify manually. Manually modifying this auto-generated launch template might cause errors.

 **Without a launch template** 

 `eksctl` creates a default Amazon EC2 launch template in your account, based on the options that you specify, and deploys the node group using that launch template. Before specifying a value for `--node-type`, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md).

Replace *ami-family* with an allowed keyword. For more information, see [Setting the node AMI Family](https://eksctl.io/usage/custom-ami-support/#setting-the-node-ami-family) in the `eksctl` documentation. Replace *my-key* with the name of your Amazon EC2 key pair or public key. This key is used to SSH into your nodes after they launch.

**Note**  
For Windows, this command doesn’t enable SSH. Instead, it associates your Amazon EC2 key pair with the instance and allows you to RDP into the instance.

If you don’t already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For Linux information, see [Amazon EC2 key pairs and Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*. For Windows information, see [Amazon EC2 key pairs and Windows instances](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.

We recommend blocking Pod access to IMDS if the following conditions are true:
+ You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
+ No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

If you want to block Pod access to IMDS, then add the `--disable-pod-imds` option to the following command.

```
eksctl create nodegroup \
  --cluster my-cluster \
  --region region-code \
  --name my-mng \
  --node-ami-family ami-family \
  --node-type m5.large \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 4 \
  --ssh-access \
  --ssh-public-key my-key
```

Your instances can optionally assign a significantly higher number of IP addresses to Pods, assign IP addresses to Pods from a different CIDR block than the instance’s, and be deployed to a cluster without internet access. For more information, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md), [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md), and [Deploy private clusters with limited internet access](private-clusters.md) for additional options to add to the previous command.

Managed node groups calculate and apply a single value for the maximum number of Pods that can run on each node of your node group, based on instance type. If you create a node group with different instance types, the smallest value calculated across all instance types is applied as the maximum number of Pods that can run on every instance type in the node group. For more information about how this value is calculated, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).
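
To preview the value for a given instance type, you can run the `max-pods-calculator.sh` helper script that AWS publishes in the `amazon-eks-ami` repository. This is a sketch: it assumes a configured AWS CLI with credentials (the script calls the EC2 API), and the CNI version shown is only an example value.

```
# Download the max Pods calculator script from the amazon-eks-ami repository.
curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/files/max-pods-calculator.sh
chmod +x max-pods-calculator.sh

# Print the recommended maximum Pods for one instance type and CNI version.
# Requires configured AWS credentials (the script calls the EC2 API).
./max-pods-calculator.sh --instance-type m5.large --cni-version 1.9.0-eksbuild.1
```

Running the script for each instance type in your node group shows which type produces the smallest value that managed node groups then apply.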

 **With a launch template** 

The launch template must already exist and must meet the requirements specified in [Launch template configuration basics](launch-templates.md#launch-template-basics). We recommend blocking Pod access to IMDS if the following conditions are true:
+ You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
+ No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

If you want to block Pod access to IMDS, then specify the necessary settings in the launch template.

1. Copy the following contents to your device. Replace the example values and then run the modified command to create the `eks-nodegroup.yaml` file. Several settings that you specify when deploying without a launch template are moved into the launch template. If you don’t specify a `version`, the template’s default version is used.

   ```
   cat >eks-nodegroup.yaml <<EOF
   apiVersion: eksctl.io/v1alpha5
   kind: ClusterConfig
   metadata:
     name: my-cluster
     region: region-code
   managedNodeGroups:
   - name: my-mng
     launchTemplate:
       id: lt-id
       version: "1"
   EOF
   ```

   For a complete list of `eksctl` config file settings, see [Config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation. Your instances can optionally assign a significantly higher number of IP addresses to Pods, assign IP addresses to Pods from a different CIDR block than the instance’s, and be deployed to a cluster without outbound internet access. For more information, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md), [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md), and [Deploy private clusters with limited internet access](private-clusters.md) for additional options to add to the config file.

   If you didn’t specify an AMI ID in your launch template, managed node groups calculate and apply a single value for the maximum number of Pods that can run on each node of your node group, based on instance type. If you create a node group with different instance types, the smallest value calculated across all instance types is applied as the maximum number of Pods that can run on every instance type in the node group. For more information about how this value is calculated, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).

   If you specified an AMI ID in your launch template, specify the maximum number of Pods that can run on each node of your node group if you’re using [custom networking](cni-custom-network.md) or want to [increase the number of IP addresses assigned to your instance](cni-increase-ip-addresses.md). For more information, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).

1. Deploy the node group with the following command.

   ```
   eksctl create nodegroup --config-file eks-nodegroup.yaml
   ```
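
   After the command returns, you can confirm that the node group is healthy. These commands assume the placeholder names *my-cluster* and *my-mng* and a configured AWS CLI and `kubectl`:

   ```
   # Wait until the node group reaches ACTIVE, then show its status.
   aws eks wait nodegroup-active --cluster-name my-cluster --nodegroup-name my-mng
   aws eks describe-nodegroup \
     --cluster-name my-cluster \
     --nodegroup-name my-mng \
     --query nodegroup.status \
     --output text

   # List the nodes that the node group registered with the cluster.
   kubectl get nodes -l eks.amazonaws.com/nodegroup=my-mng
   ```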

## AWS Management Console
<a name="console_create_managed_nodegroup"></a>

 **Create a managed node group using the AWS Management Console**

1. Wait for your cluster status to show as `ACTIVE`. You can’t create a managed node group for a cluster that isn’t already `ACTIVE`.

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to create a managed node group in.

1. Select the **Compute** tab.

1. Choose **Add node group**.

1. On the **Configure node group** page, fill out the parameters accordingly, and then choose **Next**.
   +  **Name** – Enter a unique name for your managed node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
   +  **Node IAM role** – Choose the node instance role to use with your node group. For more information, see [Amazon EKS node IAM role](create-node-role.md).
**Important**  
You can’t use the same role that was used to create any clusters.
We recommend using a role that’s not currently in use by any self-managed node group, and that you don’t plan to use with a new self-managed node group. For more information, see [Delete a managed node group from your cluster](delete-managed-node-group.md).
   +  **Use launch template** – (Optional) Choose if you want to use an existing launch template. Select a **Launch Template Name**. Then, select a **Launch template version**. If you don’t select a version, then Amazon EKS uses the template’s default version. Launch templates allow for more customization of your node group, such as deploying a custom AMI, assigning a significantly higher number of IP addresses to Pods, assigning IP addresses to Pods from a different CIDR block than the instance’s, and deploying nodes to a cluster without outbound internet access. For more information, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md), [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md), and [Deploy private clusters with limited internet access](private-clusters.md).

     The launch template must meet the requirements in [Customize managed nodes with launch templates](launch-templates.md). If you don’t use your own launch template, the Amazon EKS API creates a default Amazon EC2 launch template in your account and deploys the node group using the default launch template.

     If you implement [IAM roles for service accounts](iam-roles-for-service-accounts.md), assign necessary permissions directly to every Pod that requires access to AWS services, and no Pods in your cluster require access to IMDS for other reasons, such as retrieving the current AWS Region, then you can also disable access to IMDS for Pods that don’t use host networking in a launch template. For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
   +  **Kubernetes labels** – (Optional) You can choose to apply Kubernetes labels to the nodes in your managed node group.
   +  **Kubernetes taints** – (Optional) You can choose to apply Kubernetes taints to the nodes in your managed node group. The available options in the **Effect** menu are `NoSchedule`, `NoExecute`, and `PreferNoSchedule`. For more information, see [Recipe: Prevent pods from being scheduled on specific nodes](node-taints-managed-node-groups.md).
   +  **Tags** – (Optional) You can choose to tag your Amazon EKS managed node group. These tags don’t propagate to other resources in the node group, such as Auto Scaling groups or instances. For more information, see [Organize Amazon EKS resources with tags](eks-using-tags.md).

1. On the **Set compute and scaling configuration** page, fill out the parameters accordingly, and then choose **Next**.
   +  **AMI type** – Select an AMI type. If you are deploying Arm instances, be sure to review the considerations in [Amazon EKS optimized Arm Amazon Linux AMIs](eks-optimized-ami.md#arm-ami) before deploying.

     If you specified a launch template on the previous page, and specified an AMI in the launch template, then you can’t select a value. The value from the template is displayed. The AMI specified in the template must meet the requirements in [Specifying an AMI](launch-templates.md#launch-template-custom-ami).
   +  **Capacity type** – Select a capacity type. For more information about choosing a capacity type, see [Managed node group capacity types](managed-node-groups.md#managed-node-group-capacity-types). You can’t mix different capacity types within the same node group. If you want to use both capacity types, create separate node groups, each with their own capacity and instance types. See [Reserve GPUs for managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/capacity-blocks-mng.html) for information on provisioning and scaling GPU-accelerated worker nodes.
   +  **Instance types** – By default, one or more instance types are specified. To remove a default instance type, select the `X` on the right side of the instance type. Choose the instance types to use in your managed node group. For more information, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md).

     The console displays a set of commonly used instance types. If you need to create a managed node group with an instance type that’s not displayed, then use `eksctl`, the AWS CLI, AWS CloudFormation, or an SDK to create the node group. If you specified a launch template on the previous page, then you can’t select a value because the instance type must be specified in the launch template. The value from the launch template is displayed. If you selected **Spot** for **Capacity type**, then we recommend specifying multiple instance types to enhance availability.
   +  **Disk size** – Enter the disk size (in GiB) to use for your node’s root volume.

     If you specified a launch template on the previous page, then you can’t select a value because it must be specified in the launch template.
   +  **Desired size** – Specify the current number of nodes that the managed node group should maintain at launch.
**Note**  
Amazon EKS doesn’t automatically scale your node group in or out. However, you can configure the Kubernetes Cluster Autoscaler to do this for you. For more information, see [Cluster Autoscaler on AWS](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md).
   +  **Minimum size** – Specify the minimum number of nodes that the managed node group can scale in to.
   +  **Maximum size** – Specify the maximum number of nodes that the managed node group can scale out to.
   +  **Node group update configuration** – (Optional) You can select the number or percentage of nodes to be updated in parallel. These nodes will be unavailable during the update. For **Maximum unavailable**, select one of the following options and specify a **Value**:
     +  **Number** – Select and specify the number of nodes in your node group that can be updated in parallel.
     +  **Percentage** – Select and specify the percentage of nodes in your node group that can be updated in parallel. This is useful if you have a large number of nodes in your node group.
   +  **Node auto repair configuration** – (Optional) If you activate the **Enable node auto repair** checkbox, Amazon EKS automatically replaces nodes when it detects health issues. For more information, see [Detect node health issues and enable automatic node repair](node-health.md).
   +  **Warm pool configuration** – (Optional) If you activate the **Enable warm pool configuration** checkbox, Amazon EKS creates a warm pool on the node group’s Auto Scaling group. For more information, see [Decrease latency for applications with long boot times using warm pools with managed node groups](warm-pools-managed-node-groups.md).

1. On the **Specify networking** page, fill out the parameters accordingly, and then choose **Next**.
   +  **Subnets** – Choose the subnets to launch your managed nodes into.
**Important**  
If you are running a stateful application across multiple Availability Zones that is backed by Amazon EBS volumes and using the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md), you should configure multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the `--balance-similar-node-groups` feature.
**Important**  
If you choose a public subnet, and your cluster has only the public API server endpoint enabled, then the subnet must have `MapPublicIPOnLaunch` set to `true` for the instances to successfully join a cluster. If the subnet was created using `eksctl` or the [Amazon EKS vended AWS CloudFormation templates](creating-a-vpc.md) on or after March 26, 2020, then this setting is already set to `true`. If the subnets were created with `eksctl` or the AWS CloudFormation templates before March 26, 2020, then you need to change the setting manually. For more information, see [Modifying the public IPv4 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-public-ip).
If you use a launch template and specify multiple network interfaces, Amazon EC2 won’t auto-assign a public `IPv4` address, even if `MapPublicIpOnLaunch` is set to `true`. For nodes to join the cluster in this scenario, you must either enable the cluster’s private API server endpoint, or launch nodes in a private subnet with outbound internet access provided through an alternative method, such as a NAT Gateway. For more information, see [Amazon EC2 instance IP addressing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html) in the *Amazon EC2 User Guide*.
   +  **Configure SSH access to nodes** (Optional). Enabling SSH allows you to connect to your instances and gather diagnostic information if there are issues. We highly recommend enabling remote access when you create a node group. You can’t enable remote access after the node group is created.

     If you chose to use a launch template, then this option isn’t shown. To enable remote access to your nodes, specify a key pair in the launch template and ensure that the proper port is open to the nodes in the security groups that you specify in the launch template. For more information, see [Using custom security groups](launch-templates.md#launch-template-security-groups).
**Note**  
For Windows, this option doesn’t enable SSH. Instead, it associates your Amazon EC2 key pair with the instance and allows you to RDP into the instance.
   + For **SSH key pair** (Optional), choose an Amazon EC2 SSH key to use. For Linux information, see [Amazon EC2 key pairs and Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*. For Windows information, see [Amazon EC2 key pairs and Windows instances](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*. If you chose to use a launch template, then you can’t select one. When an Amazon EC2 SSH key is provided for node groups using Bottlerocket AMIs, the administrative container is also enabled. For more information, see [Admin container](https://github.com/bottlerocket-os/bottlerocket#admin-container) on GitHub.
   + For **Allow SSH remote access from**, if you want to limit access to specific instances, then select the security groups that are associated with those instances. If you don’t select specific security groups, then SSH access is allowed from anywhere on the internet (`0.0.0.0/0`).

1. On the **Review and create** page, review your managed node group configuration and choose **Create**.

   If nodes fail to join the cluster, then see [Nodes fail to join cluster](troubleshooting.md#worker-node-fail) in the Troubleshooting chapter.

1. Watch the status of your nodes and wait for them to reach the `Ready` status.

   ```
   kubectl get nodes --watch
   ```

1. (GPU nodes only) If you chose a GPU instance type and an Amazon EKS optimized accelerated AMI, then you must apply the [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin) as a DaemonSet on your cluster. Replace *vX.X.X* with your desired [NVIDIA/k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin/releases) version before running the following command.

   ```
   kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/vX.X.X/deployments/static/nvidia-device-plugin.yml
   ```
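The naming, scaling, and taint constraints collected in the preceding steps can be sanity-checked before you submit the node group. The following sketch is illustrative only; the helper is hypothetical and isn’t part of the AWS CLI or any AWS SDK — only the constraints themselves come from this page.

```python
# Illustrative pre-flight checks for managed node group settings.
# This helper is hypothetical; it is not an AWS API.
VALID_TAINT_EFFECTS = {"NoSchedule", "NoExecute", "PreferNoSchedule"}

def validate_node_group(name, min_size, desired_size, max_size, taints=()):
    """Raise ValueError if a documented constraint is violated."""
    if not (1 <= len(name) <= 63):
        raise ValueError("name must be 1-63 characters")
    if not name[0].isalnum():
        raise ValueError("name must start with a letter or digit")
    if not all(c.isalnum() or c in "-_" for c in name):
        raise ValueError("name may contain only letters, digits, hyphens, and underscores")
    if not (0 <= min_size <= desired_size <= max_size):
        raise ValueError("require minimum <= desired <= maximum size")
    for taint in taints:
        if taint.get("effect") not in VALID_TAINT_EFFECTS:
            raise ValueError(f"invalid taint effect: {taint.get('effect')}")
    return True

validate_node_group("my-nodegroup", min_size=2, desired_size=3, max_size=10,
                    taints=[{"key": "dedicated", "value": "gpu", "effect": "NoSchedule"}])
```

A check like this catches, for example, a desired size below the minimum before the create call is ever made.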

## Install Kubernetes add-ons
<a name="_install_kubernetes_add_ons"></a>

Now that you have a working Amazon EKS cluster with nodes, you’re ready to start installing Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help you to extend the functionality of your cluster.
+ The [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that created the cluster is the only principal that can make calls to the Kubernetes API server with `kubectl` or the AWS Management Console. If you want other IAM principals to have access to your cluster, then you need to add them. For more information, see [Grant IAM users and roles access to Kubernetes APIs](grant-k8s-access.md) and [Required permissions](view-kubernetes-resources.md#view-kubernetes-resources-permissions).
+ We recommend blocking Pod access to IMDS if the following conditions are true:
  + You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
  + No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

  For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
+ Configure the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) to automatically adjust the number of nodes in your node groups.
+ Deploy a [sample application](sample-deployment.md) to your cluster.
+  [Organize and monitor cluster resources](eks-managing.md) with important tools for managing your cluster.

# Decrease latency for applications with long boot times using warm pools with managed node groups
<a name="warm-pools-managed-node-groups"></a>

When your applications have long initialization or boot times, scale-out events can cause delays—new nodes must fully boot and join the cluster before Pods can be scheduled on them. This latency can impact application availability during traffic spikes or rapid scaling events. Warm pools solve this problem by maintaining a pool of pre-initialized EC2 instances that have already completed the bootup process. During a scale-out event, instances move from the warm pool directly to your cluster, bypassing the time-consuming initialization steps and significantly reducing the time it takes for new capacity to become available. For more information, see [Decrease latency for applications that have long boot times using warm pools](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html) in the *Amazon EC2 Auto Scaling User Guide*.

Amazon EKS managed node groups support Amazon EC2 Auto Scaling warm pools. A warm pool maintains pre-initialized EC2 instances alongside your Auto Scaling group that can quickly join your cluster during scale-out events. Instances in the warm pool have already completed the bootup initialization process and can be kept in a `Stopped`, `Running`, or `Hibernated` state.

Amazon EKS manages warm pools throughout the node group lifecycle using the `AWSServiceRoleForAmazonEKSNodegroup` service-linked role to create, update, and delete warm pool resources.

## How it works
<a name="warm-pools-how-it-works"></a>

When you configure a warm pool, Amazon EKS creates an EC2 Auto Scaling warm pool attached to your node group’s Auto Scaling group. Instances launch into the warm pool, complete the bootup initialization process, and remain in the configured state (`Running`, `Stopped`, or `Hibernated`) until needed. During scale-out events, instances move from the warm pool to the Auto Scaling group, complete the Amazon EKS initialization process to join the cluster, and become available for pod scheduling. With instance reuse enabled, instances can return to the warm pool during scale-in events.
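As a mental model, the instance movement described above can be sketched as a toy simulation. This is illustrative only; the real lifecycle is managed by Amazon EC2 Auto Scaling and Amazon EKS, and the class below is not an AWS API.

```python
# Toy model of the warm pool flow: scale-out pulls pre-initialized
# instances from the warm pool; with reuse enabled, scale-in returns
# instances to the pool instead of terminating them.
class WarmPoolNodeGroup:
    def __init__(self, warm_instances, reuse_on_scale_in=False):
        self.warm = list(warm_instances)   # pre-initialized (e.g. in the Stopped state)
        self.active = []                   # instances that joined the cluster
        self.reuse_on_scale_in = reuse_on_scale_in

    def scale_out(self, count):
        # Warm instances skip the bootup phase; they only complete the
        # Amazon EKS join steps before becoming schedulable.
        for _ in range(min(count, len(self.warm))):
            self.active.append(self.warm.pop())

    def scale_in(self, count):
        for _ in range(min(count, len(self.active))):
            instance = self.active.pop()
            if self.reuse_on_scale_in:
                self.warm.append(instance)  # returns to the warm pool

ng = WarmPoolNodeGroup(["i-0aaa", "i-0bbb"], reuse_on_scale_in=True)
ng.scale_out(1)   # one warm instance becomes active
ng.scale_in(1)    # it returns to the warm pool instead of terminating
```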

**Important**  
Always configure warm pools through the Amazon EKS API using `create-nodegroup` or `update-nodegroup-config`. Don’t manually modify warm pool settings using the EC2 Auto Scaling API, as this can cause conflicts with Amazon EKS management of the resources.

## Considerations
<a name="warm-pools-considerations"></a>

**Important**  
Before configuring warm pools, review the prerequisites and limitations in [Warm pools for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html) in the *Amazon EC2 Auto Scaling User Guide*. Not all instance types, AMIs, or configurations are supported.
+  **IAM permissions** – The `AWSServiceRoleForAmazonEKSNodegroup` service-linked role (created automatically with your first managed node group) includes the necessary warm pool management permissions.
+  **AMI limitations** – Warm pools don’t support custom AMIs. You must use Amazon EKS optimized AMIs.
+  **Bottlerocket limitations** – If using Bottlerocket AMIs, the `Hibernated` pool state isn’t supported. Use `Stopped` or `Running` pool states only. Additionally, the `reuseOnScaleIn` feature isn’t supported with Bottlerocket AMIs.
+  **Hibernation support** – The `Hibernated` pool state is only supported on specific instance types. See [Hibernation prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hibernating-prerequisites.html) in the *Amazon EC2 User Guide* for supported instance types.
+  **Cost impact** – Creating a warm pool when it’s not required can lead to unnecessary costs.
+  **Capacity planning** – Size your warm pool based on scaling patterns to balance cost and availability. Start with 10-20% of expected peak capacity.
+  **VPC networking** – Ensure sufficient IP addresses for both Auto Scaling group and warm pool instances.
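For the capacity-planning guideline above, a back-of-the-envelope sizing helper might look like the following. The 15% default is just an illustrative midpoint of the 10-20% starting range, and the function is ours, not AWS’s.

```python
def initial_warm_pool_size(expected_peak_nodes, percent=15):
    """Starting-point warm pool size: 10-20% of expected peak capacity
    (15% here as an illustrative midpoint), rounded up, minimum one."""
    if not 0 < percent <= 100:
        raise ValueError("percent must be in (0, 100]")
    # Integer ceiling division avoids floating-point rounding surprises.
    return max(1, -(-expected_peak_nodes * percent // 100))

print(initial_warm_pool_size(40))        # 6 warm instances for a 40-node peak
print(initial_warm_pool_size(100, 10))   # 10 at the low end of the range
```

From there, adjust the size up or down based on observed scale-out frequency and how much idle warm capacity you’re willing to pay for.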

## Configure warm pools
<a name="warm-pools-configuration"></a>

You can configure warm pools when creating a new managed node group or update an existing managed node group to add warm pool support.

### Configuration parameters
<a name="warm-pools-parameters"></a>
+  **enabled** – (boolean) Indicates your intent to attach a warm pool to the managed node group. Required to enable warm pool support.
+  **maxGroupPreparedCapacity** – (integer) Maximum total instances across warm pool and Auto Scaling group combined.
+  **minSize** – (integer) Minimum number of instances to maintain in the warm pool. Default: `0`.
+  **poolState** – (string) State for warm pool instances. Default: `Stopped`.
+  **reuseOnScaleIn** – (boolean) Whether instances return to the warm pool during scale-in events instead of terminating them. Default: `false`. Not supported with Bottlerocket AMIs.
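The parameters above map directly onto the shorthand syntax that `--warm-pool-config` accepts in the AWS CLI commands shown in the next section. A small formatter makes the mapping explicit; the helper function is ours, and only the parameter names come from the API.

```python
def warm_pool_shorthand(enabled, max_group_prepared_capacity=None,
                        min_size=0, pool_state="Stopped",
                        reuse_on_scale_in=False):
    """Render the warm pool parameters as AWS CLI shorthand syntax."""
    if pool_state not in {"Stopped", "Running", "Hibernated"}:
        raise ValueError("pool_state must be Stopped, Running, or Hibernated")
    parts = [f"enabled={str(enabled).lower()}"]
    if max_group_prepared_capacity is not None:
        parts.append(f"maxGroupPreparedCapacity={max_group_prepared_capacity}")
    parts.append(f"minSize={min_size}")
    parts.append(f"poolState={pool_state}")
    parts.append(f"reuseOnScaleIn={str(reuse_on_scale_in).lower()}")
    return ",".join(parts)

# Matches the value used in the create-nodegroup example below.
print(warm_pool_shorthand(True, 8, 2, "Stopped", True))
```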

### Using the AWS CLI
<a name="warm-pools-create-cli"></a>

You can configure a warm pool when creating a managed node group or add one to an existing node group.

 **Create a node group with a warm pool** 

```
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --node-role arn:aws:iam::111122223333:role/AmazonEKSNodeRole \
  --subnets subnet-12345678 subnet-87654321 \
  --region us-east-1 \
  --scaling-config minSize=2,maxSize=10,desiredSize=3 \
  --warm-pool-config enabled=true,maxGroupPreparedCapacity=8,minSize=2,poolState=Stopped,reuseOnScaleIn=true
```

 **Add a warm pool to an existing node group** 

```
aws eks update-nodegroup-config \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --region us-east-1 \
  --warm-pool-config enabled=true,maxGroupPreparedCapacity=8,minSize=2,poolState=Stopped,reuseOnScaleIn=true
```

## Update configuration
<a name="warm-pools-update"></a>

Update warm pool settings at any time using `update-nodegroup-config`. Existing warm pool instances aren’t immediately affected; new settings apply to instances entering the warm pool after the update.

```
aws eks update-nodegroup-config \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --region us-east-1 \
  --warm-pool-config enabled=true,maxGroupPreparedCapacity=10,minSize=3,poolState=Running,reuseOnScaleIn=true
```

To disable the warm pool attached to your node group, set `enabled=false`:

```
aws eks update-nodegroup-config \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --region us-east-1 \
  --warm-pool-config enabled=false
```

## Additional resources
<a name="warm-pools-additional-resources"></a>
+  [Warm pools for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html) in the *Amazon EC2 Auto Scaling User Guide* 
+  [Simplify node lifecycle with managed node groups](managed-node-groups.md) 

# Update a managed node group for your cluster
<a name="update-managed-node-group"></a>

When you initiate a managed node group update, Amazon EKS automatically updates your nodes for you, completing the steps listed in [Understand each phase of node updates](managed-node-update-behavior.md). If you’re using an Amazon EKS optimized AMI, Amazon EKS automatically applies the latest security patches and operating system updates to your nodes as part of the latest AMI release version.

There are several scenarios where it’s useful to update your Amazon EKS managed node group’s version or configuration:
+ You have updated the Kubernetes version for your Amazon EKS cluster and want to update your nodes to use the same Kubernetes version.
+ A new AMI release version is available for your managed node group. For more information about AMI versions, see these sections:
  +  [Retrieve Amazon Linux AMI version information](eks-linux-ami-versions.md) 
  +  [Create nodes with optimized Bottlerocket AMIs](eks-optimized-ami-bottlerocket.md) 
  +  [Retrieve Windows AMI version information](eks-ami-versions-windows.md) 
+ You want to adjust the minimum, maximum, or desired count of the instances in your managed node group.
+ You want to add or remove Kubernetes labels from the instances in your managed node group.
+ You want to add or remove AWS tags from your managed node group.
+ You need to deploy a new version of a launch template with configuration changes, such as an updated custom AMI.
+ You have deployed version `1.9.0` or later of the Amazon VPC CNI add-on, enabled the add-on for prefix delegation, and want new AWS Nitro System instances in a node group to support a significantly increased number of Pods. For more information, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md).
+ You have enabled IP prefix delegation for Windows nodes and want new AWS Nitro System instances in a node group to support a significantly increased number of Pods. For more information, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md).

If there’s a newer AMI release version for your managed node group’s Kubernetes version, you can update your node group’s version to use the newer AMI version. Similarly, if your cluster is running a Kubernetes version that’s newer than your node group, you can update the node group to use the latest AMI release version to match your cluster’s Kubernetes version.

When a node in a managed node group is terminated due to a scaling operation or update, the Pods in that node are drained first. For more information, see [Understand each phase of node updates](managed-node-update-behavior.md).

## Update a node group version
<a name="mng-update"></a>

You can update a node group version with either of the following:
+  [`eksctl`](#eksctl_update_managed_nodegroup) 
+  [AWS Management Console](#console_update_managed_nodegroup) 

The version that you update to can’t be greater than the control plane’s version.

## `eksctl`
<a name="eksctl_update_managed_nodegroup"></a>

 **Update a managed node group using `eksctl`**

Update a managed node group to the latest AMI release of the same Kubernetes version that’s currently deployed on the nodes with the following command. Replace every *example value* with your own values.

```
eksctl upgrade nodegroup \
  --name=node-group-name \
  --cluster=my-cluster \
  --region=region-code
```

**Note**  
If you’re upgrading a node group that’s deployed with a launch template to a new launch template version, add `--launch-template-version version-number` to the preceding command. The launch template must meet the requirements described in [Customize managed nodes with launch templates](launch-templates.md). If the launch template includes a custom AMI, the AMI must meet the requirements in [Specifying an AMI](launch-templates.md#launch-template-custom-ami). When you upgrade your node group to a newer version of your launch template, every node is recycled to match the new configuration of the launch template version that’s specified.

You can’t directly upgrade a node group that’s deployed without a launch template to a new launch template version. Instead, you must deploy a new node group using the launch template to update the node group to a new launch template version.

You can upgrade a node group to the same version as the control plane’s Kubernetes version. For example, if you have a cluster running Kubernetes `1.35`, you can upgrade nodes currently running Kubernetes `1.34` to version `1.35` with the following command.

```
eksctl upgrade nodegroup \
  --name=node-group-name \
  --cluster=my-cluster \
  --region=region-code \
  --kubernetes-version=1.35
```

## AWS Management Console
<a name="console_update_managed_nodegroup"></a>

 **Update a managed node group using the AWS Management Console**

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the cluster that contains the node group to update.

1. If at least one node group has an available update, a box appears at the top of the page notifying you of the available update. If you select the **Compute** tab, you’ll see **Update now** in the **AMI release version** column in the **Node groups** table for the node group that has an available update. To update the node group, choose **Update now**.

   You won’t see a notification for node groups that were deployed with a custom AMI. If your nodes are deployed with a custom AMI, complete the following steps to deploy a new updated custom AMI.

   1. Create a new version of your AMI.

   1. Create a new launch template version with the new AMI ID.

   1. Upgrade the nodes to the new version of the launch template.

1. On the **Update node group version** dialog box, activate or deactivate the following options:
   +  **Update node group version** – This option is unavailable if you deployed a custom AMI or your Amazon EKS optimized AMI is currently on the latest version for your cluster.
   +  **Change launch template version** – This option is unavailable if the node group is deployed without a custom launch template. You can only update the launch template version for a node group that has been deployed with a custom launch template. Select the **Launch template version** that you want to update the node group to. If your node group is configured with a custom AMI, then the version that you select must also specify an AMI. When you upgrade to a newer version of your launch template, every node is recycled to match the new configuration of the launch template version specified.

1. For **Update strategy**, select one of the following options:
   +  **Rolling update** – This option respects the Pod disruption budgets for your cluster. Updates fail if there’s a Pod disruption budget issue that causes Amazon EKS to be unable to gracefully drain the Pods that are running on this node group.
   +  **Force update** – This option doesn’t respect Pod disruption budgets. Updates occur regardless of Pod disruption budget issues by forcing node restarts to occur.

1. Choose **Update**.

## Edit a node group configuration
<a name="mng-edit"></a>

You can modify some of the configurations of a managed node group.

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the cluster that contains the node group to edit.

1. Select the **Compute** tab.

1. Select the node group to edit, and then choose **Edit**.

1. (Optional) On the **Edit node group** page, do the following:

   1. Edit the **Node group scaling configuration**.
      +  **Desired size** – Specify the current number of nodes that the managed node group should maintain.
      +  **Minimum size** – Specify the minimum number of nodes that the managed node group can scale in to.
      +  **Maximum size** – Specify the maximum number of nodes that the managed node group can scale out to. For the maximum number of nodes supported in a node group, see [View and manage Amazon EKS and Fargate service quotas](service-quotas.md).

   1. (Optional) Add **Kubernetes labels** to, or remove them from, the nodes in your node group. The labels shown here are only the labels that you have applied with Amazon EKS. Other labels may exist on your nodes that aren’t shown here.

   1. (Optional) Add **Kubernetes taints** to, or remove them from, the nodes in your node group. Added taints can have the effect of `NoSchedule`, `NoExecute`, or `PreferNoSchedule`. For more information, see [Recipe: Prevent pods from being scheduled on specific nodes](node-taints-managed-node-groups.md).

   1. (Optional) Add or remove **Tags** from your node group resource. These tags are only applied to the Amazon EKS node group. They don’t propagate to other resources, such as subnets or Amazon EC2 instances in the node group.

   1. (Optional) Edit the **Node Group update configuration**. Select either **Number** or **Percentage**.
      +  **Number** – Select and specify the number of nodes in your node group that can be updated in parallel. These nodes will be unavailable during update.
      +  **Percentage** – Select and specify the percentage of nodes in your node group that can be updated in parallel. These nodes will be unavailable during update. This is useful if you have many nodes in your node group.

   1. When you’re finished editing, choose **Save changes**.

**Important**  
When updating the node group configuration, modifying the [NodegroupScalingConfig](https://docs.aws.amazon.com/eks/latest/APIReference/API_NodegroupScalingConfig.html) does not respect Pod disruption budgets (PDBs). Unlike the [update node group](managed-node-update-behavior.md) process (which drains nodes and respects PDBs during the upgrade phase), updating the scaling configuration causes nodes to be terminated immediately through an Auto Scaling Group (ASG) scale-down call. This happens without considering PDBs, regardless of the target size you’re scaling down to. That means when you reduce the `desiredSize` of an Amazon EKS managed node group, Pods are evicted as soon as the nodes are terminated, without honoring any PDBs.

# Understand each phase of node updates
<a name="managed-node-update-behavior"></a>

The Amazon EKS managed worker node upgrade strategy has four different phases described in the following sections.

## Setup phase
<a name="managed-node-update-set-up"></a>

The setup phase has these steps:

1. It creates a new Amazon EC2 launch template version for the Auto Scaling Group that’s associated with your node group. The new launch template version uses the target AMI or a custom launch template version for the update.

1. It updates the Auto Scaling Group to use the latest launch template version.

1. It determines the maximum quantity of nodes to upgrade in parallel using the `updateConfig` property for the node group. The maximum unavailable has a quota of 100 nodes. The default value is one node. For more information, see the [updateConfig](https://docs.aws.amazon.com/eks/latest/APIReference/API_UpdateNodegroupConfig.html#API_UpdateNodegroupConfig_RequestSyntax) property in the *Amazon EKS API Reference*.
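The `updateConfig` rules in the last step can be expressed as a small validator. The helper below is hypothetical; the field names, the 100-node quota, and the default of one node come from this page and the API reference.

```python
def resolve_update_config(max_unavailable=None, max_unavailable_percentage=None):
    """Apply the documented updateConfig rules: one of the two fields at a
    time, maxUnavailable capped at the 100-node quota, default of one node."""
    if max_unavailable is not None and max_unavailable_percentage is not None:
        raise ValueError("specify maxUnavailable or maxUnavailablePercentage, not both")
    if max_unavailable is None and max_unavailable_percentage is None:
        return {"maxUnavailable": 1}  # documented default: one node
    if max_unavailable is not None:
        if not 1 <= max_unavailable <= 100:  # quota of 100 nodes
            raise ValueError("maxUnavailable must be between 1 and 100")
        return {"maxUnavailable": max_unavailable}
    if not 1 <= max_unavailable_percentage <= 100:
        raise ValueError("maxUnavailablePercentage must be between 1 and 100")
    return {"maxUnavailablePercentage": max_unavailable_percentage}

print(resolve_update_config())                    # {'maxUnavailable': 1}
print(resolve_update_config(max_unavailable=10))  # {'maxUnavailable': 10}
```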

## Scale up phase
<a name="managed-node-update-scale-up"></a>

When upgrading the nodes in a managed node group, the upgraded nodes are launched in the same Availability Zone as those that are being upgraded. To guarantee this placement, we use Amazon EC2’s Availability Zone Rebalancing. For more information, see [Availability Zone Rebalancing](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html#AutoScalingBehavior.InstanceUsage) in the *Amazon EC2 Auto Scaling User Guide*. To meet this requirement, it’s possible that we’d launch up to two instances per Availability Zone in your managed node group.

The scale up phase has these steps:

1. It increments the Auto Scaling Group’s maximum size and desired size by the larger of either:
   + Up to twice the number of Availability Zones that the Auto Scaling Group is deployed in.
   + The update’s maximum unavailable (`maxUnavailable`) value.

     For example, if your node group has five Availability Zones and a `maxUnavailable` of one, the upgrade process can launch a maximum of 10 nodes. However, when `maxUnavailable` is 20 (or anything higher than 10), the process launches 20 new nodes.

1. After scaling the Auto Scaling Group, it checks if the nodes using the latest configuration are present in the node group. This step succeeds only when it meets these criteria:
   + At least one new node is launched in every Availability Zone where nodes exist.
   + Every new node is in the `Ready` state.
   + Every new node has the Amazon EKS applied labels.

     These are the Amazon EKS applied labels on the worker nodes in a regular node group:
     +  `eks.amazonaws.com/nodegroup-image=$amiName` 
     +  `eks.amazonaws.com/nodegroup=$nodeGroupName` 

     These are the Amazon EKS applied labels on the worker nodes in a custom launch template or AMI node group:
     +  `eks.amazonaws.com/nodegroup-image=$amiName` 
     +  `eks.amazonaws.com/nodegroup=$nodeGroupName` 
     +  `eks.amazonaws.com/sourceLaunchTemplateId=$launchTemplateId` 
     +  `eks.amazonaws.com/sourceLaunchTemplateVersion=$launchTemplateVersion` 
**Note**  
When an update or upgrade is initiated without changes to the scaling configuration, the workflow uses the live Auto Scaling group values as the starting point, not the node group’s stored scaling configuration. For more information, see [Managed node groups concepts](managed-node-groups.md#managed-node-group-concepts).

1. It marks nodes as unschedulable to avoid scheduling new Pods. It also labels nodes with `node.kubernetes.io/exclude-from-external-load-balancers=true` to remove the old nodes from load balancers before terminating the nodes.
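The increment in the first step above can be sketched as a small calculation. This is a minimal sketch with hypothetical values, not part of the managed workflow itself:

```shell
#!/bin/bash
# Sketch of the scale-up increment: the Auto Scaling Group grows by the
# larger of twice the Availability Zone count and maxUnavailable.
# The values below are hypothetical.
num_azs=5
max_unavailable=1
increment=$(( 2 * num_azs ))
if [ "$max_unavailable" -gt "$increment" ]; then
  increment=$max_unavailable
fi
echo "scale-up increment: $increment"   # 10 for these values
```

With `max_unavailable=20`, the same calculation yields 20, matching the five-Availability-Zone example described earlier.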

The following are known reasons that lead to a `NodeCreationFailure` error in this phase:

 **Insufficient capacity in the Availability Zone**   
The Availability Zone might not have capacity for the requested instance types. We recommend configuring multiple instance types when creating a managed node group.

 **EC2 instance limits in your account**   
You may need to increase the number of Amazon EC2 instances your account can run simultaneously using Service Quotas. For more information, see [EC2 Service Quotas](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html) in the *Amazon Elastic Compute Cloud User Guide for Linux Instances*.

 **Custom user data**   
Custom user data can sometimes break the bootstrap process. This scenario can lead to the `kubelet` not starting on the node or nodes not getting expected Amazon EKS labels on them. For more information, see [Specifying an AMI](launch-templates.md#launch-template-custom-ami).

 **Any changes which make a node unhealthy or not ready**   
Node disk pressure, memory pressure, and similar conditions can prevent a node from reaching the `Ready` state.

 **Each node must bootstrap within 15 minutes**   
If any node takes more than 15 minutes to bootstrap and join the cluster, it will cause the upgrade to time out. This is the total runtime for bootstrapping a new node measured from when a new node is required to when it joins the cluster. When upgrading a managed node group, the time counter starts as soon as the Auto Scaling Group size increases.

## Upgrade phase
<a name="managed-node-update-upgrade"></a>

The upgrade phase behaves in two different ways, depending on the *update strategy*. There are two update strategies: **default** and **minimal**.

We recommend the default strategy in most scenarios. It creates new nodes before terminating the old ones, so that available capacity is maintained during the upgrade phase. The minimal strategy is useful in scenarios where you are constrained by resources or costs, for example with hardware accelerators such as GPUs. It terminates the old nodes before creating new ones, so that the total capacity never exceeds your configured quantity.

The *default* update strategy has these steps:

1. It increases the quantity of nodes (desired count) in the Auto Scaling Group, causing the node group to create additional nodes.

1. It randomly selects a node that needs to be upgraded, up to the maximum unavailable configured for the node group.

1. It drains the Pods from the node. If the Pods don’t leave the node within 15 minutes and there’s no force flag, the upgrade phase fails with a `PodEvictionFailure` error. For this scenario, you can apply the force flag with the `update-nodegroup-version` request to delete the Pods.

1. It cordons the node after every Pod is evicted and waits for 60 seconds. This is done so that the service controller doesn’t send any new requests to this node and removes this node from its list of active nodes.

1. It sends a termination request to the Auto Scaling Group for the cordoned node.

1. It repeats the previous upgrade steps until there are no nodes in the node group that are deployed with the earlier version of the launch template.

The *minimal* update strategy has these steps:

1. It cordons all nodes of the node group in the beginning, so that the service controller doesn’t send any new requests to these nodes.

1. It randomly selects a node that needs to be upgraded, up to the maximum unavailable configured for the node group.

1. It drains the Pods from the selected nodes. If the Pods don’t leave the node within 15 minutes and there’s no force flag, the upgrade phase fails with a `PodEvictionFailure` error. For this scenario, you can apply the force flag with the `update-nodegroup-version` request to delete the Pods.

1. After every Pod is evicted, it waits for 60 seconds and then sends a termination request to the Auto Scaling Group for the selected nodes. The Auto Scaling Group creates new nodes (the same number as the selected nodes) to replace the missing capacity.

1. It repeats the previous upgrade steps until there are no nodes in the node group that are deployed with the earlier version of the launch template.

### `PodEvictionFailure` errors during the upgrade phase
<a name="_podevictionfailure_errors_during_the_upgrade_phase"></a>

The following are known reasons that lead to a `PodEvictionFailure` error in this phase:

 **Aggressive PDB**   
An aggressive Pod disruption budget (PDB) is defined for the Pod, or multiple PDBs point to the same Pod.

 **Deployment tolerating all the taints**   
Once every Pod is evicted, it’s expected for the node to be empty because the node is [tainted](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) in the earlier steps. However, if the deployment tolerates every taint, then the node is more likely to be non-empty, leading to Pod eviction failure.
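As an example of the aggressive PDB case, a PDB like the following (names are illustrative) permits no voluntary disruptions at all, so draining a node can never evict the matched Pods and the upgrade fails with `PodEvictionFailure`:

```
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: my-app
```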

## Scale down phase
<a name="managed-node-update-scale-down"></a>

The scale down phase decrements the Auto Scaling group maximum size and desired size by one to return to values before the update started.

If the Upgrade workflow determines that the Cluster Autoscaler is scaling up the node group during the scale down phase of the workflow, it exits immediately without bringing the node group back to its original size.

**Note**  

If your node group has a warm pool enabled, warm pool instances are drained before the scale-up operation begins. This is because warm pool instances haven't been updated to the new launch template configuration. During the scale-up phase, they would be pulled into the Auto Scaling Group instead of launching new instances with the updated configuration, which would break the upgrade process. Draining the warm pool ensures that only new instances with the updated configuration are launched. Once the scale-down operation completes, the warm pool is restored, and the new instances in the warm pool are launched with the updated launch template configuration.
For more information about warm pools, see [Decrease latency for applications with long boot times using warm pools with managed node groups](warm-pools-managed-node-groups.md).

# Customize managed nodes with launch templates
<a name="launch-templates"></a>

For the highest level of customization, you can deploy managed nodes with your own launch template based on the steps on this page. Using a launch template enables capabilities such as providing bootstrap arguments during deployment of a node (for example, extra [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) arguments), assigning IP addresses to Pods from a different CIDR block than the IP address assigned to the node, deploying your own custom AMI to nodes, or deploying your own custom CNI to nodes.

When you provide your own launch template upon first creating a managed node group, you also have greater flexibility later. As long as you deploy a managed node group with your own launch template, you can iteratively update it with a different version of the same launch template. When you update your node group to a different version of your launch template, all nodes in the group are recycled to match the new configuration of the specified launch template version.

Managed node groups are always deployed with a launch template to be used with the Amazon EC2 Auto Scaling group. When you don’t provide a launch template, the Amazon EKS API creates one automatically with default values in your account. However, we don’t recommend that you modify auto-generated launch templates. Furthermore, existing node groups that don’t use a custom launch template can’t be updated directly. Instead, you must create a new node group with a custom launch template to do so.

## Launch template configuration basics
<a name="launch-template-basics"></a>

You can create an Amazon EC2 Auto Scaling launch template with the AWS Management Console, AWS CLI, or an AWS SDK. For more information, see [Creating a Launch Template for an Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html) in the *Amazon EC2 Auto Scaling User Guide*. Some of the settings in a launch template are similar to the settings used for managed node configuration. When deploying or updating a node group with a launch template, some settings must be specified in either the node group configuration or the launch template. Don’t specify a setting in both places. If a setting exists where it shouldn’t, then operations such as creating or updating a node group fail.

The following table lists the settings that are prohibited in a launch template. It also lists similar settings, if any are available, that are required in the managed node group configuration. The listed settings are the settings that appear in the console. They might have similar but different names in the AWS CLI and SDK.


| Launch template – Prohibited | Amazon EKS node group configuration | 
| --- | --- | 
|   **Subnet** under **Network interfaces** (**Add network interface**)  |   **Subnets** under **Node group network configuration** on the **Specify networking** page  | 
|   **IAM instance profile** under **Advanced details**   |   **Node IAM role** under **Node group configuration** on the **Configure Node group** page  | 
|   **Shutdown behavior** and **Stop - Hibernate behavior** under **Advanced details**. Retain default **Don’t include in launch template setting** in launch template for both settings.  |  No equivalent. Amazon EKS must control the instance lifecycle, not the Auto Scaling group.  | 

The following table lists the prohibited settings in a managed node group configuration. It also lists similar settings, if any are available, which are required in a launch template. The listed settings are the settings that appear in the console. They might have similar names in the AWS CLI and SDK.


| Amazon EKS node group configuration – Prohibited | Launch template | 
| --- | --- | 
|  (Only if you specified a custom AMI in a launch template) **AMI type** under **Node group compute configuration** on **Set compute and scaling configuration** page – Console displays **Specified in launch template** and the AMI ID that was specified. If **Application and OS Images (Amazon Machine Image)** wasn’t specified in the launch template, you can select an AMI in the node group configuration.  |   **Application and OS Images (Amazon Machine Image)** under **Launch template contents** – You must specify an ID if you have either of the following requirements: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html)  | 
|   **Disk size** under **Node group compute configuration** on **Set compute and scaling configuration** page – Console displays **Specified in launch template**.  |   **Size** under **Storage (Volumes)** (**Add new volume**). You must specify this in the launch template.  | 
|   **SSH key pair** under **Node group configuration** on the **Specify Networking** page – The console displays the key that was specified in the launch template or displays **Not specified in launch template**.  |   **Key pair name** under **Key pair (login)**.  | 
|  You can’t specify source security groups that are allowed remote access when using a launch template.  |   **Security groups** under **Network settings** for the instance or **Security groups** under **Network interfaces** (**Add network interface**), but not both. For more information, see [Using custom security groups](#launch-template-security-groups).  | 

**Note**  
If you deploy a node group using a launch template, specify zero or one **Instance type** under **Launch template contents** in a launch template. Alternatively, you can specify 0–20 instance types for **Instance types** on the **Set compute and scaling configuration** page in the console. Or, you can do so using other tools that use the Amazon EKS API. If you specify an instance type in a launch template, and use that launch template to deploy your node group, then you can’t specify any instance types in the console or using other tools that use the Amazon EKS API. If you don’t specify an instance type in a launch template, in the console, or using other tools that use the Amazon EKS API, the `t3.medium` instance type is used. If your node group is using the Spot capacity type, then we recommend specifying multiple instance types using the console. For more information, see [Managed node group capacity types](managed-node-groups.md#managed-node-group-capacity-types).
If any containers that you deploy to the node group use the Instance Metadata Service Version 2, make sure to set the **Metadata response hop limit** to `2` in your launch template. For more information, see [Instance metadata and user data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) in the *Amazon EC2 User Guide*.
Launch templates do not support the `InstanceRequirements` feature that allows flexible instance type selection.
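For example, you might set the hop limit when creating a launch template with the AWS CLI. The template name is a placeholder, and `"HttpTokens": "required"` additionally enforces IMDSv2:

```
aws ec2 create-launch-template --launch-template-name my-template \
  --launch-template-data '{"MetadataOptions":{"HttpTokens":"required","HttpPutResponseHopLimit":2}}'
```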

## Tagging Amazon EC2 instances
<a name="launch-template-tagging"></a>

You can use the `TagSpecification` parameter of a launch template to specify which tags to apply to Amazon EC2 instances in your node group. The IAM entity calling the `CreateNodegroup` or `UpdateNodegroupVersion` APIs must have permissions for `ec2:RunInstances` and `ec2:CreateTags`, and the tags must be added to the launch template.
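For example, the launch template data might include a `TagSpecifications` entry like the following. The tag key and value are illustrative:

```
"TagSpecifications": [
  {
    "ResourceType": "instance",
    "Tags": [
      { "Key": "team", "Value": "my-team" }
    ]
  }
]
```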

## Using custom security groups
<a name="launch-template-security-groups"></a>

You can use a launch template to specify custom Amazon EC2 [security groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html) to apply to instances in your node group. This can be either in the instance level security groups parameter or as part of the network interface configuration parameters. However, you can’t create a launch template that specifies both instance level and network interface security groups. Consider the following conditions that apply to using custom security groups with managed node groups:
+ When using the AWS Management Console, Amazon EKS only allows launch templates with a single network interface specification.
+ By default, Amazon EKS applies the [cluster security group](sec-group-reqs.md) to the instances in your node group to facilitate communication between nodes and the control plane. If you specify custom security groups in the launch template using either option mentioned earlier, Amazon EKS doesn’t add the cluster security group. So, you must ensure that the inbound and outbound rules of your security groups enable communication with the endpoint of your cluster. If your security group rules are incorrect, the worker nodes can’t join the cluster. For more information about security group rules, see [View Amazon EKS security group requirements for clusters](sec-group-reqs.md).
+ If you need SSH access to the instances in your node group, include a security group that allows that access.
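For example, a launch template that specifies security groups at the network interface level might include a single network interface entry like the following. The security group ID is a placeholder:

```
"NetworkInterfaces": [
  {
    "DeviceIndex": 0,
    "Groups": [ "sg-1234567890abcdef0" ]
  }
]
```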

## Amazon EC2 user data
<a name="launch-template-user-data"></a>

The launch template includes a section for custom user data. You can specify configuration settings for your node group in this section without manually creating individual custom AMIs. For more information about the settings available for Bottlerocket, see [Using user data](https://github.com/bottlerocket-os/bottlerocket#using-user-data) on GitHub.

You can supply Amazon EC2 user data in your launch template using `cloud-init` when launching your instances. For more information, see the [cloud-init](https://cloudinit.readthedocs.io/en/latest/index.html) documentation. Your user data can be used to perform common configuration operations. This includes the following operations:
+  [Including users or groups](https://cloudinit.readthedocs.io/en/latest/topics/examples.html#including-users-and-groups) 
+  [Installing packages](https://cloudinit.readthedocs.io/en/latest/topics/examples.html#install-arbitrary-packages) 

Amazon EC2 user data in launch templates that are used with managed node groups must be in the [MIME multi-part archive](https://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive) format for Amazon Linux AMIs and TOML format for Bottlerocket AMIs. This is because your user data is merged with Amazon EKS user data required for nodes to join the cluster. Don’t specify any commands in your user data that start or modify `kubelet`. This is performed as part of the user data merged by Amazon EKS. Certain `kubelet` parameters, such as setting labels on nodes, can be configured directly through the managed node groups API.

**Note**  
For more information about advanced `kubelet` customization, including manually starting it or passing in custom configuration parameters, see [Specifying an AMI](#launch-template-custom-ami). If a custom AMI ID is specified in a launch template, Amazon EKS doesn’t merge user data.

The following details provide more information about the user data section.

 **Amazon Linux 2 user data**   
You can combine multiple user data blocks together into a single MIME multi-part file. For example, you can combine a cloud boothook that configures the Docker daemon with a user data shell script that installs a custom package. A MIME multi-part file consists of the following components:  
+ The content type and part boundary declaration – `Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="` 
+ The MIME version declaration – `MIME-Version: 1.0` 
+ One or more user data blocks, which contain the following components:
  + The opening boundary, which signals the beginning of a user data block – `--==MYBOUNDARY==` 
  + The content type declaration for the block: `Content-Type: text/cloud-config; charset="us-ascii"`. For more information about content types, see the [cloud-init](https://cloudinit.readthedocs.io/en/latest/topics/format.html) documentation.
  + The content of the user data (for example, a list of shell commands or `cloud-init` directives).
  + The closing boundary, which signals the end of the MIME multi-part file: `--==MYBOUNDARY==--` 

  The following is an example of a MIME multi-part file that you can use to create your own.

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
echo "Running custom user data script"

--==MYBOUNDARY==--
```
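When you supply user data through the EC2 API, the `UserData` field of the launch template must be base64-encoded. The following sketch assembles the MIME document above and encodes it; the file paths are illustrative, and it assumes a Linux `base64` that supports `-w`:

```shell
#!/bin/bash
set -euo pipefail

# Write the MIME multi-part user data to a file (paths are illustrative).
cat > /tmp/user-data.mime <<'EOF'
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
echo "Running custom user data script"

--==MYBOUNDARY==--
EOF

# Base64-encode it for the UserData field of a launch template.
base64 -w 0 /tmp/user-data.mime > /tmp/user-data.b64

# Round-trip check: decoding must reproduce the original file.
base64 -d /tmp/user-data.b64 | diff - /tmp/user-data.mime && echo "encoding ok"
```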

 **Amazon Linux 2023 user data**   
Amazon Linux 2023 (AL2023) introduces a new node initialization process `nodeadm` that uses a YAML configuration schema. If you’re using self-managed node groups or an AMI with a launch template, you’ll now need to provide additional cluster metadata explicitly when creating a new node group. An [example](https://awslabs.github.io/amazon-eks-ami/nodeadm/) of the minimum required parameters is as follows, where `apiServerEndpoint`, `certificateAuthority`, and service `cidr` are now required:  

```
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-cluster
    apiServerEndpoint: https://example.com
    certificateAuthority: Y2VydGlmaWNhdGVBdXRob3JpdHk=
    cidr: 10.100.0.0/16
```
You’ll typically set this configuration in your user data, either as-is or embedded within a MIME multi-part document:  

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BOUNDARY"

--BOUNDARY
Content-Type: application/node.eks.aws

---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec: [...]

--BOUNDARY--
```
In AL2, the metadata from these parameters was discovered from the Amazon EKS `DescribeCluster` API call. With AL2023, this behavior has changed since the additional API call risks throttling during large node scale ups. This change doesn’t affect you if you’re using managed node groups without a launch template or if you’re using Karpenter. For more information on `certificateAuthority` and service `cidr`, see [https://docs.aws.amazon.com/eks/latest/APIReference/API_DescribeCluster.html](https://docs.aws.amazon.com/eks/latest/APIReference/API_DescribeCluster.html) in the *Amazon EKS API Reference*.  
Here’s a complete example of AL2023 user data that combines a shell script for customizing the node (like installing packages or pre-caching container images) with the required `nodeadm` configuration. This example shows common customizations, including:  
+ Installing additional system packages
+ Pre-caching container images to improve Pod startup time
+ Setting up HTTP proxy configuration
+ Configuring `kubelet` flags for node labeling

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BOUNDARY"

--BOUNDARY
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset

# Install additional packages
yum install -y htop jq iptables-services

# Pre-cache commonly used container images
nohup docker pull public.ecr.aws/eks-distro/kubernetes/pause:3.2 &

# Configure HTTP proxy if needed
cat > /etc/profile.d/http-proxy.sh << 'EOF'
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="http://proxy.example.com:3128"
export NO_PROXY="localhost,127.0.0.1,169.254.169.254,.internal"
EOF

--BOUNDARY
Content-Type: application/node.eks.aws

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-cluster
    apiServerEndpoint: https://example.com
    certificateAuthority: Y2VydGlmaWNhdGVBdXRob3JpdHk=
    cidr: 10.100.0.0/16
  kubelet:
    config:
      clusterDNS:
      - 10.100.0.10
    flags:
    - --node-labels=app=my-app,environment=production

--BOUNDARY--
```

 **Bottlerocket user data**   
Bottlerocket structures user data in the TOML format. You can provide user data to be merged with the user data provided by Amazon EKS. For example, you can provide additional `kubelet` settings.  

```
[settings.kubernetes.system-reserved]
cpu = "10m"
memory = "100Mi"
ephemeral-storage = "1Gi"
```
For more information about the supported settings, see [Bottlerocket documentation](https://github.com/bottlerocket-os/bottlerocket). You can configure node labels and [taints](node-taints-managed-node-groups.md) in your user data. However, we recommend that you configure these within your node group instead. Amazon EKS applies these configurations when you do so.  
When user data is merged, formatting isn’t preserved, but the content remains the same. The configuration that you provide in your user data overrides any settings that are configured by Amazon EKS. So, if you set `settings.kubernetes.max-pods` or `settings.kubernetes.cluster-dns-ip`, these values in your user data are applied to the nodes.  
Amazon EKS doesn’t support all valid TOML. The following is a list of known unsupported formats:  
+ Quotes within quoted keys: `'quoted "value"' = "value"` 
+ Escaped quotes in values: `str = "I’m a string. \"You can quote me\""` 
+ Mixed floats and integers: `numbers = [ 0.1, 0.2, 0.5, 1, 2, 5 ]` 
+ Mixed types in arrays: `contributors = ["[foo@example.com](mailto:foo@example.com)", { name = "Baz", email = "[baz@example.com](mailto:baz@example.com)" }]` 
+ Bracketed headers with quoted keys: `[foo."bar.baz"]` 

 **Windows user data**   
Windows user data uses PowerShell commands. When creating a managed node group, your custom user data combines with Amazon EKS managed user data. Your PowerShell commands come first, followed by the managed user data commands, all within one `<powershell></powershell>` tag.  
When creating Windows node groups, Amazon EKS updates the `aws-auth` `ConfigMap` to allow Linux-based nodes to join the cluster. The service doesn’t automatically configure permissions for Windows AMIs. If you’re using Windows nodes, you’ll need to manage access either via the access entry API or by updating the `aws-auth` `ConfigMap` directly. For more information, see [Deploy Windows nodes on EKS clusters](windows-support.md).
When no AMI ID is specified in the launch template, don’t use the Windows Amazon EKS Bootstrap script in user data to configure Amazon EKS.
Example user data is as follows.  

```
<powershell>
Write-Host "Running custom user data script"
</powershell>
```

## Specifying an AMI
<a name="launch-template-custom-ami"></a>

If you have either of the following requirements, then specify an AMI ID in the `ImageId` field of your launch template. See the following sections for additional information.

### Provide user data to pass arguments to the `bootstrap.sh` file included with an Amazon EKS optimized Linux/Bottlerocket AMI
<a name="mng-specify-eks-ami"></a>

Bootstrapping is a term used to describe adding commands that can be run when an instance starts. For example, bootstrapping allows using extra [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) arguments. You can pass arguments to the `bootstrap.sh` script by using `eksctl` without specifying a launch template. Or you can do so by specifying the information in the user data section of a launch template.

 **eksctl without specifying a launch template**   
Create a file named *my-nodegroup.yaml* with the following contents. Replace every *example value* with your own values. The `--apiserver-endpoint`, `--b64-cluster-ca`, and `--dns-cluster-ip` arguments are optional. However, defining them allows the `bootstrap.sh` script to avoid making a `describeCluster` call. This is useful in private cluster setups or clusters where you’re scaling in and out nodes frequently. For more information on the `bootstrap.sh` script, see the [bootstrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/main/templates/al2/runtime/bootstrap.sh) file on GitHub.  
+ The only required argument is the cluster name (*my-cluster*).
+ To retrieve an optimized AMI ID to use for `ami-1234567890abcdef0`, see the following sections:
  +  [Retrieve recommended Amazon Linux AMI IDs](retrieve-ami-id.md) 
  +  [Retrieve recommended Bottlerocket AMI IDs](retrieve-ami-id-bottlerocket.md) 
  +  [Retrieve recommended Microsoft Windows AMI IDs](retrieve-windows-ami-id.md) 
+ To retrieve the *certificate-authority* for your cluster, run the following command.

  ```
  aws eks describe-cluster --query "cluster.certificateAuthority.data" --output text --name my-cluster --region region-code
  ```
+ To retrieve the *api-server-endpoint* for your cluster, run the following command.

  ```
  aws eks describe-cluster --query "cluster.endpoint" --output text --name my-cluster --region region-code
  ```
+ The value for `--dns-cluster-ip` is your service CIDR with `.10` at the end. To retrieve the *service-cidr* for your cluster, run the following command. For example, if the returned value for `ipv4` is `10.100.0.0/16`, then your value is *10.100.0.10*.

  ```
  aws eks describe-cluster --query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" --output text --name my-cluster --region region-code
  ```
+ This example provides a `kubelet` argument to set a custom `max-pods` value using the `bootstrap.sh` script included with the Amazon EKS optimized AMI. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters. For help with selecting *my-max-pods-value*, and for more information about how `maxPods` is determined when using managed node groups, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).

  ```
  ---
  apiVersion: eksctl.io/v1alpha5
  kind: ClusterConfig
  
  metadata:
    name: my-cluster
    region: region-code
  
  managedNodeGroups:
    - name: my-nodegroup
      ami: ami-1234567890abcdef0
      instanceType: m5.large
      privateNetworking: true
      disableIMDSv1: true
      labels: { x86-al2-specified-mng }
      overrideBootstrapCommand: |
        #!/bin/bash
        /etc/eks/bootstrap.sh my-cluster \
          --b64-cluster-ca certificate-authority \
          --apiserver-endpoint api-server-endpoint \
          --dns-cluster-ip service-cidr.10 \
          --kubelet-extra-args '--max-pods=my-max-pods-value' \
          --use-max-pods false
  ```

  For every available `eksctl` `config` file option, see [Config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation. The `eksctl` utility still creates a launch template for you and populates its user data with the data that you provide in the `config` file.

  Create a node group with the following command.

  ```
  eksctl create nodegroup --config-file=my-nodegroup.yaml
  ```
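The `--dns-cluster-ip` convention described above (the service CIDR with `.10` as the final octet) can be derived with a short shell sketch. The CIDR value here is hypothetical:

```shell
#!/bin/bash
# Derive --dns-cluster-ip from an IPv4 service CIDR (hypothetical value).
service_cidr="10.100.0.0/16"
base="${service_cidr%/*}"        # drop the prefix length -> 10.100.0.0
dns_cluster_ip="${base%.*}.10"   # replace the last octet with 10
echo "$dns_cluster_ip"           # -> 10.100.0.10
```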

 **User data in a launch template**   
Specify the following information in the user data section of your launch template. Replace every *example value* with your own values. The `--apiserver-endpoint`, `--b64-cluster-ca`, and `--dns-cluster-ip` arguments are optional. However, defining them allows the `bootstrap.sh` script to avoid making a `describeCluster` call. This is useful in private cluster setups or clusters where you’re scaling in and out nodes frequently. For more information on the `bootstrap.sh` script, see the [bootstrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/main/templates/al2/runtime/bootstrap.sh) file on GitHub.  
+ The only required argument is the cluster name (*my-cluster*).
+ To retrieve the *certificate-authority* for your cluster, run the following command.

  ```
  aws eks describe-cluster --query "cluster.certificateAuthority.data" --output text --name my-cluster --region region-code
  ```
+ To retrieve the *api-server-endpoint* for your cluster, run the following command.

  ```
  aws eks describe-cluster --query "cluster.endpoint" --output text --name my-cluster --region region-code
  ```
+ The value for `--dns-cluster-ip` is your service CIDR with `.10` at the end. To retrieve the *service-cidr* for your cluster, run the following command. For example, if the returned value for `ipv4` is `10.100.0.0/16`, then your value is *10.100.0.10*.

  ```
  aws eks describe-cluster --query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" --output text --name my-cluster --region region-code
  ```
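  If you script these steps, you can derive the `--dns-cluster-ip` value from the service CIDR directly. The following is a sketch, not part of the official tooling; the function name is hypothetical, and it assumes an IPv4 service CIDR such as the value returned by the previous command.

  ```
  # Derive the cluster DNS IP from an IPv4 service CIDR by taking the
  # network address and replacing the final octet with 10.
  derive_dns_cluster_ip() {
    local cidr="$1"            # for example, 10.100.0.0/16
    local base="${cidr%%/*}"   # strip the prefix length -> 10.100.0.0
    echo "${base%.*}.10"       # replace the last octet  -> 10.100.0.10
  }

  derive_dns_cluster_ip "10.100.0.0/16"   # prints 10.100.0.10
  ```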
+ This example provides a `kubelet` argument to set a custom `max-pods` value using the `bootstrap.sh` script included with the Amazon EKS optimized AMI. For help with selecting *my-max-pods-value*, and for more information about how `maxPods` is determined when using managed node groups, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).

  ```
  MIME-Version: 1.0
  Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
  
  --==MYBOUNDARY==
  Content-Type: text/x-shellscript; charset="us-ascii"
  
  #!/bin/bash
  set -ex
  /etc/eks/bootstrap.sh my-cluster \
    --b64-cluster-ca certificate-authority \
    --apiserver-endpoint api-server-endpoint \
    --dns-cluster-ip service-cidr.10 \
    --kubelet-extra-args '--max-pods=my-max-pods-value' \
    --use-max-pods false
  
  --==MYBOUNDARY==--
  ```
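If you create or update the launch template with the AWS CLI instead of the console, keep in mind that the `UserData` field of a launch template expects the MIME document base64-encoded. The following is a sketch of the encoding step; `user-data.mime` is a placeholder filename, and the shortened MIME document stands in for the full one above.

```
# Save the MIME multipart user data to a file, then base64-encode it for
# the launch template's UserData field.
cat > user-data.mime <<'EOF'
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
set -ex
/etc/eks/bootstrap.sh my-cluster

--==MYBOUNDARY==--
EOF
user_data=$(base64 < user-data.mime | tr -d '\n')

# The encoded value can then be passed in --launch-template-data, for
# example: "{\"UserData\":\"$user_data\", ...}"
```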

### Provide user data to pass arguments to the `Start-EKSBootstrap.ps1` file included with an Amazon EKS optimized Windows AMI
<a name="mng-specify-eks-ami-windows"></a>

Bootstrapping is a term used to describe adding commands that can be run when an instance starts. You can pass arguments to the `Start-EKSBootstrap.ps1` script by using `eksctl` without specifying a launch template. Or you can do so by specifying the information in the user data section of a launch template.

If you want to specify a custom Windows AMI ID, keep in mind the following considerations:
+ You must use a launch template and provide the required bootstrap commands in the user data section. To retrieve your desired Windows AMI ID, you can use the table in [Create nodes with optimized Windows AMIs](eks-optimized-windows-ami.md).
+ There are several limits and conditions. For example, you must add `eks:kube-proxy-windows` to your AWS IAM Authenticator configuration map. For more information, see [Limits and conditions when specifying an AMI ID](#mng-ami-id-conditions).

Specify the following information in the user data section of your launch template. Replace every *example value* with your own values. The `-APIServerEndpoint`, `-Base64ClusterCA`, and `-DNSClusterIP` arguments are optional. However, defining them allows the `Start-EKSBootstrap.ps1` script to avoid making a `describeCluster` call.
+ The only required argument is the cluster name (*my-cluster*).
+ To retrieve the *certificate-authority* for your cluster, run the following command.

  ```
  aws eks describe-cluster --query "cluster.certificateAuthority.data" --output text --name my-cluster --region region-code
  ```
+ To retrieve the *api-server-endpoint* for your cluster, run the following command.

  ```
  aws eks describe-cluster --query "cluster.endpoint" --output text --name my-cluster --region region-code
  ```
+ The value for `-DNSClusterIP` is your service CIDR with `.10` at the end. To retrieve the *service-cidr* for your cluster, run the following command. For example, if the returned value for `ipv4` is `10.100.0.0/16`, then your value is *10.100.0.10*.

  ```
  aws eks describe-cluster --query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" --output text --name my-cluster --region region-code
  ```
+ For additional arguments, see [Bootstrap script configuration parameters](eks-optimized-windows-ami.md#bootstrap-script-configuration-parameters).
**Note**  
If you’re using a custom service CIDR, you need to specify it using the `-ServiceCIDR` parameter. Otherwise, DNS resolution for Pods in the cluster will fail.

```
<powershell>
[string]$EKSBootstrapScriptFile = "$env:ProgramFiles\Amazon\EKS\Start-EKSBootstrap.ps1"
& $EKSBootstrapScriptFile -EKSClusterName my-cluster `
	 -Base64ClusterCA certificate-authority `
	 -APIServerEndpoint api-server-endpoint `
	 -DNSClusterIP service-cidr.10
</powershell>
```

### Run a custom AMI due to specific security, compliance, or internal policy requirements
<a name="mng-specify-custom-ami"></a>

You can use a custom AMI with managed node groups to meet specific security, compliance, or internal policy requirements. For more information, see [Amazon Machine Images (AMI)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) in the *Amazon EC2 User Guide*. The Amazon EKS AMI build specification contains resources and configuration scripts for building a custom Amazon EKS AMI based on Amazon Linux. For more information, see [Amazon EKS AMI Build Specification](https://github.com/awslabs/amazon-eks-ami/) on GitHub. To build custom AMIs for other operating systems, see [Amazon EKS Sample Custom AMIs](https://github.com/aws-samples/amazon-eks-custom-amis) on GitHub.

You cannot use dynamic parameter references for AMI IDs in Launch Templates used with managed node groups.

**Important**  
When specifying an AMI, Amazon EKS does not validate the Kubernetes version embedded in your AMI against your cluster’s control plane version. You are responsible for ensuring that the Kubernetes version of your custom AMI conforms to the [Kubernetes version skew policy](https://kubernetes.io/releases/version-skew-policy):
+ The `kubelet` version on your nodes must not be newer than your cluster version.
+ The `kubelet` version on your nodes must be no more than 3 minor versions behind your cluster version (for Kubernetes version `1.28` or higher), or no more than 2 minor versions behind (for Kubernetes version `1.27` or lower).

Creating managed node groups with version skew violations may result in:
+ Nodes failing to join the cluster
+ Undefined behavior or API incompatibilities
+ Cluster instability or workload failures
When specifying an AMI, Amazon EKS doesn’t merge any user data. Rather, you’re responsible for supplying the required `bootstrap` commands for nodes to join the cluster. If your nodes fail to join the cluster, the Amazon EKS `CreateNodegroup` and `UpdateNodegroupVersion` actions also fail.

## Limits and conditions when specifying an AMI ID
<a name="mng-ami-id-conditions"></a>

The following are the limits and conditions involved with specifying an AMI ID with managed node groups:
+ You must create a new node group to switch between specifying an AMI ID in a launch template and not specifying an AMI ID.
+ You aren’t notified in the console when a newer AMI version is available. To update your node group to a newer AMI version, you need to create a new version of your launch template with an updated AMI ID. Then, you need to update the node group with the new launch template version.
+ The following fields can’t be set in the API if you specify an AMI ID:
  +  `amiType` 
  +  `releaseVersion` 
  +  `version` 
+ Any `taints` set in the API are applied asynchronously if you specify an AMI ID. To apply taints prior to a node joining the cluster, you must pass the taints to `kubelet` in your user data using the `--register-with-taints` command line flag. For more information, see [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) in the Kubernetes documentation.
+ When specifying a custom AMI ID for Windows managed node groups, add `eks:kube-proxy-windows` to your AWS IAM Authenticator configuration map. This is required for DNS to function properly.

  1. Open the AWS IAM Authenticator configuration map for editing.

     ```
     kubectl edit -n kube-system cm aws-auth
     ```

  1. Add this entry to the `groups` list under each `rolearn` associated with Windows nodes. Your configuration map should look similar to [aws-auth-cm-windows.yaml](https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm-windows.yaml).

     ```
     - eks:kube-proxy-windows
     ```

  1. Save the file and exit your text editor.
+ For any AMI that uses a custom launch template, the default `HttpPutResponseHopLimit` for managed node groups is set to `2`.
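To apply a taint before a node joins the cluster, as described in the conditions above, the bootstrap call in your user data can pass the taint through `--kubelet-extra-args`. The following is a sketch that assumes an Amazon EKS optimized Amazon Linux AMI; `dedicated=gpu:NoSchedule` is a hypothetical taint that you would replace with your own.

```
#!/bin/bash
set -ex
# Register the node with the taint at kubelet startup so that Pods without
# a matching toleration can't be scheduled before the taint is applied.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--register-with-taints=dedicated=gpu:NoSchedule'
```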

# Delete a managed node group from your cluster
<a name="delete-managed-node-group"></a>

This topic describes how you can delete an Amazon EKS managed node group. When you delete a managed node group, Amazon EKS first sets the minimum, maximum, and desired size of your Auto Scaling group to zero. This then causes your node group to scale down.

Before each instance is terminated, Amazon EKS sends a signal to drain that node. During the drain process, Kubernetes does the following for each pod on the node: runs any configured `preStop` lifecycle hooks, sends `SIGTERM` signals to the containers, then waits for the `terminationGracePeriodSeconds` for graceful shutdown. If the node hasn’t been drained after 5 minutes, Amazon EKS lets Auto Scaling continue the forced termination of the instance. After all instances have been terminated, the Auto Scaling group is deleted.
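The drain behavior interacts with settings in your Pod specs. The following minimal manifest is a sketch showing where the relevant fields live; the name, image, and values are illustrative only.

```
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-example
spec:
  terminationGracePeriodSeconds: 60   # how long Kubernetes waits after SIGTERM
  containers:
  - name: app
    image: public.ecr.aws/nginx/nginx:latest
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # runs before SIGTERM is sent
```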

**Important**  
If you delete a managed node group that uses a node IAM role that isn’t used by any other managed node group in the cluster, the role is removed from the `aws-auth` `ConfigMap`. If any of the self-managed node groups in the cluster use the same node IAM role, the self-managed nodes move to `NotReady` status, and cluster operation is disrupted. To add a mapping for the role that you’re using only for the self-managed node groups, see [Create access entries](creating-access-entries.md) if your cluster’s platform version is at least the minimum version listed in the prerequisites section of [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md). If your platform version is earlier than that minimum, you can instead add the entry back to the `aws-auth` `ConfigMap`. For more information, enter `eksctl create iamidentitymapping --help` in your terminal.

You can delete a managed node group with:
+  [`eksctl`](#eksctl-delete-managed-nodegroup) 
+  [AWS Management Console](#console-delete-managed-nodegroup) 
+  [AWS CLI](#awscli-delete-managed-nodegroup) 

## `eksctl`
<a name="eksctl-delete-managed-nodegroup"></a>

 **Delete a managed node group with `eksctl`** 

Enter the following command. Replace every `<example value>` with your own values.

```
eksctl delete nodegroup \
  --cluster <my-cluster> \
  --name <my-mng> \
  --region <region-code>
```

For more options, see [Deleting and draining nodegroups](https://eksctl.io/usage/nodegroups/#deleting-and-draining-nodegroups) in the `eksctl` documentation.

## AWS Management Console
<a name="console-delete-managed-nodegroup"></a>

 **Delete a managed node group with the AWS Management Console** 

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. On the **Clusters** page, choose the cluster that contains the node group to delete.

1. On the selected cluster page, choose the **Compute** tab.

1. In the **Node groups** section, choose the node group to delete. Then choose **Delete**.

1. In the **Delete node group** confirmation dialog box, enter the name of the node group. Then choose **Delete**.

## AWS CLI
<a name="awscli-delete-managed-nodegroup"></a>

 **Delete a managed node group with the AWS CLI** 

1. Enter the following command. Replace every `<example value>` with your own values.

   ```
   aws eks delete-nodegroup \
     --cluster-name <my-cluster> \
     --nodegroup-name <my-mng> \
     --region <region-code>
   ```

1. If the response output opens in a pager, use the arrow keys on your keyboard to scroll through it. Press the `q` key when you’re finished. To disable the pager, set `cli_pager=` in your AWS CLI configuration file.

   For more options, see the [delete-nodegroup](https://docs.aws.amazon.com/cli/latest/reference/eks/delete-nodegroup.html) command in the *AWS CLI Command Reference*.
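Node group deletion is asynchronous. If you’re scripting cleanup and need to block until the node group is gone (for example, before deleting the cluster), you can use the AWS CLI waiter. This sketch uses the same placeholder values as the previous command.

```
# Polls the node group and returns once it no longer exists.
aws eks wait nodegroup-deleted \
  --cluster-name <my-cluster> \
  --nodegroup-name <my-mng> \
  --region <region-code>
```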