


# Create a managed node group for your cluster
<a name="create-managed-node-group"></a>

This topic describes how you can launch Amazon EKS managed node groups of nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them.

If this is your first time launching an Amazon EKS managed node group, we recommend that you instead follow one of our guides in [Get started with Amazon EKS](getting-started.md). These guides provide walkthroughs for creating an Amazon EKS cluster with nodes.

**Important**  
Amazon EKS nodes are standard Amazon EC2 instances. You’re billed based on the normal Amazon EC2 prices. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/).
You can’t create managed nodes in an AWS Region where you have AWS Outposts or AWS Wavelength enabled. You can create self-managed nodes instead. For more information, see [Create self-managed Amazon Linux nodes](launch-workers.md), [Create self-managed Microsoft Windows nodes](launch-windows-workers.md), and [Create self-managed Bottlerocket nodes](launch-node-bottlerocket.md). You can also create a self-managed Amazon Linux node group on an Outpost. For more information, see [Create Amazon Linux nodes on AWS Outposts](eks-outposts-self-managed-nodes.md).
If you don’t [specify an AMI ID](launch-templates.md#launch-template-custom-ami) for the `bootstrap.sh` file included with Amazon EKS optimized Linux or Bottlerocket, managed node groups enforce a maximum value for `maxPods`. For instances with fewer than 30 vCPUs, the maximum is `110`. For instances with more than 30 vCPUs, the maximum is `250`. This enforcement overrides other `maxPods` configurations, including `maxPodsExpression`. For more information about how `maxPods` is determined and how to customize it, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).
**Prerequisites**

+ An existing Amazon EKS cluster. To deploy one, see [Create an Amazon EKS cluster](create-cluster.md).
+ An existing IAM role for the nodes to use. To create one, see [Amazon EKS node IAM role](create-node-role.md). If this role doesn’t have either of the policies for the VPC CNI, the separate role that follows is required for the VPC CNI pods.
+ (Optional, but recommended) The Amazon VPC CNI plugin for Kubernetes add-on configured with its own IAM role that has the necessary IAM policy attached to it. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).
+ Familiarity with the considerations listed in [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md). Depending on the instance type you choose, there may be additional prerequisites for your cluster and VPC.
+ To add a Windows managed node group, you must first enable Windows support for your cluster. For more information, see [Deploy Windows nodes on EKS clusters](windows-support.md).
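Before proceeding, you can optionally confirm the first two prerequisites from the command line. This is a minimal sketch; `my-cluster` and `AmazonEKSNodeRole` are placeholder names that you would replace with your own.

```bash
# Confirm the cluster exists and is ACTIVE (my-cluster is a placeholder).
aws eks describe-cluster --name my-cluster \
  --query cluster.status --output text

# Confirm the node IAM role exists (AmazonEKSNodeRole is a placeholder).
aws iam get-role --role-name AmazonEKSNodeRole \
  --query Role.Arn --output text
```

Both commands are read-only, so they’re safe to run repeatedly while you prepare the rest of the prerequisites.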

You can create a managed node group with either of the following:
+  [`eksctl`](#eksctl_create_managed_nodegroup) 
+  [AWS Management Console](#console_create_managed_nodegroup) 

## `eksctl`
<a name="eksctl_create_managed_nodegroup"></a>

 **Create a managed node group with eksctl** 

This procedure requires `eksctl` version `0.215.0` or later. You can check your version with the following command:

```
eksctl version
```

For instructions on how to install or upgrade `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

1. (Optional) If the **AmazonEKS_CNI_Policy** managed IAM policy is attached to your [Amazon EKS node IAM role](create-node-role.md), we recommend assigning it instead to an IAM role that you associate with the Kubernetes `aws-node` service account. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

1. Create a managed node group with or without using a custom launch template. Manually specifying a launch template allows for greater customization of a node group. For example, it can allow deploying a custom AMI or providing arguments to the `bootstrap.sh` script in an Amazon EKS optimized AMI. For a complete list of every available option and default, enter the following command.

   ```
   eksctl create nodegroup --help
   ```

   In the following command, replace *my-cluster* with the name of your cluster and replace *my-mng* with the name of your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
**Important**  
If you don’t use a custom launch template when first creating a managed node group, don’t use one at a later time for the node group. If you didn’t specify a custom launch template, the system auto-generates one. We recommend that you don’t modify this auto-generated launch template manually, because doing so might cause errors.

 **Without a launch template** 

 `eksctl` creates a default Amazon EC2 launch template in your account, based on the options that you specify, and deploys the node group using that launch template. Before specifying a value for `--node-type`, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md).

Replace *ami-family* with an allowed keyword. For more information, see [Setting the node AMI Family](https://eksctl.io/usage/custom-ami-support/#setting-the-node-ami-family) in the `eksctl` documentation. Replace *my-key* with the name of your Amazon EC2 key pair or public key. This key is used to SSH into your nodes after they launch.

**Note**  
For Windows, this command doesn’t enable SSH. Instead, it associates your Amazon EC2 key pair with the instance and allows you to RDP into the instance.

If you don’t already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For Linux information, see [Amazon EC2 key pairs and Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*. For Windows information, see [Amazon EC2 key pairs and Windows instances](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.

We recommend blocking Pod access to IMDS if the following conditions are true:
+ You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
+ No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

If you want to block Pod access to IMDS, then add the `--disable-pod-imds` option to the following command.

```
eksctl create nodegroup \
  --cluster my-cluster \
  --region region-code \
  --name my-mng \
  --node-ami-family ami-family \
  --node-type m5.large \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 4 \
  --ssh-access \
  --ssh-public-key my-key
```

Your instances can optionally assign a significantly higher number of IP addresses to Pods, assign IP addresses to Pods from a different CIDR block than the instance’s, and be deployed to a cluster without internet access. For more information, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md), [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md), and [Deploy private clusters with limited internet access](private-clusters.md) for additional options to add to the previous command.

Managed node groups calculate and apply a single value for the maximum number of Pods that can run on each node of your node group, based on instance type. If you create a node group with different instance types, the smallest value calculated across all instance types is applied as the maximum number of Pods that can run on every instance type in the node group. For more information about how this value is calculated, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).
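As a rough sketch of how the per-instance value is derived when prefix delegation is disabled, a node’s `maxPods` follows from its ENI and per-ENI IPv4 limits. The values below are the published limits for `m5.large` and serve only as an example.

```bash
# Default maxPods formula (VPC CNI, no prefix delegation):
#   maxPods = ENIs * (IPv4 addresses per ENI - 1) + 2
# The ENI/IP limits below are for m5.large (3 ENIs, 10 IPv4 addresses each).
enis=3
ips_per_eni=10
max_pods=$(( enis * (ips_per_eni - 1) + 2 ))
echo "$max_pods"   # 29 for m5.large
```

The `awslabs/amazon-eks-ami` repository also publishes a `max-pods-calculator.sh` helper script that performs this calculation per instance type.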

 **With a launch template** 

The launch template must already exist and must meet the requirements specified in [Launch template configuration basics](launch-templates.md#launch-template-basics). We recommend blocking Pod access to IMDS if the following conditions are true:
+ You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
+ No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

If you want to block Pod access to IMDS, then specify the necessary settings in the launch template.

1. Copy the following contents to your device. Replace the example values and then run the modified command to create the `eks-nodegroup.yaml` file. Several settings that you specify when deploying without a launch template are moved into the launch template. If you don’t specify a `version`, the template’s default version is used.

   ```
   cat >eks-nodegroup.yaml <<EOF
   apiVersion: eksctl.io/v1alpha5
   kind: ClusterConfig
   metadata:
     name: my-cluster
     region: region-code
   managedNodeGroups:
   - name: my-mng
     launchTemplate:
       id: lt-id
       version: "1"
   EOF
   ```

   For a complete list of `eksctl` config file settings, see [Config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation. Your instances can optionally assign a significantly higher number of IP addresses to Pods, assign IP addresses to Pods from a different CIDR block than the instance’s, and be deployed to a cluster without outbound internet access. For more information, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md), [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md), and [Deploy private clusters with limited internet access](private-clusters.md) for additional options to add to the config file.

   If you didn’t specify an AMI ID in your launch template, managed node groups calculate and apply a single value for the maximum number of Pods that can run on each node of your node group, based on instance type. If you create a node group with different instance types, the smallest value calculated across all instance types is applied as the maximum number of Pods that can run on every instance type in the node group. For more information about how this value is calculated, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).

   If you specified an AMI ID in your launch template, specify the maximum number of Pods that can run on each node of your node group if you’re using [custom networking](cni-custom-network.md) or want to [increase the number of IP addresses assigned to your instance](cni-increase-ip-addresses.md). For more information, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).

1. Deploy the node group with the following command.

   ```
   eksctl create nodegroup --config-file eks-nodegroup.yaml
   ```

## AWS Management Console
<a name="console_create_managed_nodegroup"></a>

 **Create a managed node group using the AWS Management Console** 

1. Wait for your cluster status to show as `ACTIVE`. You can’t create a managed node group for a cluster that isn’t already `ACTIVE`.
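
   If you’re scripting this step, the AWS CLI can block until the cluster is `ACTIVE`. This is a sketch; *my-cluster* is a placeholder name.

   ```bash
   # Wait until the cluster reports ACTIVE, then print its status.
   aws eks wait cluster-active --name my-cluster
   aws eks describe-cluster --name my-cluster \
     --query cluster.status --output text
   ```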

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to create a managed node group in.

1. Select the **Compute** tab.

1. Choose **Add node group**.

1. On the **Configure node group** page, fill out the parameters accordingly, and then choose **Next**.
   +  **Name** – Enter a unique name for your managed node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
   +  **Node IAM role** – Choose the node instance role to use with your node group. For more information, see [Amazon EKS node IAM role](create-node-role.md).
**Important**  
You can’t use the same role that is used to create any clusters.
We recommend using a role that’s not currently in use by any self-managed node group, unless you plan to use it with a new self-managed node group. For more information, see [Delete a managed node group from your cluster](delete-managed-node-group.md).
   +  **Use launch template** – (Optional) Choose if you want to use an existing launch template. Select a **Launch Template Name**. Then, select a **Launch template version**. If you don’t select a version, then Amazon EKS uses the template’s default version. Launch templates allow for more customization of your node group, such as deploying a custom AMI, assigning a significantly higher number of IP addresses to Pods, assigning IP addresses to Pods from a different CIDR block than the instance’s, and deploying nodes to a cluster without outbound internet access. For more information, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md), [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md), and [Deploy private clusters with limited internet access](private-clusters.md).

     The launch template must meet the requirements in [Customize managed nodes with launch templates](launch-templates.md). If you don’t use your own launch template, the Amazon EKS API creates a default Amazon EC2 launch template in your account and deploys the node group using the default launch template.

     If you implement [IAM roles for service accounts](iam-roles-for-service-accounts.md), assign necessary permissions directly to every Pod that requires access to AWS services, and no Pods in your cluster require access to IMDS for other reasons, such as retrieving the current AWS Region, then you can also disable access to IMDS for Pods that don’t use host networking in a launch template. For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
   +  **Kubernetes labels** – (Optional) You can choose to apply Kubernetes labels to the nodes in your managed node group.
   +  **Kubernetes taints** – (Optional) You can choose to apply Kubernetes taints to the nodes in your managed node group. The available options in the **Effect** menu are `NoSchedule`, `NoExecute`, and `PreferNoSchedule`. For more information, see [Recipe: Prevent pods from being scheduled on specific nodes](node-taints-managed-node-groups.md).
   +  **Tags** – (Optional) You can choose to tag your Amazon EKS managed node group. These tags don’t propagate to other resources in the node group, such as Auto Scaling groups or instances. For more information, see [Organize Amazon EKS resources with tags](eks-using-tags.md).
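
   The name, role, labels, taints, and tags configured on this page map to flags on the equivalent AWS CLI call. The following is a sketch only; the cluster name, node group name, role ARN, subnet IDs, and label/taint/tag values are all placeholders.

   ```bash
   # Create a node group with labels, a taint, and a tag (all values are placeholders).
   aws eks create-nodegroup \
     --cluster-name my-cluster \
     --nodegroup-name my-mng \
     --node-role arn:aws:iam::111122223333:role/AmazonEKSNodeRole \
     --subnets subnet-12345678 subnet-87654321 \
     --labels team=web,env=prod \
     --taints key=dedicated,value=special,effect=NO_SCHEDULE \
     --tags costCenter=1234
   ```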

1. On the **Set compute and scaling configuration** page, fill out the parameters accordingly, and then choose **Next**.
   +  **AMI type** – Select an AMI type. If you are deploying Arm instances, be sure to review the considerations in [Amazon EKS optimized Arm Amazon Linux AMIs](eks-optimized-ami.md#arm-ami) before deploying.

     If you specified a launch template on the previous page, and specified an AMI in the launch template, then you can’t select a value. The value from the template is displayed. The AMI specified in the template must meet the requirements in [Specifying an AMI](launch-templates.md#launch-template-custom-ami).
   +  **Capacity type** – Select a capacity type. For more information about choosing a capacity type, see [Managed node group capacity types](managed-node-groups.md#managed-node-group-capacity-types). You can’t mix different capacity types within the same node group. If you want to use both capacity types, create separate node groups, each with its own capacity and instance types. See [Reserve GPUs for managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/capacity-blocks-mng.html) for information on provisioning and scaling GPU-accelerated worker nodes.
   +  **Instance types** – By default, one or more instance types are specified. To remove a default instance type, select the `X` on the right side of the instance type. Choose the instance types to use in your managed node group. For more information, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md).

     The console displays a set of commonly used instance types. If you need to create a managed node group with an instance type that’s not displayed, then use `eksctl`, the AWS CLI, AWS CloudFormation, or an SDK to create the node group. If you specified a launch template on the previous page, then you can’t select a value because the instance type must be specified in the launch template. The value from the launch template is displayed. If you selected **Spot** for **Capacity type**, then we recommend specifying multiple instance types to enhance availability.
   +  **Disk size** – Enter the disk size (in GiB) to use for your node’s root volume.

     If you specified a launch template on the previous page, then you can’t select a value because it must be specified in the launch template.
   +  **Desired size** – Specify the current number of nodes that the managed node group should maintain at launch.
**Note**  
Amazon EKS doesn’t automatically scale your node group in or out. However, you can configure the Kubernetes Cluster Autoscaler to do this for you. For more information, see [Cluster Autoscaler on AWS](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md).
   +  **Minimum size** – Specify the minimum number of nodes that the managed node group can scale in to.
   +  **Maximum size** – Specify the maximum number of nodes that the managed node group can scale out to.
   +  **Node group update configuration** – (Optional) You can select the number or percentage of nodes to be updated in parallel. These nodes will be unavailable during the update. For **Maximum unavailable**, select one of the following options and specify a **Value**:
     +  **Number** – Select and specify the number of nodes in your node group that can be updated in parallel.
     +  **Percentage** – Select and specify the percentage of nodes in your node group that can be updated in parallel. This is useful if you have a large number of nodes in your node group.
   +  **Node auto repair configuration** – (Optional) If you select the **Enable node auto repair** checkbox, Amazon EKS automatically replaces nodes when it detects issues. For more information, see [Detect node health issues and enable automatic node repair](node-health.md).
   +  **Warm pool configuration** – (Optional) If you select the **Enable warm pool configuration** checkbox, Amazon EKS creates a warm pool on the node group’s Auto Scaling group. For more information, see [Decrease latency for applications with long boot times using warm pools with managed node groups](warm-pools-managed-node-groups.md).
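
   The scaling and update settings on this page can also be changed later with the AWS CLI. This sketch uses placeholder names; choose either `maxUnavailable` or `maxUnavailablePercentage` for the update configuration, not both.

   ```bash
   # Adjust scaling and the parallel-update limit on an existing node group.
   aws eks update-nodegroup-config \
     --cluster-name my-cluster \
     --nodegroup-name my-mng \
     --scaling-config minSize=2,maxSize=4,desiredSize=3 \
     --update-config maxUnavailable=1
   ```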

1. On the **Specify networking** page, fill out the parameters accordingly, and then choose **Next**.
   +  **Subnets** – Choose the subnets to launch your managed nodes into.
**Important**  
If you are running a stateful application across multiple Availability Zones that is backed by Amazon EBS volumes and using the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md), you should configure multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the `--balance-similar-node-groups` feature.
**Important**  
If you choose a public subnet, and your cluster has only the public API server endpoint enabled, then the subnet must have `MapPublicIpOnLaunch` set to `true` for the instances to successfully join a cluster. If the subnet was created using `eksctl` or the [Amazon EKS vended AWS CloudFormation templates](creating-a-vpc.md) on or after March 26, 2020, then this setting is already set to `true`. If the subnets were created with `eksctl` or the AWS CloudFormation templates before March 26, 2020, then you need to change the setting manually. For more information, see [Modifying the public IPv4 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-public-ip).
If you use a launch template and specify multiple network interfaces, Amazon EC2 won’t auto-assign a public `IPv4` address, even if `MapPublicIpOnLaunch` is set to `true`. For nodes to join the cluster in this scenario, you must either enable the cluster’s private API server endpoint, or launch nodes in a private subnet with outbound internet access provided through an alternative method, such as a NAT gateway. For more information, see [Amazon EC2 instance IP addressing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html) in the *Amazon EC2 User Guide*.
   +  **Configure SSH access to nodes** (Optional). Enabling SSH allows you to connect to your instances and gather diagnostic information if there are issues. We highly recommend enabling remote access when you create a node group. You can’t enable remote access after the node group is created.

     If you chose to use a launch template, then this option isn’t shown. To enable remote access to your nodes, specify a key pair in the launch template and ensure that the proper port is open to the nodes in the security groups that you specify in the launch template. For more information, see [Using custom security groups](launch-templates.md#launch-template-security-groups).
**Note**  
For Windows, this option doesn’t enable SSH. Instead, it associates your Amazon EC2 key pair with the instance and allows you to RDP into the instance.
   + For **SSH key pair** (Optional), choose an Amazon EC2 SSH key to use. For Linux information, see [Amazon EC2 key pairs and Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*. For Windows information, see [Amazon EC2 key pairs and Windows instances](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*. If you chose to use a launch template, then you can’t select one. When an Amazon EC2 SSH key is provided for node groups using Bottlerocket AMIs, the administrative container is also enabled. For more information, see [Admin container](https://github.com/bottlerocket-os/bottlerocket#admin-container) on GitHub.
   + For **Allow SSH remote access from**, if you want to limit access to specific instances, then select the security groups that are associated to those instances. If you don’t select specific security groups, then SSH access is allowed from anywhere on the internet (`0.0.0.0/0`).
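
   You can verify the `MapPublicIpOnLaunch` setting described above before launching nodes. This is a sketch; `subnet-12345678` is a placeholder subnet ID.

   ```bash
   # Check whether the subnet auto-assigns public IPv4 addresses.
   aws ec2 describe-subnets \
     --subnet-ids subnet-12345678 \
     --query 'Subnets[].MapPublicIpOnLaunch'

   # If it returns false and your nodes need public IPs, enable the attribute.
   aws ec2 modify-subnet-attribute \
     --subnet-id subnet-12345678 \
     --map-public-ip-on-launch
   ```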

1. On the **Review and create** page, review your managed node group configuration and choose **Create**.

   If nodes fail to join the cluster, then see [Nodes fail to join cluster](troubleshooting.md#worker-node-fail) in the Troubleshooting chapter.

1. Watch the status of your nodes and wait for them to reach the `Ready` status.

   ```
   kubectl get nodes --watch
   ```

1. (GPU nodes only) If you chose a GPU instance type and an Amazon EKS optimized accelerated AMI, then you must apply the [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin) as a DaemonSet on your cluster. Replace *vX.X.X* with your desired [NVIDIA/k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin/releases) version before running the following command.

   ```
   kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/vX.X.X/deployments/static/nvidia-device-plugin.yml
   ```
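
   After the manifest is applied, you can confirm that the plugin is running and that nodes advertise GPUs. The DaemonSet name below matches the upstream manifest at the time of writing; verify it against the release version you deployed.

   ```bash
   # Confirm the device plugin DaemonSet is running.
   kubectl get daemonset -n kube-system nvidia-device-plugin-daemonset

   # List allocatable GPUs per node (resource name nvidia.com/gpu).
   kubectl get nodes \
     -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
   ```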

## Install Kubernetes add-ons
<a name="_install_kubernetes_add_ons"></a>

Now that you have a working Amazon EKS cluster with nodes, you’re ready to start installing Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help you to extend the functionality of your cluster.
+ The [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that created the cluster is the only principal that can make calls to the Kubernetes API server with `kubectl` or the AWS Management Console. If you want other IAM principals to have access to your cluster, then you need to add them. For more information, see [Grant IAM users and roles access to Kubernetes APIs](grant-k8s-access.md) and [Required permissions](view-kubernetes-resources.md#view-kubernetes-resources-permissions).
+ We recommend blocking Pod access to IMDS if the following conditions are true:
  + You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
  + No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

  For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
+ Configure the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) to automatically adjust the number of nodes in your node groups.
+ Deploy a [sample application](sample-deployment.md) to your cluster.
+  [Organize and monitor cluster resources](eks-managing.md) with important tools for managing your cluster.

# Decrease latency for applications with long boot times using warm pools with managed node groups
<a name="warm-pools-managed-node-groups"></a>

When your applications have long initialization or boot times, scale-out events can cause delays, because new nodes must fully boot and join the cluster before Pods can be scheduled on them. This latency can impact application availability during traffic spikes or rapid scaling events. Warm pools solve this problem by maintaining a pool of pre-initialized EC2 instances that have already completed the bootup process. During a scale-out event, instances move from the warm pool directly to your cluster, bypassing the time-consuming initialization steps and significantly reducing the time it takes for new capacity to become available. For more information, see [Decrease latency for applications that have long boot times using warm pools](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html) in the *Amazon EC2 Auto Scaling User Guide*.

Amazon EKS managed node groups support Amazon EC2 Auto Scaling warm pools. A warm pool maintains pre-initialized EC2 instances alongside your Auto Scaling group that can quickly join your cluster during scale-out events. Instances in the warm pool have already completed the bootup initialization process and can be kept in a `Stopped`, `Running`, or `Hibernated` state.

Amazon EKS manages warm pools throughout the node group lifecycle using the `AWSServiceRoleForAmazonEKSNodegroup` service-linked role to create, update, and delete warm pool resources.

## How it works
<a name="warm-pools-how-it-works"></a>

When you configure a warm pool, Amazon EKS creates an EC2 Auto Scaling warm pool attached to your node group’s Auto Scaling group. Instances launch into the warm pool, complete the bootup initialization process, and remain in the configured state (`Running`, `Stopped`, or `Hibernated`) until needed. During scale-out events, instances move from the warm pool to the Auto Scaling group, complete the Amazon EKS initialization process to join the cluster, and become available for pod scheduling. With instance reuse enabled, instances can return to the warm pool during scale-in events.

**Important**  
Always configure warm pools through the Amazon EKS API using `create-nodegroup` or `update-nodegroup-config`. Don’t manually modify warm pool settings using the EC2 Auto Scaling API, as this can cause conflicts with Amazon EKS management of the resources.
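
While you shouldn’t modify the warm pool through the Amazon EC2 Auto Scaling API, read-only inspection is safe. The following sketch looks up the Auto Scaling group that backs the node group and describes its warm pool; the cluster and node group names are placeholders.

```bash
# Find the Auto Scaling group that backs the managed node group.
ASG_NAME=$(aws eks describe-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --query 'nodegroup.resources.autoScalingGroups[0].name' --output text)

# Inspect (read-only) the warm pool attached to that group.
aws autoscaling describe-warm-pool --auto-scaling-group-name "$ASG_NAME"
```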

## Considerations
<a name="warm-pools-considerations"></a>

**Important**  
Before configuring warm pools, review the prerequisites and limitations in [Warm pools for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html) in the *Amazon EC2 Auto Scaling User Guide*. Not all instance types, AMIs, or configurations are supported.
+  **IAM permissions** – The `AWSServiceRoleForAmazonEKSNodegroup` service-linked role (created automatically with your first managed node group) includes the necessary warm pool management permissions.
+  **AMI limitations** – Warm pools don’t support custom AMIs. You must use Amazon EKS optimized AMIs.
+  **Bottlerocket limitations** – If using Bottlerocket AMIs, the `Hibernated` pool state isn’t supported. Use `Stopped` or `Running` pool states only. Additionally, the `reuseOnScaleIn` feature isn’t supported with Bottlerocket AMIs.
+  **Hibernation support** – The `Hibernated` pool state is only supported on specific instance types. See [Hibernation prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hibernating-prerequisites.html) in the *Amazon EC2 User Guide* for supported instance types.
+  **Cost impact** – Creating a warm pool when it’s not required can lead to unnecessary costs.
+  **Capacity planning** – Size your warm pool based on scaling patterns to balance cost and availability. Start with 10-20% of expected peak capacity.
+  **VPC networking** – Ensure sufficient IP addresses for both Auto Scaling group and warm pool instances.

## Configure warm pools
<a name="warm-pools-configuration"></a>

You can configure warm pools when creating a new managed node group or update an existing managed node group to add warm pool support.

### Configuration parameters
<a name="warm-pools-parameters"></a>
+  **enabled** – (boolean) Indicates your intent to attach a warm pool to the managed node group. Required to enable warm pool support.
+  **maxGroupPreparedCapacity** – (integer) Maximum total instances across warm pool and Auto Scaling group combined.
+  **minSize** – (integer) Minimum number of instances to maintain in the warm pool. Default: `0`.
+  **poolState** – (string) State for warm pool instances. Default: `Stopped`.
+  **reuseOnScaleIn** – (boolean) Whether instances return to the warm pool during scale-in events instead of being terminated. Default: `false`. Not supported with Bottlerocket AMIs.

### Using the AWS CLI
<a name="warm-pools-create-cli"></a>

You can configure a warm pool when creating a managed node group or add one to an existing node group.

 **Create a node group with a warm pool** 

```
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --node-role arn:aws:iam::111122223333:role/AmazonEKSNodeRole \
  --subnets subnet-12345678 subnet-87654321 \
  --region us-east-1 \
  --scaling-config minSize=2,maxSize=10,desiredSize=3 \
  --warm-pool-config enabled=true,maxGroupPreparedCapacity=8,minSize=2,poolState=Stopped,reuseOnScaleIn=true
```

 **Add a warm pool to an existing node group** 

```
aws eks update-nodegroup-config \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --region us-east-1 \
  --warm-pool-config enabled=true,maxGroupPreparedCapacity=8,minSize=2,poolState=Stopped,reuseOnScaleIn=true
```

## Update configuration
<a name="warm-pools-update"></a>

Update warm pool settings at any time using `update-nodegroup-config`. Existing warm pool instances aren’t immediately affected; new settings apply to instances entering the warm pool after the update.

```
aws eks update-nodegroup-config \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --region us-east-1 \
  --warm-pool-config enabled=true,maxGroupPreparedCapacity=10,minSize=3,poolState=Running,reuseOnScaleIn=true
```

To disable the warm pool attached to your node group, set `enabled=false`:

```
aws eks update-nodegroup-config \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --region us-east-1 \
  --warm-pool-config enabled=false
```
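
Configuration updates are asynchronous, so you may want to track the update until it completes. This sketch uses the same placeholder names as the commands above.

```bash
# Find the most recent update for the node group and check its status.
UPDATE_ID=$(aws eks list-updates \
  --name my-cluster --nodegroup-name my-nodegroup \
  --query 'updateIds[0]' --output text)

aws eks describe-update \
  --name my-cluster --nodegroup-name my-nodegroup \
  --update-id "$UPDATE_ID" \
  --query 'update.status' --output text
```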

## Additional resources
<a name="warm-pools-additional-resources"></a>
+  [Warm pools for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html) in the *Amazon EC2 Auto Scaling User Guide* 
+  [Simplify node lifecycle with managed node groups](managed-node-groups.md) 