


# Maintain nodes yourself with self-managed nodes
<a name="worker"></a>

A cluster contains one or more Amazon EC2 nodes that Pods are scheduled on. Amazon EKS nodes run in your AWS account and connect to the control plane of your cluster through the cluster API server endpoint. You’re billed for them based on Amazon EC2 prices. For more information, see [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/).

A cluster can contain several node groups. Each node group contains one or more nodes that are deployed in an [Amazon EC2 Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html). The instance type of the nodes within the group can vary, such as when using [attribute-based instance type selection](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-fleet-attribute-based-instance-type-selection.html) with [Karpenter](https://karpenter.sh/). All instances in a node group must use the [Amazon EKS node IAM role](create-node-role.md).

Amazon EKS provides specialized Amazon Machine Images (AMIs) that are called Amazon EKS optimized AMIs. The AMIs are configured to work with Amazon EKS. Their components include `containerd`, `kubelet`, and the AWS IAM Authenticator. The AMIs also contain a specialized [bootstrap script](https://github.com/awslabs/amazon-eks-ami/blob/main/templates/al2/runtime/bootstrap.sh) that allows nodes to discover and connect to your cluster’s control plane automatically.

If you restrict access to the public endpoint of your cluster using CIDR blocks, we recommend that you also enable private endpoint access. This is so that nodes can communicate with the cluster. Without the private endpoint enabled, the CIDR blocks that you specify for public access must include the egress sources from your VPC. For more information, see [Cluster API server endpoint](cluster-endpoint.md).
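If you manage endpoint access with the AWS CLI, the relevant call is `aws eks update-cluster-config`. The following is a minimal sketch, shown as a dry run that only prints the command; the cluster name and CIDR block are placeholders:

```shell
# Dry run: print the command that enables private endpoint access while
# keeping restricted public access. Replace the placeholders with your values.
cluster=my-cluster
cidr=203.0.113.0/24

cmd="aws eks update-cluster-config --name ${cluster} --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs=${cidr},endpointPrivateAccess=true"
echo "${cmd}"
```

After you confirm the printed values, run the `aws` command directly instead of echoing it.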

To add self-managed nodes to your Amazon EKS cluster, see the topics that follow. If you launch self-managed nodes manually, add the following tag to each node while making sure that `<cluster-name>` matches your cluster. For more information, see [Adding and deleting tags on an individual resource](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#adding-or-deleting-tags). If you follow the steps in the guides that follow, the required tag is automatically added to nodes for you.


| Key | Value | 
| --- | --- | 
|   `kubernetes.io/cluster/<cluster-name>`   |   `owned`   | 
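For example, you could apply the required tag to an existing instance with the AWS CLI. The following is a sketch, shown as a dry run that only prints the command; the instance ID and cluster name are placeholders:

```shell
# Dry run: build the required cluster-membership tag and print the tagging
# command. Replace the placeholders with your instance ID and cluster name.
cluster=my-cluster
instance=i-0123456789abcdef0

tag="Key=kubernetes.io/cluster/${cluster},Value=owned"
echo "aws ec2 create-tags --resources ${instance} --tags ${tag}"
```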

**Important**  
Amazon EC2 instance metadata tags aren't compatible with Amazon EKS nodes. When instance metadata tags are enabled, tag keys can't contain forward slashes ('/'). This limitation can cause instance launch failures, particularly when you use node management tools such as Karpenter or Cluster Autoscaler, because these tools rely on tags that contain forward slashes.

For more information about nodes from a general Kubernetes perspective, see [Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) in the Kubernetes documentation.

**Topics**
+ [Create self-managed Amazon Linux nodes](launch-workers.md)
+ [Create self-managed Bottlerocket nodes](launch-node-bottlerocket.md)
+ [Create self-managed Microsoft Windows nodes](launch-windows-workers.md)
+ [Create self-managed Ubuntu Linux nodes](launch-node-ubuntu.md)
+ [Update self-managed nodes for your cluster](update-workers.md)

# Create self-managed Amazon Linux nodes
<a name="launch-workers"></a>

This topic describes how you can launch Auto Scaling groups of Linux nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them. You can launch self-managed Amazon Linux nodes with `eksctl` or the AWS Management Console. If you need to launch nodes on AWS Outposts, see [Create Amazon Linux nodes on AWS Outposts](eks-outposts-self-managed-nodes.md).

**Prerequisites**
+ An existing Amazon EKS cluster. To deploy one, see [Create an Amazon EKS cluster](create-cluster.md). If you have subnets in the AWS Region where you have AWS Outposts, AWS Wavelength, or AWS Local Zones enabled, those subnets must not have been passed in when you created your cluster.
+ An existing IAM role for the nodes to use. To create one, see [Amazon EKS node IAM role](create-node-role.md). If this role doesn’t have either of the policies for the VPC CNI, the separate role that follows is required for the VPC CNI pods.
+ (Optional, but recommended) The Amazon VPC CNI plugin for Kubernetes add-on configured with its own IAM role that has the necessary IAM policy attached to it. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).
+ Familiarity with the considerations listed in [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md). Depending on the instance type you choose, there may be additional prerequisites for your cluster and VPC.

You can launch self-managed Linux nodes using either of the following:
+  [`eksctl`](#eksctl_create_managed_amazon_linux) 
+  [AWS Management Console](#console_create_managed_amazon_linux) 

## `eksctl`
<a name="eksctl_create_managed_amazon_linux"></a>

**Launch self-managed Linux nodes using `eksctl`**

1. Install version `0.215.0` or later of the `eksctl` command line tool on your device or in AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

1. (Optional) If the **AmazonEKS_CNI_Policy** managed IAM policy is attached to your [Amazon EKS node IAM role](create-node-role.md), we recommend assigning it to an IAM role that you associate with the Kubernetes `aws-node` service account instead. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

1. The following command creates a node group in an existing cluster. Replace *al-nodes* with a name for your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters. Replace *my-cluster* with the name of your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in. Replace the remaining *example values* with your own values. By default, the nodes are created with the same Kubernetes version as the control plane.

   Before choosing a value for `--node-type`, review [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md).

   Replace *my-key* with the name of your Amazon EC2 key pair or public key. This key is used to SSH into your nodes after they launch. If you don’t already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see [Amazon EC2 key pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.

   Create your node group with the following command.
**Important**  
If you want to deploy a node group to AWS Outposts, Wavelength, or Local Zone subnets, there are additional considerations:  
The subnets must not have been passed in when you created the cluster.
You must create the node group with a config file that specifies the subnets and [`volumeType`](https://eksctl.io/usage/schema/#nodeGroups-volumeType): `gp2`. For more information, see [Create a nodegroup from a config file](https://eksctl.io/usage/nodegroups/#creating-a-nodegroup-from-a-config-file) and [Config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation.

   ```
   eksctl create nodegroup \
     --cluster my-cluster \
     --name al-nodes \
     --node-type t3.medium \
     --nodes 3 \
     --nodes-min 1 \
     --nodes-max 4 \
     --ssh-access \
     --managed=false \
     --ssh-public-key my-key
   ```
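   For the AWS Outposts, Wavelength, or Local Zone case described in the preceding important note, a config file might look like the following sketch (the subnet ID is a placeholder), which you would pass to `eksctl create nodegroup --config-file=<file>`:

   ```
   apiVersion: eksctl.io/v1alpha5
   kind: ClusterConfig

   metadata:
     name: my-cluster      # your existing cluster
     region: region-code   # your cluster's AWS Region

   nodeGroups:
     - name: al-nodes
       instanceType: t3.medium
       desiredCapacity: 3
       volumeType: gp2               # required for these subnet types
       subnets:
         - subnet-0123456789abcdef0  # placeholder subnet ID
   ```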

   To deploy a node group that:
   + can assign a significantly higher number of IP addresses to Pods than the default configuration, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md).
   + can assign `IPv4` addresses to Pods from a different CIDR block than that of the instance, see [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md).
   + can assign `IPv6` addresses to Pods and services, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).
   + don’t have outbound internet access, see [Deploy private clusters with limited internet access](private-clusters.md).

     For a complete list of all available options and defaults, enter the following command.

     ```
     eksctl create nodegroup --help
     ```

     If nodes fail to join the cluster, then see [Nodes fail to join cluster](troubleshooting.md#worker-node-fail) in the Troubleshooting chapter.

     Several lines of output appear while the nodes are created. One of the last lines resembles the following example.

     ```
     [✔]  created 1 nodegroup(s) in cluster "my-cluster"
     ```

1. (Optional) Deploy a [sample application](sample-deployment.md) to test your cluster and Linux nodes.

1. We recommend blocking Pod access to IMDS if the following conditions are true:
   + You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
   + No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

   For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
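One way to block Pod access to IMDS is to require IMDSv2 and set the instance's metadata response hop limit to 1, so that Pods running in a separate network namespace can't reach the endpoint. The following is a sketch, shown as a dry run that only prints the command; the instance ID is a placeholder:

```shell
# Dry run: print the command that requires IMDSv2 and limits metadata
# responses to one network hop. Replace the placeholder instance ID.
instance=i-0123456789abcdef0

cmd="aws ec2 modify-instance-metadata-options --instance-id ${instance} --http-tokens required --http-put-response-hop-limit 1"
echo "${cmd}"
```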

## AWS Management Console
<a name="console_create_managed_amazon_linux"></a>

**Step 1: Launch self-managed Linux nodes using the AWS Management Console**

1. Download the latest version of the AWS CloudFormation template.

   ```
   curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2025-11-26/amazon-eks-nodegroup.yaml
   ```

1. Wait for your cluster status to show as `ACTIVE`. If you launch your nodes before the cluster is active, the nodes fail to register with the cluster and you will have to relaunch them.

1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).

1. Choose **Create stack** and then select **With new resources (standard)**.

1. For **Specify template**, select **Upload a template file** and then select **Choose file**.

1. Select the `amazon-eks-nodegroup.yaml` file that you downloaded.

1. Select **Next**.

1. On the **Specify stack details** page, enter the following parameters accordingly, and then choose **Next**:
   +  **Stack name**: Choose a stack name for your AWS CloudFormation stack. For example, you can call it *my-cluster-nodes*. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   +  **ClusterName**: Enter the name that you used when you created your Amazon EKS cluster. This name must equal the cluster name or your nodes can’t join the cluster.
   +  **ClusterControlPlaneSecurityGroup**: Choose the **SecurityGroups** value from the AWS CloudFormation output that you generated when you created your [VPC](creating-a-vpc.md).

     The following steps show one way to retrieve the applicable group.

      1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

     1. Choose the name of the cluster.

     1. Choose the **Networking** tab.

     1. Use the **Additional security groups** value as a reference when selecting from the **ClusterControlPlaneSecurityGroup** dropdown list.
   +  **ApiServerEndpoint**: Enter the API server endpoint for your EKS cluster. You can find this value in the **Details** section of the EKS console.
   +  **CertificateAuthorityData**: Enter the base64 encoded Certificate Authority data which can also be found in the EKS Cluster Console’s Details section.
   +  **ServiceCidr**: Enter the CIDR range used for allocating IP addresses to Kubernetes services within the cluster. This is found within the networking tab of the EKS Cluster Console.
   +  **AuthenticationMode**: Select the Authentication Mode in use in the EKS Cluster by reviewing the access tab within the EKS Cluster Console.
   +  **NodeGroupName**: Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that’s created for your nodes. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
   +  **NodeAutoScalingGroupMinSize**: Enter the minimum number of nodes that your node Auto Scaling group can scale in to.
   +  **NodeAutoScalingGroupDesiredCapacity**: Enter the desired number of nodes to scale to when your stack is created.
   +  **NodeAutoScalingGroupMaxSize**: Enter the maximum number of nodes that your node Auto Scaling group can scale out to.
   +  **NodeInstanceType**: Choose an instance type for your nodes. For more information, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md).
   +  **NodeImageIdSSMParam**: Pre-populated with the Amazon EC2 Systems Manager parameter of a recent Amazon EKS optimized Amazon Linux 2023 AMI for a variable Kubernetes version. To use a different Kubernetes minor version supported with Amazon EKS, replace *1.XX* with a different [supported version](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html). We recommend specifying the same Kubernetes version as your cluster.

     You can also replace *amazon-linux-2023* with a different AMI type. For more information, see [Retrieve recommended Amazon Linux AMI IDs](retrieve-ami-id.md).
**Note**  
The Amazon EKS node AMIs are based on Amazon Linux. You can track security or privacy events for Amazon Linux 2023 at the [Amazon Linux Security Center](https://alas.aws.amazon.com/alas2023.html) or subscribe to the associated [RSS feed](https://alas.aws.amazon.com/AL2023/alas.rss). Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.
   +  **NodeImageId**: (Optional) If you’re using your own custom AMI (instead of an Amazon EKS optimized AMI), enter a node AMI ID for your AWS Region. If you specify a value here, it overrides any values in the **NodeImageIdSSMParam** field.
   +  **NodeVolumeSize**: Specify a root volume size for your nodes, in GiB.
   +  **NodeVolumeType**: Specify a root volume type for your nodes.
   +  **KeyName**: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes using SSH after they launch. If you don’t already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see [Amazon EC2 key pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.
   +  **VpcId**: Enter the ID for the [VPC](creating-a-vpc.md) that you created.
   +  **Subnets**: Choose the subnets that you created for your VPC. If you created your VPC using the steps that are described in [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md), specify only the private subnets within the VPC for your nodes to launch into. You can see which subnets are private by opening each subnet link from the **Networking** tab of your cluster.
**Important**  
If any of the subnets are public subnets, then they must have the automatic public IP address assignment setting enabled. If the setting isn’t enabled for the public subnet, then any nodes that you deploy to that public subnet won’t be assigned a public IP address and won’t be able to communicate with the cluster or other AWS services. If the subnet was deployed before March 26, 2020 using either of the [Amazon EKS AWS CloudFormation VPC templates](creating-a-vpc.md), or by using `eksctl`, then automatic public IP address assignment is disabled for public subnets. For information about how to enable public IP address assignment for a subnet, see [Modifying the public IPv4 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-public-ip). If the node is deployed to a private subnet, then it’s able to communicate with the cluster and other AWS services through a NAT gateway.
If the subnets don’t have internet access, make sure that you’re aware of the considerations and extra steps in [Deploy private clusters with limited internet access](private-clusters.md).
If you select AWS Outposts, Wavelength, or Local Zone subnets, the subnets must not have been passed in when you created the cluster.

1. Select your desired choices on the **Configure stack options** page, and then choose **Next**.

1. Select the check box to the left of **I acknowledge that AWS CloudFormation might create IAM resources.**, and then choose **Create stack**.

1. When your stack has finished creating, select it in the console and choose **Outputs**. If you are using the `EKS API` or `EKS API and ConfigMap` Authentication Modes, this is the last step.

1. If you are using the `ConfigMap` Authentication Mode, record the **NodeInstanceRole** for the node group that was created.
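If you prefer the AWS CLI to the console for looking up the stack parameter values above (**ApiServerEndpoint**, **CertificateAuthorityData**, **ServiceCidr**, and **AuthenticationMode**), `aws eks describe-cluster` returns all four. The following is a sketch, shown as a dry run that only prints the commands; the cluster name is a placeholder, and the `accessConfig.authenticationMode` path assumes a cluster that supports access entries:

```shell
# Dry run: print one describe-cluster command per stack parameter value.
# Replace the placeholder cluster name before running the printed commands.
cluster=my-cluster

cmds=""
for query in \
  cluster.endpoint \
  cluster.certificateAuthority.data \
  cluster.kubernetesNetworkConfig.serviceIpv4Cidr \
  cluster.accessConfig.authenticationMode
do
  cmds="${cmds}aws eks describe-cluster --name ${cluster} --query ${query} --output text
"
done
printf '%s' "${cmds}"
```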

 **Step 2: Enable nodes to join your cluster** 

**Note**  
The following two steps are required only if you're using the `ConfigMap` authentication mode in your EKS cluster. Additionally, if you launched nodes inside a private VPC without outbound internet access, make sure to enable nodes to join your cluster from within the VPC.

1. Check to see if you already have an `aws-auth` `ConfigMap`.

   ```
   kubectl describe configmap -n kube-system aws-auth
   ```

1. If you are shown an `aws-auth` `ConfigMap`, then update it as needed.

   1. Open the `ConfigMap` for editing.

      ```
      kubectl edit -n kube-system configmap/aws-auth
      ```

   1. Add a new `mapRoles` entry as needed. Set the `rolearn` value to the **NodeInstanceRole** value that you recorded in the previous procedure.

      ```
      [...]
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
      [...]
      ```

   1. Save the file and exit your text editor.

1. If you received an error stating `Error from server (NotFound): configmaps "aws-auth" not found`, then apply the stock `ConfigMap`.

   1. Download the configuration map.

      ```
      curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
      ```

   1. In the `aws-auth-cm.yaml` file, set the `rolearn` value to the **NodeInstanceRole** value that you recorded in the previous procedure. You can do this with a text editor, or by replacing *my-node-instance-role* and running the following command:

      ```
      sed -i.bak -e 's|<ARN of instance role (not instance profile)>|my-node-instance-role|' aws-auth-cm.yaml
      ```

   1. Apply the configuration. This command may take a few minutes to finish.

      ```
      kubectl apply -f aws-auth-cm.yaml
      ```

1. Watch the status of your nodes and wait for them to reach the `Ready` status.

   ```
   kubectl get nodes --watch
   ```

   Enter `Ctrl`+`C` to return to a shell prompt.
**Note**  
If you receive any authorization or resource type errors, see [Unauthorized or access denied (`kubectl`)](troubleshooting.md#unauthorized) in the troubleshooting topic.

   If nodes fail to join the cluster, then see [Nodes fail to join cluster](troubleshooting.md#worker-node-fail) in the Troubleshooting chapter.

1. (GPU nodes only) If you chose a GPU instance type and the Amazon EKS optimized accelerated AMI, you must apply the [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin) as a DaemonSet on your cluster. Replace *vX.X.X* with your desired [NVIDIA/k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin/releases) version before running the following command.

   ```
   kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/vX.X.X/deployments/static/nvidia-device-plugin.yml
   ```

 **Step 3: Additional actions** 

1. (Optional) Deploy a [sample application](sample-deployment.md) to test your cluster and Linux nodes.

1. (Optional) If the **AmazonEKS_CNI_Policy** managed IAM policy (if you have an `IPv4` cluster) or the **AmazonEKS_CNI_IPv6_Policy** (that you [created yourself](cni-iam-role.md#cni-iam-role-create-ipv6-policy) if you have an `IPv6` cluster) is attached to your [Amazon EKS node IAM role](create-node-role.md), we recommend assigning it to an IAM role that you associate with the Kubernetes `aws-node` service account instead. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

1. We recommend blocking Pod access to IMDS if the following conditions are true:
   + You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
   + No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

   For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

# Create self-managed Bottlerocket nodes
<a name="launch-node-bottlerocket"></a>

**Note**  
Managed node groups might offer some advantages for your use case. For more information, see [Simplify node lifecycle with managed node groups](managed-node-groups.md).

This topic describes how to launch Auto Scaling groups of [Bottlerocket](https://aws.amazon.com/bottlerocket/) nodes that register with your Amazon EKS cluster. Bottlerocket is a Linux-based open-source operating system from AWS that you can use for running containers on virtual machines or bare metal hosts. After the nodes join the cluster, you can deploy Kubernetes applications to them. For more information about Bottlerocket, see [Using a Bottlerocket AMI with Amazon EKS](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-EKS.md) on GitHub and [Custom AMI support](https://eksctl.io/usage/custom-ami-support/) in the `eksctl` documentation.

For information about in-place upgrades, see [Bottlerocket Update Operator](https://github.com/bottlerocket-os/bottlerocket-update-operator) on GitHub.

**Important**  
Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 instance prices. For more information, see [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/).
You can launch Bottlerocket nodes in Amazon EKS extended clusters on AWS Outposts, but you can’t launch them in local clusters on AWS Outposts. For more information, see [Deploy Amazon EKS on-premises with AWS Outposts](eks-outposts.md).
You can deploy to Amazon EC2 instances with `x86` or Arm processors. However, you can’t deploy to instances that have Inferentia chips.
Bottlerocket is compatible with AWS CloudFormation. However, there is no official CloudFormation template that can be copied to deploy Bottlerocket nodes for Amazon EKS.
Bottlerocket images don’t come with an SSH server or a shell. You can use out-of-band access methods to allow SSH by enabling the admin container, and you can pass some bootstrapping configuration steps with user data. For more information, see these sections in the [Bottlerocket README](https://github.com/bottlerocket-os/bottlerocket) on GitHub:  
 [Exploration](https://github.com/bottlerocket-os/bottlerocket#exploration) 
 [Admin container](https://github.com/bottlerocket-os/bottlerocket#admin-container) 
 [Kubernetes settings](https://github.com/bottlerocket-os/bottlerocket#kubernetes-settings) 
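The admin container linked above is enabled through Bottlerocket user data, which is TOML. A minimal sketch of the relevant setting:

```
[settings.host-containers.admin]
enabled = true
```

With the admin container enabled, you can open a shell on the node as described in the README sections above.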

This procedure requires `eksctl` version `0.215.0` or later. You can check your version with the following command:

```
eksctl version
```

For instructions on how to install or upgrade `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

**Note**  
This procedure only works for clusters that were created with `eksctl`.

1. Copy the following contents to your device. Replace *my-cluster* with the name of your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in. Replace *ng-bottlerocket* with a name for your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters. To deploy on Arm instances, replace *m5.large* with an Arm instance type. Replace *my-ec2-keypair-name* with the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes using SSH after they launch. If you don’t already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see [Amazon EC2 key pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*. Replace all remaining example values with your own values. Once you’ve made the replacements, run the modified command to create the `bottlerocket.yaml` file.

   If specifying an Arm Amazon EC2 instance type, then review the considerations in [Amazon EKS optimized Arm Amazon Linux AMIs](eks-optimized-ami.md#arm-ami) before deploying. For instructions on how to deploy using a custom AMI, see [Building Bottlerocket](https://github.com/bottlerocket-os/bottlerocket/blob/develop/BUILDING.md) on GitHub and [Custom AMI support](https://eksctl.io/usage/custom-ami-support/) in the `eksctl` documentation. To deploy a managed node group, deploy a custom AMI using a launch template. For more information, see [Customize managed nodes with launch templates](launch-templates.md).
**Important**  
To deploy a node group to AWS Outposts, AWS Wavelength, or AWS Local Zone subnets, don’t pass AWS Outposts, AWS Wavelength, or AWS Local Zone subnets when you create the cluster. You must specify the subnets in the following example. For more information see [Create a nodegroup from a config file](https://eksctl.io/usage/nodegroups/#creating-a-nodegroup-from-a-config-file) and [Config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation. Replace *region-code* with the AWS Region that your cluster is in.

   ```
   cat >bottlerocket.yaml <<EOF
   ---
   apiVersion: eksctl.io/v1alpha5
   kind: ClusterConfig
   
   metadata:
     name: my-cluster
     region: region-code
     version: '1.35'
   
   iam:
     withOIDC: true
   
   nodeGroups:
     - name: ng-bottlerocket
       instanceType: m5.large
       desiredCapacity: 3
       amiFamily: Bottlerocket
       ami: auto-ssm
       iam:
          attachPolicyARNs:
             - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
             - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
             - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
             - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
       ssh:
           allow: true
           publicKeyName: my-ec2-keypair-name
   EOF
   ```

1. Deploy your nodes with the following command.

   ```
   eksctl create nodegroup --config-file=bottlerocket.yaml
   ```

   Several lines of output appear while the nodes are created. One of the last lines resembles the following example.

   ```
   [✔]  created 1 nodegroup(s) in cluster "my-cluster"
   ```

1. (Optional) Create a Kubernetes [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) on a Bottlerocket node using the [Amazon EBS CSI Plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver). The default Amazon EBS driver relies on file system tools that aren’t included with Bottlerocket. For more information about creating a storage class using the driver, see [Use Kubernetes volume storage with Amazon EBS](ebs-csi.md).

1. (Optional) By default, `kube-proxy` sets the `nf_conntrack_max` kernel parameter to a default value that may differ from what Bottlerocket originally sets at boot. To keep Bottlerocket’s [default setting](https://github.com/bottlerocket-os/bottlerocket-core-kit/blob/develop/packages/release/release-sysctl.conf), edit the `kube-proxy` configuration with the following command.

   ```
   kubectl edit -n kube-system daemonset kube-proxy
   ```

   Add the `--conntrack-max-per-core` and `--conntrack-min` flags to the `kube-proxy` arguments, as shown in the following example. A setting of `0` means the value is left unchanged.

   ```
         containers:
         - command:
           - kube-proxy
           - --v=2
           - --config=/var/lib/kube-proxy-config/config
           - --conntrack-max-per-core=0
           - --conntrack-min=0
   ```

1. (Optional) Deploy a [sample application](sample-deployment.md) to test your Bottlerocket nodes.

1. We recommend blocking Pod access to IMDS if the following conditions are true:
   + You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
   + No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

   For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
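For the optional Amazon EBS CSI driver step above, a minimal `StorageClass` might look like the following sketch (the class name `ebs-gp3` is illustrative):

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```

`PersistentVolumeClaim`s that reference this class are provisioned by the driver when a Pod that uses the claim is scheduled to a node.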

# Create self-managed Microsoft Windows nodes
<a name="launch-windows-workers"></a>

This topic describes how to launch Auto Scaling groups of Windows nodes that register with your Amazon EKS cluster. After the nodes join the cluster, you can deploy Kubernetes applications to them.

**Important**  
Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 instance prices. For more information, see [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/).
You can launch Windows nodes in Amazon EKS extended clusters on AWS Outposts, but you can’t launch them in local clusters on AWS Outposts. For more information, see [Deploy Amazon EKS on-premises with AWS Outposts](eks-outposts.md).

Enable Windows support for your cluster. We recommend that you review important considerations before you launch a Windows node group. For more information, see [Enable Windows support](windows-support.md#enable-windows-support).

You can launch self-managed Windows nodes with either of the following:
+  [`eksctl`](#eksctl_create_windows_nodes) 
+  [AWS Management Console](#console_create_windows_nodes) 

## `eksctl`
<a name="eksctl_create_windows_nodes"></a>

 **Launch self-managed Windows nodes using `eksctl`** 

This procedure requires that you have installed `eksctl`, and that your `eksctl` version is at least `0.215.0`. You can check your version with the following command.

```
eksctl version
```

For instructions on how to install or upgrade `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

**Note**  
This procedure only works for clusters that were created with `eksctl`.

1. (Optional) If the **AmazonEKS_CNI_Policy** managed IAM policy (if you have an `IPv4` cluster) or the **AmazonEKS_CNI_IPv6_Policy** (that you [created yourself](cni-iam-role.md#cni-iam-role-create-ipv6-policy) if you have an `IPv6` cluster) is attached to your [Amazon EKS node IAM role](create-node-role.md), we recommend assigning it to an IAM role that you associate with the Kubernetes `aws-node` service account instead. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

1. This procedure assumes that you have an existing cluster. If you don’t already have an Amazon EKS cluster and an Amazon Linux node group to add a Windows node group to, we recommend that you follow [Get started with Amazon EKS – `eksctl`](getting-started-eksctl.md). This guide provides a complete walkthrough for how to create an Amazon EKS cluster with Amazon Linux nodes.

   Create your node group with the following command. Replace *region-code* with the AWS Region that your cluster is in. Replace *my-cluster* with your cluster name. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in. Replace *ng-windows* with a name for your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, and the remaining characters can include hyphens and underscores. You can replace *2019* with `2022` to use Windows Server 2022 or `2025` to use Windows Server 2025. Replace the rest of the example values with your own values.
**Important**  
To deploy a node group to AWS Outposts, AWS Wavelength, or AWS Local Zone subnets, don’t pass the AWS Outposts, Wavelength, or Local Zone subnets when you create the cluster. Create the node group with a config file, specifying the AWS Outposts, Wavelength, or Local Zone subnets. For more information, see [Create a nodegroup from a config file](https://eksctl.io/usage/nodegroups/#creating-a-nodegroup-from-a-config-file) and [Config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation.

   ```
   eksctl create nodegroup \
       --region region-code \
       --cluster my-cluster \
       --name ng-windows \
       --node-type t2.large \
       --nodes 3 \
       --nodes-min 1 \
       --nodes-max 4 \
       --managed=false \
       --node-ami-family WindowsServer2019FullContainer
   ```
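
   For the AWS Outposts, Wavelength, and Local Zone case called out in the preceding Important note, a config-file equivalent of the command might look like the following sketch. The subnet ID is a placeholder for your own, and you’d pass the file with `eksctl create nodegroup --config-file=windows-nodegroup.yaml`. Confirm field names against the [Config file schema](https://eksctl.io/usage/schema/) for your `eksctl` version.

   ```
   ---
   apiVersion: eksctl.io/v1alpha5
   kind: ClusterConfig

   metadata:
     name: my-cluster
     region: region-code

   nodeGroups:
     - name: ng-windows
       amiFamily: WindowsServer2019FullContainer
       instanceType: t2.large
       desiredCapacity: 3
       minSize: 1
       maxSize: 4
       subnets:
         - subnet-0123456789abcdef0
   ```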
**Note**  
If nodes fail to join the cluster, see [Nodes fail to join cluster](troubleshooting.md#worker-node-fail) in the Troubleshooting guide.
To see the available options for an `eksctl` command, enter the following command, replacing *command* with the name of the command.  

     ```
     eksctl command --help
     ```

   An example output is as follows. Several lines are output while the nodes are created. One of the last lines of output is the following example line.

   ```
   [✔]  created 1 nodegroup(s) in cluster "my-cluster"
   ```

1. (Optional) Deploy a [sample application](sample-deployment.md) to test your cluster and Windows nodes.

1. We recommend blocking Pod access to IMDS if the following conditions are true:
   + You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
   + No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

   For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
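
   If you create node groups with an `eksctl` config file, one way to apply this recommendation is the nodegroup-level `disablePodIMDS` setting (a sketch; confirm against the [Config file schema](https://eksctl.io/usage/schema/) for your `eksctl` version):

   ```
   nodeGroups:
     - name: ng-windows
       disablePodIMDS: true
   ```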

## AWS Management Console
<a name="console_create_windows_nodes"></a>

 **Prerequisites** 
+ An existing Amazon EKS cluster and a Linux node group. If you don’t have these resources, we recommend that you create them using one of our guides in [Get started with Amazon EKS](getting-started.md). These guides describe how to create an Amazon EKS cluster with Linux nodes.
+ An existing VPC and security group that meet the requirements for an Amazon EKS cluster. For more information, see [View Amazon EKS networking requirements for VPC and subnets](network-reqs.md) and [View Amazon EKS security group requirements for clusters](sec-group-reqs.md). The guides in [Get started with Amazon EKS](getting-started.md) create a VPC that meets the requirements. Alternatively, you can follow [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md) to create one manually.
+ An existing Amazon EKS cluster that uses a VPC and security group that meet the requirements of an Amazon EKS cluster. For more information, see [Create an Amazon EKS cluster](create-cluster.md). If you have subnets in the AWS Region where you have AWS Outposts, AWS Wavelength, or AWS Local Zones enabled, those subnets must not have been passed in when you created the cluster.

 **Step 1: Launch self-managed Windows nodes using the AWS Management Console** 

1. Wait for your cluster status to show as `ACTIVE`. If you launch your nodes before the cluster is active, the nodes fail to register with the cluster and you need to relaunch them.

1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).

1. Choose **Create stack**.

1. For **Specify template**, select **Amazon S3 URL**.

1. Copy the following URL and paste it into **Amazon S3 URL**.

   ```
   https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2023-02-09/amazon-eks-windows-nodegroup.yaml
   ```

1. Select **Next** twice.

1. On the **Quick create stack** page, enter the following parameters accordingly:
   +  **Stack name**: Choose a stack name for your AWS CloudFormation stack. For example, you can call it `my-cluster-nodes`.
   +  **ClusterName**: Enter the name that you used when you created your Amazon EKS cluster.
**Important**  
This name must exactly match the name that you used in [Step 1: Create your Amazon EKS cluster](getting-started-console.md#eks-create-cluster). Otherwise, your nodes can’t join the cluster.
   +  **ClusterControlPlaneSecurityGroup**: Choose the security group from the AWS CloudFormation output that you generated when you created your [VPC](creating-a-vpc.md). The following steps show one method to retrieve the applicable group.

     1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

     1. Choose the name of the cluster.

     1. Choose the **Networking** tab.

     1. Use the **Additional security groups** value as a reference when selecting from the **ClusterControlPlaneSecurityGroup** dropdown list.
   +  **NodeGroupName**: Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that’s created for your nodes. The node group name can’t be longer than 63 characters. It must start with a letter or digit, and the remaining characters can include hyphens and underscores.
   +  **NodeAutoScalingGroupMinSize**: Enter the minimum number of nodes that your node Auto Scaling group can scale in to.
   +  **NodeAutoScalingGroupDesiredCapacity**: Enter the desired number of nodes to scale to when your stack is created.
   +  **NodeAutoScalingGroupMaxSize**: Enter the maximum number of nodes that your node Auto Scaling group can scale out to.
   +  **NodeInstanceType**: Choose an instance type for your nodes. For more information, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md).
**Note**  
The supported instance types for the latest version of the [Amazon VPC CNI plugin for Kubernetes](https://github.com/aws/amazon-vpc-cni-k8s) are listed in [vpc\_ip\_resource\_limit.go](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/pkg/vpc/vpc_ip_resource_limit.go) on GitHub. You might need to update your CNI version to use the latest supported instance types. For more information, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).
   +  **NodeImageIdSSMParam**: Pre-populated with the Amazon EC2 Systems Manager parameter of the current recommended Amazon EKS optimized Windows Core AMI ID. To use the full version of Windows, replace *Core* with `Full`.
   +  **NodeImageId**: (Optional) If you’re using your own custom AMI (instead of an Amazon EKS optimized AMI), enter a node AMI ID for your AWS Region. If you specify a value for this field, it overrides any values in the **NodeImageIdSSMParam** field.
   +  **NodeVolumeSize**: Specify a root volume size for your nodes, in GiB.
   +  **KeyName**: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes with SSH after they launch. If you don’t already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see [Amazon EC2 key pairs](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.
**Note**  
If you don’t provide a key pair here, the AWS CloudFormation stack fails to be created.
   +  **BootstrapArguments**: Specify any optional arguments to pass to the node bootstrap script, such as extra `kubelet` arguments using `-KubeletExtraArgs`.
   +  **DisableIMDSv1**: By default, each node supports the Instance Metadata Service Version 1 (IMDSv1) and IMDSv2. You can disable IMDSv1. To prevent future nodes and Pods in the node group from using IMDSv1, set **DisableIMDSv1** to **true**. For more information about IMDS, see [Configuring the instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html).
   +  **VpcId**: Select the ID for the [VPC](creating-a-vpc.md) that you created.
   +  **NodeSecurityGroups**: Select the security group that was created for your Linux node group when you created your [VPC](creating-a-vpc.md). If your Linux nodes have more than one security group attached to them, specify all of them. This is the case, for example, if the Linux node group was created with `eksctl`.
   +  **Subnets**: Choose the subnets that you created. If you created your VPC using the steps in [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md), then specify only the private subnets within the VPC for your nodes to launch into.
**Important**  
If any of the subnets are public subnets, then they must have the automatic public IP address assignment setting enabled. If the setting isn’t enabled for the public subnet, then any nodes that you deploy to that public subnet won’t be assigned a public IP address and won’t be able to communicate with the cluster or other AWS services. If the subnet was deployed before March 26, 2020 using either of the [Amazon EKS AWS CloudFormation VPC templates](creating-a-vpc.md), or by using `eksctl`, then automatic public IP address assignment is disabled for public subnets. For information about how to enable public IP address assignment for a subnet, see [Modifying the public IPv4 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-public-ip). If the node is deployed to a private subnet, then it’s able to communicate with the cluster and other AWS services through a NAT gateway.
If the subnets don’t have internet access, then make sure that you’re aware of the considerations and extra steps in [Deploy private clusters with limited internet access](private-clusters.md).
If you select AWS Outposts, Wavelength, or Local Zone subnets, then the subnets must not have been passed in when you created the cluster.

1. Acknowledge that the stack might create IAM resources, and then choose **Create stack**.

1. When your stack has finished creating, select it in the console and choose **Outputs**.

1. Record the **NodeInstanceRole** for the node group that was created. You need this when you configure your Amazon EKS Windows nodes.

 **Step 2: Enable nodes to join your cluster** 

1. Check to see if you already have an `aws-auth` `ConfigMap`.

   ```
   kubectl describe configmap -n kube-system aws-auth
   ```

1. If you are shown an `aws-auth` `ConfigMap`, then update it as needed.

   1. Open the `ConfigMap` for editing.

      ```
      kubectl edit -n kube-system configmap/aws-auth
      ```

   1. Add new `mapRoles` entries as needed. Set the `rolearn` values to the **NodeInstanceRole** values that you recorded in the previous procedures.

      ```
      [...]
      data:
        mapRoles: |
          - rolearn: <ARN of linux instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
          - rolearn: <ARN of windows instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
              - eks:kube-proxy-windows
      [...]
      ```

   1. Save the file and exit your text editor.

1. If you received the error `Error from server (NotFound): configmaps "aws-auth" not found`, then apply the stock `ConfigMap`.

   1. Download the configuration map.

      ```
      curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm-windows.yaml
      ```

   1. In the `aws-auth-cm-windows.yaml` file, set the `rolearn` values to the applicable **NodeInstanceRole** values that you recorded in the previous procedures. You can do this with a text editor, or by replacing the example values and running the following command:

      ```
      sed -i.bak -e 's|<ARN of linux instance role (not instance profile)>|my-node-linux-instance-role|' \
          -e 's|<ARN of windows instance role (not instance profile)>|my-node-windows-instance-role|' aws-auth-cm-windows.yaml
      ```
**Important**  
Don’t modify any other lines in this file.
Don’t use the same IAM role for both Windows and Linux nodes.
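
      To sanity-check the substitution before you apply the file, you can run the same `sed` pattern against a stand-in file. The following is illustrative only; the file name and the role ARN are made up.

      ```
      # Create a stand-in file containing one of the placeholders from aws-auth-cm-windows.yaml
      printf '%s\n' '    - rolearn: <ARN of windows instance role (not instance profile)>' > aws-auth-demo.yaml

      # The same substitution style that the documented command performs
      sed -i.bak -e 's|<ARN of windows instance role (not instance profile)>|arn:aws:iam::111122223333:role/my-node-windows-instance-role|' aws-auth-demo.yaml

      # Confirm the placeholder is gone and the role ARN is in place
      grep rolearn aws-auth-demo.yaml
      ```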

   1. Apply the configuration. This command might take a few minutes to finish.

      ```
      kubectl apply -f aws-auth-cm-windows.yaml
      ```

1. Watch the status of your nodes and wait for them to reach the `Ready` status.

   ```
   kubectl get nodes --watch
   ```

   Enter `Ctrl`+`C` to return to a shell prompt.
**Note**  
If you receive any authorization or resource type errors, see [Unauthorized or access denied (`kubectl`)](troubleshooting.md#unauthorized) in the troubleshooting topic.

   If nodes fail to join the cluster, then see [Nodes fail to join cluster](troubleshooting.md#worker-node-fail) in the Troubleshooting chapter.

 **Step 3: Additional actions** 

1. (Optional) Deploy a [sample application](sample-deployment.md) to test your cluster and Windows nodes.

1. (Optional) If the **AmazonEKS_CNI_Policy** managed IAM policy (if you have an `IPv4` cluster) or the **AmazonEKS_CNI_IPv6_Policy** (that you [created yourself](cni-iam-role.md#cni-iam-role-create-ipv6-policy) if you have an `IPv6` cluster) is attached to your [Amazon EKS node IAM role](create-node-role.md), we recommend assigning it to an IAM role that you associate with the Kubernetes `aws-node` service account instead. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

1. We recommend blocking Pod access to IMDS if the following conditions are true:
   + You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
   + No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

   For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

# Create self-managed Ubuntu Linux nodes
<a name="launch-node-ubuntu"></a>

**Note**  
Managed node groups might offer some advantages for your use case. For more information, see [Simplify node lifecycle with managed node groups](managed-node-groups.md).

This topic describes how to launch Auto Scaling groups of [Ubuntu on Amazon Elastic Kubernetes Service (EKS)](https://cloud-images.ubuntu.com/aws-eks/) or [Ubuntu Pro on Amazon Elastic Kubernetes Service (EKS)](https://ubuntu.com/blog/ubuntu-pro-for-eks-is-now-generally-available) nodes that register with your Amazon EKS cluster. Ubuntu and Ubuntu Pro for EKS are based on the official Ubuntu Minimal LTS, include the custom kernel that is jointly developed with AWS, and have been built specifically for EKS. Ubuntu Pro adds additional security coverage by supporting EKS extended support periods, kernel livepatch, FIPS compliance, and the ability to run unlimited Pro containers.

After the nodes join the cluster, you can deploy containerized applications to them. For more information, visit the documentation for [Ubuntu on AWS](https://documentation.ubuntu.com/aws/en/latest/) and [Custom AMI support](https://eksctl.io/usage/custom-ami-support/) in the `eksctl` documentation.

**Important**  
Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 instance prices. For more information, see [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/).
You can launch Ubuntu nodes in Amazon EKS extended clusters on AWS Outposts, but you can’t launch them in local clusters on AWS Outposts. For more information, see [Deploy Amazon EKS on-premises with AWS Outposts](eks-outposts.md).
You can deploy to Amazon EC2 instances with `x86` or Arm processors. However, for instances that have Inferentia chips, you might need to install the [Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/) first.

This procedure requires `eksctl` version `0.215.0` or later. You can check your version with the following command:

```
eksctl version
```

For instructions on how to install or upgrade `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

**Note**  
This procedure only works for clusters that were created with `eksctl`.

1. Copy the following contents to your device. Replace `my-cluster` with the name of your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. Replace `ng-ubuntu` with a name for your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, and the remaining characters can include hyphens and underscores. To deploy on Arm instances, replace `m5.large` with an Arm instance type. Replace `my-ec2-keypair-name` with the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes with SSH after they launch. If you don’t already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see [Amazon EC2 key pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*. After you’ve made the replacements, run the modified command to create the `ubuntu.yaml` file.
**Important**  
To deploy a node group to AWS Outposts, AWS Wavelength, or AWS Local Zone subnets, don’t pass those subnets when you create the cluster. Instead, specify them in the config file, as in the following example. For more information, see [Create a nodegroup from a config file](https://eksctl.io/usage/nodegroups/#creating-a-nodegroup-from-a-config-file) and [Config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation. Replace *region-code* with the AWS Region that your cluster is in.

   ```
   cat >ubuntu.yaml <<EOF
   ---
   apiVersion: eksctl.io/v1alpha5
   kind: ClusterConfig
   
   metadata:
     name: my-cluster
     region: region-code
     version: '1.35'
   
   iam:
     withOIDC: true
   
   nodeGroups:
     - name: ng-ubuntu
       instanceType: m5.large
       desiredCapacity: 3
       amiFamily: Ubuntu2204
       iam:
          attachPolicyARNs:
             - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
             - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
             - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
             - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
       ssh:
           allow: true
           publicKeyName: my-ec2-keypair-name
   EOF
   ```

   To create an Ubuntu Pro node group, change the `amiFamily` value to `UbuntuPro2204`.
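
   Before you run the command in the next step, you can optionally check a proposed node group name against the constraints described above with a small shell helper. This is illustrative only; `valid_nodegroup_name` is not part of `eksctl` or the AWS CLI.

   ```
   # Return success if the name is no longer than 63 characters, starts with a
   # letter or digit, and otherwise contains only letters, digits, hyphens,
   # and underscores.
   valid_nodegroup_name() {
     case "$1" in
       [A-Za-z0-9]*) [ "${#1}" -le 63 ] && ! printf '%s' "$1" | grep -q '[^A-Za-z0-9_-]' ;;
       *) return 1 ;;
     esac
   }

   valid_nodegroup_name "ng-ubuntu" && echo "ng-ubuntu: ok"
   valid_nodegroup_name "-bad-name" || echo "-bad-name: rejected"
   ```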

1. Deploy your nodes with the following command.

   ```
   eksctl create nodegroup --config-file=ubuntu.yaml
   ```

   An example output is as follows. Several lines are output while the nodes are created. One of the last lines of output is the following example line.

   ```
   [✔]  created 1 nodegroup(s) in cluster "my-cluster"
   ```

1. (Optional) Deploy a [sample application](sample-deployment.md) to test your Ubuntu nodes.

1. We recommend blocking Pod access to IMDS if the following conditions are true:
   + You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
   + No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

   For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

# Update self-managed nodes for your cluster
<a name="update-workers"></a>

When a new Amazon EKS optimized AMI is released, consider replacing the nodes in your self-managed node group with the new AMI. Likewise, if you have updated the Kubernetes version for your Amazon EKS cluster, update your nodes to the same Kubernetes version.

**Important**  
This topic covers node updates for self-managed nodes. If you are using [managed node groups](managed-node-groups.md), see [Update a managed node group for your cluster](update-managed-node-group.md).

There are two basic ways to update self-managed node groups in your clusters to use a new AMI:

 ** [Migrate applications to a new node group](migrate-stack.md) **   
Create a new node group and migrate your Pods to that group. Migrating to a new node group is more graceful than simply updating the AMI ID in an existing AWS CloudFormation stack. This is because the migration process [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) the old node group as `NoSchedule` and drains the nodes after a new stack is ready to accept the existing Pod workload.

 ** [Update an AWS CloudFormation node stack](update-stack.md) **   
Update the AWS CloudFormation stack for an existing node group to use the new AMI. This method isn’t supported for node groups that were created with `eksctl`.

# Migrate applications to a new node group
<a name="migrate-stack"></a>

This topic describes how you can create a new node group, gracefully migrate your existing applications to the new group, and remove the old node group from your cluster. You can migrate to a new node group using `eksctl` or the AWS Management Console.
+  [`eksctl`](#eksctl_migrate_apps) 
+  [AWS Management Console and AWS CLI](#console_migrate_apps) 

## `eksctl`
<a name="eksctl_migrate_apps"></a>

 **Migrate your applications to a new node group with `eksctl`** 

For more information on using `eksctl` for migration, see [Unmanaged nodegroups](https://eksctl.io/usage/nodegroup-unmanaged/) in the `eksctl` documentation.

This procedure requires `eksctl` version `0.215.0` or later. You can check your version with the following command:

```
eksctl version
```

For instructions on how to install or upgrade `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

**Note**  
This procedure only works for clusters and node groups that were created with `eksctl`.

1. Retrieve the names of your existing node groups, replacing *my-cluster* with your cluster name.

   ```
   eksctl get nodegroups --cluster=my-cluster
   ```

   An example output is as follows.

   ```
   CLUSTER      NODEGROUP          CREATED               MIN SIZE      MAX SIZE     DESIRED CAPACITY     INSTANCE TYPE     IMAGE ID
   default      standard-nodes   2019-05-01T22:26:58Z  1             4            3                    t3.medium         ami-05a71d034119ffc12
   ```
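
   If you want just the node group name for use in a later command (such as the delete step at the end of this procedure), you can extract it from the tabular output. The following is a sketch that works on a saved copy of the output.

   ```
   # Save a copy of the example output of `eksctl get nodegroups --cluster=my-cluster`
   printf '%s\n' \
       'CLUSTER      NODEGROUP          CREATED               MIN SIZE      MAX SIZE     DESIRED CAPACITY     INSTANCE TYPE     IMAGE ID' \
       'default      standard-nodes   2019-05-01T22:26:58Z  1             4            3                    t3.medium         ami-05a71d034119ffc12' \
       > nodegroups.txt

   # Skip the header row and print the NODEGROUP column
   awk 'NR > 1 {print $2}' nodegroups.txt
   ```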

1. Launch a new node group with `eksctl` with the following command. In the command, replace every *example value* with your own values. The version number can’t be later than the Kubernetes version for your control plane. Also, it can’t be more than two minor versions earlier than the Kubernetes version for your control plane. We recommend that you use the same version as your control plane.

   We recommend blocking Pod access to IMDS if the following conditions are true:
   + You plan to assign IAM roles to all of your Kubernetes service accounts so that Pods only have the minimum permissions that they need.
   + No Pods in the cluster require access to the Amazon EC2 instance metadata service (IMDS) for other reasons, such as retrieving the current AWS Region.

     For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

     To block Pod access to IMDS, add the `--disable-pod-imds` option to the following command.
**Note**  
For more available flags and their descriptions, see https://eksctl.io/.

   ```
   eksctl create nodegroup \
     --cluster my-cluster \
     --version 1.35 \
     --name standard-nodes-new \
     --node-type t3.medium \
     --nodes 3 \
     --nodes-min 1 \
     --nodes-max 4 \
     --managed=false
   ```

1. When the previous command completes, verify that all of your nodes have reached the `Ready` state with the following command:

   ```
   kubectl get nodes
   ```

1. Delete the original node group with the following command. In the command, replace every *example value* with your cluster and node group names:

   ```
   eksctl delete nodegroup --cluster my-cluster --name standard-nodes-old
   ```

## AWS Management Console and AWS CLI
<a name="console_migrate_apps"></a>

 **Migrate your applications to a new node group with the AWS Management Console and AWS CLI** 

1. Launch a new node group by following the steps that are outlined in [Create self-managed Amazon Linux nodes](launch-workers.md).

1. When your stack has finished creating, select it in the console and choose **Outputs**.

1.  Record the **NodeInstanceRole** for the node group that was created. You need this to add the new Amazon EKS nodes to your cluster.
**Note**  
If you attached any additional IAM policies to your old node group IAM role, attach those same policies to your new node group IAM role to maintain that functionality on the new group. This applies to you if you added permissions for the [Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler), for example.

1. Update the security groups for both node groups so that they can communicate with each other. For more information, see [View Amazon EKS security group requirements for clusters](sec-group-reqs.md).

   1. Record the security group IDs for both node groups. This is shown as the **NodeSecurityGroup** value in the AWS CloudFormation stack outputs.

      You can use the following AWS CLI commands to get the security group IDs from the stack names. In these commands, `oldNodes` is the AWS CloudFormation stack name for your older node stack, and `newNodes` is the name of the stack that you are migrating to. Replace every *example value* with your own values.

      ```
      oldNodes="old_node_CFN_stack_name"
      newNodes="new_node_CFN_stack_name"
      
      oldSecGroup=$(aws cloudformation describe-stack-resources --stack-name $oldNodes \
      --query 'StackResources[?ResourceType==`AWS::EC2::SecurityGroup`].PhysicalResourceId' \
      --output text)
      newSecGroup=$(aws cloudformation describe-stack-resources --stack-name $newNodes \
      --query 'StackResources[?ResourceType==`AWS::EC2::SecurityGroup`].PhysicalResourceId' \
      --output text)
      ```

   1. Add ingress rules to each node security group so that they accept traffic from each other.

      The following AWS CLI commands add inbound rules to each security group that allow all traffic on all protocols from the other security group. This configuration allows Pods in each node group to communicate with each other while you’re migrating your workload to the new group.

      ```
      aws ec2 authorize-security-group-ingress --group-id $oldSecGroup \
      --source-group $newSecGroup --protocol -1
      aws ec2 authorize-security-group-ingress --group-id $newSecGroup \
      --source-group $oldSecGroup --protocol -1
      ```

1. Edit the `aws-auth` configmap to map the new node instance role in RBAC.

   ```
   kubectl edit configmap -n kube-system aws-auth
   ```

   Add a new `mapRoles` entry for the new node group.

   ```
   apiVersion: v1
   data:
     mapRoles: |
       - rolearn: ARN of instance role (not instance profile)
         username: system:node:{{EC2PrivateDNSName}}
         groups:
           - system:bootstrappers
            - system:nodes
       - rolearn: arn:aws:iam::111122223333:role/nodes-1-16-NodeInstanceRole-U11V27W93CX5
         username: system:node:{{EC2PrivateDNSName}}
         groups:
           - system:bootstrappers
           - system:nodes
   ```

   Replace the *ARN of instance role (not instance profile)* snippet with the **NodeInstanceRole** value that you recorded in a [previous step](#node-instance-role-step). Then, save and close the file to apply the updated configmap.

1. Watch the status of your nodes and wait for your new nodes to join your cluster and reach the `Ready` status.

   ```
   kubectl get nodes --watch
   ```

1. (Optional) If you’re using the [Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler), scale the deployment down to zero (0) replicas to avoid conflicting scaling actions.

   ```
   kubectl scale deployments/cluster-autoscaler --replicas=0 -n kube-system
   ```

1. Use the following command to taint each of the nodes that you want to remove with `NoSchedule`. This is so that new Pods aren’t scheduled or rescheduled on the nodes that you’re replacing. For more information, see [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) in the Kubernetes documentation.

   ```
   kubectl taint nodes node_name key=value:NoSchedule
   ```

   If you’re upgrading your nodes to a new Kubernetes version, you can identify and taint all of the nodes of a particular Kubernetes version (in this case, `1.33`) with the following code snippet. The version number can’t be later than the Kubernetes version of your control plane. It also can’t be more than two minor versions earlier than the Kubernetes version of your control plane. We recommend that you use the same version as your control plane.

   ```
   K8S_VERSION=1.33
   nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==\"v$K8S_VERSION\")].metadata.name}")
   for node in ${nodes[@]}
   do
       echo "Tainting $node"
       kubectl taint nodes $node key=value:NoSchedule
   done
   ```
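   You can confirm that the taints were applied by listing each node together with its taint keys. This is a sketch; the jsonpath template is illustrative.

   ```
   # Print each node name followed by the keys of its taints.
   TAINT_QUERY='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
   kubectl get nodes -o jsonpath="$TAINT_QUERY"
   ```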

1. Determine your cluster’s DNS provider.

   ```
   kubectl get deployments -l k8s-app=kube-dns -n kube-system
   ```

   An example output is as follows. This cluster is using CoreDNS for DNS resolution, but your cluster might return `kube-dns` instead.

   ```
   NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
   coredns   1         1         1            1           31m
   ```

1. If your current deployment is running fewer than two replicas, scale out the deployment to two replicas. Replace *coredns* with `kube-dns` if your previous command output returned that instead.

   ```
   kubectl scale deployments/coredns --replicas=2 -n kube-system
   ```
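   Before draining nodes, you can wait for the scaled-out DNS deployment to report all of its replicas available. This is a sketch; replace *coredns* with `kube-dns` if that's your cluster's DNS provider, and adjust the timeout as needed.

   ```
   # Block until the rollout completes, or fail after the timeout.
   DNS_DEPLOYMENT="coredns"   # or kube-dns
   kubectl rollout status deployment/"$DNS_DEPLOYMENT" -n kube-system --timeout=120s
   ```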

1. Drain each of the nodes that you want to remove from your cluster with the following command:

   ```
   kubectl drain node_name --ignore-daemonsets --delete-emptydir-data
   ```

   If you’re upgrading your nodes to a new Kubernetes version, identify and drain all of the nodes of a particular Kubernetes version (in this case, *1.33*) with the following code snippet.

   ```
   K8S_VERSION=1.33
   nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==\"v$K8S_VERSION\")].metadata.name}")
   for node in ${nodes[@]}
   do
       echo "Draining $node"
       kubectl drain $node --ignore-daemonsets --delete-emptydir-data
   done
   ```
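   After draining, each affected node shows `SchedulingDisabled` in the output of `kubectl get nodes`. As a sketch, you can also confirm that only DaemonSet Pods remain on a drained node; the node name below is a placeholder.

   ```
   # List any Pods still scheduled on the drained node.
   NODE_NAME="node_name"   # placeholder; substitute a drained node's name
   kubectl get pods --all-namespaces --field-selector spec.nodeName="$NODE_NAME"
   ```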

1. After your old nodes finish draining, revoke the security group inbound rules that you authorized earlier. Then, delete the AWS CloudFormation stack to terminate the instances.
**Note**  
If you attached any additional IAM policies to your old node group IAM role, such as adding permissions for the [Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler), detach those additional policies from the role before you delete your AWS CloudFormation stack.

   1. Revoke the inbound rules that you created for your node security groups earlier. In these commands, `oldNodes` is the AWS CloudFormation stack name for your older node stack, and `newNodes` is the name of the stack that you are migrating to.

      ```
      oldNodes="old_node_CFN_stack_name"
      newNodes="new_node_CFN_stack_name"
      
      oldSecGroup=$(aws cloudformation describe-stack-resources --stack-name $oldNodes \
      --query 'StackResources[?ResourceType==`AWS::EC2::SecurityGroup`].PhysicalResourceId' \
      --output text)
      newSecGroup=$(aws cloudformation describe-stack-resources --stack-name $newNodes \
      --query 'StackResources[?ResourceType==`AWS::EC2::SecurityGroup`].PhysicalResourceId' \
      --output text)
      aws ec2 revoke-security-group-ingress --group-id $oldSecGroup \
      --source-group $newSecGroup --protocol -1
      aws ec2 revoke-security-group-ingress --group-id $newSecGroup \
      --source-group $oldSecGroup --protocol -1
      ```
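      You can verify that the cross-group rules were removed by listing any security group IDs that still appear in a group's ingress rules. This is a sketch; it expects the `$oldSecGroup` variable from the previous commands, and the JMESPath query is illustrative.

      ```
      # List the security group IDs remaining in the old group's ingress rules.
      PAIR_QUERY='SecurityGroups[0].IpPermissions[].UserIdGroupPairs[].GroupId'
      aws ec2 describe-security-groups --group-ids $oldSecGroup \
      --query "$PAIR_QUERY" --output text
      ```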

   1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).

   1. Select your old node stack.

   1. Choose **Delete**.

   1. In the **Delete stack** confirmation dialog box, choose **Delete stack**.

1. Edit the `aws-auth` configmap to remove the old node instance role from RBAC.

   ```
   kubectl edit configmap -n kube-system aws-auth
   ```

   Delete the `mapRoles` entry for the old node group.

   ```
   apiVersion: v1
   data:
     mapRoles: |
       - rolearn: arn:aws:iam::111122223333:role/nodes-1-16-NodeInstanceRole-W70725MZQFF8
         username: system:node:{{EC2PrivateDNSName}}
         groups:
           - system:bootstrappers
           - system:nodes
       - rolearn: arn:aws:iam::111122223333:role/nodes-1-15-NodeInstanceRole-U11V27W93CX5
         username: system:node:{{EC2PrivateDNSName}}
         groups:
           - system:bootstrappers
            - system:nodes
   ```

   Save and close the file to apply the updated configmap.
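   You can confirm that the old entry is gone by searching the applied configmap for the old role name. The role name below is illustrative; use your old node group's actual role name.

   ```
   # A match count of 0 means the old mapRoles entry was removed.
   OLD_ROLE_NAME="nodes-1-15-NodeInstanceRole"   # illustrative name
   kubectl get configmap aws-auth -n kube-system -o yaml | grep -c "$OLD_ROLE_NAME" || true
   ```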

1. (Optional) If you are using the [Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler), scale the deployment back to one replica.
**Note**  
You must also tag your new Auto Scaling group appropriately (for example, `k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster`) and update the command for your Cluster Autoscaler deployment to point to the newly tagged Auto Scaling group. For more information, see [Cluster Autoscaler on AWS](https://github.com/kubernetes/autoscaler/tree/cluster-autoscaler-release-1.3/cluster-autoscaler/cloudprovider/aws).

   ```
   kubectl scale deployments/cluster-autoscaler --replicas=1 -n kube-system
   ```

1. (Optional) Verify that you’re using the latest version of the [Amazon VPC CNI plugin for Kubernetes](https://github.com/aws/amazon-vpc-cni-k8s). You might need to update your CNI version to use the latest supported instance types. For more information, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).

1. If your cluster is using `kube-dns` for DNS resolution (see [previous step](#migrate-determine-dns-step)), scale in the `kube-dns` deployment to one replica.

   ```
   kubectl scale deployments/kube-dns --replicas=1 -n kube-system
   ```

# Update an AWS CloudFormation node stack
<a name="update-stack"></a>

This topic describes how you can update an existing AWS CloudFormation self-managed node stack with a new AMI. You can use this procedure to update your nodes to a new version of Kubernetes following a cluster update. Otherwise, you can update to the latest Amazon EKS optimized AMI for an existing Kubernetes version.

**Important**  
This topic covers node updates for self-managed nodes. For information about using [Simplify node lifecycle with managed node groups](managed-node-groups.md), see [Update a managed node group for your cluster](update-managed-node-group.md).

The latest default Amazon EKS node AWS CloudFormation template is configured to launch an instance with the new AMI into your cluster before removing an old one, one at a time. This configuration ensures that you always have your Auto Scaling group’s desired count of active instances in your cluster during the rolling update.

**Note**  
This method isn’t supported for node groups that were created with `eksctl`. If you created your cluster or node group with `eksctl`, see [Migrate applications to a new node group](migrate-stack.md).

1. Determine the DNS provider for your cluster.

   ```
   kubectl get deployments -l k8s-app=kube-dns -n kube-system
   ```

   An example output is as follows. This cluster is using CoreDNS for DNS resolution, but your cluster might return `kube-dns` instead. Your output might look different depending on the version of `kubectl` that you’re using.

   ```
   NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
   coredns   1         1         1            1           31m
   ```

1. If your current deployment is running fewer than two replicas, scale out the deployment to two replicas. Replace *coredns* with `kube-dns` if your previous command output returned that instead.

   ```
   kubectl scale deployments/coredns --replicas=2 -n kube-system
   ```

1. (Optional) If you’re using the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md), scale the deployment down to zero (0) replicas to avoid conflicting scaling actions.

   ```
   kubectl scale deployments/cluster-autoscaler --replicas=0 -n kube-system
   ```

1. Determine the instance type and desired instance count of your current node group. You enter these values later when you update the AWS CloudFormation template for the group.

   1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

   1. In the left navigation pane, choose **Launch Configurations**, and note the instance type for your existing node launch configuration.

   1. In the left navigation pane, choose **Auto Scaling Groups**, and note the **Desired** instance count for your existing node Auto Scaling group.
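   Alternatively, you can read the same values with the AWS CLI. This is a sketch; the Auto Scaling group and launch configuration names are placeholders that you must replace with your node group's actual names.

   ```
   ASG_NAME="your_asg_name"   # placeholder
   # Desired capacity and launch configuration name for the group.
   aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names "$ASG_NAME" \
   --query 'AutoScalingGroups[0].[DesiredCapacity,LaunchConfigurationName]' --output text
   # Instance type from the launch configuration (name is a placeholder).
   aws autoscaling describe-launch-configurations \
   --launch-configuration-names "your_lc_name" \
   --query 'LaunchConfigurations[0].InstanceType' --output text
   ```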

1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).

1. Select your node group stack, and then choose **Update**.

1. Select **Replace current template** and select **Amazon S3 URL**.

1. For **Amazon S3 URL**, paste the following URL into the text area to ensure that you’re using the latest version of the node AWS CloudFormation template. Then, choose **Next**:

   ```
   https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2022-12-23/amazon-eks-nodegroup.yaml
   ```

1. On the **Specify stack details** page, fill out the following parameters, and choose **Next**:
   +  **NodeAutoScalingGroupDesiredCapacity** – Enter the desired instance count that you recorded in a [previous step](#existing-worker-settings-step). Or, enter your new desired number of nodes to scale to when your stack is updated.
   +  **NodeAutoScalingGroupMaxSize** – Enter the maximum number of nodes to which your node Auto Scaling group can scale out. This value must be at least one node more than your desired capacity. This is so that you can perform a rolling update of your nodes without reducing your node count during the update.
   +  **NodeInstanceType** – Choose the instance type that you recorded in a [previous step](#existing-worker-settings-step). Alternatively, choose a different instance type for your nodes. Before choosing a different instance type, review [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md). Each Amazon EC2 instance type supports a maximum number of elastic network interfaces, and each network interface supports a maximum number of IP addresses. Because each worker node and Pod is assigned its own IP address, it’s important to choose an instance type that supports the maximum number of Pods that you want to run on each Amazon EC2 node. For a list of the number of network interfaces and IP addresses supported by instance types, see [IP addresses per network interface per instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI). For example, the `m5.large` instance type supports a maximum of 30 IP addresses for the worker node and Pods.
**Note**  
The supported instance types for the latest version of the [Amazon VPC CNI plugin for Kubernetes](https://github.com/aws/amazon-vpc-cni-k8s) are shown in [vpc_ip_resource_limit.go](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/pkg/vpc/vpc_ip_resource_limit.go) on GitHub. You might need to update your Amazon VPC CNI plugin for Kubernetes version to use the latest supported instance types. For more information, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).
**Important**  
Some instance types might not be available in all AWS Regions.
   +  **NodeImageIdSSMParam** – The Amazon EC2 Systems Manager parameter of the AMI ID that you want to update to. The following value uses the latest Amazon EKS optimized AMI for Kubernetes version `1.35`.

     ```
     /aws/service/eks/optimized-ami/1.35/amazon-linux-2/recommended/image_id
     ```

      You can replace *1.35* with any [supported Kubernetes version](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html) that’s the same as, or up to one version earlier than, the Kubernetes version running on your control plane. We recommend that you keep your nodes at the same version as your control plane. You can also replace *amazon-linux-2* with a different AMI type. For more information, see [Retrieve recommended Amazon Linux AMI IDs](retrieve-ami-id.md).
**Note**  
Using the Amazon EC2 Systems Manager parameter enables you to update your nodes in the future without having to look up and specify an AMI ID. If your AWS CloudFormation stack is using this value, any stack update always launches the latest recommended Amazon EKS optimized AMI for your specified Kubernetes version. This is the case even if you don’t change any values in the template.
   +  **NodeImageId** – To use your own custom AMI, enter the ID for the AMI to use.
**Important**  
This value overrides any value specified for **NodeImageIdSSMParam**. If you want to use the **NodeImageIdSSMParam** value, ensure that the value for **NodeImageId** is blank.
   +  **DisableIMDSv1** – By default, each node supports the Instance Metadata Service Version 1 (IMDSv1) and IMDSv2. However, you can disable IMDSv1. Select **true** if you don’t want any nodes or any Pods scheduled in the node group to use IMDSv1. For more information about IMDS, see [Configuring the instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html). If you’ve implemented IAM roles for service accounts, assign necessary permissions directly to all Pods that require access to AWS services. This way, no Pods in your cluster require access to IMDS for other reasons, such as retrieving the current AWS Region. Then, you can also disable access to IMDSv2 for Pods that don’t use host networking. For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
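   Before you update the stack, you can preview the AMI ID that the **NodeImageIdSSMParam** value currently resolves to. This is a sketch; it assumes the AWS CLI is configured for your cluster’s Region.

   ```
   # Resolve the SSM parameter to the current recommended AMI ID.
   PARAM="/aws/service/eks/optimized-ami/1.35/amazon-linux-2/recommended/image_id"
   aws ssm get-parameter --name "$PARAM" --query 'Parameter.Value' --output text
   ```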

1. (Optional) On the **Options** page, tag your stack resources. Choose **Next**.

1. On the **Review** page, review your information, acknowledge that the stack might create IAM resources, and then choose **Update stack**.
**Note**  
The update of each node in the cluster takes several minutes. Wait for the update of all nodes to complete before performing the next steps.

1. If your cluster’s DNS provider is `kube-dns`, scale in the `kube-dns` deployment to one replica.

   ```
   kubectl scale deployments/kube-dns --replicas=1 -n kube-system
   ```

1. (Optional) If you are using the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md), scale the deployment back to your desired number of replicas.

   ```
   kubectl scale deployments/cluster-autoscaler --replicas=1 -n kube-system
   ```

1. (Optional) Verify that you’re using the latest version of the [Amazon VPC CNI plugin for Kubernetes](https://github.com/aws/amazon-vpc-cni-k8s). You might need to update your Amazon VPC CNI plugin for Kubernetes version to use the latest supported instance types. For more information, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).