


# Automate cluster infrastructure with EKS Auto Mode
<a name="automode"></a>

**Tip**  
 [Register](https://aws-experience.com/emea/smb/events/series/get-hands-on-with-amazon-eks?trk=4a9b4147-2490-4c63-bc9f-f8a84b122c8c&sc_channel=el) for upcoming Amazon EKS Auto Mode workshops.

EKS Auto Mode extends AWS management of Kubernetes clusters beyond the cluster itself, to allow AWS to also set up and manage the infrastructure that enables the smooth operation of your workloads. You can delegate key infrastructure decisions and leverage the expertise of AWS for day-to-day operations. Cluster infrastructure managed by AWS includes many Kubernetes capabilities as core components rather than add-ons, such as compute autoscaling, Pod and service networking, application load balancing, cluster DNS, block storage, and GPU support.

To get started, you can deploy a new EKS Auto Mode cluster or enable EKS Auto Mode on an existing cluster. You can deploy, upgrade, or modify your EKS Auto Mode clusters using eksctl, the AWS CLI, the AWS Management Console, EKS APIs, or your preferred infrastructure-as-code tools.

With EKS Auto Mode, you can continue using your preferred Kubernetes-compatible tools. EKS Auto Mode integrates with AWS services like Amazon EC2, Amazon EBS, and ELB, leveraging AWS cloud resources that follow best practices. These resources are automatically scaled, cost-optimized, and regularly updated to help minimize operational costs and overhead.

## Features
<a name="_features"></a>

EKS Auto Mode provides the following high-level features:

 **Streamline Kubernetes Cluster Management**: EKS Auto Mode streamlines EKS management by providing production-ready clusters with minimal operational overhead. With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise.

 **Application Availability**: EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for manual capacity planning and ensures application availability.

 **Efficiency**: EKS Auto Mode is designed to optimize compute costs while adhering to the flexibility defined by your NodePool and workload requirements. It also terminates unused instances and consolidates workloads onto other nodes to improve cost efficiency.

 **Security**: EKS Auto Mode uses AMIs that are treated as immutable for your nodes. These AMIs enforce locked-down software, enable SELinux mandatory access controls, and provide read-only root file systems. Additionally, nodes launched by EKS Auto Mode have a maximum lifetime of 21 days (which you can reduce), after which they are automatically replaced with new nodes. This approach enhances your security posture by regularly cycling nodes, aligning with best practices already adopted by many customers.

 **Automated Upgrades**: EKS Auto Mode keeps your Kubernetes cluster, nodes, and related components up to date with the latest patches, while respecting your configured Pod Disruption Budgets (PDBs) and NodePool Disruption Budgets (NDBs). Before the 21-day maximum node lifetime is reached, intervention might be required if blocking PDBs or other configurations prevent updates.
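A PodDisruptionBudget is the standard Kubernetes way to bound how aggressively nodes can be drained during these automated upgrades. As an illustrative sketch (the `app: web` label and the replica threshold are hypothetical, not from this guide):

```yaml
# Hypothetical PDB: keep at least 2 Pods labeled app=web available
# while nodes are drained and replaced.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
```

Note that an overly strict PDB (for example, `minAvailable` equal to the total replica count) is exactly the kind of blocking configuration that can require intervention before the 21-day node lifetime.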

 **Managed Components**: EKS Auto Mode includes Kubernetes and AWS cloud features as core components that would otherwise have to be managed as add-ons. This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage.

 **Customizable NodePools and NodeClasses**: If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While you should not edit default NodePools and NodeClasses, you can add new custom NodePools or NodeClasses alongside the default configurations to meet your specific requirements.
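Custom NodePools follow the Karpenter v1 API. As a hedged sketch of what an additional pool might look like (the pool name, capacity-type requirement, and API versions shown are illustrative; verify them against the CRDs in your cluster before applying):

```yaml
# Hypothetical custom NodePool that schedules matching workloads onto Spot capacity,
# alongside the default general-purpose and system pools.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: spot-pool
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com   # EKS Auto Mode NodeClass API group
        kind: NodeClass
        name: default
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
```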

## Automated Components
<a name="_automated_components"></a>

EKS Auto Mode streamlines the operation of your Amazon EKS clusters by automating key infrastructure components. Enabling EKS Auto Mode further reduces the tasks required to manage your EKS clusters.

The following is a list of data plane components that are automated:
+  **Compute**: For many workloads, EKS Auto Mode lets you offload day-to-day compute management for your EKS clusters. Managed aspects include:
  +  **Nodes**: EKS Auto Mode nodes are designed to be treated like appliances. EKS Auto Mode does the following:
    + Chooses an appropriate AMI that’s configured with many services needed to run your workloads without intervention.
    + Locks down access to files on the AMI using SELinux enforcing mode and a read-only root file system.
    + Prevents direct access to the nodes by disallowing SSH or SSM access.
    + Includes GPU support, with separate kernel drivers and plugins for NVIDIA and Neuron GPUs, enabling high-performance workloads.
    + Automatically handles [EC2 Spot Instance interruption notices](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-instance-termination-notices.html) and EC2 instance health events.
  +  **Auto scaling**: Built on [Karpenter](https://karpenter.sh/docs/) auto scaling, EKS Auto Mode monitors for unschedulable Pods and deploys new nodes to run them. As workloads terminate, EKS Auto Mode dynamically disrupts and terminates nodes when they are no longer needed, optimizing resource usage.
  +  **Upgrades**: Because EKS Auto Mode controls your nodes, it can apply security patches and operating system and component upgrades as needed, with minimal disruption to your workloads. EKS Auto Mode enforces a 21-day maximum node lifetime to ensure up-to-date software and APIs.
+  **Load balancing**: EKS Auto Mode streamlines load balancing by integrating with Amazon’s Elastic Load Balancing service, automating the provisioning and configuration of load balancers for Kubernetes Services and Ingress resources. It supports advanced features for both Application and Network Load Balancers, manages their lifecycle, and scales them to match cluster demands. This integration provides a production-ready load balancing solution adhering to AWS best practices, allowing you to focus on applications rather than infrastructure management.
+  **Storage**: EKS Auto Mode configures ephemeral storage for you by setting up volume types, volume sizes, encryption policies, and deletion policies upon node termination.
+  **Networking**: EKS Auto Mode automates critical networking tasks for Pod and service connectivity. This includes IPv4/IPv6 support and the use of secondary CIDR blocks for extending IP address spaces.
+  **Identity and Access Management**: You do not have to install the EKS Pod Identity Agent on EKS Auto Mode clusters.
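On the Kubernetes side, these automated components are consumed through ordinary resources. For example, the managed application load balancing capability is typically addressed through an `IngressClass` (a sketch; the controller string shown follows the EKS Auto Mode integration and should be verified against the EKS documentation for your cluster version):

```yaml
# Sketch: IngressClass that hands Ingress resources to the
# EKS Auto Mode-managed application load balancing capability.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: eks.amazonaws.com/alb
```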

For more information about these components, see [Learn how EKS Auto Mode works](auto-reference.md).

## Configuration
<a name="_configuration"></a>

While EKS Auto Mode will effectively manage most of your data plane services without your intervention, there might be times when you want to change the behavior of some of those services. You can modify the configuration of your EKS Auto Mode clusters in the following ways:
+  **Kubernetes DaemonSets**: Rather than modifying services installed on your nodes, you can use Kubernetes DaemonSets. DaemonSets are managed by Kubernetes but run on every node in the cluster. In this way, you can add special services for monitoring or otherwise watching over your nodes.
+  **Custom NodePools and NodeClasses**: Default NodePools and NodeClasses are configured by EKS Auto Mode and you should not edit them. To customize node behavior, you can create additional NodePools or NodeClasses for use cases such as:
  + Selecting specific instance types (for example, accelerated processors or EC2 Spot instances).
  + Isolating workloads for security or cost-tracking purposes.
  + Configuring ephemeral storage settings like IOPS, size, and throughput.
+  **Load Balancing**: Some services that EKS Auto Mode runs as Kubernetes objects, such as load balancing, can be configured directly on your EKS Auto Mode clusters.
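As an example of the DaemonSet approach, a node-level monitoring agent could be deployed like this (the image, namespace, and resource requests are placeholders, not part of EKS Auto Mode):

```yaml
# Hypothetical monitoring agent run on every node via a DaemonSet.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      containers:
        - name: agent
          image: example.com/monitor-agent:latest   # placeholder image
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
```

Because EKS Auto Mode nodes disallow direct SSH and SSM access, a DaemonSet like this is the supported way to run per-node tooling.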

For more information about options for configuring EKS Auto Mode, see [Configure EKS Auto Mode settings](settings-auto.md).

## Shared responsibility model
<a name="_shared_responsibility_model"></a>

The AWS Shared Responsibility Model defines security and compliance responsibilities between AWS and customers. The images and text below compare and contrast how customer and AWS responsibilities differ between EKS Auto Mode and EKS standard mode.

![\[Shared responsibility model with EKS Auto Mode and standard mode\]](http://docs.aws.amazon.com/eks/latest/userguide/images/eksautosrm.png)


EKS Auto Mode shifts much of the shared responsibility for Kubernetes infrastructure from customers to AWS. AWS takes on responsibility for infrastructure security tasks that were previously owned by the customer, so customers can focus more on their applications while AWS manages the underlying infrastructure.

 **Customer responsibility** 

Under EKS Auto Mode, customers continue to maintain responsibility for the application containers, including availability, security, and monitoring. They also maintain control over VPC infrastructure and EKS cluster configuration. This model lets customers concentrate on application-specific concerns while delegating cluster infrastructure management to AWS. Optional per-node features can be included in clusters through AWS add-ons.

 **AWS responsibility** 

With EKS Auto Mode, AWS expands its responsibility to include the management of several additional critical components compared to those already managed in EKS clusters not using Auto Mode. In particular, EKS Auto Mode takes over the configuration, management, security, and scaling of the EC2 instances launched as well as cluster capabilities for load balancing, IP address management, networking policy, and block storage. The following components are managed by AWS in EKS Auto Mode:
+  **Auto Mode-launched EC2 Instances**: AWS handles the complete lifecycle of nodes by leveraging Amazon EC2 managed instances. EC2 managed instances take responsibility for operating system configuration, patching, monitoring, and health maintenance. In this model, both the instance itself and the guest operating system running on it are the responsibility of AWS. The nodes use variants of [Bottlerocket](https://aws.amazon.com/bottlerocket) AMIs that are optimized to run containers. The Bottlerocket AMIs have locked-down software, immutable root file systems, and secure network access (to prevent direct communications through SSH or SSM).
+  **Cluster Capabilities**: AWS manages compute autoscaling, Pod networking with network policy enforcement, Elastic Load Balancing integration, and storage drivers configuration.
+  **Cluster Control Plane**: AWS continues to manage the Kubernetes API server, cross-account ENIs, and the etcd database, as with standard EKS.
+  **Foundation Services and Global Infrastructure**: AWS maintains responsibility for the underlying compute, storage, networking, and monitoring services, as well as the global infrastructure of regions, local zones, and edge locations.

# Create a cluster with Amazon EKS Auto Mode
<a name="create-auto"></a>

This chapter explains how to create an Amazon EKS cluster with Auto Mode enabled using various tools and interfaces. Auto Mode simplifies cluster creation by automatically configuring and managing the cluster’s compute, networking, and storage infrastructure. You’ll learn how to create an Auto Mode cluster using the AWS CLI, AWS Management Console, or the eksctl command line tool.

**Note**  
EKS Auto Mode requires Kubernetes version 1.29 or greater.

Choose your preferred tool based on your needs: The AWS Management Console provides a visual interface ideal for learning about EKS Auto Mode features and creating individual clusters. The AWS CLI is best suited for scripting and automation tasks, particularly when integrating cluster creation into existing workflows or CI/CD pipelines. The eksctl CLI offers a Kubernetes-native experience and is recommended for users familiar with Kubernetes tooling who want simplified command line operations with sensible defaults.

Before you begin, ensure you have the necessary prerequisites installed and configured, including appropriate IAM permissions to create EKS clusters. To learn how to install CLI tools such as `kubectl`, `aws`, and `eksctl`, see [Set up to use Amazon EKS](setting-up.md).

You can use the AWS CLI, AWS Management Console, or eksctl CLI to create a cluster with Amazon EKS Auto Mode.

**Topics**
+ [Create an EKS Auto Mode Cluster with the eksctl CLI](automode-get-started-eksctl.md)
+ [Create an EKS Auto Mode Cluster with the AWS CLI](automode-get-started-cli.md)
+ [Create an EKS Auto Mode Cluster with the AWS Management Console](automode-get-started-console.md)

# Create an EKS Auto Mode Cluster with the eksctl CLI
<a name="automode-get-started-eksctl"></a>

This topic shows you how to create an Amazon EKS Auto Mode cluster using the eksctl command line interface (CLI). You can create an Auto Mode cluster either by running a single CLI command or by applying a YAML configuration file. Both methods provide the same functionality, with the YAML approach offering more granular control over cluster settings.

The eksctl CLI simplifies the process of creating and managing EKS Auto Mode clusters by handling the underlying AWS resource creation and configuration. Before proceeding, ensure you have the necessary AWS credentials and permissions configured on your local machine. This guide assumes you’re familiar with basic Amazon EKS concepts and have already installed the required CLI tools.

**Note**  
You must install version `0.195.0` or greater of eksctl. For more information, see [eksctl releases](https://github.com/eksctl-io/eksctl/releases) on GitHub.

## Create an EKS Auto Mode cluster with a CLI command
<a name="_create_an_eks_auto_mode_cluster_with_a_cli_command"></a>

You must have the `aws` and `eksctl` tools installed. You must be logged into the AWS CLI with sufficient permissions to manage AWS resources including: EC2 instances, EC2 networking, EKS clusters, and IAM roles. For more information, see [Set up to use Amazon EKS](setting-up.md).

Run the following command to create a new EKS Auto Mode cluster:

```
eksctl create cluster --name=<cluster-name> --enable-auto-mode
```

## Create an EKS Auto Mode cluster with a YAML file
<a name="_create_an_eks_auto_mode_cluster_with_a_yaml_file"></a>

You must have the `aws` and `eksctl` tools installed. You must be logged into the AWS CLI with sufficient permissions to manage AWS resources including: EC2 instances, EC2 networking, EKS clusters, and IAM roles. For more information, see [Set up to use Amazon EKS](setting-up.md).

Review the EKS Auto Mode configuration options in the sample ClusterConfig resource below. For the full ClusterConfig specification, see the [eksctl documentation](https://eksctl.io/usage/creating-and-managing-clusters/).

 AWS suggests enabling EKS Auto Mode in the `autoModeConfig` section. If this is your first time creating an EKS Auto Mode cluster, leave `nodeRoleARN` unspecified so that eksctl creates a Node IAM Role for EKS Auto Mode. If you already have a Node IAM Role for EKS Auto Mode in your AWS account, AWS suggests reusing it.

 AWS suggests not specifying any value for `nodePools`. EKS Auto Mode will create default node pools. You can use the Kubernetes API to create additional node pools.

```
# cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: <cluster-name>
  region: <aws-region>

iam:
  # ARN of the Cluster IAM Role
  # optional, eksctl creates a new role if not supplied
  # suggested to use one Cluster IAM Role per account
  serviceRoleARN: <arn-cluster-iam-role>

autoModeConfig:
  # defaults to false
  enabled: boolean
  # optional, defaults to [general-purpose, system].
  # suggested to leave unspecified
  # To disable creation of nodePools, set it to the empty array ([]).
  nodePools: []string
  # optional, eksctl creates a new role if this is not supplied
  # and nodePools are present.
  nodeRoleARN: string
```

Save the `ClusterConfig` file as `cluster.yaml`, and use the following command to create the cluster:

```
eksctl create cluster -f cluster.yaml
```

# Create an EKS Auto Mode Cluster with the AWS CLI
<a name="automode-get-started-cli"></a>

EKS Auto Mode Clusters automate routine cluster management tasks for compute, storage, and networking. For example, EKS Auto Mode Clusters automatically detect when additional nodes are required and provision new EC2 instances to meet workload demands.

This topic guides you through creating a new EKS Auto Mode Cluster using the AWS CLI and optionally deploying a sample workload.

## Prerequisites
<a name="_prerequisites"></a>
+ The latest version of the AWS Command Line Interface (AWS CLI) installed and configured on your device. To check your current version, use `aws --version`. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Quick configuration](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-configure-quickstart-config) with `aws configure` in the AWS Command Line Interface User Guide.
  + Log in to the CLI with sufficient IAM permissions to create AWS resources including IAM Policies, IAM Roles, and EKS Clusters.
+ The kubectl command line tool installed on your device. AWS suggests you use the same kubectl version as the Kubernetes version of your EKS Cluster. To install or upgrade kubectl, see [Set up `kubectl` and `eksctl`](install-kubectl.md).

## Specify VPC subnets
<a name="_specify_vpc_subnets"></a>

Amazon EKS Auto Mode deploys nodes to VPC subnets. When creating an EKS cluster, you must specify the VPC subnets where the nodes will be deployed. You can use the default VPC subnets in your AWS account or create a dedicated VPC for critical workloads.
+  AWS suggests creating a dedicated VPC for your cluster. Learn how to [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md).
+ The EKS Console assists with creating a new VPC. Learn how to [Create an EKS Auto Mode Cluster with the AWS Management Console](automode-get-started-console.md).
+ Alternatively, you can use the default VPC of your AWS account. Use the following instructions to find the Subnet IDs.

### To find the Subnet IDs of your default VPC
<a name="auto-find-subnet"></a>

 **Using the AWS CLI:** 

1. Run the following command to list the default VPC and its subnets:

   ```
   aws ec2 describe-subnets --filters "Name=vpc-id,Values=$(aws ec2 describe-vpcs --query 'Vpcs[?IsDefault==`true`].VpcId' --output text)" --query 'Subnets[*].{ID:SubnetId,AZ:AvailabilityZone}' --output table
   ```

1. Save the output and note the **Subnet IDs**.

   Sample output:

   ```
   ---------------------------------------------
   |               DescribeSubnets             |
   +---------------------+---------------------+
   |      SubnetId       |  AvailabilityZone   |
   +---------------------+---------------------+
   |  subnet-012345678   |  us-west-2a         |
   |  subnet-234567890   |  us-west-2b         |
   |  subnet-345678901   |  us-west-2c         |
   ---------------------------------------------
   ```
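Later in this topic, the `create-cluster` call expects the subnet IDs as a JSON array (referenced as `${SUBNETS_JSON}`). One way to build that value from the IDs you noted, sketched here with the sample IDs above:

```shell
# Replace with the Subnet IDs from your own describe-subnets output.
SUBNET_IDS="subnet-012345678 subnet-234567890 subnet-345678901"

# Build a JSON array string for use as ${SUBNETS_JSON} in the create-cluster call.
SUBNETS_JSON=$(printf '"%s",' $SUBNET_IDS)
SUBNETS_JSON="[${SUBNETS_JSON%,}]"
echo "$SUBNETS_JSON"
```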

## IAM Roles for EKS Auto Mode Clusters
<a name="auto-mode-create-roles"></a>

### Cluster IAM Role
<a name="auto-roles-cluster-iam-role"></a>

EKS Auto Mode requires a Cluster IAM Role to perform actions in your AWS account, such as provisioning new EC2 instances. You must create this role to grant EKS the necessary permissions. AWS recommends attaching the following AWS managed policies to the Cluster IAM Role:
+  [AmazonEKSComputePolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonEKSComputePolicy) 
+  [AmazonEKSBlockStoragePolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonEKSBlockStoragePolicy) 
+  [AmazonEKSLoadBalancingPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonEKSLoadBalancingPolicy) 
+  [AmazonEKSNetworkingPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonEKSNetworkingPolicy) 
+  [AmazonEKSClusterPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazoneksclusterpolicy) 

### Node IAM Role
<a name="auto-roles-node-iam-role"></a>

When you create an EKS Auto Mode cluster, you specify a Node IAM Role. When EKS Auto Mode creates nodes to process pending workloads, each new EC2 instance node is assigned the Node IAM Role. This role allows the node to communicate with EKS but is generally not accessed by workloads running on the node.

If you want to grant permissions to workloads running on a node, use EKS Pod Identity. For more information, see [Learn how EKS Pod Identity grants pods access to AWS services](pod-identities.md).

You must create this role and attach the following AWS managed policy:
+  [AmazonEKSWorkerNodeMinimalPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonEKSWorkerNodeMinimalPolicy) 
+  [AmazonEC2ContainerRegistryPullOnly](https://docs.aws.amazon.com/AmazonECR/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AmazonEC2ContainerRegistryPullOnly) 

EKS Auto Mode also requires a Service-Linked Role, which is automatically created and configured by AWS. For more information, see [AWSServiceRoleForAmazonEKS](using-service-linked-roles-eks.md).

## Create an EKS Auto Mode Cluster IAM Role
<a name="_create_an_eks_auto_mode_cluster_iam_role"></a>

### Step 1: Create the Trust Policy
<a name="_step_1_create_the_trust_policy"></a>

Create a trust policy that allows the Amazon EKS service to assume the role. Save the policy as `trust-policy.json`:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow", 
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
```

### Step 2: Create the IAM Role
<a name="_step_2_create_the_iam_role"></a>

Use the trust policy to create the Cluster IAM Role:

```
aws iam create-role \
    --role-name AmazonEKSAutoClusterRole \
    --assume-role-policy-document file://trust-policy.json
```

### Step 3: Note the Role ARN
<a name="_step_3_note_the_role_arn"></a>

Retrieve and save the ARN of the new role for use in subsequent steps:

```
aws iam get-role --role-name AmazonEKSAutoClusterRole --query "Role.Arn" --output text
```

### Step 4: Attach Required Policies
<a name="_step_4_attach_required_policies"></a>

Attach the following AWS managed policies to the Cluster IAM Role to grant the necessary permissions:

 **AmazonEKSClusterPolicy**:

```
aws iam attach-role-policy \
    --role-name AmazonEKSAutoClusterRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```

 **AmazonEKSComputePolicy**:

```
aws iam attach-role-policy \
    --role-name AmazonEKSAutoClusterRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSComputePolicy
```

 **AmazonEKSBlockStoragePolicy**:

```
aws iam attach-role-policy \
    --role-name AmazonEKSAutoClusterRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSBlockStoragePolicy
```

 **AmazonEKSLoadBalancingPolicy**:

```
aws iam attach-role-policy \
    --role-name AmazonEKSAutoClusterRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSLoadBalancingPolicy
```

 **AmazonEKSNetworkingPolicy**:

```
aws iam attach-role-policy \
    --role-name AmazonEKSAutoClusterRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSNetworkingPolicy
```

## Create an EKS Auto Mode Node IAM Role
<a name="_create_an_eks_auto_mode_node_iam_role"></a>

### Step 1: Create the Trust Policy
<a name="_step_1_create_the_trust_policy_2"></a>

Create a trust policy that allows the Amazon EKS service to assume the role. Save the policy as `node-trust-policy.json`:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

### Step 2: Create the Node IAM Role
<a name="_step_2_create_the_node_iam_role"></a>

Use the **node-trust-policy.json** file from the previous step to define which entities can assume the role. Run the following command to create the Node IAM Role:

```
aws iam create-role \
    --role-name AmazonEKSAutoNodeRole \
    --assume-role-policy-document file://node-trust-policy.json
```

### Step 3: Note the Role ARN
<a name="_step_3_note_the_role_arn_2"></a>

After creating the role, retrieve and save the ARN of the Node IAM Role. You will need this ARN in subsequent steps. Use the following command to get the ARN:

```
aws iam get-role --role-name AmazonEKSAutoNodeRole --query "Role.Arn" --output text
```

### Step 4: Attach Required Policies
<a name="_step_4_attach_required_policies_2"></a>

Attach the following AWS managed policies to the Node IAM Role to provide the necessary permissions:

 **AmazonEKSWorkerNodeMinimalPolicy**:

```
aws iam attach-role-policy \
    --role-name AmazonEKSAutoNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy
```

 **AmazonEC2ContainerRegistryPullOnly**:

```
aws iam attach-role-policy \
    --role-name AmazonEKSAutoNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly
```

## Create an EKS Auto Mode Cluster
<a name="_create_an_eks_auto_mode_cluster"></a>

### Overview
<a name="_overview"></a>

To create an EKS Auto Mode Cluster using the AWS CLI, you will need the following parameters:
+  `cluster-name`: The name of the cluster.
+  `k8s-version`: The Kubernetes version (e.g., 1.31).
+  `subnet-ids`: Subnet IDs identified in the previous steps.
+  `cluster-role-arn`: ARN of the Cluster IAM Role.
+  `node-role-arn`: ARN of the Node IAM Role.

#### Default Cluster Configurations
<a name="_default_cluster_configurations"></a>

Review these default values and features before creating the cluster:
+  `nodePools`: EKS Auto Mode includes general-purpose and system default Node Pools. Learn more about [Node Pools](create-node-pool.md).

 **Note:** Node Pools in EKS Auto Mode differ from Amazon EKS Managed Node Groups but can coexist in the same cluster.
+  `computeConfig.enabled`: Automates routine compute tasks, such as creating and deleting EC2 instances.
+  `kubernetesNetworkConfig.elasticLoadBalancing.enabled`: Automates load balancing tasks, including creating and deleting Elastic Load Balancers.
+  `storageConfig.blockStorage.enabled`: Automates storage tasks, such as creating and deleting Amazon EBS volumes.
+  `accessConfig.authenticationMode`: Requires EKS access entries. Learn more about [EKS authentication modes](grant-k8s-access.md).

#### Run the Command
<a name="_run_the_command"></a>

Use the following command to create the cluster:

```
aws eks create-cluster \
  --region ${AWS_REGION} \
  --cli-input-json \
  "{
      \"name\": \"${CLUSTER_NAME}\",
      \"version\": \"${K8S_VERSION}\",
      \"roleArn\": \"${CLUSTER_ROLE_ARN}\",
      \"resourcesVpcConfig\": {
        \"subnetIds\": ${SUBNETS_JSON},
        \"endpointPublicAccess\": true,
        \"endpointPrivateAccess\": true
      },
      \"computeConfig\": {
        \"enabled\": true,
        \"nodeRoleArn\":\"${NODE_ROLE_ARN}\",
        \"nodePools\": [\"general-purpose\", \"system\"]
      },
      \"kubernetesNetworkConfig\": {
        \"elasticLoadBalancing\": {
          \"enabled\": true
        }
      },
      \"storageConfig\": {
        \"blockStorage\": {
          \"enabled\": true
        }
      },
      \"accessConfig\": {
        \"authenticationMode\": \"API\"
      }
    }"
```

### Check Cluster Status
<a name="_check_cluster_status"></a>

#### Step 1: Verify Cluster Creation
<a name="_step_1_verify_cluster_creation"></a>

Run the following command to check the status of your cluster. Cluster creation typically takes about 15 minutes:

```
aws eks describe-cluster --name "${CLUSTER_NAME}" --output json
```

#### Step 2: Update kubeconfig
<a name="_step_2_update_kubeconfig"></a>

Once the cluster is ready, update your local kubeconfig file to enable `kubectl` to communicate with the cluster. This configuration uses the AWS CLI for authentication.

```
aws eks update-kubeconfig --name "${CLUSTER_NAME}"
```

#### Step 3: Verify Node Pools
<a name="_step_3_verify_node_pools"></a>

List the Node Pools in your cluster using the following command:

```
kubectl get nodepools
```

## Next Steps
<a name="_next_steps"></a>
+ Learn how to [deploy a sample workload](automode-workload.md) to your new EKS Auto Mode cluster.

# Create an EKS Auto Mode Cluster with the AWS Management Console
<a name="automode-get-started-console"></a>

Creating an EKS Auto Mode cluster in the AWS Management Console requires less configuration than other options. EKS integrates with AWS IAM and VPC Networking to help you create the resources associated with an EKS cluster.

You have two options to create a cluster in the console:
+ Quick configuration (with EKS Auto Mode)
+ Custom configuration

In this topic, you will learn how to create an EKS Auto Mode cluster using the Quick configuration option.

## Create an EKS Auto Mode cluster using the quick configuration option
<a name="_create_an_eks_auto_mode_using_the_quick_configuration_option"></a>

You must be logged into the AWS Management Console with sufficient permissions to manage AWS resources including: EC2 instances, EC2 networking, EKS clusters, and IAM roles.

1. Navigate to the EKS Console.

1. Choose **Create cluster**.

1. Confirm the **Quick configuration** option is selected.

1. Determine the following values, or use the defaults for a test cluster.
   + Cluster **Name** 
   + Kubernetes Version

1. Select the Cluster IAM Role. If this is your first time creating an EKS Auto Mode cluster, use the **Create recommended role** option.
   + Optionally, you can reuse a single Cluster IAM Role in your AWS account for all EKS Auto Mode clusters.
   + The Cluster IAM Role includes required permissions for EKS Auto Mode to manage resources including EC2 instances, EBS volumes, and EC2 load balancers.
   + The **Create recommended role** option pre-fills all fields with recommended values. Select **Next** and then **Create**. The role will use the suggested `AmazonEKSAutoClusterRole` name.
   + If you recently created a new role, use the **Refresh** icon to reload the role selection dropdown.

1. Select the Node IAM Role. If this is your first time creating an EKS Auto Mode cluster, use the **Create recommended role** option.
   + Optionally, you can reuse a single Node IAM Role in your AWS account for all EKS Auto Mode clusters.
   + The Node IAM Role includes required permissions for Auto Mode nodes to connect to the cluster. The Node IAM Role must include permissions to retrieve ECR images for your containers.
   + The **Create recommended role** option pre-fills all fields with recommended values. Select **Next** and then **Create**. The role will use the suggested `AmazonEKSAutoNodeRole` name.
   + If you recently created a new role, use the **Refresh** icon to reload the role selection dropdown.

1. Select the VPC for your EKS Auto Mode cluster. Choose **Create VPC** to create a new VPC for EKS, or choose a VPC you previously created for EKS.
   + If you use the VPC Console to create a new VPC, AWS suggests you create at least one NAT Gateway per Availability Zone. Otherwise, you can use all other defaults.
   + For more information and details of IPv6 cluster requirements, see [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md).

1. (optional) EKS Auto Mode automatically populates the private subnets for your selected VPC. You can remove unwanted subnets.
   + EKS automatically selects private subnets from the VPC following best practices. You can optionally select additional subnets from the VPC, such as public subnets.

1. (optional) Select **View quick configuration defaults** to review all configuration values for the new cluster. The table indicates some values are not editable after the cluster is created.

1. Select **Create cluster**. Note that cluster creation may take up to fifteen minutes to complete.
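If you prefer the command line, a comparable cluster can be created with the AWS CLI. The following is a sketch, not a complete recipe: the cluster name, role ARNs, and subnet IDs are placeholders you must replace with values from your account, and additional options may apply to your environment.

   ```
   aws eks create-cluster \
     --name my-auto-cluster \
     --role-arn arn:aws:iam::111122223333:role/AmazonEKSAutoClusterRole \
     --resources-vpc-config subnetIds=subnet-0aaa1111,subnet-0bbb2222 \
     --access-config authenticationMode=API_AND_CONFIG_MAP \
     --compute-config '{"enabled": true, "nodePools": ["general-purpose", "system"], "nodeRoleArn": "arn:aws:iam::111122223333:role/AmazonEKSAutoNodeRole"}' \
     --kubernetes-network-config '{"elasticLoadBalancing":{"enabled": true}}' \
     --storage-config '{"blockStorage":{"enabled": true}}'
   ```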

## Next Steps
<a name="_next_steps"></a>
+ Learn how to [Deploy a Sample Workload to your EKS Auto Mode cluster](sample-storage-workload.md) 

# Enable EKS Auto Mode on existing EKS clusters
<a name="migrate-auto"></a>

You can enable EKS Auto Mode on existing EKS Clusters.

**AWS supports the following migrations:**
+ Migrating from Karpenter to EKS Auto Mode nodes. For more information, see [Migrate from Karpenter to EKS Auto Mode using kubectl](auto-migrate-karpenter.md).
+ Migrating from EKS Managed Node Groups to EKS Auto Mode nodes. For more information, see [Migrate from EKS Managed Node Groups to EKS Auto Mode](auto-migrate-mng.md).
+ Migrating from EKS Fargate to EKS Auto Mode. For more information, see [Migrate from EKS Fargate to EKS Auto Mode](auto-migrate-fargate.md).

**AWS does not support the following migrations:**
+ Migrating volumes from the EBS CSI controller (installed as the Amazon EKS add-on) to the EKS Auto Mode EBS CSI controller (managed by EKS Auto Mode). PVCs created with one can’t be mounted by the other, because they use two different Kubernetes volume provisioners.
  + The [eks-auto-mode-ebs-migration-tool](https://github.com/awslabs/eks-auto-mode-ebs-migration-tool) (an AWS Labs project) enables migration between the standard EBS CSI StorageClass (`ebs.csi.aws.com`) and the EKS Auto Mode EBS CSI StorageClass (`ebs.csi.eks.amazonaws.com`). Note that migration requires deleting and re-creating existing PersistentVolumeClaim/PersistentVolume resources, so validate the process in a non-production environment before implementation.
+ Migrating load balancers from the AWS Load Balancer Controller to EKS Auto Mode

  You can install the AWS Load Balancer Controller on an Amazon EKS Auto Mode cluster. Use the `IngressClass` or `loadBalancerClass` options to associate Service and Ingress resources with either the Load Balancer Controller or EKS Auto Mode.
+ Migrating EKS clusters with alternative CNIs or other unsupported networking configurations

## Migration reference
<a name="migration-reference"></a>

Use the following migration reference to configure Kubernetes resources to be owned by either self-managed controllers or EKS Auto Mode.


| Capability | Resource | Field | Self Managed | EKS Auto Mode | 
| --- | --- | --- | --- | --- | 
|  Block storage  |   `StorageClass`   |   `provisioner`   |   `ebs.csi.aws.com`   |   `ebs.csi.eks.amazonaws.com`   | 
|  Load balancing  |   `Service`   |   `loadBalancerClass`   |   `service.k8s.aws/nlb`   |   `eks.amazonaws.com/nlb`   | 
|  Load balancing  |   `IngressClass`   |   `controller`   |   `ingress.k8s.aws/alb`   |   `eks.amazonaws.com/alb`   | 
|  Load balancing  |   `IngressClassParams`   |   `apiVersion`   |   `elbv2.k8s.aws/v1beta1`   |   `eks.amazonaws.com/v1`   | 
|  Load balancing  |   `TargetGroupBinding`   |   `apiVersion`   |   `elbv2.k8s.aws/v1beta1`   |   `eks.amazonaws.com/v1`   | 
|  Compute  |   `NodeClass`   |   `apiVersion`   |   `karpenter.sh/v1`   |   `eks.amazonaws.com/v1`   | 
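For example, a `StorageClass` owned by EKS Auto Mode sets the `provisioner` field from the table above. The class name and parameters here are illustrative, not required values:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-ebs-sc
provisioner: ebs.csi.eks.amazonaws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
```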

## Migrating EBS volumes
<a name="_migrating_ebs_volumes"></a>

When migrating workloads to EKS Auto Mode, you need to handle EBS volume migration due to different CSI driver provisioners:
+ EKS Auto Mode provisioner: `ebs.csi.eks.amazonaws.com` 
+ Open source EBS CSI provisioner: `ebs.csi.aws.com` 

Follow these steps to migrate your persistent volumes:

1.  **Modify volume retention policy**: Change the existing PersistentVolume’s (PV’s) `persistentVolumeReclaimPolicy` to `Retain` to ensure the underlying EBS volume is not deleted.

1.  **Remove PV from Kubernetes**: Delete the old PV resource while keeping the actual EBS volume intact.

1.  **Create a new PV with static provisioning**: Create a new PV that references the same EBS volume but works with the target CSI driver.

1.  **Bind to a new PVC**: Create a new PVC that specifically references your PV using the `volumeName` field.
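The four steps above can be sketched with `kubectl` as follows. The PV, PVC, and EBS volume IDs are placeholders, and the new PV manifest must match your volume's actual size and filesystem; treat this as an outline to adapt, not a drop-in script.

```
# 1. Keep the EBS volume when the PV object is deleted
kubectl patch pv old-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 2. Remove the old PV object from Kubernetes (the EBS volume remains)
kubectl delete pv old-pv

# 3. Statically provision a new PV that references the same EBS volume
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: new-pv
spec:
  capacity:
    storage: 8Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ebs.csi.eks.amazonaws.com
    volumeHandle: vol-0123456789abcdef0
    fsType: ext4
EOF

# 4. Bind a new PVC directly to the new PV via volumeName
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: new-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
  storageClassName: ""
  volumeName: new-pv
EOF
```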

### Considerations
<a name="_considerations"></a>
+ Ensure your applications are stopped before beginning this migration.
+ Back up your data before starting the migration process.
+ This process needs to be performed for each persistent volume.
+ The workload must be updated to use the new PVC.

## Migrating load balancers
<a name="_migrating_load_balancers"></a>

You cannot directly transfer existing load balancers from the self-managed AWS Load Balancer Controller to EKS Auto Mode. Instead, you must implement a blue-green deployment strategy. This involves maintaining your existing load balancer configuration while creating new load balancers under the managed controller.

To minimize service disruption, we recommend a DNS-based traffic shifting approach. First, create new load balancers by using EKS Auto Mode while keeping your existing configuration operational. Then, use DNS routing (such as Route 53) to gradually shift traffic from the old load balancers to the new ones. Once traffic has been successfully migrated and you’ve verified the new configuration, you can decommission the old load balancers and self-managed controller.

# Enable EKS Auto Mode on an existing cluster
<a name="auto-enable-existing"></a>

This topic describes how to enable Amazon EKS Auto Mode on your existing Amazon EKS clusters. Enabling Auto Mode on an existing cluster requires updating IAM permissions and configuring core EKS Auto Mode settings. Once enabled, you can begin migrating your existing compute workloads to take advantage of Auto Mode’s simplified operations and automated infrastructure management.

**Important**  
Verify you have the minimum required version of certain Amazon EKS Add-ons installed before enabling EKS Auto Mode. For more information, see [Required add-on versions](#auto-addons-required).

Before you begin, ensure you have administrator access to your Amazon EKS cluster and permissions to modify IAM roles. The steps in this topic guide you through enabling Auto Mode using either the AWS Management Console or AWS CLI.

## AWS Management Console
<a name="auto-enable-existing-console"></a>

You must be logged into the AWS console with permission to manage IAM, EKS, and EC2 resources.

**Note**  
The Cluster IAM role of an EKS Cluster cannot be changed after the cluster is created. EKS Auto Mode requires additional permissions on this role. You must attach additional policies to the current role.

### Update Cluster IAM role
<a name="_update_cluster_iam_role"></a>

1. Open your cluster overview page in the AWS Management Console.

1. Under **Cluster IAM role ARN**, select **View in IAM**.

1. From the **Add Permissions** dropdown, select **Attach Policies**.

1. Use the **Search** box to find and select the following policies:
   +  `AmazonEKSComputePolicy` 
   +  `AmazonEKSBlockStoragePolicy` 
   +  `AmazonEKSLoadBalancingPolicy` 
   +  `AmazonEKSNetworkingPolicy` 
   +  `AmazonEKSClusterPolicy` 

1. Select **Add permissions** 

1. From the **Trust relationships** tab, select **Edit trust policy** 

1. Insert the following Cluster IAM Role trust policy, and select **Update policy** 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
```
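If you manage IAM from the command line instead, the same changes can be applied with the AWS CLI. This sketch assumes the role is named `AmazonEKSAutoClusterRole` and that the trust policy above has been saved to `trust-policy.json`:

```
# Attach the five managed policies required for EKS Auto Mode
for policy in AmazonEKSComputePolicy AmazonEKSBlockStoragePolicy \
              AmazonEKSLoadBalancingPolicy AmazonEKSNetworkingPolicy \
              AmazonEKSClusterPolicy; do
  aws iam attach-role-policy \
    --role-name AmazonEKSAutoClusterRole \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

# Replace the role's trust policy with the version that adds sts:TagSession
aws iam update-assume-role-policy \
  --role-name AmazonEKSAutoClusterRole \
  --policy-document file://trust-policy.json
```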

### Enable EKS Auto Mode
<a name="_enable_eks_auto_mode"></a>

1. Open your cluster overview page in the AWS Management Console.

1. Under **EKS Auto Mode**, select **Manage** 

1. Toggle **EKS Auto Mode** to on.

1. From the **EKS Node Pool** dropdown, select the default node pools you want to create.
   + Learn more about Node Pools in EKS Auto Mode. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).

1. If you have previously created an EKS Auto Mode Node IAM role in this AWS account, select it in the **Node IAM Role** dropdown. If you have not created this role before, select **Create recommended Role** and follow the steps.

## AWS CLI
<a name="shared_aws_cli"></a>

### Prerequisites
<a name="_prerequisites"></a>
+ The Cluster IAM Role of the existing EKS Cluster must include sufficient permissions for EKS Auto Mode, such as the following policies:
  +  `AmazonEKSComputePolicy` 
  +  `AmazonEKSBlockStoragePolicy` 
  +  `AmazonEKSLoadBalancingPolicy` 
  +  `AmazonEKSNetworkingPolicy` 
  +  `AmazonEKSClusterPolicy` 
+ The Cluster IAM Role must have an updated trust policy including the `sts:TagSession` action. For more information on creating a Cluster IAM Role, see [Create an EKS Auto Mode Cluster with the AWS CLI](automode-get-started-cli.md).
+ The `aws` CLI installed, configured, and at a sufficient version. You must have permission to manage IAM, EKS, and EC2 resources. For more information, see [Set up to use Amazon EKS](setting-up.md).

### Procedure
<a name="_procedure"></a>

Use the following commands to enable EKS Auto Mode on an existing cluster.

**Note**  
The compute, block storage, and load balancing capabilities must all be enabled or disabled in the same request.

```
aws eks update-cluster-config \
 --name $CLUSTER_NAME \
 --compute-config enabled=true \
 --kubernetes-network-config '{"elasticLoadBalancing":{"enabled": true}}' \
 --storage-config '{"blockStorage":{"enabled": true}}'
```
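To confirm that the capabilities were enabled, you can inspect the cluster configuration. The `--query` expression below assumes the `describe-cluster` response fields used by Auto Mode clusters:

```
aws eks describe-cluster --name $CLUSTER_NAME \
  --query 'cluster.{compute: computeConfig, storage: storageConfig, elb: kubernetesNetworkConfig.elasticLoadBalancing}'
```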

## Required add-on versions
<a name="auto-addons-required"></a>

If you’re planning to enable EKS Auto Mode on an existing cluster, you may need to update certain add-ons. Please note:
+ This applies only to existing clusters transitioning to EKS Auto Mode.
+ New clusters created with EKS Auto Mode enabled don’t require these updates.

If you have any of the following add-ons installed, ensure they are at least at the specified minimum version:


| Add-on name | Minimum required version | 
| --- | --- | 
|  Amazon VPC CNI plugin for Kubernetes  |  v1.19.0-eksbuild.1  | 
|  Kube-proxy  |  Varies by Kubernetes version; see [the AWS documentation](http://docs.aws.amazon.com/eks/latest/userguide/auto-enable-existing.html) for details  | 
|  Amazon EBS CSI driver  |  v1.37.0-eksbuild.1  | 
|  CSI snapshot controller  |  v8.1.0-eksbuild.2  | 
|  EKS Pod Identity Agent  |  v1.3.4-eksbuild.1  | 

For more information, see [Update an Amazon EKS add-on](updating-an-add-on.md).
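One way to check the installed version of each add-on is with the AWS CLI. The example below queries the VPC CNI; substitute each add-on name from the table in turn:

```
aws eks describe-addon \
  --cluster-name $CLUSTER_NAME \
  --addon-name vpc-cni \
  --query 'addon.addonVersion' \
  --output text
```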

## Next Steps
<a name="_next_steps"></a>
+ To migrate Managed Node Group workloads, see [Migrate from EKS Managed Node Groups to EKS Auto Mode](auto-migrate-mng.md).
+ To migrate from Self-Managed Karpenter, see [Migrate from Karpenter to EKS Auto Mode using kubectl](auto-migrate-karpenter.md).

# Migrate from Karpenter to EKS Auto Mode using kubectl
<a name="auto-migrate-karpenter"></a>

This topic walks you through the process of migrating workloads from Karpenter to Amazon EKS Auto Mode using kubectl. The migration can be performed gradually, allowing you to move workloads at your own pace while maintaining cluster stability and application availability throughout the transition.

The step-by-step approach outlined below enables you to run Karpenter and EKS Auto Mode side by side during the migration period. This dual-operation strategy helps ensure a smooth transition by allowing you to validate workload behavior on EKS Auto Mode before completely decommissioning Karpenter. You can migrate applications individually or in groups, providing flexibility to accommodate your specific operational requirements and risk tolerance.

## Prerequisites
<a name="_prerequisites"></a>

Before beginning the migration, ensure you have:
+ Karpenter v1.1 or later installed on your cluster. For more information, see [Upgrading to 1.1.0](https://karpenter.sh/docs/upgrading/upgrade-guide/#upgrading-to-110) in the Karpenter docs.
+  `kubectl` installed and connected to your cluster. For more information, see [Set up to use Amazon EKS](setting-up.md).

This topic assumes you are familiar with Karpenter and NodePools. For more information, see the [Karpenter Documentation.](https://karpenter.sh/) 

## Step 1: Enable EKS Auto Mode on the cluster
<a name="_step_1_enable_eks_auto_mode_on_the_cluster"></a>

Enable EKS Auto Mode on your existing cluster using the AWS CLI or Management Console. For more information, see [Enable EKS Auto Mode on an existing cluster](auto-enable-existing.md).

**Note**  
While enabling EKS Auto Mode, don’t enable the `general-purpose` NodePool at this stage of the transition. This NodePool is not selective, so workloads could schedule onto it before you are ready to migrate them.  
For more information, see [Enable or Disable Built-in NodePools](set-builtin-node-pools.md).

## Step 2: Create a tainted EKS Auto Mode NodePool
<a name="_step_2_create_a_tainted_eks_auto_mode_nodepool"></a>

Create a new NodePool for EKS Auto Mode with a taint. This ensures that existing pods won’t automatically schedule on the new EKS Auto Mode nodes. This node pool uses the `default` `NodeClass` built into EKS Auto Mode. For more information, see [Create a Node Class for Amazon EKS](create-node-class.md).

Example node pool with taint:

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: eks-auto-mode
spec:
  template:
    spec:
      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c", "m", "r"]
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      taints:
        - key: "eks-auto-mode"
          effect: "NoSchedule"
```

Update the requirements for the node pool to match the Karpenter configuration you are migrating from. You need at least one requirement.
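After saving the manifest (for example, as `nodepool.yaml`; the filename is up to you), apply it and confirm the NodePool becomes ready:

```
kubectl apply -f nodepool.yaml
kubectl get nodepool eks-auto-mode
```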

## Step 3: Update workloads for migration
<a name="_step_3_update_workloads_for_migration"></a>

Identify and update the workloads you want to migrate to EKS Auto Mode. Add both tolerations and node selectors to these workloads:

```
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      tolerations:
      - key: "eks-auto-mode"
        effect: "NoSchedule"
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
```

This change allows the workload to be scheduled on the new EKS Auto Mode nodes.

EKS Auto Mode uses different labels than Karpenter. Labels related to EC2 managed instances start with `eks.amazonaws.com`. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).
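Instead of editing each manifest by hand, you can apply the same toleration and node selector with a strategic merge patch. The deployment name below is a placeholder; note that a merge patch replaces the whole `tolerations` list, so include any existing tolerations your workload already carries:

```
kubectl patch deployment <deployment-name> --type merge -p '{
  "spec": {
    "template": {
      "spec": {
        "tolerations": [{"key": "eks-auto-mode", "effect": "NoSchedule"}],
        "nodeSelector": {"eks.amazonaws.com/compute-type": "auto"}
      }
    }
  }
}'
```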

## Step 4: Gradually migrate workloads
<a name="_step_4_gradually_migrate_workloads"></a>

Repeat Step 3 for each workload you want to migrate. This allows you to move workloads individually or in groups, based on your requirements and risk tolerance.

## Step 5: Remove the original Karpenter NodePool
<a name="_step_5_remove_the_original_karpenter_nodepool"></a>

Once all workloads have been migrated, you can remove the original Karpenter NodePool:

```
kubectl delete nodepool <original-nodepool-name>
```

## Step 6: Remove taint from EKS Auto Mode NodePool (Optional)
<a name="_step_6_remove_taint_from_eks_auto_mode_nodepool_optional"></a>

If you want EKS Auto Mode to become the default for new workloads, you can remove the taint from the EKS Auto Mode NodePool:

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: eks-auto-mode
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      # Remove the taints section
```

## Step 7: Remove node selectors from workloads (Optional)
<a name="_step_7_remove_node_selectors_from_workloads_optional"></a>

If you’ve removed the taint from the EKS Auto Mode NodePool, you can optionally remove the node selectors from your workloads, as EKS Auto Mode is now the default:

```
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      # Remove the nodeSelector section
      tolerations:
      - key: "eks-auto-mode"
        effect: "NoSchedule"
```

## Step 8: Uninstall Karpenter from your cluster
<a name="_step_8_uninstall_karpenter_from_your_cluster"></a>

The steps to remove Karpenter depend on how you installed it. For more information, see the [Karpenter install instructions](https://karpenter.sh/docs/getting-started/getting-started-with-karpenter/#create-a-cluster-and-add-karpenter).

# Migrate from EKS Managed Node Groups to EKS Auto Mode
<a name="auto-migrate-mng"></a>

When transitioning your Amazon EKS cluster to use EKS Auto Mode, you can smoothly migrate your existing workloads from managed node groups (MNGs) using the eksctl CLI tool. This process ensures continuous application availability while EKS Auto Mode optimizes your compute resources. The migration can be performed with minimal disruption to your running applications.

This topic walks you through the steps to safely drain pods from your existing managed node groups and allow EKS Auto Mode to reschedule them on newly provisioned instances. By following this procedure, you can take advantage of EKS Auto Mode’s intelligent workload consolidation while maintaining your application’s availability throughout the migration.

## Prerequisites
<a name="_prerequisites"></a>
+ Cluster with EKS Auto Mode enabled
+  `eksctl` CLI installed and connected to your cluster. For more information, see [Set up to use Amazon EKS](setting-up.md).
+ Karpenter is not installed on the cluster.

## Procedure
<a name="_procedure"></a>

Use the following `eksctl` CLI command to initiate draining pods from the existing managed node group instances. EKS Auto Mode will create new nodes to host the displaced pods.

```
eksctl delete nodegroup --cluster=<clusterName> --name=<nodegroupName>
```

You will need to run this command for each managed node group in your cluster.

For more information on this command, see [Deleting and draining nodegroups](https://eksctl.io/usage/nodegroups/#deleting-and-draining-nodegroups) in the eksctl docs.
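If your cluster has several managed node groups, you can iterate over them in one pass. This sketch assumes `eksctl`'s JSON output format (with a capitalized `Name` field) and that `jq` is installed:

```
CLUSTER=<clusterName>

# Delete (and drain) every managed node group in the cluster, one at a time
for ng in $(eksctl get nodegroup --cluster "$CLUSTER" -o json | jq -r '.[].Name'); do
  eksctl delete nodegroup --cluster "$CLUSTER" --name "$ng"
done
```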

# Migrate from EKS Fargate to EKS Auto Mode
<a name="auto-migrate-fargate"></a>

This topic walks you through the process of migrating workloads from EKS Fargate to Amazon EKS Auto Mode using `kubectl`. The migration can be performed gradually, allowing you to move workloads at your own pace while maintaining cluster stability and application availability throughout the transition.

The step-by-step approach outlined below enables you to run EKS Fargate and EKS Auto Mode side by side during the migration period. This dual-operation strategy helps ensure a smooth transition by allowing you to validate workload behavior on EKS Auto Mode before completely decommissioning EKS Fargate. You can migrate applications individually or in groups, providing flexibility to accommodate your specific operational requirements and risk tolerance.

## Comparing Amazon EKS Auto Mode and EKS with AWS Fargate
<a name="comparing_amazon_eks_auto_mode_and_eks_with_shared_aws_fargate"></a>

Amazon EKS with AWS Fargate remains an option for customers who want to run EKS, but Amazon EKS Auto Mode is the recommended approach moving forward. EKS Auto Mode is fully Kubernetes conformant, supporting all upstream Kubernetes primitives and platform tools like Istio, which Fargate is unable to support. EKS Auto Mode also fully supports all EC2 runtime purchase options, including GPU and Spot instances, enabling customers to leverage negotiated EC2 discounts and other savings mechanisms. These capabilities are not available when using EKS with Fargate.

Furthermore, EKS Auto Mode allows customers to achieve the same isolation model as Fargate, using standard Kubernetes scheduling capabilities to ensure each EC2 instance runs a single application container. By adopting Amazon EKS Auto Mode, customers can unlock the full benefits of running Kubernetes on AWS — a fully Kubernetes-conformant platform that provides the flexibility to leverage the entire breadth of EC2 and purchasing options while retaining the ease of use and abstraction from infrastructure management that Fargate provides.

### Achieving Fargate-like isolation in EKS Auto Mode
<a name="_achieving_fargate_like_isolation_in_eks_auto_mode"></a>

To replicate Fargate’s pod isolation model where each pod runs on its own dedicated instance, you can use Kubernetes topology spread constraints. This is the recommended approach for controlling pod distribution across nodes:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isolated-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: isolated-app
  template:
    metadata:
      labels:
        app: isolated-app
      annotations:
        eks.amazonaws.com/compute-type: ec2
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: isolated-app
        minDomains: 1
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80
```

In this configuration:
+  `maxSkew: 1` ensures that the difference in pod count between any two nodes is at most 1, effectively distributing one pod per node
+  `topologyKey: kubernetes.io/hostname` defines the node as the topology domain
+  `whenUnsatisfiable: DoNotSchedule` prevents scheduling if the constraint cannot be met
+  `minDomains: 1` ensures at least one domain (node) exists before scheduling

EKS Auto Mode will automatically provision new EC2 instances as needed to satisfy this constraint, providing the same isolation model as Fargate while giving you access to the full range of EC2 instance types and purchasing options.

Alternatively, you can use pod anti-affinity rules for stricter isolation:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isolated-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: isolated-app
  template:
    metadata:
      labels:
        app: isolated-app
      annotations:
        eks.amazonaws.com/compute-type: ec2
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - isolated-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80
```

The `podAntiAffinity` rule with `requiredDuringSchedulingIgnoredDuringExecution` ensures that no two pods with the label `app: isolated-app` can be scheduled on the same node. This approach provides hard isolation guarantees similar to Fargate.

## Prerequisites
<a name="_prerequisites"></a>

Before beginning the migration, ensure you have:
+ Set up a cluster with Fargate. For more information, see [Get started with AWS Fargate for your cluster](fargate-getting-started.md).
+ Installed and connected `kubectl` to your cluster. For more information, see [Set up to use Amazon EKS](setting-up.md).

## Step 1: Check the Fargate cluster
<a name="_step_1_check_the_fargate_cluster"></a>

1. Check if the EKS cluster with Fargate is running:

   ```
   kubectl get node
   ```

   ```
   NAME                                     STATUS   ROLES    AGE   VERSION
   fargate-ip-192-168-92-52.ec2.internal    Ready    <none>   25m   v1.30.8-eks-2d5f260
   fargate-ip-192-168-98-196.ec2.internal   Ready    <none>   24m   v1.30.8-eks-2d5f260
   ```

1. Check running pods:

   ```
   kubectl get pod -A
   ```

   ```
   NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
   kube-system   coredns-6659cb98f6-gxpjz   1/1     Running   0          26m
   kube-system   coredns-6659cb98f6-gzzsx   1/1     Running   0          26m
   ```

1. Create a deployment in a file called `deployment_fargate.yaml`:

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-deployment
     labels:
       app: nginx
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: nginx
     template:
       metadata:
         labels:
           app: nginx
         annotations:
           eks.amazonaws.com/compute-type: fargate
       spec:
         containers:
         - name: nginx
           image: nginx
           ports:
           - containerPort: 80
   ```

1. Apply the deployment:

   ```
   kubectl apply -f deployment_fargate.yaml
   ```

   ```
   deployment.apps/nginx-deployment created
   ```

1. Check the pods and deployments:

   ```
   kubectl get pod,deploy
   ```

   ```
   NAME                                    READY   STATUS    RESTARTS   AGE
   pod/nginx-deployment-5c7479459b-6trtm   1/1     Running   0          61s
   pod/nginx-deployment-5c7479459b-g8ssb   1/1     Running   0          61s
   pod/nginx-deployment-5c7479459b-mq4mf   1/1     Running   0          61s
   
   NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/nginx-deployment   3/3     3            3           61s
   ```

1. Check the node:

   ```
   kubectl get node -owide
   ```

   ```
   NAME                                    STATUS  ROLES  AGE VERSION             INTERNAL-IP     EXTERNAL-IP OS-IMAGE       KERNEL-VERSION                  CONTAINER-RUNTIME
   fargate-ip-192-168-111-43.ec2.internal  Ready   <none> 31s v1.30.8-eks-2d5f260 192.168.111.43  <none>      Amazon Linux 2 5.10.234-225.910.amzn2.x86_64  containerd://1.7.25
   fargate-ip-192-168-117-130.ec2.internal Ready   <none> 36s v1.30.8-eks-2d5f260 192.168.117.130 <none>      Amazon Linux 2 5.10.234-225.910.amzn2.x86_64  containerd://1.7.25
   fargate-ip-192-168-74-140.ec2.internal  Ready   <none> 36s v1.30.8-eks-2d5f260 192.168.74.140  <none>      Amazon Linux 2 5.10.234-225.910.amzn2.x86_64  containerd://1.7.25
   ```

## Step 2: Enable EKS Auto Mode on the cluster
<a name="_step_2_enable_eks_auto_mode_on_the_cluster"></a>

1. Enable EKS Auto Mode on your existing cluster using the AWS CLI or Management Console. For more information, see [Enable EKS Auto Mode on an existing cluster](auto-enable-existing.md).

1. Check the nodepool:

   ```
   kubectl get nodepool
   ```

   ```
   NAME              NODECLASS   NODES   READY   AGE
   general-purpose   default     1       True    6m58s
   system            default     0       True    3d14h
   ```

## Step 3: Update workloads for migration
<a name="_step_3_update_workloads_for_migration"></a>

Identify and update the workloads you want to migrate to EKS Auto Mode.

To migrate a workload from Fargate to EKS Auto Mode, apply the annotation `eks.amazonaws.com/compute-type: ec2`. This ensures that the workload will not be scheduled by Fargate, despite the Fargate profile, and will instead be picked up by the EKS Auto Mode NodePool. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).

1. Modify your deployments (for example, the `deployment_fargate.yaml` file) to change the compute type to `ec2`:

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-deployment
     labels:
       app: nginx
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: nginx
     template:
       metadata:
         labels:
           app: nginx
         annotations:
           eks.amazonaws.com/compute-type: ec2
       spec:
         containers:
         - name: nginx
           image: nginx
           ports:
           - containerPort: 80
   ```

1. Apply the deployment. This change allows the workload to be scheduled on the new EKS Auto Mode nodes:

   ```
   kubectl apply -f deployment_fargate.yaml
   ```

1. Check that the deployment is running in the EKS Auto Mode cluster:

   ```
   kubectl get pod -o wide
   ```

   ```
   NAME                               READY   STATUS    RESTARTS   AGE     IP               NODE                  NOMINATED NODE   READINESS GATES
   nginx-deployment-97967b68d-ffxxh   1/1     Running   0          3m31s   192.168.43.240   i-0845aafcb51630ffb   <none>           <none>
   nginx-deployment-97967b68d-mbcgj   1/1     Running   0          2m37s   192.168.43.241   i-0845aafcb51630ffb   <none>           <none>
   nginx-deployment-97967b68d-qpd8x   1/1     Running   0          2m35s   192.168.43.242   i-0845aafcb51630ffb   <none>           <none>
   ```

1. Verify there is no Fargate node running and deployment running in the EKS Auto Mode managed nodes:

   ```
   kubectl get node -owide
   ```

   ```
   NAME                STATUS ROLES  AGE   VERSION             INTERNAL-IP     EXTERNAL-IP OS-IMAGE                                         KERNEL-VERSION CONTAINER-RUNTIME
   i-0845aafcb51630ffb Ready  <none> 3m30s v1.30.8-eks-3c20087 192.168.41.125  3.81.118.95 Bottlerocket (EKS Auto) 2025.3.14 (aws-k8s-1.30) 6.1.129        containerd://1.7.25+bottlerocket
   ```

## Step 4: Gradually migrate workloads
<a name="_step_4_gradually_migrate_workloads"></a>

Repeat Step 3 for each workload you want to migrate. This allows you to move workloads individually or in groups, based on your requirements and risk tolerance.

## Step 5: Remove the original fargate profile
<a name="_step_5_remove_the_original_fargate_profile"></a>

Once all workloads have been migrated, you can remove the original Fargate profile. Replace *<cluster name>* and *<fargate profile name>* with the names of your cluster and your Fargate profile:

```
aws eks delete-fargate-profile --cluster-name <cluster name> --fargate-profile-name <fargate profile name>
```

## Step 6: Scale down CoreDNS
<a name="_step_6_scale_down_coredns"></a>

Because EKS Auto Mode handles CoreDNS, you can scale the `coredns` deployment down to 0:

```
kubectl scale deployment coredns -n kube-system --replicas=0
```
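You can then verify that no `coredns` pods remain in the cluster. The `k8s-app=kube-dns` label used below is the label CoreDNS deployments conventionally carry on EKS:

```
kubectl get pods -n kube-system -l k8s-app=kube-dns
```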

# Run sample workloads in EKS Auto Mode clusters
<a name="auto-workloads"></a>

This chapter provides examples of how to deploy different types of workloads to Amazon EKS clusters running in Auto Mode. The examples demonstrate key workload patterns including sample applications, load-balanced web applications, stateful workloads using persistent storage, and workloads with specific node placement requirements. Each example includes complete manifests and step-by-step deployment instructions that you can use as templates for your own applications.

Before proceeding with the examples, ensure that you have an EKS cluster running in Auto Mode and that you have installed the AWS CLI and kubectl. For more information, see [Set up to use Amazon EKS](setting-up.md). The examples assume basic familiarity with Kubernetes concepts and kubectl commands.

You can use these use case-based samples to run workloads in EKS Auto Mode clusters.

 [Deploy a sample inflate workload to an Amazon EKS Auto Mode cluster](automode-workload.md)   
Shows how to deploy a sample workload to an EKS Auto Mode cluster using `kubectl` commands.

 [Deploy a Sample Load Balancer Workload to EKS Auto Mode](auto-elb-example.md)   
Shows how to deploy a containerized version of the 2048 game on Amazon EKS.

 [Deploy a sample stateful workload to EKS Auto Mode](sample-storage-workload.md)   
Shows how to deploy a sample stateful application to an EKS Auto Mode cluster.

 [Deploy an accelerated workload](auto-accelerated.md)   
Shows how to deploy hardware-accelerated workloads to nodes managed by EKS Auto Mode.

 [Control if a workload is deployed on EKS Auto Mode nodes](associate-workload.md)   
Shows how to use an annotation to control if a workload is deployed to nodes managed by EKS Auto Mode.

# Deploy a sample inflate workload to an Amazon EKS Auto Mode cluster
<a name="automode-workload"></a>

In this tutorial, you’ll learn how to deploy a sample workload to an EKS Auto Mode cluster and observe how it automatically provisions the required compute resources. You’ll use `kubectl` commands to watch the cluster’s behavior and see firsthand how Auto Mode simplifies Kubernetes operations on AWS. By the end of this tutorial, you’ll understand how EKS Auto Mode responds to workload deployments by automatically managing the underlying compute resources, without requiring manual node group configuration.

## Prerequisites
<a name="_prerequisites"></a>
+ An Amazon EKS Auto Mode cluster. Note the name and AWS region of the cluster.
+ An IAM principal, such as a user or role, with sufficient permissions to manage networking, compute, and EKS resources.
  + For more information, see [Creating roles and attaching policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions_create-policies.html) in the IAM User Guide.
+  `aws` CLI installed and configured with an IAM identity.
+  `kubectl` CLI installed and connected to your cluster.
  + For more information, see [Set up to use Amazon EKS](setting-up.md).

## Step 1: Review existing compute resources (optional)
<a name="_step_1_review_existing_compute_resources_optional"></a>

First, use `kubectl` to list the node pools on your cluster.

```
kubectl get nodepools
```

Sample Output:

```
general-purpose
```

In this tutorial, we will deploy a workload configured to use the `general-purpose` node pool. This node pool is built into EKS Auto Mode, and includes reasonable defaults for general workloads, such as microservices and web apps. You can create your own node pool. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).
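If the built-in defaults don't fit your workloads, a custom node pool is defined the same way. The following is a minimal sketch (the name and requirements are illustrative, not prescriptive) of a NodePool that restricts nodes to on-demand `arm64` capacity:

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: arm64-on-demand
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: "kubernetes.io/arch"
          operator: In
          values: ["arm64"]
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]
```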

Second, use `kubectl` to list the nodes connected to your cluster.

```
kubectl get nodes
```

If you just created an EKS Auto Mode cluster, you will have no nodes.

In this tutorial you will deploy a sample workload. If you have no nodes, or the workload cannot fit on existing nodes, EKS Auto Mode will provision a new node.

## Step 2: Deploy a sample application to the cluster
<a name="_step_2_deploy_a_sample_application_to_the_cluster"></a>

Review the following Kubernetes Deployment and save it as `inflate.yaml`:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
          securityContext:
            allowPrivilegeEscalation: false
```

Note that the `eks.amazonaws.com/compute-type: auto` selector requires that the workload be deployed on an Amazon EKS Auto Mode node.

Apply the Deployment to your cluster.

```
kubectl apply -f inflate.yaml
```

## Step 3: Watch Kubernetes Events
<a name="_step_3_watch_kubernetes_events"></a>

Use the following command to watch Kubernetes events, including the creation of a new node. Press `ctrl+c` to stop watching events.

```
kubectl get events -w --sort-by '.lastTimestamp'
```

Use `kubectl` to list the nodes connected to your cluster again. Note the newly created node.

```
kubectl get nodes
```

## Step 4: View nodes and instances in the AWS console
<a name="step_4_view_nodes_and_instances_in_the_shared_aws_console"></a>

You can view EKS Auto Mode nodes in the EKS console and the associated EC2 instances in the EC2 console.

EC2 Instances deployed by EKS Auto Mode are restricted. You cannot run arbitrary commands on EKS Auto Mode nodes.

## Step 5: Delete the deployment
<a name="_step_5_delete_the_deployment"></a>

Use `kubectl` to delete the sample deployment.

```
kubectl delete -f inflate.yaml
```

If you have no other workloads deployed to your cluster, the node created by EKS Auto Mode will be empty.

In the default configuration, EKS Auto Mode detects nodes that have been empty for thirty seconds and terminates them.

Use `kubectl` to confirm that the node has been removed, or the EC2 console to confirm that the associated instance has been terminated.

# Deploy a Sample Load Balancer Workload to EKS Auto Mode
<a name="auto-elb-example"></a>

This guide walks you through deploying a containerized version of the 2048 game on Amazon EKS, complete with load balancing and internet accessibility.

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS Auto Mode cluster
+  `kubectl` configured to interact with your cluster
+ Appropriate IAM permissions for creating ALB resources

## Step 1: Create the Namespace
<a name="_step_1_create_the_namespace"></a>

First, create a dedicated namespace for the 2048 game application.

Create a file named `01-namespace.yaml`:

```
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
```

Apply the namespace configuration:

```
kubectl apply -f 01-namespace.yaml
```

## Step 2: Deploy the Application
<a name="_step_2_deploy_the_application"></a>

The application runs multiple replicas of the 2048 game container.

Create a file named `02-deployment.yaml`:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
        - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
          imagePullPolicy: Always
          name: app-2048
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "0.5"
```

**Note**  
If you receive an error loading the image `public.ecr.aws/l6m2t8p7/docker-2048:latest`, confirm your Node IAM role has sufficient permissions to pull images from ECR. For more information, see [Node IAM role](auto-learn-iam.md#auto-learn-node-iam-role). Also, the `docker-2048` image in the example is an `x86_64` image and will not run on other architectures.

 **Key components:** 
+ Deploys 5 replicas of the application
+ Uses a public ECR image
+ Requests 0.5 CPU cores per pod
+ Exposes port 80 for HTTP traffic

Apply the deployment:

```
kubectl apply -f 02-deployment.yaml
```

## Step 3: Create the Service
<a name="_step_3_create_the_service"></a>

The service exposes the deployment to the cluster network.

Create a file named `03-service.yaml`:

```
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app.kubernetes.io/name: app-2048
```

 **Key components:** 
+ Creates a ClusterIP service (the default service type)
+ Maps port 80 to the container’s port 80
+ Uses label selector to find pods

Apply the service:

```
kubectl apply -f 03-service.yaml
```

## Step 4: Configure Load Balancing
<a name="_step_4_configure_load_balancing"></a>

You will set up an ingress to expose the application to the internet.

First, create the `IngressClass`. Create a file named `04-ingressclass.yaml`:

```
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/name: LoadBalancerController
  name: alb
spec:
  controller: eks.amazonaws.com/alb
```

**Note**  
EKS Auto Mode requires subnet tags to identify public and private subnets.  
If you created your cluster with `eksctl`, you already have these tags.  
Learn how to [Tag subnets for EKS Auto Mode](tag-subnets-auto.md).
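If you created the VPC yourself, you can add the required role tags with the AWS CLI. The subnet IDs below are placeholders; tag public subnets with the `elb` role and private subnets with the `internal-elb` role:

```
# Tag a public subnet for internet-facing load balancers
aws ec2 create-tags --resources subnet-0abc1234 \
  --tags Key=kubernetes.io/role/elb,Value=1

# Tag a private subnet for internal load balancers
aws ec2 create-tags --resources subnet-0def5678 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1
```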

Then create the Ingress resource. Create a file named `05-ingress.yaml`:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80
```

 **Key components:** 
+ Creates an internet-facing ALB
+ Uses IP target type for direct pod routing
+ Routes all traffic (/) to the game service

Apply the ingress configurations:

```
kubectl apply -f 04-ingressclass.yaml
kubectl apply -f 05-ingress.yaml
```

## Step 5: Verify the Deployment
<a name="_step_5_verify_the_deployment"></a>

1. Check that all pods are running:

   ```
   kubectl get pods -n game-2048
   ```

1. Verify the service is created:

   ```
   kubectl get svc -n game-2048
   ```

1. Get the ALB endpoint:

   ```
   kubectl get ingress -n game-2048
   ```

The ADDRESS field in the ingress output will show your ALB endpoint. Wait 2-3 minutes for the ALB to provision and register all targets.
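Instead of refreshing the console, you can poll the endpoint from a shell until the ALB returns HTTP 200. This sketch assumes the ingress name and namespace used in this tutorial:

```
# Poll the 2048 ingress until the ALB responds with HTTP 200
ALB=$(kubectl get ingress ingress-2048 -n game-2048 \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
until [ "$(curl -s -o /dev/null -w '%{http_code}' "http://${ALB}")" = "200" ]; do
  echo "Waiting for ALB to become ready..."
  sleep 10
done
echo "2048 is available at http://${ALB}"
```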

## Step 6: Access the Game
<a name="_step_6_access_the_game"></a>

Open your web browser and browse to the ALB endpoint URL from the earlier step. You should see the 2048 game interface.

## Step 7: Cleanup
<a name="_step_7_cleanup"></a>

To remove all resources created in this tutorial:

```
kubectl delete namespace game-2048
```

This will delete all resources in the namespace, including the deployment, service, and ingress resources.

## What’s Happening Behind the Scenes
<a name="_whats_happening_behind_the_scenes"></a>

1. The deployment creates 5 pods running the 2048 game

1. The service provides stable network access to these pods

1. EKS Auto Mode:
   + Creates an Application Load Balancer in AWS 
   + Configures target groups for the pods
   + Sets up routing rules to direct traffic to the service

## Troubleshooting
<a name="auto-elb-troubleshooting"></a>

If the game doesn’t load:
+ Ensure all pods are running: `kubectl get pods -n game-2048` 
+ Check ingress status: `kubectl describe ingress -n game-2048` 
+ Verify ALB health checks: Check the target group health in AWS Console

# Deploy a sample stateful workload to EKS Auto Mode
<a name="sample-storage-workload"></a>

This tutorial will guide you through deploying a sample stateful application to your EKS Auto Mode cluster. The application writes timestamps to a persistent volume, demonstrating EKS Auto Mode’s automatic EBS volume provisioning and persistence capabilities.

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS Auto Mode cluster
+ The AWS CLI configured with appropriate permissions
+  `kubectl` installed and configured
  + For more information, see [Set up to use Amazon EKS](setting-up.md).

## Step 1: Configure your environment
<a name="_step_1_configure_your_environment"></a>

1. Set your environment variables:

   ```
   export CLUSTER_NAME=my-auto-cluster
   export AWS_REGION="us-west-2"
   ```

1. Update your kubeconfig:

   ```
   aws eks update-kubeconfig --name "${CLUSTER_NAME}"
   ```

## Step 2: Create the storage class
<a name="_step_2_create_the_storage_class"></a>

The `StorageClass` defines how EKS Auto Mode will provision EBS volumes.

EKS Auto Mode does not create a `StorageClass` for you. You must create a `StorageClass` referencing `ebs.csi.eks.amazonaws.com` to use the storage capability of EKS Auto Mode.

1. Create a file named `storage-class.yaml`:

   ```
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: auto-ebs-sc
     annotations:
       storageclass.kubernetes.io/is-default-class: "true"
   provisioner: ebs.csi.eks.amazonaws.com
   volumeBindingMode: WaitForFirstConsumer
   parameters:
     type: gp3
     encrypted: "true"
   ```

1. Apply the `StorageClass`:

   ```
   kubectl apply -f storage-class.yaml
   ```

 **Key components:** 
+  `provisioner: ebs.csi.eks.amazonaws.com` - Uses EKS Auto Mode
+  `volumeBindingMode: WaitForFirstConsumer` - Delays volume creation until a pod needs it
+  `type: gp3` - Specifies the EBS volume type
+  `encrypted: "true"` - EBS will use the default `aws/ebs` key to encrypt volumes created with this class. This is optional, but recommended.
+  `storageclass.kubernetes.io/is-default-class: "true"` - Kubernetes uses this storage class by default unless you specify a different storage class on a persistent volume claim. Use caution when setting this value if you are migrating from another storage controller. (optional)

## Step 3: Create the persistent volume claim
<a name="_step_3_create_the_persistent_volume_claim"></a>

The PVC requests storage from the `StorageClass`.

1. Create a file named `pvc.yaml`:

   ```
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: auto-ebs-claim
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: auto-ebs-sc
     resources:
       requests:
         storage: 8Gi
   ```

1. Apply the PVC:

   ```
   kubectl apply -f pvc.yaml
   ```

 **Key components:** 
+  `accessModes: ReadWriteOnce` - Volume can be mounted by one node at a time
+  `storage: 8Gi` - Requests an 8 GiB volume
+  `storageClassName: auto-ebs-sc` - References the `StorageClass` we created

## Step 4: Deploy the Application
<a name="_step_4_deploy_the_application"></a>

The Deployment runs a container that writes timestamps to the persistent volume.

1. Create a file named `deployment.yaml`:

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: inflate-stateful
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: inflate-stateful
     template:
       metadata:
         labels:
           app: inflate-stateful
       spec:
         terminationGracePeriodSeconds: 0
         nodeSelector:
           eks.amazonaws.com/compute-type: auto
         containers:
           - name: bash
             image: public.ecr.aws/docker/library/bash:4.4
             command: ["/usr/local/bin/bash"]
             args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 60; done"]
             resources:
               requests:
                 cpu: "1"
             volumeMounts:
               - name: persistent-storage
                 mountPath: /data
         volumes:
           - name: persistent-storage
             persistentVolumeClaim:
               claimName: auto-ebs-claim
   ```

1. Apply the Deployment:

   ```
   kubectl apply -f deployment.yaml
   ```

 **Key components:** 
+ Simple bash container that writes timestamps to a file
+ Mounts the PVC at `/data` 
+ Requests 1 CPU core
+ Uses a node selector to target EKS Auto Mode nodes

## Step 5: Verify the Setup
<a name="_step_5_verify_the_setup"></a>

1. Check that the pod is running:

   ```
   kubectl get pods -l app=inflate-stateful
   ```

1. Verify the PVC is bound:

   ```
   kubectl get pvc auto-ebs-claim
   ```

1. Check the EBS volume:

   ```
   # Get the PV name
   PV_NAME=$(kubectl get pvc auto-ebs-claim -o jsonpath='{.spec.volumeName}')
   # Describe the EBS volume
   aws ec2 describe-volumes \
     --filters Name=tag:CSIVolumeName,Values=${PV_NAME}
   ```

1. Verify data is being written:

   ```
   kubectl exec "$(kubectl get pods -l app=inflate-stateful \
     -o=jsonpath='{.items[0].metadata.name}')" -- \
     cat /data/out.txt
   ```

## Step 6: Cleanup
<a name="_step_6_cleanup"></a>

Run the following command to remove all resources created in this tutorial:

```
# Delete all resources in one command
kubectl delete deployment/inflate-stateful pvc/auto-ebs-claim storageclass/auto-ebs-sc
```

## What’s Happening Behind the Scenes
<a name="_whats_happening_behind_the_scenes"></a>

1. The PVC requests storage from the `StorageClass` 

1. When the Pod is scheduled:

   1. EKS Auto Mode provisions an EBS volume

   1. Creates a PersistentVolume

   1. Attaches the volume to the node

1. The Pod mounts the volume and begins writing timestamps

## Snapshot Controller
<a name="_snapshot_controller"></a>

EKS Auto Mode is compatible with the Kubernetes CSI Snapshotter, also known as the snapshot controller. However, EKS Auto Mode does not include the snapshot controller. You are responsible for installing and configuring the snapshot controller. For more information, see [Enable snapshot functionality for CSI volumes](csi-snapshot-controller.md).

Review the following `VolumeSnapshotClass` that references the storage capability of EKS Auto Mode.

```
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: auto-ebs-vsclass
driver: ebs.csi.eks.amazonaws.com
deletionPolicy: Delete
```

 [Learn more about the Kubernetes CSI Snapshotter.](https://github.com/kubernetes-csi/external-snapshotter/blob/master/README.md#usage) 
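With the snapshot controller installed, a `VolumeSnapshot` that uses this class to snapshot the PVC from this tutorial might look like the following sketch (the snapshot name is illustrative):

```
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: auto-ebs-snapshot
spec:
  volumeSnapshotClassName: auto-ebs-vsclass
  source:
    persistentVolumeClaimName: auto-ebs-claim
```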

# Deploy an accelerated workload
<a name="auto-accelerated"></a>

This tutorial demonstrates how Amazon EKS Auto Mode simplifies launching hardware-accelerated workloads. Amazon EKS Auto Mode streamlines operations beyond the cluster itself by automating key infrastructure components, providing compute, networking, load balancing, storage, and identity and access management capabilities out of the box.

Amazon EKS Auto Mode includes the drivers and device plugins required for certain instance types, such as NVIDIA and AWS Neuron drivers. You do not have to install or update these components.

EKS Auto Mode automatically manages drivers for these accelerators:
+  [AWS Trainium](https://aws.amazon.com/ai/machine-learning/trainium/) 
+  [AWS Inferentia](https://aws.amazon.com/ai/machine-learning/inferentia/) 
+  [NVIDIA GPUs on Amazon EC2 accelerated instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/ac.html) 

**Note**  
EKS Auto Mode includes the NVIDIA device plugin for Kubernetes. This plugin runs automatically and isn’t visible as a daemon set in your cluster.

Additional networking support:
+  [Elastic Fabric Adapter (EFA)](https://aws.amazon.com/hpc/efa/) 

Amazon EKS Auto Mode eliminates the toil of accelerator driver and device plugin management.

You can also benefit from cost savings by scaling the cluster to zero. You can configure EKS Auto Mode to terminate instances when no workloads are running, which is useful for batch-based inference workloads.
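Scale-to-zero behavior is controlled by a node pool's disruption settings. As an illustrative fragment of a NodePool spec (tune the values for your workloads), removing nodes shortly after they become empty looks like this:

```
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: 5m
```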

The following provides an example of how to launch accelerated workloads with Amazon EKS Auto Mode.

## Prerequisites
<a name="_prerequisites"></a>
+ A Kubernetes cluster with Amazon EKS Auto Mode configured.
+ A `default` EKS Node Class, as created when the `general-purpose` or `system` managed node pools are enabled.

## Step 1: Deploy a GPU workload
<a name="_step_1_deploy_a_gpu_workload"></a>

In this example, you will create a NodePool for NVIDIA-based workloads that require 45 GB of GPU memory. With EKS Auto Mode, you use Kubernetes scheduling constraints to define your instance requirements.

To deploy the Amazon EKS Auto Mode `NodePool` and the sample workload, review the following NodePool and Pod definitions and save them as `nodepool-gpu.yaml` and `pod.yaml`:

 **nodepool-gpu.yaml** 

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gpu
spec:
  disruption:
    budgets:
    - nodes: 10%
    consolidateAfter: 1h
    consolidationPolicy: WhenEmpty
  template:
    metadata: {}
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]
        - key: "kubernetes.io/arch"
          operator: In
          values: ["amd64"]
        - key: "eks.amazonaws.com/instance-family"
          operator: In
          values:
          - g6e
          - g6
      taints:
        - key: nvidia.com/gpu
          effect: NoSchedule
      terminationGracePeriod: 24h0m0s
```

 **pod.yaml** 

```
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi
spec:
  nodeSelector:
    eks.amazonaws.com/compute-type: auto
  restartPolicy: OnFailure
  containers:
  - name: nvidia-smi
    image: public.ecr.aws/amazonlinux/amazonlinux:2023-minimal
    args:
    - "nvidia-smi"
    resources:
      requests:
        memory: "30Gi"
        cpu: "3500m"
        nvidia.com/gpu: 1
      limits:
        memory: "30Gi"
        nvidia.com/gpu: 1
  tolerations:
  - key: nvidia.com/gpu
    effect: NoSchedule
    operator: Exists
```

Note that the `eks.amazonaws.com/compute-type: auto` selector requires that the workload be deployed on an Amazon EKS Auto Mode node. The NodePool also sets a taint so that only pods with a toleration for NVIDIA GPUs can be scheduled.

Apply the NodePool and workload to your cluster.

```
kubectl apply -f nodepool-gpu.yaml
kubectl apply -f pod.yaml
```

You should see the following output:

```
nodepool.karpenter.sh/gpu created
pod/nvidia-smi created
```

Wait a few seconds, and then check the node claims in your cluster. You should see a new node provisioned in your Amazon EKS Auto Mode cluster:

```
> kubectl get nodeclaims

NAME        TYPE          CAPACITY    ZONE         NODE                  READY   AGE
gpu-dnknr   g6e.2xlarge   on-demand   us-west-2b   i-02315c7d7643cdee6   True    76s
```

## Step 2: Validate
<a name="_step_2_validate"></a>

You can see that Amazon EKS Auto Mode launched a `g6e.2xlarge` rather than a `g6.2xlarge`, because the workload required an instance with an L40S GPU, according to the following Kubernetes scheduling constraints:

```
...
  nodeSelector:
    eks.amazonaws.com/instance-gpu-name: l40s
...
    requests:
        memory: "30Gi"
        cpu: "3500m"
        nvidia.com/gpu: 1
      limits:
        memory: "30Gi"
        nvidia.com/gpu: 1
```

Now look at the container logs by running the following command:

```
kubectl logs nvidia-smi
```

Sample output:

```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.230.02             Driver Version: 535.230.02   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA L40S                    On  | 00000000:30:00.0 Off |                    0 |
| N/A   27C    P8              23W / 350W |      0MiB / 46068MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
```

You can see that the container detected that it's running on an instance with an NVIDIA GPU, without you having to install any device drivers; Amazon EKS Auto Mode manages them for you.

## Step 3: Clean-up
<a name="_step_3_clean_up"></a>

To remove all objects created, use `kubectl` to delete the sample pod and NodePool so that the node is terminated:

```
kubectl delete -f nodepool-gpu.yaml
kubectl delete -f pod.yaml
```

## Example NodePools Reference
<a name="_example_nodepools_reference"></a>

### Create an NVIDIA NodePool
<a name="_create_an_nvidia_nodepool"></a>

The following NodePool:
+ Launches only instances of the `g6e` and `g6` families
+ Consolidates nodes after they have been empty for 1 hour
  + The 1-hour value for `consolidateAfter` supports spiky workloads and reduces node churn. You can tune `consolidateAfter` based on your workload requirements.

 **Example NodePool with GPU instance family and consolidation** 

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gpu
spec:
  disruption:
    budgets:
    - nodes: 10%
    consolidateAfter: 1h
    consolidationPolicy: WhenEmpty
  template:
    metadata: {}
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]
        - key: "kubernetes.io/arch"
          operator: In
          values: ["amd64"]
        - key: "eks.amazonaws.com/instance-family"
          operator: In
          values:
          - g6e
          - g6
      terminationGracePeriod: 24h0m0s
```

Instead of setting `eks.amazonaws.com/instance-gpu-name`, you might use `eks.amazonaws.com/instance-family` to specify the instance family. For other well-known labels that influence scheduling, see [EKS Auto Mode Supported Labels](create-node-pool.md#auto-supported-labels).
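For example, to target a specific GPU model rather than an instance family, a NodePool requirement (fragment shown) can use the GPU name label:

```
        - key: "eks.amazonaws.com/instance-gpu-name"
          operator: In
          values: ["l40s"]
```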

If you have specific storage requirements, you can tune the node's ephemeral storage `iops`, `size`, and `throughput` by creating your own [NodeClass](create-node-class.md) to reference in the NodePool. Learn more about the [configurable NodeClass options](create-node-class.md).

 **Example storage configuration for NodeClass** 

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: gpu
spec:
  ephemeralStorage:
    iops: 3000
    size: 80Gi
    throughput: 125
```

### Define an AWS Trainium and AWS Inferentia NodePool
<a name="define_an_shared_aws_trainium_and_shared_aws_inferentia_nodepool"></a>

The following NodePool requirement uses `eks.amazonaws.com/instance-category` to launch only instances of the Inferentia and Trainium families:

```
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values:
            - inf
            - trn
```
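A pod scheduled onto such nodes requests Neuron devices through the `aws.amazon.com/neuron` extended resource. The following is an illustrative sketch (the pod name and image are placeholders), not a complete Neuron workload:

```
apiVersion: v1
kind: Pod
metadata:
  name: neuron-example
spec:
  nodeSelector:
    eks.amazonaws.com/compute-type: auto
  containers:
    - name: app
      image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
      resources:
        requests:
          aws.amazon.com/neuron: 1
        limits:
          aws.amazon.com/neuron: 1
```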

# Configure EKS Auto Mode settings
<a name="settings-auto"></a>

This chapter describes how to configure specific aspects of your Amazon Elastic Kubernetes Service (EKS) Auto Mode clusters. While EKS Auto Mode manages most infrastructure components automatically, you can customize certain features to meet your workload requirements.

Using the configuration options described in this topic, you can modify networking settings, compute resources, and load balancing behaviors while maintaining the benefits of automated infrastructure management. Before making any configuration changes, review the available options in the following sections to determine which approach best suits your needs.


| What features do you want to configure? | Configuration option | 
| --- | --- | 
|   **Node networking and storage**  |   [Create a Node Class for Amazon EKS](create-node-class.md)   | 
|   **Node compute resources**  |   [Create a Node Pool for EKS Auto Mode](create-node-pool.md)   | 
|   **Static-capacity node pools**  |   [Static Capacity Node Pools in EKS Auto Mode](auto-static-capacity.md)   | 
|   **Application Load Balancer settings**  |   [Create an IngressClass to configure an Application Load Balancer](auto-configure-alb.md)   | 
|   **Network Load Balancer settings**  |   [Use Service Annotations to configure Network Load Balancers](auto-configure-nlb.md)   | 
|   **Storage Class settings**  |   [Create a storage class](create-storage-class.md)   | 
|   **Control ODCR usage**  |   [Control deployment of workloads into Capacity Reservations with EKS Auto Mode](auto-odcr.md)   | 
|   **Node advanced security**  |   [Configure advanced security settings for nodes](auto-advanced-security.md)   | 

# Create a Node Class for Amazon EKS
<a name="create-node-class"></a>

Amazon EKS Node Classes are templates that offer granular control over the configuration of your EKS Auto Mode managed nodes. A Node Class defines infrastructure-level settings that apply to groups of nodes in your EKS cluster, including network configuration, storage settings, and resource tagging. This topic explains how to create and configure a Node Class to meet your specific operational requirements.

When you need to customize how EKS Auto Mode provisions and configures EC2 instances beyond the default settings, creating a Node Class gives you precise control over critical infrastructure parameters. For example, you can specify private subnet placement for enhanced security, configure instance ephemeral storage for performance-sensitive workloads, or apply custom tagging for cost allocation.

## Create a Node Class
<a name="_create_a_node_class"></a>

To create a `NodeClass`, follow these steps:

1. Create a YAML file (for example, `nodeclass.yaml`) with your Node Class configuration

1. Apply the configuration to your cluster using `kubectl` 

1. Reference the Node Class in your Node Pool configuration. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).

You need `kubectl` installed and configured. For more information, see [Set up to use Amazon EKS](setting-up.md).

### Basic Node Class Example
<a name="_basic_node_class_example"></a>

Here’s an example Node Class:

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: private-compute
spec:
  subnetSelectorTerms:
    - tags:
        Name: "private-subnet"
        kubernetes.io/role/internal-elb: "1"
  securityGroupSelectorTerms:
    - tags:
        Name: "eks-cluster-sg"
  ephemeralStorage:
    size: "160Gi"
```

This NodeClass increases the amount of ephemeral storage on the node.

Apply this configuration by using:

```
kubectl apply -f nodeclass.yaml
```

Next, reference the Node Class in your Node Pool configuration. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).
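As a sketch, a node pool references the `private-compute` Node Class above through `nodeClassRef` in its template (the pool name and requirement are illustrative):

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: private-compute-pool
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: private-compute
      requirements:
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]
```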

## Create node class access entry
<a name="auto-node-access-entry"></a>

If you create a custom node class, you need to create an EKS Access Entry to permit the nodes to join the cluster. EKS automatically creates access entries when you use the built-in node class and node pools.

For information about how Access Entries work, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md).

When creating access entries for EKS Auto Mode node classes, you need to use the `EC2` access entry type.

### Create access entry with CLI
<a name="_create_access_entry_with_cli"></a>

 **To create an access entry for EC2 nodes and associate the EKS Auto Node Policy:** 

Update the following CLI commands with your cluster name and node role ARN. The node role ARN is specified in the node class YAML.

```
# Create the access entry for EC2 nodes
aws eks create-access-entry \
  --cluster-name <cluster-name> \
  --principal-arn <node-role-arn> \
  --type EC2

# Associate the auto node policy
aws eks associate-access-policy \
  --cluster-name <cluster-name> \
  --principal-arn <node-role-arn> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy \
  --access-scope type=cluster
```

### Create access entry with CloudFormation
<a name="_create_access_entry_with_cloudformation"></a>

 **To create an access entry for EC2 nodes and associate the EKS Auto Node Policy:** 

Update the following CloudFormation template with your cluster name and node role ARN. The node role ARN is specified in the node class YAML.

```
EKSAutoNodeRoleAccessEntry:
  Type: AWS::EKS::AccessEntry
  Properties:
    ClusterName: <cluster-name>
    PrincipalArn: <node-role-arn>
    Type: "EC2"
    AccessPolicies:
      - AccessScope:
          Type: cluster
        PolicyArn: arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy
  DependsOn: [ <cluster-name> ] # previously defined in CloudFormation
```

For information about deploying CloudFormation stacks, see [Getting started with CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/GettingStarted.html).

## Node Class Specification
<a name="auto-node-class-spec"></a>

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: my-node-class
spec:
  # Required fields

  # role and instanceProfile are mutually exclusive fields.
  role: MyNodeRole  # IAM role for EC2 instances
  # instanceProfile: eks-MyNodeInstanceProfile  # IAM instance-profile for EC2 instances

  subnetSelectorTerms:
    - tags:
        Name: "private-subnet"
        kubernetes.io/role/internal-elb: "1"
    # Alternative using direct subnet ID
    # - id: "subnet-0123456789abcdef0"

  securityGroupSelectorTerms:
    - tags:
        Name: "eks-cluster-sg"
    # Alternative approaches:
    # - id: "sg-0123456789abcdef0"
    # - name: "eks-cluster-security-group"

  # Optional: Pod subnet selector for advanced networking
  podSubnetSelectorTerms:
    - tags:
        Name: "pod-subnet"
        kubernetes.io/role/pod: "1"
    # Alternative using direct subnet ID
    # - id: "subnet-0987654321fedcba0"
  # If you set podSubnetSelectorTerms, you must also set podSecurityGroupSelectorTerms
  podSecurityGroupSelectorTerms:
    - tags:
        Name: "eks-pod-sg"
    # Alternative using direct security group ID
    # - id: "sg-0123456789abcdef0"

  # Optional: Selects on-demand capacity reservations and capacity blocks
  # for EKS Auto Mode to prioritize.
  capacityReservationSelectorTerms:
    - id: cr-56fac701cc1951b03
    # Alternative Approaches
    - tags:
        Name: "targeted-odcr"
      # Optional owning account ID filter
      owner: "012345678901"

  # Optional fields
  snatPolicy: Random  # or Disabled

  networkPolicy: DefaultAllow  # or DefaultDeny
  networkPolicyEventLogs: Disabled  # or Enabled

  ephemeralStorage:
    size: "80Gi"    # Range: 1-59000Gi or 1-64000G or 1-58Ti or 1-64T
    iops: 3000      # Range: 3000-16000
    throughput: 125 # Range: 125-1000
    # Optional KMS key for encryption
    kmsKeyID: "arn:aws:kms:region:account:key/key-id"
    # Accepted formats:
    # KMS Key ID
    # KMS Key ARN
    # Key Alias Name
    # Key Alias ARN

  advancedNetworking:
    # Optional: Controls whether public IP addresses are assigned to instances that are launched with the nodeclass.
    # If not set, defaults to the MapPublicIpOnLaunch setting on the subnet.
    associatePublicIPAddress: false

    # Optional: Forward proxy, commonly requires certificateBundles as well
    # for EC2, see https://repost.aws/knowledge-center/eks-http-proxy-containerd-automation
    httpsProxy: http://192.0.2.4:3128 # commonly port 3128 (Squid) or 8080 (NGINX); max 255 characters
    #httpsProxy: http://[2001:db8::4]:3128 # enclose IPv6 addresses in [] before the port
    noProxy: # max 50 entries
        - localhost # max 255 characters each
        - 127.0.0.1
        #- ::1 # IPv6 localhost
        #- 0:0:0:0:0:0:0:1 # IPv6 localhost
        - 169.254.169.254 # EC2 Instance Metadata Service
        #- [fd00:ec2::254] # IPv6 EC2 Instance Metadata Service
        # Domains to exclude; put all VPC endpoints here
        - .internal
        - .eks.amazonaws.com
    # ipv4PrefixSize defaults to Auto, which assigns /28 prefixes and falls back to
    # secondary IPs. Set "32" to use only secondary IPs.
    ipv4PrefixSize: Auto # or "32"

    # enableV4Egress defaults to true. Set it to false to block IPv4 egress in IPv6
    # clusters, for example when enforcing network policies.
    enableV4Egress: false

  advancedSecurity:
    # Optional, US regions only: Setting fips: true causes nodes in the NodeClass to run FIPS-compatible AMIs.
    fips: false

  # Optional: Custom certificate bundles.
  certificateBundles:
    - name: "custom-cert"
      data: "base64-encoded-cert-data"

  # Optional: Additional EC2 tags (with restrictions)
  tags:
    Environment: "production"
    Team: "platform"
    # Note: Cannot use restricted tags like:
    # - kubernetes.io/cluster/*
    # - karpenter.sh/provisioner-name
    # - karpenter.sh/nodepool
    # - karpenter.sh/nodeclaim
    # - karpenter.sh/managed-by
    # - eks.amazonaws.com/nodeclass
```

## Considerations
<a name="_considerations"></a>
+ If you want to verify how much local storage an instance has, run `kubectl describe node` against the node and check the `ephemeral-storage` capacity in the output.
+  **Volume Encryption** - EKS uses the configured custom KMS key to encrypt the read-only root volume of the instance and the read/write data volume.
+  **Replace the node IAM role** - If you change the node IAM role associated with a `NodeClass`, you will need to create a new Access Entry. EKS automatically creates an Access Entry for the node IAM role during cluster creation. The node IAM role requires the `AmazonEKSAutoNodePolicy` EKS Access Policy. For more information, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md).
+  **Maximum Pod density** - EKS limits the maximum number of Pods on a node to 110. This limit is applied after the existing max Pods calculation. For more information, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md).
+  **Tags** - If you want to propagate tags from Kubernetes to EC2, you need to configure additional IAM permissions. For more information, see [Learn about identity and access in EKS Auto Mode](auto-learn-iam.md).
+  **Default node class** - Do not name your custom node class `default`. This is because EKS Auto Mode includes a `NodeClass` called `default` that is automatically provisioned when you enable at least one built-in `NodePool`. For information about enabling built-in `NodePools`, see [Enable or Disable Built-in NodePools](set-builtin-node-pools.md).
+  **`subnetSelectorTerms` behavior with multiple subnets** - If multiple subnets match the `subnetSelectorTerms` conditions, or you provide multiple subnets by ID, EKS Auto Mode distributes nodes across those subnets.
  + If the subnets are in different Availability Zones (AZs), you can use Kubernetes features like [Pod topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#pod-topology-spread-constraints) and [Topology Aware Routing](https://kubernetes.io/docs/concepts/services-networking/topology-aware-routing/) to spread Pods and traffic across the zones, respectively.
  + If there are multiple subnets *in the same AZ* that match the `subnetSelectorTerms`, EKS Auto Mode distributes the Pods on each node across those subnets. It creates secondary network interfaces on each node in the other subnets in the same AZ, choosing subnets based on the number of available IP addresses in each, to use the subnets more efficiently. However, you can’t specify which subnet EKS Auto Mode uses for each Pod; if you need Pods to run in specific subnets, use [Separate subnets and security groups for Pods](#pod-subnet-selector) instead.
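For the multi-AZ case, you can spread a workload's Pods across zones with a standard Kubernetes topology spread constraint. A minimal sketch (the Deployment name, label, and image are illustrative):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1 # zones may differ by at most one Pod
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: app
          image: my-app:latest # illustrative image
```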

## Separate subnets and security groups for Pods
<a name="pod-subnet-selector"></a>

The `podSubnetSelectorTerms` and `podSecurityGroupSelectorTerms` fields enable advanced networking configurations by allowing Pods to use different subnets and security groups than their nodes. Both fields must be specified together. This separation provides enhanced control over network traffic routing and security policies.

**Note**  
This feature is different from the [Security Groups for Pods](security-groups-for-pods.md) (SGPP) feature used with the AWS VPC CNI for non-EKS Auto Mode compute. SGPP is not supported in EKS Auto Mode. Instead, use `podSecurityGroupSelectorTerms` in the `NodeClass` to apply separate security groups to Pod traffic. The security groups apply at the `NodeClass` level, meaning all Pods on nodes using that `NodeClass` share the same Pod security groups.

### How it works
<a name="_how_it_works"></a>

When you configure `podSubnetSelectorTerms` and `podSecurityGroupSelectorTerms`:

1. The node’s primary ENI uses the subnets and security groups from `subnetSelectorTerms` and `securityGroupSelectorTerms`. Only the node’s own IP address is assigned to this interface.

1. EKS Auto Mode creates secondary ENIs in the subnets matching `podSubnetSelectorTerms`, with the security groups from `podSecurityGroupSelectorTerms` attached. Pod IP addresses are allocated from these secondary ENIs using /28 prefixes by default, with automatic fallback to secondary IPs (/32) when a contiguous prefix block is not available. If `ipv4PrefixSize` is set to `"32"` in `advancedNetworking`, only secondary IPs are used.

1. The security groups specified in `podSecurityGroupSelectorTerms` apply to Pod traffic within the VPC. For traffic destined outside the VPC, Pods use the node’s primary ENI (and its security groups) because source network address translation (SNAT) translates the Pod IP to the node IP. You can modify this behavior with the `snatPolicy` field in the `NodeClass`.
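For example, to keep Pod IP addresses on traffic that leaves the VPC, you can disable SNAT in the NodeClass. A sketch of the relevant fragment:

```
spec:
  # Pods use their own IP addresses for all traffic, including traffic
  # that leaves the VPC; ensure routing and security groups allow this
  snatPolicy: Disabled
```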

### Use cases
<a name="_use_cases"></a>

Use `podSubnetSelectorTerms` and `podSecurityGroupSelectorTerms` when you need to:
+ Apply different security groups to control traffic for nodes and Pods separately.
+ Separate infrastructure traffic (node-to-node communication) from application traffic (Pod-to-Pod communication).
+ Apply different network configurations to node subnets than Pod subnets.
+ Configure reverse proxies or network filtering specifically for node traffic without affecting Pod traffic. Use `advancedNetworking` and `certificateBundles` to define your reverse proxy and any self-signed or private certificates for the proxy.

### Example configuration
<a name="_example_configuration"></a>

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: advanced-networking
spec:
  role: MyNodeRole

  # Subnets and security groups for EC2 instances (nodes)
  subnetSelectorTerms:
    - tags:
        Name: "node-subnet"
        kubernetes.io/role/internal-elb: "1"

  securityGroupSelectorTerms:
    - tags:
        Name: "eks-cluster-sg"

  # Separate subnets and security groups for Pods
  podSubnetSelectorTerms:
    - tags:
        Name: "pod-subnet"
        kubernetes.io/role/pod: "1"

  podSecurityGroupSelectorTerms:
  - tags:
      Name: "eks-pod-sg"
```

### Considerations for separate Pod subnets and security groups
<a name="_considerations_for_separate_pod_subnets_and_security_groups"></a>
+  **Security group scope**: The security groups from `podSecurityGroupSelectorTerms` are attached to the secondary ENIs and apply to Pod traffic within the VPC. When SNAT is enabled (the default `snatPolicy: Random`), traffic leaving the VPC is translated to the node’s primary ENI IP address, so the node’s security groups from `securityGroupSelectorTerms` apply to that traffic instead. If you set `snatPolicy: Disabled`, Pods use their own IP addresses for all traffic, and you must ensure that routing and security groups are configured accordingly.
+  **NodeClass-level granularity**: The Pod security groups apply to all Pods scheduled on nodes using the `NodeClass`. To apply different security groups to different workloads, create separate `NodeClass` and `NodePool` resources and use taints, tolerations, or node selectors to schedule workloads to the appropriate nodes.
+  **Reduced Pod density**: Fewer Pods can run on each node because the primary network interface of the node is reserved for the node IP and can’t be used for Pods.
+  **Subnet selector limitations**: The standard `subnetSelectorTerms` and `securityGroupSelectorTerms` configurations don’t apply to Pod subnet or security group selection.
+  **Network planning**: Ensure adequate IP address space in both node and Pod subnets to support your workload requirements.
+  **Routing configuration**: Verify that the route tables and network access control lists (ACLs) of the Pod subnets are properly configured for communication between node and Pod subnets.
+  **Availability Zones**: Verify that you’ve created Pod subnets across multiple AZs. A specific Pod subnet must be in the same AZ as the node subnet.

## Secondary IP Mode for Pods
<a name="secondary-IP-mode"></a>

The `ipv4PrefixSize` field enables advanced networking configurations by allocating only secondary IP addresses to nodes. This mode doesn’t allocate /28 prefixes to nodes and keeps only one secondary IP warm per node as the minimal IP target.

### Use cases
<a name="_use_cases_2"></a>

Use `ipv4PrefixSize` when you need to:
+  **Reduce IP utilization**: Only one IP address is warmed up on every node.
+  **Accept a lower Pod churn rate**: Pod creation velocity is not a major concern for your workloads.
+  **Avoid prefix fragmentation**: Fragmentation caused by prefix allocation is a major concern or a blocker to using Auto Mode.

### Example configuration
<a name="_example_configuration_2"></a>

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: advanced-networking
spec:
  role: MyNodeRole

  advancedNetworking:
    ipv4PrefixSize: "32"
```

### Considerations for secondary IP mode
<a name="_considerations_for_secondary_ip_mode"></a>
+  **Reduced Pod creation velocity**: Because only one secondary IP is warmed up, the IPAM service needs more time to provision IPs when many Pods are created at once.

## Disable IPv4 egress from IPv6 pods in IPv6 clusters
<a name="enableV4Egress"></a>

The `enableV4Egress` field is `true` by default. For Auto Mode IPv6 clusters, the feature can be disabled so that Auto Mode won’t create an egress-only IPv4 interface for IPv6 pods. This is important because the IPv4 egress interface is not subject to Network Policy enforcement. Network policies are only enforced on the Pod’s primary interface (eth0).

### Use cases
<a name="_use_cases_3"></a>

Set `enableV4Egress` to `false` when you:
+  **Run an IPv6 cluster**: IPv4 egress traffic is otherwise allowed by default.
+  **Use Network Policy**: EKS Network Policy doesn’t currently support dual stack. Disabling IPv4 egress prevents Pod traffic from unexpectedly bypassing policies over IPv4.

### Example configuration
<a name="_example_configuration_3"></a>

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: advanced-networking
spec:
  role: MyNodeRole

  advancedNetworking:
    enableV4Egress: false
```

### Considerations for disabling enableV4Egress
<a name="_considerations_for_disabling_enablev4egress"></a>
+  **Network Policy in IPv6 Cluster**: IPv6 clusters allow IPv4 traffic by default. Setting `enableV4Egress: false` blocks IPv4 egress traffic, providing enhanced security especially when used with Network Policies.

# Create a Node Pool for EKS Auto Mode
<a name="create-node-pool"></a>

Amazon EKS node pools offer a flexible way to manage compute resources in your Kubernetes cluster. This topic demonstrates how to create and configure node pools by using Karpenter, a node provisioning tool that helps optimize cluster scaling and resource utilization. With Karpenter’s NodePool resource, you can define specific requirements for your compute resources, including instance types, availability zones, architectures, and capacity types.

You cannot modify the built-in `system` and `general-purpose` node pools. You can only enable or disable them. For more information, see [Enable or Disable Built-in NodePools](set-builtin-node-pools.md).

The NodePool specification allows for fine-grained control over your EKS cluster’s compute resources through various supported labels and requirements. These include options for specifying EC2 instance categories, CPU configurations, availability zones, architectures (ARM64/AMD64), and capacity types (spot or on-demand). You can also set resource limits for CPU and memory usage, ensuring your cluster stays within required operational boundaries.

EKS Auto Mode leverages well-known Kubernetes labels to provide consistent and standardized ways of identifying node characteristics. These labels, such as `topology.kubernetes.io/zone` for availability zones and `kubernetes.io/arch` for CPU architecture, follow established Kubernetes conventions. Additionally, EKS-specific labels (prefixed with `eks.amazonaws.com/`) extend this functionality with AWS-specific attributes such as instance types, CPU manufacturers, GPU capabilities, and networking specifications. This standardized labeling system enables seamless integration with existing Kubernetes tools while providing deep AWS infrastructure integration.
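For example, a workload can pin itself to nodes with particular characteristics by using these labels in a standard `nodeSelector`. A minimal sketch (the Pod name and image are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: arm64-on-demand-app
spec:
  nodeSelector:
    kubernetes.io/arch: arm64             # well-known Kubernetes label
    karpenter.sh/capacity-type: on-demand # avoid Spot interruptions
  containers:
    - name: app
      image: my-app:latest # illustrative image
```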

## Create a NodePool
<a name="_create_a_nodepool"></a>

Follow these steps to create a NodePool for your Amazon EKS cluster:

1. Create a YAML file named `nodepool.yaml` with your required NodePool configuration. You can use the sample configuration below.

1. Apply the NodePool to your cluster:

   ```
   kubectl apply -f nodepool.yaml
   ```

1. Verify that the NodePool was created successfully:

   ```
   kubectl get nodepools
   ```

1. (Optional) Monitor the NodePool status:

   ```
   kubectl describe nodepool default
   ```

Ensure that your NodePool references a valid NodeClass that exists in your cluster. The NodeClass defines AWS-specific configurations for your compute resources. For more information, see [Create a Node Class for Amazon EKS](create-node-class.md).

## Sample NodePool
<a name="_sample_nodepool"></a>

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: my-node-pool
spec:
  template:
    metadata:
      labels:
        billing-team: my-team
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default

      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c", "m", "r"]
        - key: "eks.amazonaws.com/instance-cpu"
          operator: In
          values: ["4", "8", "16", "32"]
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-west-2a", "us-west-2b"]
        - key: "kubernetes.io/arch"
          operator: In
          values: ["arm64", "amd64"]

  limits:
    cpu: "1000"
    memory: 1000Gi
```

## EKS Auto Mode Supported Labels
<a name="auto-supported-labels"></a>

EKS Auto Mode supports the following well-known labels.

**Note**  
EKS Auto Mode uses different labels than Karpenter. Labels related to EC2 managed instances start with `eks.amazonaws.com`.


| Label | Example | Description | 
| --- | --- | --- | 
|  topology.kubernetes.io/zone  |  us-east-2a  |  AWS Availability Zone  | 
|  node.kubernetes.io/instance-type  |  g4dn.8xlarge  |  AWS instance type  | 
|  kubernetes.io/arch  |  amd64  |  Architectures are defined by [GOARCH values](https://github.com/golang/go/blob/master/src/internal/syslist/syslist.go#L58) on the instance  | 
|  karpenter.sh/capacity-type  |  spot  |  Capacity types include `spot`, `on-demand`  | 
|  eks.amazonaws.com/instance-hypervisor  |  nitro  |  Instance types that use a specific hypervisor  | 
|  eks.amazonaws.com/compute-type  |  auto  |  Identifies EKS Auto Mode managed nodes  | 
|  eks.amazonaws.com/instance-encryption-in-transit-supported  |  true  |  Whether the instance type supports in-transit encryption  | 
|  eks.amazonaws.com/instance-category  |  g  |  Instance types of the same category, usually the string before the generation number  | 
|  eks.amazonaws.com/instance-generation  |  4  |  Instance type generation number within an instance category  | 
|  eks.amazonaws.com/instance-family  |  g4dn  |  Instance types of similar properties but different resource quantities  | 
|  eks.amazonaws.com/instance-size  |  8xlarge  |  Instance types of similar resource quantities but different properties  | 
|  eks.amazonaws.com/instance-cpu  |  32  |  Number of CPUs on the instance  | 
|  eks.amazonaws.com/instance-cpu-manufacturer  |  `aws`  |  Name of the CPU manufacturer  | 
|  eks.amazonaws.com/instance-memory  |  131072  |  Number of mebibytes of memory on the instance  | 
|  eks.amazonaws.com/instance-ebs-bandwidth  |  9500  |  [Maximum megabits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html#ebs-optimization-performance) of EBS bandwidth available on the instance  | 
|  eks.amazonaws.com/instance-network-bandwidth  |  131072  |  [Baseline megabits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html) of network bandwidth available on the instance  | 
|  eks.amazonaws.com/instance-gpu-name  |  t4  |  Name of the GPU on the instance, if available  | 
|  eks.amazonaws.com/instance-gpu-manufacturer  |  nvidia  |  Name of the GPU manufacturer  | 
|  eks.amazonaws.com/instance-gpu-count  |  1  |  Number of GPUs on the instance  | 
|  eks.amazonaws.com/instance-gpu-memory  |  16384  |  Number of mebibytes of memory on the GPU  | 
|  eks.amazonaws.com/instance-local-nvme  |  900  |  Number of gibibytes of local NVMe storage on the instance  | 

**Note**  
EKS Auto Mode only supports certain instances, and has minimum size requirements. For more information, see [EKS Auto Mode supported instance reference](automode-learn-instances.md#auto-supported-instances).

## EKS Auto Mode Not Supported Labels
<a name="_eks_auto_mode_not_supported_labels"></a>

EKS Auto Mode does not support the following labels.
+ EKS Auto Mode only supports Linux, so the following labels don't apply:
  +  `node.kubernetes.io/windows-build` 
  +  `kubernetes.io/os` 

## Disable built-in node pools
<a name="_disable_built_in_node_pools"></a>

If you create custom node pools, you can disable the built-in node pools. For more information, see [Enable or Disable Built-in NodePools](set-builtin-node-pools.md).

## Cluster without built-in node pools
<a name="_cluster_without_built_in_node_pools"></a>

You can create a cluster without the built-in node pools. This is helpful when your organization has created customized node pools.

**Note**  
When you create a cluster without built-in node pools, the `default` NodeClass is not automatically provisioned. You’ll need to create a custom NodeClass. For more information, see [Create a Node Class for Amazon EKS](create-node-class.md).

 **Overview:** 

1. Create an EKS cluster with both the `nodePools` and `nodeRoleArn` values empty.
   + Sample eksctl `autoModeConfig`:

     ```
     autoModeConfig:
       enabled: true
       nodePools: []
       # Do not set a nodeRoleARN
     ```

     For more information, see [Create an EKS Auto Mode Cluster with the eksctl CLI](automode-get-started-eksctl.md) 

1. Create a custom node class with a node role ARN
   + For more information, see [Create a Node Class for Amazon EKS](create-node-class.md) 

1. Create an access entry for the custom node class
   + For more information, see [Create node class access entry](create-node-class.md#auto-node-access-entry) 

1. Create a custom node pool, as described above.

## Disruption
<a name="_disruption"></a>

You can configure how EKS Auto Mode disrupts Nodes through your NodePool in multiple ways. You can use `spec.disruption.consolidationPolicy`, `spec.disruption.consolidateAfter`, or `spec.template.spec.expireAfter`. You can also rate limit EKS Auto Mode’s disruption through the NodePool’s `spec.disruption.budgets`, which controls the time windows and the number of Nodes disrupted simultaneously. For instructions on configuring this behavior, see [Disruption](https://karpenter.sh/docs/concepts/disruption/) in the Karpenter documentation.

You can configure disruption for node pools to:
+ Identify when instances are underutilized, and consolidate workloads.
+ Create a node pool disruption budget to rate limit node terminations due to drift, emptiness, and consolidation.

By default, EKS Auto Mode:
+ Consolidates underutilized instances.
+ Terminates instances after 336 hours (14 days).
+ Sets a single disruption budget of 10% of nodes.
+ Allows Nodes to be replaced due to drift when a new Auto Mode AMI is released, which occurs roughly once per week.
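These defaults correspond to NodePool fields that you can set explicitly. The following is a hedged sketch that mirrors the defaults described above (the pool name is illustrative; see the Karpenter documentation for the full schema):

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: custom-disruption
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      expireAfter: 336h # replace nodes after 14 days
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    budgets:
      - nodes: "10%" # disrupt at most 10% of nodes at a time
```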

## Termination Grace Period
<a name="_termination_grace_period"></a>

When a `terminationGracePeriod` is not explicitly defined on an EKS Auto NodePool, the system automatically applies a default 24-hour termination grace period to the associated NodeClaim. While EKS Auto customers will not see a `terminationGracePeriod` defaulted in their custom NodePool configurations, they will observe this default value on the NodeClaim. The functionality remains consistent whether the grace period is explicitly set on the NodePool or defaulted on the NodeClaim, ensuring predictable node termination behavior across the cluster.
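If you want the grace period to be explicit on the NodePool rather than defaulted on the NodeClaim, you can set it on the template. A minimal sketch (the pool name is illustrative; `24h` mirrors the default described above):

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: graceful-termination
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      # Maximum time a draining node waits for pods before forceful termination
      terminationGracePeriod: 24h
```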

# Static Capacity Node Pools in EKS Auto Mode
<a name="auto-static-capacity"></a>

Amazon EKS Auto Mode supports static capacity node pools that maintain a fixed number of nodes regardless of pod demand. Static capacity node pools are useful for workloads that require predictable capacity, reserved instances, or specific compliance requirements where you need to maintain a consistent infrastructure footprint.

Unlike dynamic node pools that scale based on pod scheduling demands, static capacity node pools maintain the number of nodes that you have configured.

## Configure a static capacity node pool
<a name="_configure_a_static_capacity_node_pool"></a>

To create a static capacity node pool, set the `replicas` field in your NodePool specification. The `replicas` field defines the exact number of nodes that the node pool will maintain. See [Examples](#static-capacity-examples) for how to configure `replicas`.

## Static capacity node pool considerations
<a name="_static_capacity_node_pool_considerations"></a>

Static capacity node pools have several important constraints and behaviors:

 **Configuration constraints:** 
+  **Cannot switch modes**: Once you set `replicas` on a node pool, you cannot remove it. The node pool cannot switch between static and dynamic modes.
+  **Limited resource limits**: Only the `limits.nodes` field is supported in the limits section. CPU and memory limits are not applicable.
+  **No weight field**: The `weight` field cannot be set on static capacity node pools since node selection is not based on priority.

 **Operational behavior:** 
+  **No consolidation**: Nodes in static capacity pools are not considered for consolidation.
+  **Scaling operations**: Scale operations bypass node disruption budgets but still respect PodDisruptionBudgets.
+  **Node replacement**: Nodes are still replaced for drift (such as AMI updates) and expiration based on your configuration.

## Best practices
<a name="_best_practices"></a>

 **Capacity planning:** 
+ Set `limits.nodes` higher than `replicas` to allow for temporary scaling during node replacement operations.
+ Consider the maximum capacity needed during node drift or AMI updates when setting limits.

 **Instance selection:** 
+ Use specific instance types when you have Reserved Instances or specific hardware requirements.
+ Avoid overly restrictive requirements that might limit instance availability during scaling.

 **Disruption management:** 
+ Configure appropriate disruption budgets to balance availability with maintenance operations.
+ Consider your application’s tolerance for node replacement when setting budget percentages.

 **Monitoring:** 
+ Regularly monitor the `status.nodes` field to ensure your desired capacity is maintained.
+ Set up alerts for when the actual node count deviates from the desired replicas.

 **Zone distribution:** 
+ For high availability, spread static capacity across multiple Availability Zones.
+ When you create a static capacity node pool that spans multiple Availability Zones, EKS Auto Mode distributes the nodes across the specified zones, but the distribution is not guaranteed to be even.
+ For predictable, even distribution, create separate static capacity node pools, each pinned to a specific Availability Zone by using the `topology.kubernetes.io/zone` requirement.
+ For example, if you need 12 nodes evenly distributed across three zones, create three node pools with 4 replicas each, rather than one node pool with 12 replicas across three zones.

## Scale a static capacity node pool
<a name="_scale_a_static_capacity_node_pool"></a>

You can change the number of replicas in a static capacity node pool using the `kubectl scale` command:

```
# Scale down to 5 nodes
kubectl scale nodepool static-nodepool --replicas=5
```

When scaling down, EKS Auto Mode will terminate nodes gracefully, respecting PodDisruptionBudgets and allowing running pods to be rescheduled to remaining nodes.

## Monitor static capacity node pools
<a name="_monitor_static_capacity_node_pools"></a>

Use the following commands to monitor your static capacity node pools:

```
# View node pool status
kubectl get nodepool static-nodepool

# Get detailed information including current node count
kubectl describe nodepool static-nodepool

# Check the current number of nodes
kubectl get nodepool static-nodepool -o jsonpath='{.status.nodes}'
```

The `status.nodes` field shows the current number of nodes managed by the node pool, which should match your desired `replicas` count under normal conditions.

## Troubleshooting
<a name="_troubleshooting"></a>

 **Nodes not reaching desired replicas:** 
+ Check if the `limits.nodes` value is sufficient
+ Verify that your requirements don’t overly constrain instance selection
+ Review AWS service quotas for the instance types and regions you’re using

 **Node replacement taking too long:** 
+ Adjust disruption budgets to allow more concurrent replacements
+ Check if PodDisruptionBudgets are preventing node termination

 **Unexpected node termination:** 
+ Review the `expireAfter` and `terminationGracePeriod` settings
+ Check for manual node terminations or AWS maintenance events

## Examples
<a name="static-capacity-examples"></a>

### Basic static capacity node pool
<a name="_basic_static_capacity_node_pool"></a>

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: basic-static
spec:
  replicas: 5

  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default

      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["m"]
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-west-2a"]

  limits:
    nodes: 8  # Allow scaling up to 8 during operations
```

### Static capacity with specific instance types
<a name="_static_capacity_with_specific_instance_types"></a>

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: reserved-instances
spec:
  replicas: 20

  template:
    metadata:
      labels:
        instance-type: reserved
        cost-center: production
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default

      requirements:
        - key: "node.kubernetes.io/instance-type"
          operator: In
          values: ["m5.2xlarge"]  # Specific instance type
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-west-2a", "us-west-2b", "us-west-2c"]

  limits:
    nodes: 25

  disruption:
    # Conservative disruption for production workloads
    budgets:
      - nodes: 10%
```

### Multi-zone static capacity node pool
<a name="_multi_zone_static_capacity_node_pool"></a>

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: multi-zone-static
spec:
  replicas: 12  # Will be distributed across specified zones

  template:
    metadata:
      labels:
        availability: high
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default

      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c", "m"]
        - key: "eks.amazonaws.com/instance-cpu"
          operator: In
          values: ["8", "16"]
        - key: "topology.kubernetes.io/zone"
          operator: In
          values: ["us-west-2a", "us-west-2b", "us-west-2c"]
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["on-demand"]

  limits:
    nodes: 15

  disruption:
    budgets:
      - nodes: 25%
```

### Static capacity with capacity reservation
<a name="_static_capacity_with_capacity_reservation"></a>

The following example shows how to use a static capacity node pool with an EC2 Capacity Reservation. For more information on using EC2 Capacity Reservations with EKS Auto Mode, see [Control deployment of workloads into Capacity Reservations with EKS Auto Mode](auto-odcr.md).

The following `NodeClass` defines the `capacityReservationSelectorTerms`:

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: capacity-reservation-nodeclass
spec:
  role: AmazonEKSNodeRole
  securityGroupSelectorTerms:
  - id: sg-0123456789abcdef0
  subnetSelectorTerms:
  - id: subnet-0123456789abcdef0
  capacityReservationSelectorTerms:
  - id: cr-0123456789abcdef0
```

The following `NodePool` references the above `NodeClass` and uses `karpenter.sh/capacity-type: reserved`:

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: static-capacity-reservation-nodepool
spec:
  replicas: 5
  limits:
    nodes: 8  # Allow scaling up to 8 during operations
  template:
    metadata: {}
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: capacity-reservation-nodeclass
      requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values: ['reserved']
```

# Create an IngressClass to configure an Application Load Balancer
<a name="auto-configure-alb"></a>

EKS Auto Mode automates routine tasks for load balancing, including exposing cluster apps to the internet.

 AWS suggests using Application Load Balancers (ALB) to serve HTTP and HTTPS traffic. Application Load Balancers can route requests based on the content of the request. For more information on Application Load Balancers, see [What is Elastic Load Balancing?](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) 

EKS Auto Mode creates and configures Application Load Balancers (ALBs). For example, EKS Auto Mode creates a load balancer when you create an `Ingress` Kubernetes object, and configures it to route traffic to your cluster workload.

 **Overview** 

1. Create a workload that you want to expose to the internet.

1. Create an `IngressClassParams` resource, specifying AWS-specific configuration values such as the certificate to use for SSL/TLS and the VPC subnets.

1. Create an `IngressClass` resource, specifying that EKS Auto Mode will be the controller for the resource.

1. Create an `Ingress` resource that associates an HTTP path and port with a cluster workload.

EKS Auto Mode will create an Application Load Balancer that points to the workload specified in the `Ingress` resource, using the load balancer configuration specified in the `IngressClassParams` resource.

## Prerequisites
<a name="_prerequisites"></a>
+ EKS Auto Mode Enabled on an Amazon EKS Cluster
+ Kubectl configured to connect to your cluster
  + You can use `kubectl apply -f <filename>` to apply the sample configuration YAML files below to your cluster.

**Note**  
EKS Auto Mode requires subnet tags to identify public and private subnets.  
If you created your cluster with `eksctl`, you already have these tags.  
Learn how to [Tag subnets for EKS Auto Mode](tag-subnets-auto.md).

## Step 1: Create a workload
<a name="_step_1_create_a_workload"></a>

To begin, create a workload that you want to expose to the internet. This can be any Kubernetes resource that serves HTTP traffic, such as a Deployment or a Service.

This example uses a simple HTTP service called `service-2048` that listens on port `80`. Create this service and its deployment by applying the following manifest, `2048-deployment-service.yaml`:

```
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
        - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
          imagePullPolicy: Always
          name: app-2048
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
```

Apply the configuration to your cluster:

```
kubectl apply -f 2048-deployment-service.yaml
```

The resources listed above will be created in the default namespace. You can verify this by running the following command:

```
kubectl get all -n default
```

## Step 2: Create IngressClassParams
<a name="_step_2_create_ingressclassparams"></a>

Create an `IngressClassParams` object to specify AWS-specific configuration options for the Application Load Balancer. In this example, we create an `IngressClassParams` resource named `alb` (which you will use in the next step) that specifies the load balancer scheme as `internet-facing` in a file called `alb-ingressclassparams.yaml`.

```
apiVersion: eks.amazonaws.com/v1
kind: IngressClassParams
metadata:
  name: alb
spec:
  scheme: internet-facing
```

Apply the configuration to your cluster:

```
kubectl apply -f alb-ingressclassparams.yaml
```

## Step 3: Create IngressClass
<a name="_step_3_create_ingressclass"></a>

Create an `IngressClass` that references the AWS-specific configuration values set in the `IngressClassParams` resource in a file named `alb-ingressclass.yaml`. Note the name of the `IngressClass`. In this example, both the `IngressClass` and `IngressClassParams` are named `alb`.

Use the `ingressclass.kubernetes.io/is-default-class` annotation to control whether `Ingress` resources use this class by default.

```
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
  annotations:
    # Use this annotation to set an IngressClass as Default
    # If an Ingress doesn't specify a class, it will use the Default
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  # Configures the IngressClass to use EKS Auto Mode
  controller: eks.amazonaws.com/alb
  parameters:
    apiGroup: eks.amazonaws.com
    kind: IngressClassParams
    # Use the name of the IngressClassParams set in the previous step
    name: alb
```

For more information on configuration options, see [IngressClassParams Reference](#ingress-reference).

Apply the configuration to your cluster:

```
kubectl apply -f alb-ingressclass.yaml
```

## Step 4: Create Ingress
<a name="_step_4_create_ingress"></a>

Create an `Ingress` resource in a file named `alb-ingress.yaml`. The purpose of this resource is to associate paths and ports on the Application Load Balancer with workloads in your cluster. For this example, we create an `Ingress` resource named `2048-ingress` that routes traffic to a service named `service-2048` on port 80.

For more information about configuring this resource, see [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) in the Kubernetes Documentation.

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: 2048-ingress
spec:
  # This matches the name of the IngressClass.
  # It can be omitted if the cluster has a default IngressClass,
  # that is, one annotated with ingressclass.kubernetes.io/is-default-class: "true"
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: service-2048
                port:
                  number: 80
```

Apply the configuration to your cluster:

```
kubectl apply -f alb-ingress.yaml
```

## Step 5: Check Status
<a name="_step_5_check_status"></a>

Use `kubectl` to find the status of the `Ingress`. It can take a few minutes for the load balancer to become available.

Use the name of the `Ingress` resource you set in the previous step. For example:

```
kubectl get ingress 2048-ingress
```

Once the resource is ready, retrieve the domain name of the load balancer.

```
kubectl get ingress 2048-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```

To view the service in a web browser, review the port and path specified in the `Ingress` resource.

## Step 6: Cleanup
<a name="_step_6_cleanup"></a>

To clean up the load balancer, use the following command:

```
kubectl delete ingress 2048-ingress
kubectl delete ingressclass alb
kubectl delete ingressclassparams alb
```

EKS Auto Mode will automatically delete the associated load balancer in your AWS account.

## IngressClassParams Reference
<a name="ingress-reference"></a>

The table below is a quick reference for commonly used configuration options.


| Field | Description | Example value | 
| --- | --- | --- | 
|   `scheme`   |  Defines whether the ALB is internal or internet-facing  |   `internet-facing`   | 
|   `namespaceSelector`   |  Restricts which namespaces can use this IngressClass  |   `environment: prod`   | 
|   `group.name`   |  Groups multiple Ingresses to share a single ALB  |   `retail-apps`   | 
|   `ipAddressType`   |  Sets IP address type for the ALB  |   `dualstack`   | 
|   `subnets.ids`   |  List of subnet IDs for ALB deployment  |   `subnet-xxxx, subnet-yyyy`   | 
|   `subnets.tags`   |  Tag filters to select subnets for ALB  |   `Environment: prod`   | 
|   `certificateARNs`   |  ARNs of SSL certificates to use  |   ` arn:aws:acm:region:account:certificate/id`   | 
|   `tags`   |  Custom tags for AWS resources  |   `Environment: prod, Team: platform`   | 
|   `loadBalancerAttributes`   |  Load balancer specific attributes  |   `idle_timeout.timeout_seconds: 60`   | 

## Considerations
<a name="_considerations"></a>
+ You cannot use annotations on an IngressClass to configure load balancers with EKS Auto Mode. IngressClass configuration must be done through IngressClassParams. However, you can use annotations on individual Ingress resources to configure load balancer behavior (such as `alb.ingress.kubernetes.io/security-group-prefix-lists` or `alb.ingress.kubernetes.io/conditions.*`).
+ You cannot set [ListenerAttribute](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_ListenerAttribute.html) with EKS Auto Mode.
+ You must update the Cluster IAM Role to enable tag propagation from Kubernetes to AWS Load Balancer resources. For more information, see [Custom AWS tags for EKS Auto resources](auto-cluster-iam-role.md#tag-prop).
+ For information about associating resources with either EKS Auto Mode or the self-managed AWS Load Balancer Controller, see [Migration reference](migrate-auto.md#migration-reference).
+ For information about fixing issues with load balancers, see [Troubleshoot EKS Auto Mode](auto-troubleshoot.md).
+ For more considerations about using the load balancing capability of EKS Auto Mode, see [Load balancing](auto-networking.md#auto-lb-consider).
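As an illustration of the first consideration, the following sketch adds a header-based routing condition to an `Ingress` using an annotation. The header name and value are placeholders, and the condition annotation syntax follows the open source AWS Load Balancer Controller, which EKS Auto Mode supports for per-Ingress configuration:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: conditional-ingress
  annotations:
    # Only route requests that carry the matching header (placeholder values).
    # The suffix after "conditions." must match the backend service name in the rule.
    alb.ingress.kubernetes.io/conditions.service-2048: >
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName":"X-Custom-Header","values":["example"]}}]
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80
```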

The following tables provide a detailed comparison of changes in IngressClassParams, Ingress annotations, and TargetGroupBinding configurations for EKS Auto Mode. These tables highlight the key differences between the load balancing capability of EKS Auto Mode and the open source load balancer controller, including API version changes, deprecated features, and updated parameter names.

### IngressClassParams
<a name="_ingressclassparams"></a>


| Previous | New | Description | 
| --- | --- | --- | 
|   `elbv2.k8s.aws/v1beta1`   |   `eks.amazonaws.com/v1`   |  API version change  | 
|   `spec.certificateArn`   |   `spec.certificateARNs`   |  Support for multiple certificate ARNs  | 
|   `spec.subnets.tags`   |   `spec.subnets.matchTags`   |  Changed subnet matching schema  | 
|   `spec.listeners.listenerAttributes`   |  Not supported  |  Not yet supported by EKS Auto Mode  | 

### Ingress annotations
<a name="_ingress_annotations"></a>


| Previous | New | Description | 
| --- | --- | --- | 
|   `kubernetes.io/ingress.class`   |  Not supported  |  Use `spec.ingressClassName` on Ingress objects  | 
|   `alb.ingress.kubernetes.io/group.name`   |  Not supported  |  Specify groups in IngressClass only  | 
|   `alb.ingress.kubernetes.io/waf-acl-id`   |  Not supported  |  Use WAF v2 instead  | 
|   `alb.ingress.kubernetes.io/web-acl-id`   |  Not supported  |  Use WAF v2 instead  | 
|   `alb.ingress.kubernetes.io/shield-advanced-protection`   |  Not supported  |  Shield integration disabled  | 
|   `alb.ingress.kubernetes.io/auth-type: oidc`   |  Not supported  |  OIDC Auth Type is currently not supported  | 

### TargetGroupBinding
<a name="_targetgroupbinding"></a>


| Previous | New | Description | 
| --- | --- | --- | 
|   `elbv2.k8s.aws/v1beta1`   |   `eks.amazonaws.com/v1`   |  API version change  | 
|   `spec.targetType` optional  |   `spec.targetType` required  |  Explicit target type specification  | 
|   `spec.networking.ingress.from`   |  Not supported  |  No longer supports NLB without security groups  | 

To use the custom TargetGroupBinding feature, you must tag the target group with the `eks:eks-cluster-name` tag, set to the cluster name, to grant the controller the necessary IAM permissions. Be aware that the controller deletes the target group when the TargetGroupBinding resource or the cluster is deleted.

# Use Service Annotations to configure Network Load Balancers
<a name="auto-configure-nlb"></a>

Learn how to configure Network Load Balancers (NLB) in Amazon EKS using Kubernetes service annotations. This topic explains the annotations supported by EKS Auto Mode for customizing NLB behavior, including internet accessibility, health checks, SSL/TLS termination, and IP targeting modes.

When you create a Kubernetes service of type `LoadBalancer` in EKS Auto Mode, EKS automatically provisions and configures an AWS Network Load Balancer based on the annotations you specify. This declarative approach allows you to manage load balancer configurations directly through your Kubernetes manifests, maintaining infrastructure as code practices.

EKS Auto Mode handles Network Load Balancer provisioning by default for all services of type `LoadBalancer`; no additional controller installation or configuration is required. The `loadBalancerClass: eks.amazonaws.com/nlb` specification is automatically set as the cluster default, streamlining the deployment process while maintaining compatibility with existing Kubernetes workloads.

**Note**  
EKS Auto Mode requires subnet tags to identify public and private subnets.  
If you created your cluster with `eksctl`, you already have these tags.  
Learn how to [Tag subnets for EKS Auto Mode](tag-subnets-auto.md).

## Sample Service
<a name="_sample_service"></a>

For more information about the Kubernetes `Service` resource, see [the Kubernetes Documentation](https://kubernetes.io/docs/concepts/services-networking/service/).

Review the sample `Service` resource below:

```
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    # Specify the load balancer scheme as internet-facing to create a public-facing Network Load Balancer (NLB)
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  # Specify the new load balancer class for NLB as part of EKS Auto Mode feature
  # For clusters with Auto Mode enabled, this field can be omitted as it's the default
  loadBalancerClass: eks.amazonaws.com/nlb
```

## Commonly used annotations
<a name="_commonly_used_annotations"></a>

The following table lists annotations commonly used with EKS Auto Mode. Note that EKS Auto Mode does not support every annotation from the open source AWS Load Balancer Controller.

**Tip**  
All of the following annotations need to be prefixed with `service.beta.kubernetes.io/` 


| Field | Description | Example | 
| --- | --- | --- | 
|   `aws-load-balancer-type`   |  Specifies the load balancer type. Use `external` for new deployments.  |   `external`   | 
|   `aws-load-balancer-nlb-target-type`   |  Specifies whether to route traffic to node instances or directly to pod IPs. Use `instance` for standard deployments or `ip` for direct pod routing.  |   `instance`   | 
|   `aws-load-balancer-scheme`   |  Controls whether the load balancer is internal or internet-facing.  |   `internet-facing`   | 
|   `aws-load-balancer-healthcheck-protocol`   |  Health check protocol for target group. Common options are `TCP` (default) or `HTTP`.  |   `HTTP`   | 
|   `aws-load-balancer-healthcheck-path`   |  The HTTP path for health checks when using HTTP/HTTPS protocol.  |   `/healthz`   | 
|   `aws-load-balancer-healthcheck-port`   |  Port used for health checks. Can be a specific port number or `traffic-port`.  |   `traffic-port`   | 
|   `aws-load-balancer-subnets`   |  Specifies which subnets to create the load balancer in. Can use subnet IDs or names.  |   `subnet-xxxx, subnet-yyyy`   | 
|   `aws-load-balancer-ssl-cert`   |  ARN of the SSL certificate from AWS Certificate Manager for HTTPS/TLS.  |   ` arn:aws:acm:region:account:certificate/cert-id`   | 
|   `aws-load-balancer-ssl-ports`   |  Specifies which ports should use SSL/TLS.  |   `443, 8443`   | 
|   `load-balancer-source-ranges`   |  CIDR ranges allowed to access the load balancer.  |   `10.0.0.0/24, 192.168.1.0/24`   | 
|   `aws-load-balancer-additional-resource-tags`   |  Additional AWS tags to apply to the load balancer and related resources.  |   `Environment=prod,Team=platform`   | 
|   `aws-load-balancer-ip-address-type`   |  Specifies whether the load balancer uses IPv4 or dual-stack (IPv4 and IPv6).  |   `ipv4` or `dualstack`   | 
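As a sketch that combines several of these annotations, a TLS-terminating, internet-facing NLB might be configured as follows. The certificate ARN is a placeholder that you must replace with the ARN of a certificate in AWS Certificate Manager:

```
apiVersion: v1
kind: Service
metadata:
  name: secure-echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: HTTP
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
    # Replace with the ARN of your ACM certificate
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:region:account:certificate/cert-id
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  selector:
    app: echoserver
  ports:
    - port: 443
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
```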

## Considerations
<a name="_considerations"></a>
+ You must update the Cluster IAM Role to enable tag propagation from Kubernetes to AWS Load Balancer resources. For more information, see [Custom AWS tags for EKS Auto resources](auto-cluster-iam-role.md#tag-prop).
+ For information about associating resources with either EKS Auto Mode or the self-managed AWS Load Balancer Controller, see [Migration reference](migrate-auto.md#migration-reference).
+ For information about fixing issues with load balancers, see [Troubleshoot EKS Auto Mode](auto-troubleshoot.md).
+ For more considerations about using the load balancing capability of EKS Auto Mode, see [Load balancing](auto-networking.md#auto-lb-consider).

When migrating to EKS Auto Mode for load balancing, several changes in service annotations and resource configurations are necessary. The following tables outline key differences between previous and new implementations, including unsupported options and recommended alternatives.

### Service annotations
<a name="_service_annotations"></a>


| Previous | New | Description | 
| --- | --- | --- | 
|   `service.beta.kubernetes.io/load-balancer-source-ranges`   |  Not supported  |  Use `spec.loadBalancerSourceRanges` on Service  | 
|   `service.beta.kubernetes.io/aws-load-balancer-type`   |  Not supported  |  Use `spec.loadBalancerClass` on Service  | 
|   `service.beta.kubernetes.io/aws-load-balancer-internal`   |  Not supported  |  Use `service.beta.kubernetes.io/aws-load-balancer-scheme`   | 
|   `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol`   |  Not supported  |  Use `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes` instead  | 
|  Various load balancer attributes  |  Not supported  |  Use `service.beta.kubernetes.io/aws-load-balancer-attributes`   | 
|   `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`   |  Not supported  |  Use `service.beta.kubernetes.io/aws-load-balancer-attributes` instead  | 
|   `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`   |  Not supported  |  Use `service.beta.kubernetes.io/aws-load-balancer-attributes` instead  | 
|   `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`   |  Not supported  |  Use `service.beta.kubernetes.io/aws-load-balancer-attributes` instead  | 
|   `service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled`   |  Not supported  |  Use `service.beta.kubernetes.io/aws-load-balancer-attributes` instead  | 

To migrate from deprecated load balancer attribute annotations, consolidate these settings into the `service.beta.kubernetes.io/aws-load-balancer-attributes` annotation. This annotation accepts a comma-separated list of key-value pairs for various load balancer attributes. For example, to enable access logging and cross-zone load balancing, use the following format:

```
service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-bucket,access_logs.s3.prefix=my-prefix,load_balancing.cross_zone.enabled=true
```

This consolidated format provides a more consistent and flexible way to configure load balancer attributes while reducing the number of individual annotations needed. Review your existing Service configurations and update them to use this consolidated format.
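In context, the consolidated annotation might appear on a `Service` like the following sketch. The service name, bucket name, and prefix are placeholders:

```
apiVersion: v1
kind: Service
metadata:
  name: logged-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # Replace my-bucket and my-prefix with your S3 bucket and prefix
    service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-bucket,access_logs.s3.prefix=my-prefix,load_balancing.cross_zone.enabled=true
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
```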

### TargetGroupBinding
<a name="_targetgroupbinding"></a>


| Previous | New | Description | 
| --- | --- | --- | 
|   `elbv2.k8s.aws/v1beta1`   |   `eks.amazonaws.com/v1`   |  API version change  | 
|   `spec.targetType` optional  |   `spec.targetType` required  |  Explicit target type specification  | 
|   `spec.networking.ingress.from`   |  Not supported  |  No longer supports NLB without security groups  | 

Note: To use the custom TargetGroupBinding feature, you must tag the target group with the `eks:eks-cluster-name` tag with cluster name to grant the controller the necessary IAM permissions. Be aware that the controller will delete the target group when the TargetGroupBinding resource or the cluster is deleted.

# Create a storage class
<a name="create-storage-class"></a>

A `StorageClass` in Amazon EKS Auto Mode defines how Amazon EBS volumes are automatically provisioned when applications request persistent storage. This page explains how to create and configure a `StorageClass` that works with EKS Auto Mode to provision EBS volumes.

By configuring a `StorageClass`, you can specify default settings for your EBS volumes including volume type, encryption, IOPS, and other storage parameters. You can also configure the `StorageClass` to use AWS KMS keys for encryption management.

EKS Auto Mode does not create a `StorageClass` for you. You must create a `StorageClass` referencing `ebs.csi.eks.amazonaws.com` to use the storage capability of EKS Auto Mode.

First, create a file named `storage-class.yaml`:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
allowedTopologies:
- matchLabelExpressions:
  - key: eks.amazonaws.com/compute-type
    values:
    - auto
provisioner: ebs.csi.eks.amazonaws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
```

Second, apply the storage class to your cluster.

```
kubectl apply -f storage-class.yaml
```

 **Key components:** 
+  `provisioner: ebs.csi.eks.amazonaws.com` - Uses EKS Auto Mode
+  `allowedTopologies` - Matching on `eks.amazonaws.com/compute-type: auto` with `matchLabelExpressions` ensures that pods that need a volume provisioned by EKS Auto Mode are not scheduled on non-Auto nodes.
+  `volumeBindingMode: WaitForFirstConsumer` - Delays volume creation until a pod needs it
+  `type: gp3` - Specifies the EBS volume type
+  `encrypted: "true"` - EBS will encrypt any volumes created using the `StorageClass`. EBS will use the default `aws/ebs` key alias. For more information, see [How Amazon EBS encryption works](https://docs.aws.amazon.com/ebs/latest/userguide/how-ebs-encryption-works.html) in the Amazon EBS User Guide. This value is optional but suggested.
+  `storageclass.kubernetes.io/is-default-class: "true"` - Kubernetes will use this storage class by default, unless you specify a different volume class on a persistent volume claim. This value is optional. Use caution when setting this value if you are migrating from a different storage controller.

## Use self-managed KMS key to encrypt EBS volumes
<a name="_use_self_managed_kms_key_to_encrypt_ebs_volumes"></a>

To use a self-managed KMS key to encrypt EBS volumes automated by EKS Auto Mode, you need to:

1. Create a self-managed KMS key.
   + For more information, see [Create a symmetric encryption KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/create-symmetric-cmk.html) or [How Amazon Elastic Block Store (Amazon EBS) uses KMS](https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html) in the KMS User Guide.

1. Create a new policy that permits access to the KMS key.
   + Use the sample IAM policy below to create the policy. Insert the ARN of the new self-managed KMS key. For more information, see [Creating roles and attaching policies (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions_create-policies.html) in the AWS IAM User Guide.

1. Attach the policy to the EKS Cluster Role.
   + Use the AWS console to find the ARN of the EKS Cluster Role. The role information is visible in the **Overview** section. For more information, see [Amazon EKS cluster IAM role](cluster-iam-role.md).

1. Update the `StorageClass` to reference the KMS Key ID at the `parameters.kmsKeyId` field.

### Sample self-managed KMS IAM Policy
<a name="_sample_self_managed_kms_iam_policy"></a>

Update the following values in the policy below:
+  `<account-id>` – Your AWS account ID, such as `111122223333` 
+  `<aws-region>` – The AWS region of your cluster, such as `us-west-2` 

```
{
  "Version": "2012-10-17",
  "Id": "key-auto-policy-3",
  "Statement": [
      {
          "Sid": "Enable IAM User Permissions",
          "Effect": "Allow",
          "Principal": {
              "AWS": "arn:aws:iam::<account-id>:root"
          },
          "Action": "kms:*",
          "Resource": "*"
      },
      {
        "Sid": "Allow access through EBS for all principals in the account that are authorized to use EBS",
        "Effect": "Allow",
        "Principal": {
            "AWS": "*"
        },
        "Action": [
            "kms:Encrypt",
            "kms:Decrypt",
            "kms:ReEncrypt*",
            "kms:GenerateDataKey*",
            "kms:CreateGrant",
            "kms:DescribeKey"
        ],
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "kms:CallerAccount": "<account-id>",
                "kms:ViaService": "ec2---<aws-region>.amazonaws.com"
            }
        }
    }
  ]
}
```

### Sample self-managed KMS `StorageClass`
<a name="_sample_self_managed_kms_storageclass"></a>
Update the `parameters` section of your `StorageClass` to reference the ARN of your KMS key:

```
parameters:
  type: gp3
  encrypted: "true"
  kmsKeyId: <custom-key-arn>
```

## `StorageClass` Parameters Reference
<a name="_storageclass_parameters_reference"></a>

For general information on the Kubernetes `StorageClass` resources, see [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) in the Kubernetes Documentation.

The `parameters` section of the `StorageClass` resource is specific to AWS. Use the following table to review the available options.


| Parameters | Values | Default | Description | 
| --- | --- | --- | --- | 
|  "csi.storage.k8s.io/fstype"  |  xfs, ext2, ext3, ext4  |  ext4  |  File system type that will be formatted during volume creation. This parameter is case sensitive.  | 
|  "type"  |  io1, io2, gp2, gp3, sc1, st1, standard, sbp1, sbg1  |  gp3  |  EBS volume type.  | 
|  "iopsPerGB"  |  |  |  I/O operations per second per GiB. Can be specified for io1, io2, and gp3 volumes.  | 
|  "allowAutoIOPSPerGBIncrease"  |  true, false  |  false  |  When `"true"`, the CSI driver increases IOPS for a volume when `iopsPerGB * <volume size>` is too low to fit into IOPS range supported by AWS. This allows dynamic provisioning to always succeed, even when user specifies too small PVC capacity or `iopsPerGB` value. On the other hand, it may introduce additional costs, as such volumes have higher IOPS than requested in `iopsPerGB`.  | 
|  "iops"  |  |  |  I/O operations per second. Can be specified for io1, io2, and gp3 volumes.  | 
|  "throughput"  |  |  125  |  Throughput in MiB/s. Only effective when gp3 volume type is specified.  | 
|  "encrypted"  |  true, false  |  false  |  Whether the volume should be encrypted or not. Valid values are "true" or "false".  | 
|  "blockExpress"  |  true, false  |  false  |  Enables the creation of io2 Block Express volumes.  | 
|  "kmsKeyId"  |  |  |  The full ARN of the KMS key to use when encrypting the volume. If not specified, AWS uses the default KMS key for the Region the volume is in, an AWS managed key with the alias `aws/ebs`.  | 
|  "blockSize"  |  |  |  The block size to use when formatting the underlying filesystem. Only supported on linux nodes and with fstype `ext2`, `ext3`, `ext4`, or `xfs`.  | 
|  "inodeSize"  |  |  |  The inode size to use when formatting the underlying filesystem. Only supported on linux nodes and with fstype `ext2`, `ext3`, `ext4`, or `xfs`.  | 
|  "bytesPerInode"  |  |  |  The `bytes-per-inode` to use when formatting the underlying filesystem. Only supported on linux nodes and with fstype `ext2`, `ext3`, `ext4`.  | 
|  "numberOfInodes"  |  |  |  The `number-of-inodes` to use when formatting the underlying filesystem. Only supported on linux nodes and with fstype `ext2`, `ext3`, `ext4`.  | 
|  "ext4BigAlloc"  |  true, false  |  false  |  Changes the `ext4` filesystem to use clustered block allocation by enabling the `bigalloc` formatting option. Warning: `bigalloc` may not be fully supported with your node’s Linux kernel.  | 
|  "ext4ClusterSize"  |  |  |  The cluster size to use when formatting an `ext4` filesystem when the `bigalloc` feature is enabled. Note: The `ext4BigAlloc` parameter must be set to true.  | 

For more information, see the [AWS EBS CSI Driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md) on GitHub.
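As an illustration, the following `StorageClass` sketch combines several of the parameters from the table above. The resource name is hypothetical, and the provisioner name shown is the one EKS Auto Mode block storage is expected to use; verify it against your cluster before relying on it.

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3-encrypted # hypothetical name
provisioner: ebs.csi.eks.amazonaws.com # assumed EKS Auto Mode block storage provisioner
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
  csi.storage.k8s.io/fstype: ext4
  throughput: "150"
```

Because `volumeBindingMode: WaitForFirstConsumer` delays volume creation until a pod is scheduled, the volume is provisioned in the Availability Zone of the consuming node.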

## Considerations
<a name="_considerations"></a>

**Note**  
You can only deploy workloads depending on EKS Auto Mode StorageClasses on EKS Auto Mode nodes. If you have a cluster with mixed types of nodes, you need to configure your workloads to run only on EKS Auto Mode nodes. For more information, see [Control if a workload is deployed on EKS Auto Mode nodes](associate-workload.md).

The block storage capability of EKS Auto Mode differs from the self-managed EBS CSI Driver in the following ways:
+ Static Provisioning
  + If you want to use externally created EBS volumes with EKS Auto Mode, you need to manually add an AWS tag with the key `eks:eks-cluster-name` and the value of the cluster name.
+ Node Startup Taint
  + You cannot use the node startup taint feature to prevent pod scheduling before the storage capability is ready.
+ Custom Tags on Dynamically Provisioned Volumes
  + You cannot use the `extra-tag` CLI flag to configure custom tags on dynamically provisioned EBS volumes.
  + You can use `StorageClass` tagging to add custom tags. EKS Auto Mode adds the tags to the associated AWS resources. You will need to update the Cluster IAM Role for custom tags. For more information, see [Custom AWS tags for EKS Auto resources](auto-cluster-iam-role.md#tag-prop).
+ EBS Detailed Performance Metrics
  + You cannot access Prometheus metrics for EBS detailed performance.
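For the static provisioning case above, a pre-created `PersistentVolume` might be sketched as follows. The volume ID is an example placeholder, and the CSI driver name is an assumption for EKS Auto Mode; the referenced EBS volume must already carry the `eks:eks-cluster-name` tag.

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-ebs-pv # hypothetical name
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: ebs.csi.eks.amazonaws.com # assumed EKS Auto Mode driver name
    volumeHandle: vol-0123456789abcdef0 # example volume ID; must be tagged with eks:eks-cluster-name
    fsType: ext4
```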

## Install CSI Snapshot Controller add-on
<a name="_install_csi_snapshot_controller_add_on"></a>

EKS Auto Mode is compatible with the CSI Snapshot Controller Amazon EKS add-on.

 AWS suggests you configure this add-on to run on the built-in `system` node pool.

For more information, see:
+  [Run critical add-ons on dedicated instances](critical-workload.md) 
+  [Enable or Disable Built-in NodePools](set-builtin-node-pools.md) 
+  [Enable snapshot functionality for CSI volumes](csi-snapshot-controller.md) 

### To install snapshot controller in system node pool
<a name="auto-install-snapshot-controller"></a>

1. Open your EKS cluster in the AWS console

1. From the **Add-ons** tab, select **Get more add-ons** 

1. Select the **CSI Snapshot Controller** and then **Next** 

1. On the **Configure selected add-ons settings** page, select **Optional configuration settings** to view the **Add-on configuration schema** 

   1. Insert the following YAML to associate the snapshot controller with the `system` node pool. The snapshot controller includes a toleration for the `CriticalAddonsOnly` taint.

      ```
      {
              "nodeSelector": {
                  "karpenter.sh/nodepool": "system"
              }
      }
      ```

   1. Select **Next** 

1. Review the add-on configuration and then select **Create** 

# Disable EKS Auto Mode
<a name="auto-disable"></a>

You can disable EKS Auto Mode on an existing EKS Cluster. This is a destructive operation.
+ EKS will terminate all EC2 instances operated by EKS Auto Mode.
+ EKS will delete all Load Balancers operated by EKS Auto Mode.
+ EKS will **not** delete EBS volumes provisioned by EKS Auto Mode.

EKS Auto Mode is designed to fully manage the resources that it creates. Manual interventions could result in EKS Auto Mode failing to completely clean up those resources when it is disabled. For example, if you referred to a managed Security Group from external Security Group rules and forgot to remove that reference before disabling EKS Auto Mode for a cluster, the managed Security Group will leak (not be deleted). The steps below describe how to remove a leaked Security Group if that happens.

## Disable EKS Auto Mode (AWS Console)
<a name="disable_eks_auto_mode_shared_aws_console"></a>

1. Open your cluster overview page in the AWS Management Console.

1. Under **EKS Auto Mode** select **Manage** 

1. Toggle **EKS Auto Mode** to `off`.

If any managed Security Group is not deleted at the end of this process, you can delete it manually using the instructions in [Delete a security group](https://docs.aws.amazon.com/vpc/latest/userguide/deleting-security-groups.html).

## Disable EKS Auto Mode (AWS CLI)
<a name="disable_eks_auto_mode_shared_aws_cli"></a>

Use the following command to disable EKS Auto Mode on an existing cluster.

You need to have the `aws` CLI installed, and be logged in with sufficient permissions to manage EKS clusters. For more information, see [Set up to use Amazon EKS](setting-up.md).

**Note**  
The compute, block storage, and load balancing capabilities must all be enabled or disabled in the same request.

```
aws eks update-cluster-config \
 --name $CLUSTER_NAME \
 --compute-config enabled=false \
 --kubernetes-network-config '{"elasticLoadBalancing":{"enabled": false}}' \
 --storage-config '{"blockStorage":{"enabled": false}}'
```

You can check if a leaked EKS Auto Mode Security Group failed to be deleted after disabling EKS Auto Mode as follows:

```
aws ec2 describe-security-groups \
    --filters Name=tag:eks:eks-cluster-name,Values=<cluster-name> Name=tag-key,Values=ingress.eks.amazonaws.com/resource,service.eks.amazonaws.com/resource \
    --query "SecurityGroups[*].[GroupId,GroupName]"
```

To then delete the Security Group, use its group ID (security groups in a nondefault VPC must be deleted by ID):

```
aws ec2 delete-security-group --group-id <sg-id>
```

# Update the Kubernetes Version of an EKS Auto Mode cluster
<a name="auto-upgrade"></a>

This topic explains how to update the Kubernetes version of your Auto Mode cluster. Auto Mode simplifies the version update process by handling the coordination of control plane updates with node replacements, while maintaining workload availability through pod disruption budgets.

When upgrading an Auto Mode cluster, many components that traditionally required manual updates are now managed as part of the service. Understanding the automated aspects of the upgrade process and your responsibilities helps ensure a smooth version transition for your cluster.

## Learn about updates with EKS Auto Mode
<a name="_learn_about_updates_with_eks_auto_mode"></a>

After you initiate a control plane upgrade, EKS Auto Mode will upgrade nodes in your cluster. As nodes expire, EKS Auto Mode will replace them with new nodes. The new nodes have the corresponding new Kubernetes version. EKS Auto Mode observes pod disruption budgets when upgrading nodes.

Additionally, you no longer need to update components like:
+ Amazon VPC CNI
+  AWS Load Balancer Controller
+ CoreDNS
+  `kube-proxy` 
+ Karpenter
+  AWS EBS CSI driver

EKS Auto Mode replaces these components with service functionality.

You are still responsible for updating:
+ Apps and workloads deployed to your cluster
+ Self-managed add-ons and controllers
+ Amazon EKS Add-ons
  + Learn how to [Update an Amazon EKS add-on](updating-an-add-on.md) 

Learn about [Best Practices for Cluster Upgrades](https://docs.aws.amazon.com/eks/latest/best-practices/cluster-upgrades.html).

## Start Cluster Update
<a name="_start_cluster_update"></a>

To start a cluster update, see [Update existing cluster to new Kubernetes version](update-cluster.md).

# Enable or Disable Built-in NodePools
<a name="set-builtin-node-pools"></a>

EKS Auto Mode has two built-in NodePools. You can enable or disable these NodePools using the AWS console, CLI, or API.

## Built-in NodePool Reference
<a name="_built_in_nodepool_reference"></a>
+  `system` 
  + This NodePool has a `CriticalAddonsOnly` taint. Many EKS add-ons, such as CoreDNS, tolerate this taint. Use this system node pool to separate cluster-critical applications.
  + Supports both `amd64` and `arm64` architectures.
+  `general-purpose` 
  + This NodePool provides support for launching nodes for general purpose workloads in your cluster.
  + Uses only `amd64` architecture.

Both built-in NodePools:
+ Use the default EKS NodeClass
+ Use only on-demand EC2 capacity
+ Use the C, M, and R EC2 instance families
+ Require generation 5 or newer EC2 instances

**Note**  
Enabling at least one built-in NodePool is required for EKS to provision the "default" NodeClass. If you disable all built-in NodePools, you’ll need to create a custom NodeClass and configure a NodePool to use it. For more information about NodeClasses, see [Create a Node Class for Amazon EKS](create-node-class.md).
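If you disable all built-in NodePools, a custom NodePool referencing your own NodeClass might be sketched as follows. All names here are hypothetical, and the exact schema may differ; see the linked NodeClass topic for the authoritative reference.

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: custom-pool # hypothetical name
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: my-node-class # your custom NodeClass
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
```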

## Procedure
<a name="_procedure"></a>

### Prerequisites
<a name="_prerequisites"></a>
+ The latest version of the AWS Command Line Interface (AWS CLI) installed and configured on your device. To check your current version, use `aws --version`. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Quick configuration](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-configure-quickstart-config) with `aws configure` in the AWS Command Line Interface User Guide.
  + Log in to the CLI with sufficient IAM permissions to create AWS resources including IAM Policies, IAM Roles, and EKS Clusters.

### Enable with AWS CLI
<a name="enable_with_shared_aws_cli"></a>

Use the following command to enable both built-in NodePools:

```
aws eks update-cluster-config \
  --name <cluster-name> \
  --compute-config '{
    "nodeRoleArn": "<node-role-arn>",
    "nodePools": ["general-purpose", "system"],
    "enabled": true
  }' \
  --kubernetes-network-config '{
  "elasticLoadBalancing":{"enabled": true}
  }' \
  --storage-config '{
  "blockStorage":{"enabled": true}
  }'
```

You can modify the command to selectively enable the NodePools.

### Disable with AWS CLI
<a name="disable_with_shared_aws_cli"></a>

Use the following command to disable both built-in NodePools:

```
aws eks update-cluster-config \
  --name <cluster-name> \
  --compute-config '{
  "enabled": true,
  "nodePools": []
  }' \
  --kubernetes-network-config '{
  "elasticLoadBalancing":{"enabled": true}}' \
  --storage-config '{
  "blockStorage":{"enabled": true}
  }'
```

# Control if a workload is deployed on EKS Auto Mode nodes
<a name="associate-workload"></a>

When running workloads in an EKS cluster with EKS Auto Mode, you might need to control whether specific workloads run on EKS Auto Mode nodes or other compute types. This topic describes how to use node selectors and affinity rules to ensure your workloads are scheduled on the intended compute infrastructure.

The examples in this topic demonstrate how to use the `eks.amazonaws.com/compute-type` label to either require or prevent workload deployment on EKS Auto Mode nodes. This is particularly useful in mixed-mode clusters where you’re running both EKS Auto Mode and other compute types, such as self-managed Karpenter provisioners or EKS Managed Node Groups.

EKS Auto Mode sets the label `eks.amazonaws.com/compute-type` to the value `auto` on the nodes it manages. You can use this label to control whether a workload is deployed to nodes managed by EKS Auto Mode.

## Require a workload is deployed to EKS Auto Mode nodes
<a name="_require_a_workload_is_deployed_to_eks_auto_mode_nodes"></a>

**Note**  
This `nodeSelector` value is not required for EKS Auto Mode. It is only relevant if you are running a cluster in mixed mode, with node types that are not managed by EKS Auto Mode. For example, you might have static compute capacity deployed to your cluster with EKS Managed Node Groups, and dynamic compute capacity managed by EKS Auto Mode.

You can add this `nodeSelector` to Deployments or other workloads to require that Kubernetes schedules them onto EKS Auto Mode nodes.

```
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
```

## Require a workload is not deployed to EKS Auto Mode nodes
<a name="_require_a_workload_is_not_deployed_to_eks_auto_mode_nodes"></a>

You can add this `nodeAffinity` to Deployments or other workloads to require that Kubernetes does **not** schedule them onto EKS Auto Mode nodes.

```
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: eks.amazonaws.com/compute-type
            operator: NotIn
            values:
            - auto
```

# Run critical add-ons on dedicated instances
<a name="critical-workload"></a>

In this topic, you will learn how to deploy a workload with a `CriticalAddonsOnly` toleration so EKS Auto Mode will schedule it onto the `system` node pool.

EKS Auto Mode’s built-in `system` node pool is designed for running critical add-ons on dedicated instances. This segregation ensures essential components have dedicated resources and are isolated from general workloads, enhancing overall cluster stability and performance.

This guide demonstrates how to deploy add-ons to the `system` node pool by utilizing the `CriticalAddonsOnly` toleration and appropriate node selectors. By following these steps, you can ensure that your critical applications are scheduled onto the dedicated `system` nodes, leveraging the isolation and resource allocation benefits provided by EKS Auto Mode’s specialized node pool structure.

EKS Auto Mode has two built-in node pools: `general-purpose` and `system`. For more information, see [Enable or Disable Built-in NodePools](set-builtin-node-pools.md).

The purpose of the `system` node pool is to segregate critical add-ons onto different nodes. Nodes provisioned by the `system` node pool have a `CriticalAddonsOnly` Kubernetes taint. Kubernetes will only schedule pods onto these nodes if they have a corresponding toleration. For more information, see [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) in the Kubernetes documentation.

## Prerequisites
<a name="_prerequisites"></a>
+ EKS Auto Mode Cluster with the built-in `system` node pool enabled. For more information, see [Enable or Disable Built-in NodePools](set-builtin-node-pools.md) 
+  `kubectl` installed and configured. For more information, see [Set up to use Amazon EKS](setting-up.md).

## Procedure
<a name="_procedure"></a>

Review the example YAML below. Note the following configurations:
+  `nodeSelector` — This associates the workload with the built-in `system` node pool. This node pool must be enabled with the AWS API. For more information, see [Enable or Disable Built-in NodePools](set-builtin-node-pools.md).
+  `tolerations` — This toleration overcomes the `CriticalAddonsOnly` taint on nodes in the `system` node pool.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      nodeSelector:
        karpenter.sh/nodepool: system
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: app
        image: nginx:latest
        resources:
          requests:
            cpu: "500m"
            memory: "512Mi"
```

To update a workload to run on the `system` node pool, you need to:

1. Update the existing workload to add the following configurations described above:
   +  `nodeSelector` 
   +  `tolerations` 

1. Deploy the updated workload to your cluster with `kubectl apply` 

After updating the workload, it will run on dedicated nodes.

# Use Network Policies with EKS Auto Mode
<a name="auto-net-pol"></a>

## Overview
<a name="_overview"></a>

As customers scale their application environments using EKS, network traffic isolation becomes increasingly fundamental for preventing unauthorized access to resources inside and outside the cluster. This is especially important in a multi-tenant environment with multiple unrelated workloads running side by side in the cluster. Kubernetes network policies enable you to enhance the network security posture for your Kubernetes workloads, and their integrations with cluster-external endpoints. EKS Auto Mode supports different types of network policies.

### Layer 3 and 4 isolation
<a name="_layer_3_and_4_isolation"></a>

Standard Kubernetes network policies operate at layers 3 and 4 of the OSI network model and allow you to control traffic flow at the IP address or port level within your Amazon EKS cluster.

#### Use cases
<a name="_use_cases"></a>
+ Segment network traffic between workloads to ensure that only related applications can talk to each other.
+ Isolate tenants at the namespace level using policies to enforce network separation.
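For example, a standard Kubernetes `NetworkPolicy` implementing namespace-level isolation might look like the following sketch (the policy and namespace names are illustrative):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace # hypothetical name
  namespace: tenant-a # hypothetical tenant namespace
spec:
  podSelector: {} # apply to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {} # only allow traffic from pods in the same namespace
```

Because network policies are additive, pods in `tenant-a` accept ingress only from pods in the same namespace once this policy is applied.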

### DNS-based enforcement
<a name="_dns_based_enforcement"></a>

Customers typically deploy workloads in EKS that are part of a broader distributed environment, some of which have to communicate with systems and services outside the cluster (northbound traffic). These systems and services can be in the AWS cloud or outside AWS altogether. Domain Name System (DNS) based policies allow you to strengthen your security posture by adopting a more stable and predictable approach for preventing unauthorized access from pods to cluster-external resources or endpoints. This mechanism eliminates the need to manually track and allow list specific IP addresses. By securing resources with a DNS-based approach, you also have more flexibility to update external infrastructure without having to relax your security posture or modify network policies amid changes to upstream servers and hosts. You can filter egress traffic to external endpoints using either a Fully Qualified Domain Name (FQDN), or a matching pattern for a DNS domain name. This gives you the added flexibility of extending access to multiple subdomains associated with a particular cluster-external endpoint.

#### Use cases
<a name="_use_cases_2"></a>
+ Standardize on a DNS-based approach for filtering access from a Kubernetes environment to cluster-external endpoints.
+ Secure access to AWS services in a multi-tenant environment.
+ Manage network access from pods to on-prem workloads in your Hybrid cloud environments.

### Admin (or cluster-scoped) rules
<a name="_admin_or_cluster_scoped_rules"></a>

In some cases, like multi-tenant scenarios, customers may have the requirement to enforce a network security standard that applies to the whole cluster. Instead of repetitively defining and maintaining a distinct policy for each namespace, you can use a single policy to centrally manage network access controls for different workloads in the cluster, irrespective of their namespace. These types of policies allow you to extend the scope of enforcement for your network filtering rules applied at layer 3, layer 4, and when using DNS rules.

#### Use cases
<a name="_use_cases_3"></a>
+ Centrally manage network access controls for all (or a subset of) workloads in your EKS cluster.
+ Define a default network security posture across the cluster.
+ Extend organizational security standards to the scope of the cluster in a more operationally efficient way.

## Getting started
<a name="_getting_started"></a>

### Prerequisites
<a name="_prerequisites"></a>
+ An Amazon EKS cluster with EKS Auto Mode enabled
+ kubectl configured to connect to your cluster

### Step 1: Enable Network Policy Controller
<a name="_step_1_enable_network_policy_controller"></a>

To use network policies with EKS Auto Mode, you first need to enable the Network Policy Controller by applying a ConfigMap to your cluster.

1. Create a file named `enable-network-policy.yaml` with the following content:

   ```
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: amazon-vpc-cni
     namespace: kube-system
   data:
     enable-network-policy-controller: "true"
   ```

1. Apply the ConfigMap to your cluster:

   ```
   kubectl apply -f enable-network-policy.yaml
   ```

### Step 2: Create and test network policies
<a name="_step_2_create_and_test_network_policies"></a>

Your EKS Auto Mode cluster is now configured to support Kubernetes network policies. You can test this with the [Stars demo of network policy for Amazon EKS](network-policy-stars-demo.md).

### Step 3: Adjust Network Policy Agent configuration in Node Class (Optional)
<a name="_step_3_adjust_network_policy_agent_configuration_in_node_class_optional"></a>

You can optionally create a new Node Class to change the default behavior of the Network Policy Agent on the nodes or enable the logging of Network Policy events. To do this, follow these steps:

1. Create or edit a Node Class YAML file (e.g., `nodeclass-network-policy.yaml`) with the following content:

   ```
   apiVersion: eks.amazonaws.com/v1
   kind: NodeClass
   metadata:
     name: network-policy-config
   spec:
     # Optional: Changes default network policy behavior
     networkPolicy: DefaultAllow
     # Optional: Enables logging for network policy events
     networkPolicyEventLogs: Enabled
     # Include other Node Class configurations as needed
   ```

1. Apply the Node Class configuration to your cluster:

   ```
   kubectl apply -f nodeclass-network-policy.yaml
   ```

1. Verify that the Node Class has been created:

   ```
   kubectl get nodeclass network-policy-config
   ```

1. Update your Node Pool to use this Node Class. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).

## How does it work?
<a name="_how_does_it_work"></a>

### DNS-based network policy
<a name="_dns_based_network_policy"></a>

![\[Illustration of workflow when a DNS-based policy is applied in EKS Auto\]](http://docs.aws.amazon.com/eks/latest/userguide/images/apply-dns-policy-1.png)


![\[Illustration of workflow when a DNS-based policy is applied in EKS Auto\]](http://docs.aws.amazon.com/eks/latest/userguide/images/apply-dns-policy-2.png)


1. The platform team applies a DNS-based policy to the EKS cluster.

1. The Network Policy Controller is responsible for monitoring the creation of policies within the cluster and then reconciling policy endpoints. In this use case, the network policy controller instructs the node agent to filter DNS requests based on the allow-listed domains in the created policy. Domain names are allow-listed using the FQDN or a domain name that matches a pattern defined in the Kubernetes resource configuration.

1. Workload A attempts to resolve the IP for a cluster-external endpoint. The DNS request first goes through a proxy that filters such requests based on the allow list applied through the network policy.

1. Once the DNS request passes the DNS filter allow list, it is proxied to CoreDNS.

1. CoreDNS in turn sends the request to the External DNS Resolver (Amazon Route 53 Resolver) to get the list of IP addresses behind the domain name.

1. The resolved IPs with TTL are returned in the response to the DNS request. These IPs are then written in an eBPF map which is used in the next step for IP layer enforcement.

1. The eBPF probes attached to the Pod veth interface will then filter egress traffic from Workload A to the cluster-external endpoint based on the rules in place. This ensures pods can only send cluster-external traffic to the IPs of allow listed domains. The validity of these IPs is based on the TTL retrieved from the External DNS Resolver (Amazon Route 53 Resolver).

#### Using the Application Network Policy
<a name="_using_the_application_network_policy"></a>

The `ApplicationNetworkPolicy` combines the capabilities of standard Kubernetes network policies with DNS based filtering at a namespace level using a single Custom Resource Definition (CRD). Therefore, the `ApplicationNetworkPolicy` can be used for:

1. Defining restrictions at layers 3 and 4 of the network stack using IP blocks and port numbers.

1. Defining rules that operate at layer 7 of the network stack and letting you filter traffic based on FQDNs.

**Important**  
DNS based rules defined using the `ApplicationNetworkPolicy` are only applicable to workloads running in EKS Auto Mode-launched EC2 instances. `ApplicationNetworkPolicy` supports all fields of the standard Kubernetes `NetworkPolicy`, with an additional FQDN filter for egress rules.

**Warning**  
Do not use the same name for an `ApplicationNetworkPolicy` and a `NetworkPolicy` within the same namespace. If the names collide, the resulting `PolicyEndpoints` objects may not reflect either policy correctly. Both resources are accepted without error, making this issue difficult to diagnose.  
To resolve a naming conflict, rename either the `ApplicationNetworkPolicy` or the `NetworkPolicy` so they are unique within the namespace, then verify that the corresponding `PolicyEndpoints` objects are updated correctly.

#### Example
<a name="_example"></a>

You have a workload in your EKS Auto Mode cluster that needs to communicate with an application on-prem which is behind a load balancer with a DNS name. You could achieve this using the following network policy:

```
apiVersion: networking.k8s.aws/v1alpha1
kind: ApplicationNetworkPolicy
metadata:
  name: my-onprem-app-egress
  namespace: galaxy
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - domainNames:
      - "myapp.mydomain.com"
    ports:
    - protocol: TCP
      port: 8080
```

At the Kubernetes network level, this would allow egress from any pods in the `galaxy` namespace labeled with `role: backend` to connect to the domain name **myapp.mydomain.com** on TCP port 8080. In addition, you would need to set up network connectivity for egress traffic from your VPC to your corporate data center.

![\[Illustration of workload in EKS Auto communicating with applications on prem\]](http://docs.aws.amazon.com/eks/latest/userguide/images/eks-auto-to-on-prem.png)


### Admin (or cluster) network policy
<a name="_admin_or_cluster_network_policy"></a>

![\[Illustration of the evaluation order for network policies in EKS\]](http://docs.aws.amazon.com/eks/latest/userguide/images/evaluation-order.png)


#### Using the Cluster Network Policy
<a name="_using_the_cluster_network_policy"></a>

When using a `ClusterNetworkPolicy`, the Admin tier policies are evaluated first and cannot be overridden. When the Admin tier policies have been evaluated, the standard namespace scoped policies are used to execute the applied network segmentation rules. This can be accomplished by using either `ApplicationNetworkPolicy` or `NetworkPolicy`. Lastly, the Baseline tier rules that define the default network restrictions for cluster workloads will be enforced. These Baseline tier rules **can** be overridden by the namespace scoped policies if needed.

#### Example
<a name="_example_2"></a>

You have an application in your cluster that you want to isolate from other tenant workloads. You can explicitly block cluster traffic from other namespaces to prevent network access to the sensitive workload namespace.

```
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: protect-sensitive-workload
spec:
  tier: Admin
  priority: 10
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: earth
  ingress:
    - action: Deny
      from:
      - namespaces:
          matchLabels: {} # Match all namespaces.
      name: select-all-deny-all
```

## Considerations
<a name="_considerations"></a>

### Understand policy evaluation order
<a name="_understand_policy_evaluation_order"></a>

The network policy capabilities supported in EKS are evaluated in a specific order to ensure predictable and secure traffic management. Therefore, it’s important to understand the evaluation flow to design an effective network security posture for your environment.

1.  **Admin tier policies (evaluated first)**: All Admin tier ClusterNetworkPolicies are evaluated before any other policies. Within the Admin tier, policies are processed in priority order (lowest priority number first). The action type determines what happens next.
   +  **Deny action (highest precedence)**: When an Admin policy with a Deny action matches traffic, that traffic is immediately blocked regardless of any other policies. No further ClusterNetworkPolicy or NetworkPolicy rules are processed. This ensures that organization-wide security controls cannot be overridden by namespace-level policies.
   +  **Allow action**: After Deny rules are evaluated, Admin policies with Allow actions are processed in priority order (lowest priority number first). When an Allow action matches, the traffic is accepted and no further policy evaluation occurs. These policies can grant access across multiple namespaces based on label selectors, providing centralized control over which workloads can access specific resources.
   +  **Pass action**: Pass actions in Admin tier policies delegate decision-making to lower tiers. When traffic matches a Pass rule, evaluation skips all remaining Admin tier rules for that traffic and proceeds directly to the NetworkPolicy tier. This allows administrators to explicitly delegate control for certain traffic patterns to application teams. For example, you might use Pass rules to delegate intra-namespace traffic management to namespace administrators while maintaining strict controls over external access.

1.  **Network policy tier**: If no Admin tier policy matches with Deny or Allow, or if a Pass action was matched, namespace-scoped ApplicationNetworkPolicy and traditional NetworkPolicy resources are evaluated next. These policies provide fine-grained control within individual namespaces and are managed by application teams. Namespace-scoped policies can only be more restrictive than Admin policies. They cannot override an Admin policy’s Deny decision, but they can further restrict traffic that was allowed or passed by Admin policies.

1.  **Baseline tier Admin policies**: If no Admin or namespace-scoped policies match the traffic, Baseline tier ClusterNetworkPolicies are evaluated. These provide default security postures that can be overridden by namespace-scoped policies, allowing administrators to set organization-wide defaults while giving teams flexibility to customize as needed. Baseline policies are evaluated in priority order (lowest priority number first).

1.  **Default deny (if no policies match)**: This deny-by-default behavior ensures that only explicitly permitted connections are allowed, maintaining a strong security posture.
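
As an illustration of the Pass action described above, the following sketch delegates intra-namespace traffic decisions for one namespace to its namespace-scoped policies. The policy name and namespace label are illustrative; the resource shape follows the `ClusterNetworkPolicy` example earlier in this topic.

```
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: delegate-intra-namespace # hypothetical name
spec:
  tier: Admin
  priority: 20
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: earth
  ingress:
    - action: Pass # skip remaining Admin tier rules for this traffic
      from:
      - namespaces:
          matchLabels:
            kubernetes.io/metadata.name: earth
      name: pass-same-namespace
```

Traffic matching this rule skips the rest of the Admin tier and is evaluated by the namespace-scoped `ApplicationNetworkPolicy` and `NetworkPolicy` resources instead.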

### Applying the principle of least privilege
<a name="_applying_the_principle_of_least_privilege"></a>
+  **Start with restrictive policies and gradually add permissions as needed** - Begin by implementing deny-by-default policies at the cluster level, then incrementally add allow rules as you validate legitimate connectivity requirements. This approach forces teams to explicitly justify each external connection, creating a more secure and auditable environment.
+  **Regularly audit and remove unused policy rules** - Network policies can accumulate over time as applications evolve, leaving behind obsolete rules that unnecessarily expand your attack surface. Implement a regular review process to identify and remove policy rules that are no longer needed, ensuring your security posture remains tight and maintainable.
+  **Use specific domain names rather than broad patterns when possible** - While wildcard patterns like `*.amazonaws.com` provide convenience, they also grant access to a wide range of services. Whenever feasible, specify exact domain names like `s3.us-west-2.amazonaws.com` to limit access to only the specific services your applications require, reducing the risk of lateral movement if a workload is compromised.
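The deny-by-default starting point described above can be sketched with a standard Kubernetes NetworkPolicy. This is a minimal illustration only — the namespace name is a placeholder, and in practice you may express cluster-wide defaults with the Admin or Baseline tier policies described earlier instead:

```
# Minimal deny-all-egress baseline for one namespace (illustrative sketch).
# An empty podSelector matches every pod in the namespace; listing the
# Egress policy type with no egress rules blocks all outbound traffic,
# so each legitimate destination must then be allowed explicitly.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: my-app        # hypothetical namespace
spec:
  podSelector: {}
  policyTypes:
    - Egress
```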

### Using DNS-based policies in EKS
<a name="_using_dns_based_policies_in_eks"></a>
+ DNS-based rules defined using `ApplicationNetworkPolicy` apply only to workloads running on EC2 instances launched by EKS Auto Mode. If you are running a mixed-mode cluster (consisting of both EKS Auto Mode and non-EKS Auto Mode worker nodes), your DNS-based rules are effective only on the EKS Auto Mode worker nodes (EC2 managed instances).

### Validating your DNS policies
<a name="_validating_your_dns_policies"></a>
+  **Use staging clusters that mirror production network topology for testing** - Your staging environment should replicate the network architecture, external dependencies, and connectivity patterns of production to ensure accurate policy testing. This includes matching VPC configurations, DNS resolution behavior, and access to the same external services your production workloads require.
+  **Implement automated testing for critical network paths** - Build automated tests that validate connectivity to essential external services as part of your CI/CD pipeline. These tests should verify that legitimate traffic flows are permitted while unauthorized connections are blocked, providing continuous validation that your network policies maintain the correct security posture as your infrastructure evolves.
+  **Monitor application behavior after policy changes** - After deploying new or modified network policies to production, closely monitor application logs, error rates, and performance metrics to quickly identify any connectivity issues. Establish clear rollback procedures so you can rapidly revert policy changes if they cause unexpected application behavior or service disruptions.

### Interaction with Amazon Route 53 DNS firewall
<a name="_interaction_with_amazon_route_53_dns_firewall"></a>

EKS Admin and Network policies are evaluated first at the pod level when traffic is initiated. If an EKS network policy allows egress to a specific domain, the pod then performs a DNS query that reaches the Route 53 Resolver. At this point, Route 53 DNS Firewall rules are evaluated. If DNS Firewall blocks the domain query, DNS resolution fails and the connection cannot be established, even though the EKS network policy allowed it. This creates complementary security layers: EKS DNS-based network policies provide pod-level egress control for application-specific access requirements and multi-tenant security boundaries, while DNS Firewall provides VPC-wide protection against known malicious domains and enforces organization-wide blocklists.

# Tag subnets for EKS Auto Mode
<a name="tag-subnets-auto"></a>

If you use the load balancing capability of EKS Auto Mode, you need to add AWS tags to your VPC subnets.

## Background
<a name="_background"></a>

These tags identify subnets as associated with the cluster and, more importantly, whether the subnet is public or private.

Public subnets have direct internet access via an internet gateway. They are used for resources that need to be publicly accessible such as load balancers.

Private subnets do not have direct internet access and use NAT gateways for outbound traffic. They are used for internal resources such as EKS nodes that don’t need public IPs.

To learn more about NAT gateways and Internet gateways, see [Connect your VPC to other networks](https://docs.aws.amazon.com/vpc/latest/userguide/extend-intro.html) in the Amazon Virtual Private Cloud (VPC) User Guide.

## Requirement
<a name="_requirement"></a>

At this time, subnets used for load balancing by EKS Auto Mode are required to have one of the following tags.

### Public subnets
<a name="_public_subnets"></a>

Public subnets are used for internet-facing load balancers. These subnets must have the following tags:


| Key | Value | 
| --- | --- | 
|   `kubernetes.io/role/elb`   |   `1` or empty  | 

### Private subnets
<a name="_private_subnets"></a>

Private subnets are used for internal load balancers. These subnets must have the following tags:


| Key | Value | 
| --- | --- | 
|   `kubernetes.io/role/internal-elb`   |   `1` or empty  | 

## Procedure
<a name="_procedure"></a>

Before you begin, identify which subnets are public (with Internet Gateway access) and which are private (using NAT Gateway). You’ll need permissions to modify VPC resources.
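One way to check whether a subnet is public is to inspect its route table for a route to an internet gateway. The following AWS CLI sketch shows this check (the subnet ID is a placeholder, and the command requires credentials with `ec2:DescribeRouteTables` permission):

```
aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0" \
    --query "RouteTables[].Routes[].GatewayId" \
    --output text
```

If the output contains an `igw-` entry, the subnet routes directly to an internet gateway and is public; otherwise it is private.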

### AWS Management Console
<a name="auto-tag-subnets-console"></a>

1. Open the Amazon VPC console and navigate to **Subnets**.

1. Select the subnet to tag.

1. Choose the **Tags** tab and select **Add tag**.

1. Add the appropriate tag:
   + For public subnets: Key=`kubernetes.io/role/elb` 
   + For private subnets: Key=`kubernetes.io/role/internal-elb` 

1. Set **Value** to `1` or leave empty.

1. Save and repeat for remaining subnets.

### AWS CLI
<a name="shared_aws_cli"></a>

For public subnets:

```
aws ec2 create-tags \
    --resources subnet-ID \
    --tags Key=kubernetes.io/role/elb,Value=1
```

For private subnets:

```
aws ec2 create-tags \
    --resources subnet-ID \
    --tags Key=kubernetes.io/role/internal-elb,Value=1
```

Replace `subnet-ID` with your actual subnet ID.
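To confirm the tags were applied, you can list the role tags on a subnet (again, the subnet ID is a placeholder):

```
aws ec2 describe-tags \
    --filters "Name=resource-id,Values=subnet-0123456789abcdef0" \
              "Name=key,Values=kubernetes.io/role/elb,kubernetes.io/role/internal-elb" \
    --output table
```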

# Generate CIS compliance reports from Kubernetes nodes using kubectl debug
<a name="auto-cis"></a>

This topic describes how to generate CIS (Center for Internet Security) compliance reports for Amazon EKS nodes using the `kubectl debug` command. The command allows you to temporarily create a debugging container on a Kubernetes node and run CIS compliance checks using the `apiclient` tool. The `apiclient` tool is part of Bottlerocket OS, the OS used by EKS Auto Mode nodes.

## Prerequisites
<a name="_prerequisites"></a>

Before you begin, ensure you have:
+ Access to an Amazon EKS cluster with `kubectl` configured (version must be at least v1.32.0; type `kubectl version` to check).
+ The appropriate IAM permissions to debug nodes.
+ A valid profile that allows debug operations (e.g., `sysadmin`).

For more information about using debugging profiles with `kubectl`, see [Debugging a Pod or Node while applying a profile](https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#debugging-profiles) in the Kubernetes documentation.

## Procedure
<a name="_procedure"></a>

1. Determine the AWS Instance ID of the node you want to run the report on. Use the following command to list the nodes in the cluster. The instance ID is found in the name column, and begins with `i-`:

   ```
   kubectl get nodes
   ```

   ```
   NAME                  STATUS   ROLES    AGE   VERSION
   i-0ea0ba0f8ef9ad609   Ready    <none>   62s   v1.30.10-eks-1a9dacd
   ```

1. Run the following command, replacing `<instance-id>` with the instance ID of the node you want to query:

   ```
   kubectl debug node/<instance-id> -it --profile=sysadmin --image=public.ecr.aws/amazonlinux/amazonlinux:2023 -- bash -c "yum install -q -y util-linux-core; nsenter -t 1 -m apiclient report cis --level 1 --format text"
   ```

   Components of this command include:
   +  `kubectl debug node/<instance-id>` — Creates a debugging session on the specified EC2 instance ID.
   +  `-it` — Allocates a TTY (command line shell) and keeps stdin open for interactive usage.
   +  `--profile=sysadmin` — Uses the specified `kubectl` profile with appropriate permissions.
   +  `--image=public.ecr.aws/amazonlinux/amazonlinux:2023` — Uses `amazonlinux:2023` as the container image for debugging.
   +  `bash -c "…​"` — Executes the following commands in a bash shell:
     +  `yum install -q -y util-linux-core` — Quietly installs the required utilities package.
     +  `nsenter -t 1 -m` — Runs `nsenter` to enter the namespace of the host process (PID 1).
     +  `apiclient report cis --level 1 --format text` — Runs the CIS compliance report at level 1 with text output.

1. Review the report text output.

## Interpreting the output
<a name="_interpreting_the_output"></a>

The command generates a text-based report showing the compliance status of various CIS controls. The output includes:
+ Individual CIS control IDs
+ Description of each control
+ Pass, Fail, or Skip status for each check
+ Details that explain any compliance issues

Here is an example of output from the report run on a Bottlerocket instance:

```
Benchmark name:  CIS Bottlerocket Benchmark
Version:         v1.0.0
Reference:       https://www.cisecurity.org/benchmark/bottlerocket
Benchmark level: 1
Start time:      2025-04-11T01:40:39.055623436Z

[SKIP] 1.2.1     Ensure software update repositories are configured (Manual)
[PASS] 1.3.1     Ensure dm-verity is configured (Automatic)
[PASS] 1.4.1     Ensure setuid programs do not create core dumps (Automatic)
[PASS] 1.4.2     Ensure address space layout randomization (ASLR) is enabled (Automatic)
[PASS] 1.4.3     Ensure unprivileged eBPF is disabled (Automatic)
[PASS] 1.5.1     Ensure SELinux is configured (Automatic)
[SKIP] 1.6       Ensure updates, patches, and additional security software are installed (Manual)
[PASS] 2.1.1.1   Ensure chrony is configured (Automatic)
[PASS] 3.2.5     Ensure broadcast ICMP requests are ignored (Automatic)
[PASS] 3.2.6     Ensure bogus ICMP responses are ignored (Automatic)
[PASS] 3.2.7     Ensure TCP SYN Cookies is enabled (Automatic)
[SKIP] 3.4.1.3   Ensure IPv4 outbound and established connections are configured (Manual)
[SKIP] 3.4.2.3   Ensure IPv6 outbound and established connections are configured (Manual)
[PASS] 4.1.1.1   Ensure journald is configured to write logs to persistent disk (Automatic)
[PASS] 4.1.2     Ensure permissions on journal files are configured (Automatic)

Passed:          11
Failed:          0
Skipped:         4
Total checks:    15
```

For information about the benchmark, see [Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes/) from the Center for Internet Security (CIS).

## Related resources
<a name="_related_resources"></a>
+  [Bottlerocket CIS Benchmark](https://bottlerocket.dev/en/os/1.34.x/api/reporting/cis/) in Bottlerocket OS Documentation.
+  [Debug Running Pods](https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/) in the Kubernetes Documentation.
+  [Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes/) from the Center for Internet Security (CIS)

# Enable EBS Volume Encryption with Customer Managed KMS Keys for EKS Auto Mode
<a name="auto-kms"></a>

You can encrypt the ephemeral root volume for EKS Auto Mode instances with a customer managed KMS key.

Amazon EKS Auto Mode uses service-linked roles to delegate permissions to other AWS services when managing encrypted EBS volumes for your Kubernetes clusters. This topic describes how to set up the key policy that you need when specifying a customer managed key for Amazon EBS encryption with EKS Auto Mode.

Considerations:
+ EKS Auto Mode does not need additional authorization to use the default AWS managed key to protect the encrypted volumes in your account.
+ This topic covers encrypting ephemeral volumes, the root volumes for EC2 instances. For more information about encrypting data volumes used for workloads, see [Create a storage class](create-storage-class.md).

## Overview
<a name="_overview"></a>

The following AWS KMS keys can be used for Amazon EBS root volume encryption when EKS Auto Mode launches instances:
+  **AWS managed key** – An encryption key in your account that Amazon EBS creates, owns, and manages. This is the default encryption key for a new account.
+  **Customer managed key** – A custom encryption key that you create, own, and manage.

**Note**  
The key must be symmetric. Amazon EBS does not support asymmetric customer managed keys.

## Step 1: Configure the key policy
<a name="_step_1_configure_the_key_policy"></a>

Your KMS keys must have a key policy that allows EKS Auto Mode to launch instances with Amazon EBS volumes encrypted with a customer managed key.

Configure your key policy with the following structure:

**Note**  
This policy only includes permissions for EKS Auto Mode. The key policy may need additional permissions if other identities need to use the key or manage grants.

```
{
    "Version": "2012-10-17",
    "Id": "MyKeyPolicy",
    "Statement": [
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123456789012:role/ClusterServiceRole"
                ]
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123456789012:role/ClusterServiceRole"
                ]
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
    ]
}
```

Make sure to replace `123456789012` with your AWS account ID and `ClusterServiceRole` with the name of your cluster's service role.

When configuring the key policy:
+ The `ClusterServiceRole` must have the necessary IAM permissions to use the KMS key for encryption operations
+ The `kms:GrantIsForAWSResource` condition ensures that grants can only be created for AWS services
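After editing the policy document, one way to attach it to the key is with the AWS CLI (the key ID and file name are placeholders; the policy name for a key policy must be `default`):

```
aws kms put-key-policy \
    --key-id 1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d \
    --policy-name default \
    --policy file://key-policy.json
```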

## Step 2: Configure NodeClass with your customer managed key
<a name="_step_2_configure_nodeclass_with_your_customer_managed_key"></a>

After configuring the key policy, reference the KMS key in your EKS Auto Mode NodeClass configuration:

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: my-node-class
spec:
  # Insert existing configuration

  ephemeralStorage:
    size: "80Gi"  # Range: 1-59000Gi or 1-64000G or 1-58Ti or 1-64T
    iops: 3000    # Range: 3000-16000
    throughput: 125  # Range: 125-1000

    # KMS key for encryption
    kmsKeyID: "arn:aws:kms:<region>:<account-id>:key/<key-id>"
```

Replace the placeholder values with your actual values:
+  `<region>` with your AWS region
+  `<account-id>` with your AWS account ID
+  `<key-id>` with your KMS key ID

You can specify the KMS key using any of the following formats:
+ KMS Key ID: `1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d` 
+ KMS Key ARN: `arn:aws:kms:us-west-2:111122223333:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d` 
+ Key Alias Name: `alias/eks-auto-mode-key` 
+ Key Alias ARN: `arn:aws:kms:us-west-2:111122223333:alias/eks-auto-mode-key` 
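If you prefer referencing the key by alias, you can create one with the AWS CLI (the key ID below is a placeholder):

```
aws kms create-alias \
    --alias-name alias/eks-auto-mode-key \
    --target-key-id 1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d
```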

Apply the NodeClass configuration using kubectl:

```
kubectl apply -f nodeclass.yaml
```

## Related Resources
<a name="_related_resources"></a>
+  [Create a Node Class for Amazon EKS](create-node-class.md) 
+ View more information in the AWS Key Management Service Developer Guide
  +  [Permissions for AWS services in key policies](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-services.html) 
  +  [Change a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html) 
  +  [Grants in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html) 

# Update organization controls for EKS Auto Mode
<a name="auto-controls"></a>

Some organization controls can prevent EKS Auto Mode from functioning correctly. If so, you must update these controls to allow EKS Auto Mode to have the permissions required to manage EC2 instances on your behalf.

EKS Auto Mode uses a service role for launching the EC2 Instances that back EKS Auto Mode Nodes. A service role is an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) which is created in your account that a service assumes to perform actions on your behalf. [Service Control Policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) (SCPs) always apply to actions performed with service roles. This allows an SCP to inhibit Auto Mode’s operations. The most common occurrence is when an SCP is used to restrict the Amazon Machine Images (AMIs) that can be launched. To allow EKS Auto Mode to function, modify the SCP to permit launching AMIs from EKS Auto Mode accounts.

You can also use the [EC2 Allowed AMIs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-allowed-amis.html) feature to limit the visibility of AMIs in other accounts. If you use this feature, you must expand the image criteria to also include the EKS Auto Mode AMI accounts in the regions of interest.

## Example SCP to block all AMIs except for EKS Auto Mode AMIs
<a name="_example_scp_to_block_all_amis_except_for_eks_auto_mode_amis"></a>

The SCP below prevents calling `ec2:RunInstances` unless the AMI belongs to the EKS Auto Mode AMI account for us-west-2 or us-east-1.

**Note**  
It’s important **not** to use the `ec2:Owner` context key. Amazon owns the EKS Auto Mode AMI accounts, so the value for this key will always be `amazon`. Constructing an SCP that allows launching AMIs if `ec2:Owner` is `amazon` will allow launching any Amazon-owned AMI, not just those for EKS Auto Mode.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAMI",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:*:ec2:*::image/ami-*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": [
            "767397842682",
            "992382739861"
          ]
        }
      }
    }
  ]
}
```

## EKS Auto Mode AMI accounts
<a name="_eks_auto_mode_ami_accounts"></a>

EKS Auto Mode public AMIs are hosted in AWS accounts that vary by Region.


|  AWS Region  |  Account  | 
| --- | --- |
|  af-south-1  |  471112993317  | 
|  ap-east-1  |  590183728416  | 
|  ap-east-2  |  381492200852  | 
|  ap-northeast-1  |  851725346105  | 
|  ap-northeast-2  |  992382805010  | 
|  ap-northeast-3  |  891377407544  | 
|  ap-south-1  |  975049899075  | 
|  ap-south-2  |  590183737426  | 
|  ap-southeast-1  |  339712723301  | 
|  ap-southeast-2  |  058264376476  | 
|  ap-southeast-3  |  471112941769  | 
|  ap-southeast-4  |  590183863144  | 
|  ap-southeast-5  |  654654202513  | 
|  ap-southeast-6  |  905418310314  | 
|  ap-southeast-7  |  533267217478  | 
|  ca-central-1  |  992382439851  | 
|  ca-west-1  |  767397959864  | 
|  eu-central-1  |  891376953411  | 
|  eu-central-2  |  381492036002  | 
|  eu-north-1  |  339712696471  | 
|  eu-south-1  |  975049955519  | 
|  eu-south-2  |  471112620929  | 
|  eu-west-1  |  381492008532  | 
|  eu-west-2  |  590184142468  | 
|  eu-west-3  |  891376969258  | 
|  il-central-1  |  590183797093  | 
|  me-central-1  |  637423494195  | 
|  me-south-1  |  905418070398  | 
|  mx-central-1  |  211125506622  | 
|  sa-east-1  |  339712709251  | 
|  us-east-1  |  992382739861  | 
|  us-east-2  |  975050179949  | 
|  us-west-1  |  975050035094  | 
|  us-west-2  |  767397842682  | 
|  us-gov-east-1  |  446077414359  | 
|  us-gov-west-1  |  446098668741  | 

## Associate Public IP address
<a name="_associate_public_ip_address"></a>

When `ec2:RunInstances` is called, the `AssociatePublicIpAddress` field for an instance launch is determined automatically by the type of subnet that the instance is launched into. An SCP may be used to enforce that this value is explicitly set to false, regardless of the subnet type. In that case, set the NodeClass field `spec.advancedNetworking.associatePublicIPAddress` to false to satisfy the requirements of the SCP.

```
{
    "Sid": "DenyPublicEC2IPAddresses",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:network-interface/*",
    "Condition": {
        "BoolIfExists": {
            "ec2:AssociatePublicIpAddress": "true"
        }
    }
}
```
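A NodeClass fragment that satisfies such an SCP might look like the following sketch (the metadata name is a placeholder, and other NodeClass fields are omitted for brevity):

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: no-public-ip
spec:
  # Explicitly disable public IP association so the launch request
  # is not denied by an SCP like the one above.
  advancedNetworking:
    associatePublicIPAddress: false
```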

# Control deployment of workloads into Capacity Reservations with EKS Auto Mode
<a name="auto-odcr"></a>

You can control the deployment of workloads onto [Capacity Reservations](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-reservation-overview.html). EKS Auto Mode supports EC2 On-Demand Capacity Reservations (ODCRs), and EC2 Capacity Blocks for ML.

**Tip**  
By default, EKS Auto Mode can launch into open ODCRs through open-matching, but does not prioritize them. Instances launched through open-matching are labeled `karpenter.sh/capacity-type: on-demand`, not `reserved`. To prioritize ODCR usage and have instances labeled `karpenter.sh/capacity-type: reserved`, configure `capacityReservationSelectorTerms` in the NodeClass definition. Capacity Blocks for ML always require `capacityReservationSelectorTerms` and are not used automatically.

## EC2 On-Demand Capacity Reservations (ODCRs)
<a name="_ec2_on_demand_capacity_reservations_odcrs"></a>

EC2 On-Demand Capacity Reservations (ODCRs) allow you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. When using EKS Auto Mode, you may want to control whether your Kubernetes workloads are deployed onto these reserved instances to maximize utilization of pre-purchased capacity or to ensure critical workloads have access to guaranteed resources.

By default, EKS Auto Mode automatically launches into open ODCRs. However, by configuring `capacityReservationSelectorTerms` on a NodeClass, you can explicitly control which ODCRs your workloads use. Nodes provisioned using configured ODCRs will have `karpenter.sh/capacity-type: reserved` and will be prioritized over on-demand and spot. Once this feature is enabled, EKS Auto Mode will no longer automatically use open ODCRs—they must be explicitly selected by a NodeClass, giving you precise control over capacity reservation usage across your cluster.

**Warning**  
If you configure `capacityReservationSelectorTerms` on a NodeClass in a cluster, EKS Auto Mode will no longer automatically use open ODCRs for *any* NodeClass in the cluster.

### Example NodeClass
<a name="_example_nodeclass"></a>

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
spec:
  # Optional: Selects upon on-demand capacity reservations and capacity blocks
  # for EKS Auto Mode to prioritize.
  capacityReservationSelectorTerms:
    - id: cr-56fac701cc1951b03
    # Alternative Approaches
    - tags:
        app: "my-app"
      # Optional owning account ID filter
      owner: "012345678901"
```

This example NodeClass demonstrates two approaches for selecting ODCRs. The first method directly references a specific ODCR by its ID (`cr-56fac701cc1951b03`). The second method uses tag-based selection, targeting ODCRs tagged `app: "my-app"`. You can also optionally filter by the AWS account that owns the reservation, which is particularly useful in cross-account scenarios or when working with shared capacity reservations.
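To verify that nodes are actually launching into the selected reservation, you can inspect the capacity-type label on your nodes with kubectl:

```
kubectl get nodes -L karpenter.sh/capacity-type
```

Nodes provisioned from a selected ODCR report `karpenter.sh/capacity-type=reserved`, while open-matched or regular on-demand nodes report `on-demand`.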

## EC2 Capacity Blocks for ML
<a name="_ec2_capacity_blocks_for_ml"></a>

Capacity Blocks for ML reserve GPU-based accelerated computing instances on a future date to support your short duration machine learning (ML) workloads. Instances that run inside a Capacity Block are automatically placed close together inside Amazon EC2 UltraClusters, for low-latency, petabit-scale, non-blocking networking.

For more information about the supported platforms and instance types, see [Capacity Blocks for ML](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-blocks.html) in the EC2 User Guide.

You can create an EKS Auto Mode NodeClass that uses a Capacity Block for ML, similar to an ODCR (described earlier).

The following sample definitions create three resources:

1. A NodeClass that references your Capacity Block reservation

1. A NodePool that uses the NodeClass and applies a taint

1. A Pod specification that tolerates the taint and requests GPU resources

### Example NodeClass
<a name="_example_nodeclass_2"></a>

This NodeClass references a specific Capacity Block for ML by its reservation ID. You can obtain this ID from the EC2 console.

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: gpu
spec:
  # Specify your Capacity Block reservation ID
  capacityReservationSelectorTerms:
    - id: cr-56fac701cc1951b03
```

For more information, see [Create a Node Class for Amazon EKS](create-node-class.md).

### Example NodePool
<a name="_example_nodepool"></a>

This NodePool references the `gpu` NodeClass and specifies important configuration:
+ It **only** uses reserved capacity by setting `karpenter.sh/capacity-type: reserved` 
+ It requests specific GPU instance families appropriate for ML workloads
+ It applies a `nvidia.com/gpu` taint to ensure only GPU workloads are scheduled on these nodes

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gpu
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: gpu
      requirements:
        - key: eks.amazonaws.com/instance-family
          operator: In
          values:
            - g6
            - p4d
            - p4de
            - p5
            - p5e
            - p5en
            - p6
            - p6-b200
        - key: karpenter.sh/capacity-type
          operator: In
          values:
            - reserved
            # Enable other capacity types
            # - on-demand
            # - spot
      taints:
        - effect: NoSchedule
          key: nvidia.com/gpu
```

For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).

### Example Pod
<a name="_example_pod"></a>

This example pod demonstrates how to configure a workload to run on your Capacity Block nodes:
+ It uses a **nodeSelector** to target specific GPU types (in this case, H200 GPUs)
+ It includes a **toleration** for the `nvidia.com/gpu` taint applied by the NodePool
+ It explicitly **requests GPU resources** using the `nvidia.com/gpu` resource type

```
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi
spec:
  nodeSelector:
    # Select specific GPU type - uncomment as needed
    # eks.amazonaws.com/instance-gpu-name: l4
    # eks.amazonaws.com/instance-gpu-name: a100
    eks.amazonaws.com/instance-gpu-name: h200
    # eks.amazonaws.com/instance-gpu-name: b200
    eks.amazonaws.com/compute-type: auto
  restartPolicy: OnFailure
  containers:
  - name: nvidia-smi
    image: public.ecr.aws/amazonlinux/amazonlinux:2023-minimal
    args:
    - "nvidia-smi"
    resources:
      requests:
        # Uncomment if needed
        # memory: "30Gi"
        # cpu: "3500m"
        nvidia.com/gpu: 1
      limits:
        # Uncomment if needed
        # memory: "30Gi"
        nvidia.com/gpu: 1
  tolerations:
  - key: nvidia.com/gpu
    effect: NoSchedule
    operator: Exists
```

For more information, see [Pods](https://kubernetes.io/docs/concepts/workloads/pods/) in the Kubernetes documentation.

### Related Resources
<a name="_related_resources"></a>
+  [Capacity Blocks for ML](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-blocks.html) in the Amazon EC2 User Guide
+  [Find and purchase Capacity Blocks](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-blocks-purchase.html) in the Amazon EC2 User Guide
+  [Manage compute resources for AI/ML workloads on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/ml-compute-management.html) 
+  [GPU Resource Optimization and Cost Management](https://docs.aws.amazon.com/eks/latest/best-practices/aiml-compute.html#_gpu_resource_optimization_and_cost_management) in the EKS Best Practices Guide

# Deploy EKS Auto Mode nodes onto Local Zones
<a name="auto-local-zone"></a>

EKS Auto Mode provides simplified cluster management with automatic node provisioning. AWS Local Zones extend AWS infrastructure to geographic locations closer to your end users, reducing latency for latency-sensitive applications. This guide walks you through the process of deploying EKS Auto Mode nodes onto AWS Local Zones, enabling you to run containerized applications with lower latency for users in specific geographic areas.

This guide also demonstrates how to use Kubernetes taints and tolerations to ensure that only specific workloads run on your Local Zone nodes, helping you control costs and optimize resource usage.

## Prerequisites
<a name="_prerequisites"></a>

Before you begin deploying EKS Auto Mode nodes onto Local Zones, ensure you have the following prerequisites in place:
+  [An existing EKS Auto Mode Cluster](create-auto.md) 
+  [Opted-in to local zone in your AWS account](https://docs.aws.amazon.com/local-zones/latest/ug/getting-started.html#getting-started-find-local-zone) 

## Step 1: Create Local Zone Subnet
<a name="_step_1_create_local_zone_subnet"></a>

The first step in deploying EKS Auto Mode nodes to a Local Zone is creating a subnet in that Local Zone. This subnet provides the network infrastructure for your nodes and allows them to communicate with the rest of your VPC. Follow the [Create a Local Zone subnet](https://docs.aws.amazon.com/local-zones/latest/ug/getting-started.html#getting-started-create-local-zone-subnet) instructions (in the AWS Local Zones User Guide) to create a subnet in your chosen Local Zone.

**Tip**  
Make a note of the ID of your Local Zone subnet.
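As an alternative to the console flow, a Local Zone subnet can be created with the AWS CLI. In this sketch, the VPC ID, CIDR block, and Name tag are placeholders, and `us-west-2-lax-1a` stands in for whichever Local Zone you opted into:

```
aws ec2 create-subnet \
    --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.128.0/20 \
    --availability-zone us-west-2-lax-1a \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=my-local-zone-subnet}]'
```

The subnet ID returned by this command is the value to reference in the NodeClass in the next step.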

## Step 2: Create NodeClass for Local Zone Subnet
<a name="_step_2_create_nodeclass_for_local_zone_subnet"></a>

After creating your Local Zone subnet, you need to define a NodeClass that references this subnet. The NodeClass is a Kubernetes custom resource that specifies the infrastructure attributes for your nodes, including which subnets, security groups, and storage configurations to use. In the example below, we create a NodeClass called "local-zone" that targets a Local Zone subnet by its subnet ID. You’ll need to adapt this configuration to target your Local Zone subnet.

For more information, see [Create a Node Class for Amazon EKS](create-node-class.md).

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: local-zone
spec:
  subnetSelectorTerms:
    - id: <local-subnet-id>
```

## Step 3: Create NodePool with NodeClass and Taint
<a name="_step_3_create_nodepool_with_nodeclass_and_taint"></a>

With your NodeClass configured, you now need to create a NodePool that uses this NodeClass. A NodePool defines the compute characteristics of your nodes, including instance types. The NodePool uses the NodeClass as a reference to determine where to launch instances.

In the example below, we create a NodePool that references our "local-zone" NodeClass. We also add a taint to the nodes to ensure that only pods with a matching toleration can be scheduled on these Local Zone nodes. This is particularly important for Local Zone nodes, which typically have higher costs and should only be used by workloads that specifically benefit from the reduced latency.

For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: my-node-pool
spec:
  template:
    metadata:
      labels:
        node-type: local-zone
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: local-zone
      taints:
        - key: "aws.amazon.com/local-zone"
          value: "true"
          effect: NoSchedule
      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c", "m", "r"]
        - key: "eks.amazonaws.com/instance-cpu"
          operator: In
          values: ["4", "8", "16", "32"]
```

The taint with key `aws.amazon.com/local-zone` and effect `NoSchedule` ensures that pods without a matching toleration won’t be scheduled on these nodes. This prevents regular workloads from accidentally running in the Local Zone, which could lead to unexpected costs.

## Step 4: Deploy Workloads with Toleration and Node Affinity
<a name="_step_4_deploy_workloads_with_toleration_and_node_affinity"></a>

For optimal control over workload placement on Local Zone nodes, use both taints/tolerations and node affinity together. This combined approach provides the following benefits:

1.  **Cost Control**: The taint ensures that only pods with explicit tolerations can use potentially expensive Local Zone resources.

1.  **Guaranteed Placement**: Node affinity ensures that your latency-sensitive applications run exclusively in the Local Zone, not on regular cluster nodes.

Here’s an example of a Deployment configured to run specifically on Local Zone nodes:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: low-latency-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: low-latency-app
  template:
    metadata:
      labels:
        app: low-latency-app
    spec:
      tolerations:
      - key: "aws.amazon.com/local-zone"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: "node-type"
                operator: "In"
                values: ["local-zone"]
      containers:
      - name: application
        image: my-low-latency-app:latest
        resources:
          limits:
            cpu: "1"
            memory: "1Gi"
          requests:
            cpu: "500m"
            memory: "512Mi"
```

This Deployment has two key scheduling configurations:

1. The **toleration** allows the pods to be scheduled on nodes with the `aws.amazon.com/local-zone` taint.

1. The **node affinity** requirement ensures these pods will only run on nodes with the label `node-type: local-zone`.

Together, these ensure that your latency-sensitive application runs only on Local Zone nodes, and regular applications don’t consume the Local Zone resources unless explicitly configured to do so.

## Step 5: Verify with AWS Console
<a name="step_5_verify_with_shared_aws_console"></a>

After setting up your NodeClass, NodePool, and Deployments, you should verify that nodes are being provisioned in your Local Zone as expected and that your workloads are running on them. You can use the AWS Management Console to verify that EC2 instances are being launched in the correct Local Zone subnet.

Additionally, you can check the Kubernetes node list using `kubectl get nodes -o wide` to confirm that the nodes are joining your cluster with the correct labels and taints:

```
kubectl get nodes -o wide
kubectl describe node <node-name> | grep -A 5 Taints
```

You can also verify that your workload pods are scheduled on the Local Zone nodes:

```
kubectl get pods -o wide
```
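You can narrow these checks with label and field selectors, for example (the `node-type: local-zone` label comes from the NodePool above, and `<node-name>` is a placeholder for one of your Local Zone nodes):

```
# List only the Local Zone nodes by their NodePool label
kubectl get nodes -l node-type=local-zone

# List the pods running on a specific Local Zone node
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name> -o wide
```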

This approach ensures that only workloads that specifically tolerate the Local Zone taint will be scheduled on these nodes, helping you control costs and make the most efficient use of your Local Zone resources.

# Configure advanced security settings for nodes
<a name="auto-advanced-security"></a>

This topic describes how to configure advanced security settings for Amazon EKS Auto Mode nodes using the `advancedSecurity` specification in your Node Class.

## Prerequisites
<a name="_prerequisites"></a>

Before you begin, ensure you have:
+ An Amazon EKS Auto Mode cluster. For more information, see [Create a cluster with Amazon EKS Auto Mode](create-auto.md).
+  `kubectl` installed and configured. For more information, see [Set up to use Amazon EKS](setting-up.md).
+ Understanding of Node Class configuration. For more information, see [Create a Node Class for Amazon EKS](create-node-class.md).

## Configure advanced security settings
<a name="_configure_advanced_security_settings"></a>

To configure advanced security settings for your nodes, set the `advancedSecurity` fields in your Node Class specification:

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: security-hardened
spec:
  role: MyNodeRole

  subnetSelectorTerms:
    - tags:
        Name: "private-subnet"

  securityGroupSelectorTerms:
    - tags:
        Name: "eks-cluster-sg"

  advancedSecurity:
    # Enable FIPS-compliant AMIs (US regions only)
    fips: true

    # Configure kernel lockdown mode
    kernelLockdown: "integrity"
```

Apply this configuration:

```
kubectl apply -f nodeclass.yaml
```

Reference this Node Class in your Node Pool configuration. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).
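For example, a minimal NodePool referencing the `security-hardened` NodeClass above might look like the following sketch (the pool name is illustrative):

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: security-hardened-pool
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: security-hardened
```

Nodes launched by this pool then use the FIPS and kernel lockdown settings defined in the Node Class.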

## Field descriptions
<a name="_field_descriptions"></a>
+  `fips` (boolean, optional): When set to `true`, provisions nodes using AMIs with FIPS 140-2 validated cryptographic modules. This setting selects FIPS-compliant AMIs; customers are responsible for managing their compliance requirements. For more information, see [AWS FIPS compliance](https://aws.amazon.com/compliance/fips/). Default: `false`.
+  `kernelLockdown` (string, optional): Controls the kernel lockdown security module mode. Accepted values:
  +  `integrity`: Blocks methods for overwriting kernel memory or modifying kernel code. Prevents unsigned kernel modules from loading.
  +  `none`: Disables kernel lockdown protection.

    For more information, see [Linux kernel lockdown documentation](https://man7.org/linux/man-pages/man7/kernel_lockdown.7.html).

## Considerations
<a name="_considerations"></a>
+ FIPS-compliant AMIs are available in AWS US East/West, AWS GovCloud (US), and AWS Canada (Central/West) Regions. For more information, see [AWS FIPS compliance](https://aws.amazon.com/compliance/fips/).
+ When using `kernelLockdown: "integrity"`, ensure your workloads don’t require loading unsigned kernel modules or modifying kernel memory.

## Related resources
<a name="_related_resources"></a>
+  [Create a Node Class for Amazon EKS](create-node-class.md) - Complete Node Class configuration guide
+  [Create a Node Pool for EKS Auto Mode](create-node-pool.md) - Node Pool configuration

# Learn how EKS Auto Mode works
<a name="auto-reference"></a>

Use this chapter to learn how the components of Amazon EKS Auto Mode clusters work.

**Topics**
+ [Learn about Amazon EKS Auto Mode Managed instances](automode-learn-instances.md)
+ [Learn about identity and access in EKS Auto Mode](auto-learn-iam.md)
+ [Learn about VPC Networking and Load Balancing in EKS Auto Mode](auto-networking.md)

# Learn about Amazon EKS Auto Mode Managed instances
<a name="automode-learn-instances"></a>

This topic explains how Amazon EKS Auto Mode manages Amazon EC2 instances in your EKS cluster. When you enable EKS Auto Mode, your cluster’s compute resources are automatically provisioned and managed by EKS, changing how you interact with the EC2 instances that serve as nodes in your cluster.

Understanding how Amazon EKS Auto Mode manages instances is essential for planning your workload deployment strategy and operational procedures. Unlike traditional EC2 instances or managed node groups, these instances follow a different lifecycle model where EKS assumes responsibility for many operational aspects, while restricting certain types of access and customization.

Amazon EKS Auto Mode automates routine tasks for creating new EC2 Instances, and attaches them as nodes to your EKS cluster. EKS Auto Mode detects when a workload can’t fit onto existing nodes, and creates a new EC2 Instance.

Amazon EKS Auto Mode is responsible for creating, deleting, and patching EC2 Instances. You are responsible for the containers and pods deployed on the instance.

EC2 Instances created by EKS Auto Mode are different from other EC2 Instances: they are *managed instances*. These managed instances are owned by EKS and are more restricted. You can’t directly access or install software on instances managed by EKS Auto Mode.

 AWS suggests running either EKS Auto Mode or self-managed Karpenter. You can install both during a migration or in an advanced configuration. If you have both installed, configure your node pools so that workloads are associated with either Karpenter or EKS Auto Mode.

For more information, see [Amazon EC2 managed instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/amazon-ec2-managed-instances.html) in the *Amazon EC2 User Guide*.

## Comparison table
<a name="_comparison_table"></a>


| Standard EC2 Instance | EKS Auto Mode managed instance | 
| --- | --- | 
|  You are responsible for patching and updating the instance.  |   AWS automatically patches and updates the instance.  | 
|  EKS is not responsible for the software on the instance.  |  EKS is responsible for certain software on the instance, such as `kubelet`, the container runtime, and the operating system.  | 
|  You can delete the EC2 Instance using the EC2 API.  |  EKS determines the number of instances deployed in your account. If you delete a workload, EKS will reduce the number of instances in your account.  | 
|  You can use SSH to access the EC2 Instance.  |  You can deploy pods and containers to the managed instance.  | 
|  You determine the operating system and image (AMI).  |   AWS determines the operating system and image.  | 
|  You can deploy workloads that rely on Windows or Ubuntu functionality.  |  You can deploy containers based on Linux, but without specific OS dependencies.  | 
|  You determine what instance type and family to launch.  |   AWS determines what instance type and family to launch. You can use a Node Pool to limit the instance types EKS Auto Mode selects from.  | 

The following functionality works for both Managed instances and Standard EC2 instances:
+ You can view the instance in the AWS console.
+ You can use instance storage as ephemeral storage for workloads.
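As the comparison table notes, you can use a Node Pool to limit which instance types EKS Auto Mode selects. A minimal sketch, assuming the built-in `default` NodeClass (the pool name is illustrative):

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: compute-optimized-only
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        # Restrict launches to compute optimized (C) instance types
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c"]
```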

### AMI Support
<a name="_ami_support"></a>

With EKS Auto Mode, AWS determines the image (AMI) used for your compute nodes. AWS monitors the rollout of new EKS Auto Mode AMI versions. If you experience workload issues related to an AMI version, create a support case. For more information, see [Creating support cases and case management](https://docs.aws.amazon.com/awssupport/latest/user/case-management.html) in the AWS Support User Guide.

Generally, EKS releases a new AMI each week containing CVE and security fixes.

## EKS Auto Mode supported instance reference
<a name="auto-supported-instances"></a>

EKS Auto Mode only creates instances of supported types that meet a minimum size requirement.

EKS Auto Mode supports the following instance types:


| Family | Instance Types | 
| --- | --- | 
|  Compute Optimized (C)  |  c8id, c8i, c8i-flex, c8gd, c8gn, c8g, c8a, c8gb, c7a, c7g, c7gn, c7gd, c7i, c7i-flex, c6a, c6g, c6i, c6gn, c6id, c6in, c6gd, c5, c5a, c5d, c5ad, c5n, c4  | 
|  General Purpose (M)  |  m8id, m8i, m8i-flex, m8a, m8azn, m8gn, m8gb, m8gd, m8g, m7i, m7a, m7g, m7gd, m7i-flex, m6a, m6i, m6in, m6g, m6idn, m6id, m6gd, m5, m5a, m5ad, m5n, m5dn, m5d, m5zn, m4  | 
|  Memory Optimized (R)  |  r8id, r8i, r8i-flex, r8gn, r8gb, r8gd, r8g, r8a, r7a, r7iz, r7gd, r7i, r7g, r6a, r6i, r6id, r6in, r6idn, r6g, r6gd, r5, r5n, r5a, r5dn, r5b, r5ad, r5d, r4  | 
|  Burstable (T)  |  t4g, t3, t3a, t2  | 
|  High Memory (Z/X)  |  z1d, x8aedz, x8g, x8i, x2gd  | 
|  Storage Optimized (I/D)  |  i8ge, i7i, i8g, i7ie, i4g, i4i, i3, i3en, is4gen, d3, d3en, im4gn  | 
|  Accelerated Computing (P/G/Inf/Trn)  |  p6-b200, p6-b300, p5, p5e, p5en, p4d, p4de, p3, p3dn, g7e, g6, gr6, g6e, g5g, g5, g4dn, g4ad, inf2, inf1, trn1, trn1n, trn2  | 
|  High Performance Computing (HPC/X2)  |  hpc8a, x2iezn, x2iedn, x2idn  | 

Additionally, EKS Auto Mode only creates EC2 instances that meet the following requirements:
+ More than 1 CPU
+ Instance size is not nano, micro, or small

For more information, see [Amazon EC2 instance type naming conventions](https://docs.aws.amazon.com/ec2/latest/instancetypes/instance-type-names.html).

## Instance Metadata Service
<a name="_instance_metadata_service"></a>
+ EKS Auto Mode enforces IMDSv2 with a hop limit of 1 by default, adhering to AWS security best practices.
+ This default configuration cannot be modified in Auto Mode.
+ For add-ons that typically require IMDS access, supply parameters (such as AWS region) during installation to avoid IMDS lookups. For more information, see [Determine fields you can customize for Amazon EKS add-ons](kubernetes-field-management.md).
+ If a Pod absolutely requires IMDS access when running in Auto Mode, the Pod must be configured to run with `hostNetwork: true`. This allows the Pod to access the instance metadata service directly.
+ Consider the security implications when granting Pods access to instance metadata.
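For example, a Pod that genuinely needs IMDS access when running in Auto Mode could be configured as in the following sketch (the image name is a placeholder):

```
apiVersion: v1
kind: Pod
metadata:
  name: imds-client
spec:
  # Required for IMDS access on Auto Mode nodes; the pod shares the node's network namespace
  hostNetwork: true
  containers:
    - name: app
      image: my-imds-aware-app:latest
```

Because `hostNetwork: true` exposes the node’s network namespace to the Pod, grant it only to workloads that truly need instance metadata.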

For more information about the Amazon EC2 Instance Metadata Service (IMDS), see [Configure the Instance Metadata Service options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-options.html) in the *Amazon EC2 User Guide*.

## Considerations
<a name="_considerations"></a>
+ If the configured ephemeral storage in the NodeClass is smaller than the NVMe local storage for the instance, EKS Auto Mode eliminates the need for manual configuration by automatically taking the following actions:
  + Uses a smaller (20 GiB) Amazon EBS data volume to reduce costs.
  + Formats and configures the NVMe local storage for ephemeral data use. This includes setting up a RAID 0 array if there are multiple NVMe drives.
+ When `ephemeralStorage.size` equals or exceeds the local NVMe capacity, the following actions occur:
  + Auto Mode skips the small EBS volume.
  + The NVMe drive(s) are exposed directly for your workload.
+ Amazon EKS Auto Mode does not support the following AWS Fault Injection Service actions:
  +  `ec2:RebootInstances` 
  +  `ec2:SendSpotInstanceInterruptions` 
  +  `ec2:StartInstances` 
  +  `ec2:StopInstances` 
  +  `ec2:TerminateInstances` 
  +  `ec2:PauseVolumeIO` 
+ Amazon EKS Auto Mode supports AWS Fault Injection Service EKS Pod actions. For more information, see [Managing Fault Injection Service experiments](https://docs.aws.amazon.com/resilience-hub/latest/userguide/testing.html) and [Use the AWS FIS aws:eks:pod actions](https://docs.aws.amazon.com/fis/latest/userguide/eks-pod-actions.html#configure-service-account) in the AWS Resilience Hub User Guide.
+ You do not need to install the `Neuron Device Plugin` on EKS Auto Mode nodes.

  If you have other types of nodes in your cluster, you need to configure the Neuron Device plugin to not run on Auto Mode nodes. For more information, see [Control if a workload is deployed on EKS Auto Mode nodes](associate-workload.md).
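For example, a DaemonSet such as the Neuron Device Plugin can be kept off Auto Mode nodes with node affinity. The fragment below assumes Auto Mode nodes carry the `eks.amazonaws.com/compute-type: auto` label; confirm the label against the linked topic before relying on it:

```
# Pod template spec fragment for a DaemonSet that should avoid Auto Mode nodes
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: eks.amazonaws.com/compute-type
              operator: NotIn
              values: ["auto"]
```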

# Learn about identity and access in EKS Auto Mode
<a name="auto-learn-iam"></a>

This topic describes the Identity and Access Management (IAM) roles and permissions required to use EKS Auto Mode. EKS Auto Mode uses two primary IAM roles: a Cluster IAM Role and a Node IAM Role. These roles work in conjunction with EKS Pod Identity and EKS access entries to provide comprehensive access management for your EKS clusters.

When you configure EKS Auto Mode, you will need to set up these IAM roles with specific permissions that allow AWS services to interact with your cluster resources. This includes permissions for managing compute resources, storage volumes, load balancers, and networking components. Understanding these role configurations is essential for proper cluster operation and security.

In EKS Auto Mode, AWS IAM roles are automatically mapped to Kubernetes permissions through EKS access entries, removing the need for manual configuration of `aws-auth` ConfigMaps or custom bindings. When you create a new Auto Mode cluster, EKS automatically creates the corresponding Kubernetes permissions using access entries, ensuring that AWS services and cluster components have the appropriate access levels within both the AWS and Kubernetes authorization systems. This automated integration reduces configuration complexity and helps prevent permission-related issues that commonly occur when managing EKS clusters.

## Cluster IAM role
<a name="auto-learn-cluster-iam-role"></a>

The Cluster IAM role is an AWS Identity and Access Management (IAM) role used by Amazon EKS to manage permissions for Kubernetes clusters. This role grants Amazon EKS the necessary permissions to interact with other AWS services on behalf of your cluster, and is automatically configured with Kubernetes permissions using EKS access entries.
+ You must attach AWS IAM policies to this role.
+ EKS Auto Mode attaches Kubernetes permissions to this role automatically using EKS access entries.
+ With EKS Auto Mode, AWS suggests creating a single Cluster IAM Role per AWS account.
+  AWS suggests naming this role `AmazonEKSAutoClusterRole`.
+ This role requires permissions for multiple AWS services to manage resources including EBS volumes, Elastic Load Balancers, and EC2 instances.
+ The suggested configuration for this role includes multiple AWS managed IAM policies, related to the different capabilities of EKS Auto Mode.
  +  `AmazonEKSComputePolicy` 
  +  `AmazonEKSBlockStoragePolicy` 
  +  `AmazonEKSLoadBalancingPolicy` 
  +  `AmazonEKSNetworkingPolicy` 
  +  `AmazonEKSClusterPolicy` 
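As a sketch, this role’s trust policy must allow the EKS service principal to assume it; EKS Auto Mode also uses session tags, so `sts:TagSession` is included alongside `sts:AssumeRole`:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
```

The managed policies listed above are then attached to the role that carries this trust policy.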

For more information about the Cluster IAM Role and AWS managed IAM policies, see:
+  [AWS managed policies for Amazon Elastic Kubernetes Service](security-iam-awsmanpol.md) 
+  [Amazon EKS cluster IAM role](cluster-iam-role.md) 

For more information about Kubernetes access, see:
+  [Review access policy permissions](access-policy-permissions.md) 

## Node IAM role
<a name="auto-learn-node-iam-role"></a>

The Node IAM role is an AWS Identity and Access Management (IAM) role used by Amazon EKS to manage permissions for worker nodes in Kubernetes clusters. This role grants EC2 instances running as Kubernetes nodes the necessary permissions to interact with AWS services and resources, and is automatically configured with Kubernetes RBAC permissions using EKS access entries.
+ You must attach AWS IAM policies to this role.
+ EKS Auto Mode attaches Kubernetes RBAC permissions to this role automatically using EKS access entries.
+  AWS suggests naming this role `AmazonEKSAutoNodeRole`.
+ With EKS Auto Mode, AWS suggests creating a single Node IAM Role per AWS account.
+ This role has limited permissions. The key permissions include assuming a Pod Identity Role, and pulling images from ECR.
+  AWS suggests the following AWS managed IAM policies:
  +  `AmazonEKSWorkerNodeMinimalPolicy` 
  +  `AmazonEC2ContainerRegistryPullOnly` 
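Assuming a role named `AmazonEKSAutoNodeRole` already exists, the suggested managed policies can be attached with the AWS CLI, for example:

```
aws iam attach-role-policy --role-name AmazonEKSAutoNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy

aws iam attach-role-policy --role-name AmazonEKSAutoNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly
```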

For more information about the Node IAM Role and AWS managed IAM policies, see:
+  [AWS managed policies for Amazon Elastic Kubernetes Service](security-iam-awsmanpol.md) 
+  [Amazon EKS node IAM role](create-node-role.md) 

For more information about Kubernetes access, see:
+  [Review access policy permissions](access-policy-permissions.md) 

## Service-linked role
<a name="_service_linked_role"></a>

Amazon EKS uses a service-linked role (SLR) for certain operations. A service-linked role is a unique type of IAM role that is linked directly to Amazon EKS. Service-linked roles are predefined by Amazon EKS and include all the permissions that the service requires to call other AWS services on your behalf.

 AWS automatically creates and configures the SLR. You can delete an SLR only after first deleting its related resources. This protects your Amazon EKS resources because you can’t inadvertently remove permission to access them.

The SLR policy grants Amazon EKS permissions to observe and delete core infrastructure components: EC2 resources (instances, network interfaces, security groups), ELB resources (load balancers, target groups), CloudWatch capabilities (logging and metrics), and IAM roles with the "eks" prefix. It also enables private endpoint networking through VPC/hosted zone association and includes permissions for EventBridge monitoring and cleanup of EKS-tagged resources.

For more information, see:
+  [AWS managed policy: AmazonEKSServiceRolePolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazoneksservicerolepolicy) 
+  [Service-linked role permissions for Amazon EKS](using-service-linked-roles-eks.md#service-linked-role-permissions-eks) 

## Access Policy Reference
<a name="_access_policy_reference"></a>

For more information about the Kubernetes permissions used by EKS Auto Mode, see [Review access policy permissions](access-policy-permissions.md).

# Learn about VPC Networking and Load Balancing in EKS Auto Mode
<a name="auto-networking"></a>

This topic explains how to configure Virtual Private Cloud (VPC) networking and load balancing features in EKS Auto Mode. While EKS Auto Mode manages most networking components automatically, you can still customize certain aspects of your cluster’s networking configuration through `NodeClass` resources and load balancer annotations.

When you use EKS Auto Mode, AWS manages the VPC Container Network Interface (CNI) configuration and load balancer provisioning for your cluster. You can influence networking behaviors by defining `NodeClass` objects and applying specific annotations to your Service and Ingress resources, while maintaining the automated operational model that EKS Auto Mode provides.

## Networking capability
<a name="_networking_capability"></a>

EKS Auto Mode has a new networking capability that handles node and pod networking. You can configure it by creating a `NodeClass` Kubernetes object.

Configuration options for the self-managed AWS VPC CNI do not apply to EKS Auto Mode.

### Configure networking with a `NodeClass`
<a name="_configure_networking_with_a_nodeclass"></a>

The `NodeClass` resource in EKS Auto Mode allows you to customize certain aspects of the networking capability. Through `NodeClass`, you can specify security group selections, control node placement across VPC subnets, set SNAT policies, configure network policies, and enable network event logging. This approach maintains the automated operational model of EKS Auto Mode while providing flexibility for network customization.

You can use a `NodeClass` to:
+ Select a Security Group for Nodes
+ Control how nodes are placed on VPC Subnets
+ Set the Node SNAT Policy to `random` or `disabled` 
+ Enable Kubernetes *network policies* including:
  + Set the Network Policy to Default Deny or Default Allow
  + Enable Network Event Logging to a file.
+ Isolate pod traffic from the node traffic by attaching pods to different subnets.

Learn how to [Create an Amazon EKS NodeClass](create-node-class.md).
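As an illustrative sketch, several of these options are set as top-level `NodeClass` fields. The field names below (`snatPolicy`, `networkPolicy`, `networkPolicyEventLogs`) and their values reflect common usage but should be confirmed against the NodeClass guide linked above:

```
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: custom-networking
spec:
  subnetSelectorTerms:
    - tags:
        Name: "private-subnet"
  securityGroupSelectorTerms:
    - tags:
        Name: "eks-cluster-sg"
  snatPolicy: Random                 # or Disabled
  networkPolicy: DefaultAllow        # or DefaultDeny
  networkPolicyEventLogs: Disabled   # or Enabled, to log network events to a file
```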

### Considerations
<a name="_considerations"></a>

EKS Auto Mode supports:
+ EKS Network Policies.
+ The `hostPort` and `hostNetwork` options for Kubernetes Pods.
+ Nodes and Pods in public or private subnets.
+ Caching DNS queries on the node.

EKS Auto Mode does **not** support:
+ Security Groups per Pod (SGPP). To apply separate security groups to Pod traffic in Auto Mode, use `podSecurityGroupSelectorTerms` in the `NodeClass` instead. For more information, see [Separate subnets and security groups for Pods](create-node-class.md#pod-subnet-selector).
+ Custom Networking in the `ENIConfig`. You can put pods in multiple subnets or exclusively isolate them from the node traffic with [Separate subnets and security groups for Pods](create-node-class.md#pod-subnet-selector).
+ Warm IP, warm prefix, and warm ENI configurations.
+ Minimum IP targets configuration.
+ Other configurations supported by the open source AWS VPC CNI.
+ Network Policy configurations such as conntrack timer customization (default is 300s).
+ Exporting network event logs to CloudWatch.

### Network Resource Management
<a name="_network_resource_management"></a>

EKS Auto Mode handles prefix, IP addressing, and network interface management by monitoring NodeClass resources for networking configurations. The service performs several key operations automatically:

 **Prefix Delegation** 

EKS Auto Mode defaults to using prefix delegation (/28 prefixes) for pod networking and maintains a predefined warm pool of IP resources that scales based on the number of scheduled pods. When pod subnet fragmentation is detected, Auto Mode provisions secondary IP addresses (/32). Due to this default pod networking algorithm, Auto Mode calculates max pods per node based on the number of ENIs and IPs supported per instance type (assuming the worst case of fragmentation). For more information about Max ENIs and IPs per instance type, see [Maximum IP addresses per network interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AvailableIpPerENI.html) in the *Amazon EC2 User Guide*. Newer generation (Nitro v6 and above) instance families generally have increased ENIs and IPs per instance type, and Auto Mode adjusts the max pods calculation accordingly.

For IPv6 clusters, only prefix delegation is used, and Auto Mode always uses a max pods limit of 110 pods per node.

 **Cooldown Management** 

The service implements a cooldown pool for prefixes or secondary IPv4 addresses that are no longer in use. After the cooldown period expires, these resources are released back to the VPC. However, if pods reuse these resources during the cooldown period, they are restored from the cooldown pool.

 **IPv6 Support** 

For IPv6 clusters, EKS Auto Mode provisions a `/80` IPv6 prefix per node on the primary network interface. When using `podSubnetSelectorTerms`, the prefix is allocated on a secondary network interface in the pod subnet instead.

The service also ensures proper management and garbage collection of all network interfaces.

## Load balancing
<a name="auto-lb-consider"></a>

You configure AWS Elastic Load Balancers provisioned by EKS Auto Mode using annotations on Service and Ingress resources.

For more information, see [Create an IngressClass to configure an Application Load Balancer](auto-configure-alb.md) or [Use Service Annotations to configure Network Load Balancers](auto-configure-nlb.md).
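For example, a Service of type `LoadBalancer` opts into an EKS Auto Mode managed Network Load Balancer via its `loadBalancerClass`, with annotations controlling details such as the scheme. The names and ports below are illustrative:

```
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  loadBalancerClass: eks.amazonaws.com/nlb
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```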

### Considerations for load balancing with EKS Auto Mode
<a name="_considerations_for_load_balancing_with_eks_auto_mode"></a>
+ The default targeting mode is IP Mode, not Instance Mode.
+ EKS Auto Mode only supports Security Group Mode for Network Load Balancers.
+  AWS does not support migrating load balancers from the self managed AWS load balancer controller to management by EKS Auto Mode.
+ The `networking.ingress.ipBlock` field in `TargetGroupBinding` spec is not supported.
+ If your worker nodes use custom security groups (names that don’t follow the `eks-cluster-sg-` pattern), your cluster role needs additional IAM permissions. The default EKS-managed policy only allows EKS to modify security groups named with the `eks-cluster-sg-` prefix. Without permission to modify your custom security groups, EKS cannot add the required ingress rules that allow ALB/NLB traffic to reach your pods.

#### CoreDNS considerations
<a name="dns-consider"></a>

EKS Auto Mode does not use the traditional CoreDNS deployment to provide DNS resolution within the cluster. Instead, Auto Mode nodes utilize CoreDNS running as a system service directly on each node. If transitioning a traditional cluster to Auto Mode, you can remove the CoreDNS deployment from your cluster once your workloads have been moved to the Auto Mode nodes.

**Important**  
If you plan to maintain a cluster with both Auto Mode and non-Auto Mode nodes, you must retain the CoreDNS deployment. Non-Auto Mode nodes rely on the traditional CoreDNS pods for DNS resolution, as they cannot access the node-level DNS service that Auto Mode provides.
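Once all workloads run on Auto Mode nodes, you can verify and then remove the deployment. The commands below assume CoreDNS runs in `kube-system` under its default deployment name:

```
# Confirm the traditional CoreDNS deployment is present and no longer needed
kubectl get deployment coredns -n kube-system

# Remove it after all workloads have moved to Auto Mode nodes
kubectl delete deployment coredns -n kube-system
```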

# Observability for EKS Auto Mode
<a name="auto-observability"></a>

Use this chapter to learn about observability options for Amazon EKS Auto Mode clusters.

**Topics**
+ [Access AWS-Managed Component Logs For EKS Auto](auto-managed-component-logs.md)

# Access AWS-Managed Component Logs For EKS Auto
<a name="auto-managed-component-logs"></a>

You can access AWS-managed component logs from EKS Auto Mode to gain deeper observability into your cluster operations. EKS Auto Mode supports logs for the following sources:
+ Compute autoscaling - Karpenter
+ Block storage - EBS CSI
+ Load balancing - AWS Load Balancer Controller
+ Pod networking - VPC CNI IP Address Management

Logs can be delivered to a [delivery destination](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html) of your choice.

When you create an EKS Auto Mode cluster, you have the option to enable control plane logging (API server, Audit, Authenticator, Controller manager, Scheduler). EKS Auto Mode managed component logs (such as Compute, Block storage, Load balancing, and IPAM) require separate configuration through log delivery.

## Setting up log delivery
<a name="_setting_up_log_delivery"></a>

To configure AWS-managed component log delivery for your EKS Auto Mode cluster, use the Amazon CloudWatch Logs API. For detailed setup instructions, see [Enabling logging from AWS services](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-vended-logs-permissions-V2.html) in the Amazon CloudWatch Logs User Guide. Each Auto Mode capability can be configured as an individual CloudWatch Vended Logs delivery source, allowing you to select which logs you’d like to have access to.

EKS Auto Mode supports the following log types:
+  **AUTO_MODE_COMPUTE_LOGS** 
+  **AUTO_MODE_BLOCK_STORAGE_LOGS** 
+  **AUTO_MODE_LOAD_BALANCING_LOGS** 
+  **AUTO_MODE_IPAM_LOGS** 

### Using Amazon CloudWatch APIs
<a name="_using_amazon_cloudwatch_apis"></a>

Setting up logging requires three steps:

1. Create a delivery source for the capability using the CloudWatch PutDeliverySource API

1. Create a delivery destination using PutDeliveryDestination

1. Create a delivery to connect the source and destination using CreateDelivery

You can configure the details of the destination for Auto Mode’s logs using the deliveryDestinationConfiguration object in the CloudWatch PutDeliveryDestination API. It takes the ARN of either a CloudWatch log group, S3 bucket, or Amazon Data Firehose delivery stream.

You can configure a single Auto Mode capability (delivery source) to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
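The three calls above can be sketched with the AWS CLI as follows. Account IDs, Region, names, and ARNs are illustrative, and the log-type value follows the list above:

```
# 1) Create a delivery source for one Auto Mode capability on your cluster
aws logs put-delivery-source --name my-eks-compute-logs \
  --resource-arn arn:aws:eks:us-west-2:111122223333:cluster/my-cluster \
  --log-type AUTO_MODE_COMPUTE_LOGS

# 2) Create a delivery destination (here, a CloudWatch log group)
aws logs put-delivery-destination --name my-log-destination \
  --delivery-destination-configuration \
  destinationResourceArn=arn:aws:logs:us-west-2:111122223333:log-group:/eks/auto-mode:*

# 3) Connect the source and destination (use the ARN returned by step 2)
aws logs create-delivery --delivery-source-name my-eks-compute-logs \
  --delivery-destination-arn <delivery-destination-arn>
```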

### IAM permissions
<a name="_iam_permissions"></a>

Depending on the destination selected, you may need to configure IAM Policies or Roles for the CloudWatch log group, S3 bucket, and Amazon Data Firehose to ensure successful log delivery. Additionally, if you’re sending logs across AWS accounts, you’ll need to use the PutDeliveryDestinationPolicy API to configure an IAM policy that allows delivery to the destination. See the [CloudWatch Vended Logs permissions documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-CloudWatchLogs) for additional information.

## Viewing your logs
<a name="_viewing_your_logs"></a>

Once log delivery is configured, logs will be delivered to your specified destination. The method for accessing logs depends on your chosen destination type:
+  **CloudWatch Logs** - View logs in the CloudWatch Logs console, use AWS CLI commands, or query with CloudWatch Logs Insights
+  **Amazon S3** - Access logs as objects in your S3 bucket through the S3 console, AWS CLI, or analytics tools like Amazon Athena
+  **Amazon Data Firehose** - Logs are streamed to your configured Firehose target (such as S3, OpenSearch Service, Redshift, etc)

## Pricing
<a name="_pricing"></a>

CloudWatch Vended Logs charges apply for log delivery and storage based on your chosen delivery destination. CloudWatch Vended Logs enables reliable, secure log delivery with built-in AWS authentication and authorization at a reduced price compared to standard CloudWatch Logs. See the [Vended Logs section of the CloudWatch pricing page](https://aws.amazon.com/cloudwatch/pricing/) for more details.

### Related Resources
<a name="_related_resources"></a>
+  [Amazon EKS control plane logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) 
+  [PutDeliverySource API](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html) in the CloudWatch Logs API Reference
+  [PutDeliveryDestination API](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html) in the CloudWatch Logs API Reference
+  [CreateDelivery API](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) in the CloudWatch Logs API Reference

# Troubleshoot EKS Auto Mode
<a name="auto-troubleshoot"></a>

With EKS Auto Mode, AWS assumes more responsibility for EC2 instances in your AWS account. EKS manages the container runtime on nodes, the operating system on the nodes, and certain controllers, including a block storage controller, a load balancing controller, and a compute controller.

You must use AWS and Kubernetes APIs to troubleshoot nodes. You can:
+ Use a Kubernetes `NodeDiagnostic` resource to retrieve node logs by using the [Node monitoring agent](#auto-node-monitoring-agent). For more steps, see [Retrieve node logs for a managed node using kubectl and S3](auto-get-logs.md).
+ Use a Kubernetes `NodeDiagnostic` resource to capture network traffic on a node. For more steps, see [Capture network traffic on a managed node using kubectl and S3](auto-get-tcpdump.md).
+ Use the AWS EC2 CLI command `get-console-output` to retrieve console output from nodes. For more steps, see [Get console output from an EC2 managed instance by using the AWS EC2 CLI](#auto-node-console).
+ Use Kubernetes *debugging containers* to retrieve node logs. For more steps, see [Get node logs by using *debug containers* and the `kubectl` CLI](#auto-node-debug-logs).

**Note**  
EKS Auto Mode uses EC2 managed instances. You cannot directly access EC2 managed instances, including by SSH.

You might have the following problems that have solutions specific to EKS Auto Mode components:
+ Pods stuck in the `Pending` state, that aren’t being scheduled onto Auto Mode nodes. For solutions see [Troubleshoot Pod failing to schedule onto Auto Mode node](#auto-troubleshoot-schedule).
+ EC2 managed instances that don’t join the cluster as Kubernetes nodes. For solutions see [Troubleshoot node not joining the cluster](#auto-troubleshoot-join).
+ Errors and issues with the `NodePools`, `PersistentVolumes`, and `Services` that use the controllers that are included in EKS Auto Mode. For solutions see [Troubleshoot included controllers in Auto Mode](#auto-troubleshoot-controllers).
+ Enhanced Pod security prevents sharing volumes across Pods. For solutions see [Sharing Volumes Across Pods](#auto-troubleshoot-share-pod-volumes).

You can use the following methods to troubleshoot EKS Auto Mode components:
+  [Get console output from an EC2 managed instance by using the AWS EC2 CLI](#auto-node-console) 
+  [Get node logs by using *debug containers* and the `kubectl` CLI](#auto-node-debug-logs) 
+  [View resources associated with EKS Auto Mode in the AWS Console](#auto-node-ec2-web) 
+  [View IAM Errors in your AWS account](#auto-node-iam) 
+  [Detect node connectivity issues with the `VPC Reachability Analyzer`](#auto-node-reachability) 

## Node monitoring agent
<a name="auto-node-monitoring-agent"></a>

EKS Auto Mode includes the Amazon EKS node monitoring agent. You can use this agent to view troubleshooting and debugging information about nodes. The node monitoring agent publishes Kubernetes `events` and node `conditions`. For more information, see [Detect node health issues and enable automatic node repair](node-health.md).

## Get console output from an EC2 managed instance by using the AWS EC2 CLI
<a name="auto-node-console"></a>

This procedure helps with troubleshooting boot-time or kernel-level issues.

First, you need to determine the EC2 Instance ID of the instance associated with your workload. Second, use the AWS CLI to retrieve the console output.

1. Confirm you have `kubectl` installed and connected to your cluster

1. (Optional) Use the name of a Kubernetes Deployment to list the associated pods.

   ```
   kubectl get pods -l app=<deployment-name>
   ```

1. Use the name of the Kubernetes Pod to determine the EC2 instance ID of the associated node.

   ```
   kubectl get pod <pod-name> -o wide
   ```

1. Use the EC2 instance ID to retrieve the console output.

   ```
   aws ec2 get-console-output --instance-id <instance id> --latest --output text
   ```
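Because the Kubernetes node name of an EKS Auto Mode node is its EC2 instance ID (as shown in the `NodeClaim` examples later on this page), the steps above can be combined into a short script. The pod name is a placeholder:

```
# Look up the node that runs the pod, then fetch its console output.
# In EKS Auto Mode the Kubernetes node name is the EC2 instance ID.
POD_NAME=my-app-pod   # placeholder: replace with your pod name
INSTANCE_ID=$(kubectl get pod "$POD_NAME" -o jsonpath='{.spec.nodeName}')
aws ec2 get-console-output --instance-id "$INSTANCE_ID" --latest --output text
```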

## Get node logs by using *debug containers* and the `kubectl` CLI
<a name="auto-node-debug-logs"></a>

The recommended way to retrieve logs from an EKS Auto Mode node is to use the `NodeDiagnostic` resource. For these steps, see [Retrieve node logs for a managed node using kubectl and S3](auto-get-logs.md).

However, you can stream logs live from an instance by using the `kubectl debug node` command. This command launches a new Pod on the node that you want to debug, which you can then use interactively.

1. Launch a debug container. The following command uses `i-01234567890123456` for the instance ID of the node, `-it` allocates a `tty` and attaches `stdin` for interactive usage, and uses the `sysadmin` profile from the kubeconfig file.

   ```
   kubectl debug node/i-01234567890123456 -it --profile=sysadmin --image=public.ecr.aws/amazonlinux/amazonlinux:2023
   ```

   An example output is as follows.

   ```
   Creating debugging pod node-debugger-i-01234567890123456-nxb9c with container debugger on node i-01234567890123456.
   If you don't see a command prompt, try pressing enter.
   bash-5.2#
   ```

1. From the shell, you can now install `util-linux-core` which provides the `nsenter` command. Use `nsenter` to enter the mount namespace of PID 1 (`init`) on the host, and run the `journalctl` command to stream logs from the `kubelet`:

   ```
   yum install -y util-linux-core
   nsenter -t 1 -m journalctl -f -u kubelet
   ```

For security, the Amazon Linux container image doesn’t include many binaries by default. You can use the `yum whatprovides` command to identify the package that must be installed to provide a given binary.

```
yum whatprovides ps
```

```
Last metadata expiration check: 0:03:36 ago on Thu Jan 16 14:49:17 2025.
procps-ng-3.3.17-1.amzn2023.0.2.x86_64 : System and process monitoring utilities
Repo        : @System
Matched from:
Filename    : /usr/bin/ps
Provide    : /bin/ps

procps-ng-3.3.17-1.amzn2023.0.2.x86_64 : System and process monitoring utilities
Repo        : amazonlinux
Matched from:
Filename    : /usr/bin/ps
Provide    : /bin/ps
```

## View resources associated with EKS Auto Mode in the AWS Console
<a name="auto-node-ec2-web"></a>

You can use the AWS console to view the status of resources associated with your EKS Auto Mode cluster.
+  [EBS Volumes](https://console.aws.amazon.com/ec2/home#Volumes) 
  + View EKS Auto Mode volumes by searching for the tag key `eks:eks-cluster-name` 
+  [Load Balancers](https://console.aws.amazon.com/ec2/home#LoadBalancers) 
  + View EKS Auto Mode load balancers by searching for the tag key `eks:eks-cluster-name` 
+  [EC2 Instances](https://console.aws.amazon.com/ec2/home#Instances) 
  + View EKS Auto Mode instances by searching for the tag key `eks:eks-cluster-name` 
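You can perform the same tag-based lookup with the AWS CLI. The cluster name below is a placeholder:

```
CLUSTER_NAME=my-cluster   # placeholder: replace with your cluster name

# EC2 instances launched by EKS Auto Mode for this cluster
aws ec2 describe-instances \
  --filters "Name=tag:eks:eks-cluster-name,Values=$CLUSTER_NAME" \
  --query 'Reservations[].Instances[].InstanceId'

# EBS volumes created by the block storage capability
aws ec2 describe-volumes \
  --filters "Name=tag:eks:eks-cluster-name,Values=$CLUSTER_NAME" \
  --query 'Volumes[].VolumeId'
```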

## View IAM Errors in your AWS account
<a name="auto-node-iam"></a>

1. Navigate to the CloudTrail console

1. Select "Event History" from the left navigation pane

1. Apply error code filters:
   + AccessDenied
   + UnauthorizedOperation
   + InvalidClientTokenId

Look for errors related to your EKS cluster. Use the error messages to update your EKS access entries, cluster IAM role, or node IAM role. You might need to attach a new policy to these roles with permissions for EKS Auto Mode.
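If you prefer the CLI, you can pull recent CloudTrail events and filter for these error codes locally. This sketch assumes `jq` is installed:

```
# List recent CloudTrail events and keep only those that failed with
# access-related error codes. CloudTrailEvent is a JSON string, so we
# parse it with fromjson before filtering.
aws cloudtrail lookup-events --max-results 50 --output json \
  | jq -r '.Events[].CloudTrailEvent | fromjson
           | select(.errorCode != null)
           | select(.errorCode | test("AccessDenied|UnauthorizedOperation|InvalidClientTokenId"))
           | "\(.eventTime) \(.eventName) \(.errorCode)"'
```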

## Troubleshoot Pod failing to schedule onto Auto Mode node
<a name="auto-troubleshoot-schedule"></a>

If pods are stuck in the `Pending` state and aren’t being scheduled onto an Auto Mode node, verify whether your pod or deployment manifest has a `nodeSelector`. If a `nodeSelector` is present, ensure that it uses `eks.amazonaws.com/compute-type: auto` so that the pods are scheduled on nodes created by EKS Auto Mode. For more information about the node labels that are used by EKS Auto Mode, see [Control if a workload is deployed on EKS Auto Mode nodes](associate-workload.md).

## Troubleshoot node not joining the cluster
<a name="auto-troubleshoot-join"></a>

EKS Auto Mode automatically configures new EC2 instances with the correct information to join the cluster, including the cluster endpoint and cluster certificate authority (CA). However, these instances can still fail to join the EKS cluster as a node. Run the following commands to identify instances that didn’t join the cluster:

1. Run `kubectl get nodeclaim` to check for `NodeClaims` that are `Ready = False`.

   ```
   kubectl get nodeclaim
   ```

1. Run `kubectl describe nodeclaim <node_claim>` and look under **Status** to find any issues preventing the node from joining the cluster.

   ```
   kubectl describe nodeclaim <node_claim>
   ```
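To list only the `NodeClaims` that aren’t `Ready`, together with the reason from their status conditions, a small `jq` filter over the same data can help. This is a sketch, assuming `jq` is installed:

```
# Print the name and Ready-condition message for NodeClaims that are not Ready.
kubectl get nodeclaim -o json \
  | jq -r '.items[]
           | . as $nc
           | .status.conditions[]?
           | select(.type == "Ready" and .status != "True")
           | "\($nc.metadata.name): \(.message // "no message")"'
```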

 **Common error messages:** 

 `Error getting launch template configs`   
You might receive this error if you are setting custom tags in the `NodeClass` with the default cluster IAM role permissions. See [Learn about identity and access in EKS Auto Mode](auto-learn-iam.md).

 `Error creating fleet`   
There might be some authorization issue with calling the `RunInstances` call from the EC2 API. Check AWS CloudTrail for errors and see [Amazon EKS Auto Mode cluster IAM role](auto-cluster-iam-role.md) for the required IAM permissions.

### Detect node connectivity issues with the `VPC Reachability Analyzer`
<a name="auto-node-reachability"></a>

**Note**  
You are charged for each analysis that is run in the VPC Reachability Analyzer. For pricing details, see [Amazon VPC Pricing](https://aws.amazon.com/vpc/pricing/).

One reason that an instance might fail to join the cluster is a network connectivity issue that prevents it from reaching the API server. To diagnose this issue, you can use the [VPC Reachability Analyzer](https://docs.aws.amazon.com/vpc/latest/reachability/what-is-reachability-analyzer.html) to perform an analysis of the connectivity between a node that is failing to join the cluster and the API server. You will need two pieces of information:
+  **instance ID** of a node that can’t join the cluster
+ IP address of the **Kubernetes API server endpoint** 

To get the **instance ID**, you will need to create a workload on the cluster to cause EKS Auto Mode to launch an EC2 instance. This also creates a `NodeClaim` object in your cluster that will have the instance ID. Run `kubectl get nodeclaim -o yaml` to print all of the `NodeClaims` in your cluster. Each `NodeClaim` contains the instance ID as a field and again in the providerID:

```
kubectl get nodeclaim -o yaml
```

An example output is as follows.

```
    nodeName: i-01234567890123456
    providerID: aws:///us-west-2a/i-01234567890123456
```

You can determine your **Kubernetes API server endpoint** by running `kubectl get endpoints kubernetes -o yaml`. The IP addresses are in the `addresses` field:

```
kubectl get endpoints kubernetes -o yaml
```

An example output is as follows.

```
apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 10.0.143.233
  - ip: 10.0.152.17
  ports:
  - name: https
    port: 443
    protocol: TCP
```

With these two pieces of information, you can perform the analysis. First, navigate to the VPC Reachability Analyzer in the AWS Management Console.

1. Click "Create and Analyze Path"

1. Provide a name for the analysis (e.g. "Node Join Failure")

1. For the "Source Type" select "Instances"

1. Enter the instance ID of the failing Node as the "Source"

1. For the "Path Destination" select "IP Address"

1. Enter one of the IP addresses for the API server as the "Destination Address"

1. Expand the "Additional Packet Header Configuration Section"

1. Enter a "Destination Port" of 443

1. Select "Protocol" as TCP if it is not already selected

1. Click "Create and Analyze Path"

1. The analysis might take a few minutes to complete. If the analysis results indicate failed reachability, they show where the failure occurred in the network path so you can resolve the issue.
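The console steps above can also be performed from the AWS CLI. The instance ID and API server IP address below are placeholders:

```
# Create a path from the failing node to the API server endpoint on TCP/443,
# then start the analysis. The instance ID and IP are example values.
PATH_ID=$(aws ec2 create-network-insights-path \
  --source i-01234567890123456 \
  --destination-ip 10.0.143.233 \
  --destination-port 443 \
  --protocol tcp \
  --query 'NetworkInsightsPath.NetworkInsightsPathId' --output text)

aws ec2 start-network-insights-analysis \
  --network-insights-path-id "$PATH_ID"
```

You can retrieve the outcome later with `aws ec2 describe-network-insights-analyses`.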

## Sharing Volumes Across Pods
<a name="auto-troubleshoot-share-pod-volumes"></a>

EKS Auto Mode Nodes are configured with SELinux in enforcing mode which provides more isolation between Pods that are running on the same Node. When SELinux is enabled, most non-privileged pods will automatically have their own multi-category security (MCS) label applied to them. This MCS label is unique per Pod, and is designed to ensure that a process in one Pod cannot manipulate a process in any other Pod or on the host. Even if a labeled Pod runs as root and has access to the host filesystem, it will be unable to manipulate files, make sensitive system calls on the host, access the container runtime, or obtain kubelet’s secret key material.

Due to this, you may experience issues when trying to share data between Pods. For example, a `PersistentVolumeClaim` with an access mode of `ReadWriteOnce` will still not allow multiple Pods to access the volume concurrently.

To enable this sharing between Pods, you can use the Pod’s `seLinuxOptions` to configure the same MCS label on those Pods. In this example, we assign the three categories `c123,c456,c789` to the Pod. This will not conflict with any categories assigned automatically to Pods on the node, because those Pods are assigned only two categories.

```
securityContext:
  seLinuxOptions:
    level: "s0:c123,c456,c789"
```
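A minimal sketch of two Pods sharing a `PersistentVolumeClaim` by carrying the same MCS label. The Pod names, image, and claim name are placeholders; the claim is assumed to already exist:

```
# Both pods set the same SELinux level, so neither is blocked from the
# other's files on the shared volume. Names are placeholder values.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: writer
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456,c789"
  containers:
  - name: app
    image: public.ecr.aws/amazonlinux/amazonlinux:2023
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-claim   # placeholder PVC name
---
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456,c789"
  containers:
  - name: app
    image: public.ecr.aws/amazonlinux/amazonlinux:2023
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-claim
EOF
```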

## View Karpenter events in control plane logs
<a name="auto-view-karpenter-logs"></a>

For EKS clusters with control plane logs enabled, you can gain insights into Karpenter’s actions and decision-making process by querying the logs. This can be particularly useful for troubleshooting EKS Auto Mode issues related to node provisioning, scaling, and termination. To view Karpenter-related events, use the following CloudWatch Logs Insights query:

```
fields @timestamp, @message
| filter @logStream like /kube-apiserver-audit/
| filter @message like 'DisruptionBlocked'
or @message like 'DisruptionLaunching'
or @message like 'DisruptionTerminating'
or @message like 'DisruptionWaitingReadiness'
or @message like 'Unconsolidatable'
or @message like 'FailedScheduling'
or @message like 'NoCompatibleInstanceTypes'
or @message like 'NodeRepairBlocked'
or @message like 'Disrupted'
or @message like 'Evicted'
or @message like 'FailedDraining'
or @message like 'TerminationGracePeriodExpiring'
or @message like 'TerminationFailed'
or @message like 'FailedConsistencyCheck'
or @message like 'InsufficientCapacityError'
or @message like 'UnregisteredTaintMissing'
or @message like 'NodeClassNotReady'
| sort @timestamp desc
```

This query filters for specific [Karpenter-related events](https://github.com/kubernetes-sigs/karpenter/blob/main/pkg/events/reason.go) in the kube-apiserver audit logs. The events include various disruption states, scheduling failures, capacity issues, and node-related problems. By analyzing these logs, you can gain a better understanding of:
+ Why Karpenter is taking certain actions.
+ Any issues preventing proper node provisioning, scaling, or termination.
+ Potential capacity or compatibility problems with instance types.
+ Node lifecycle events such as disruptions, evictions, or terminations.

To use this query:

1. Navigate to the CloudWatch console

1. Select "Logs Insights" from the left navigation pane

1. Choose the log group for your EKS cluster’s control plane logs

1. Paste the query into the query editor

1. Adjust the time range as needed

1. Run the query

The results will show you a timeline of Karpenter-related events, helping you troubleshoot issues and understand the behavior of EKS Auto Mode in your cluster. To review Karpenter actions on a specific node, add the following line filter, specifying the instance ID, to the preceding query:

```
| filter @message like /i-12345678910123456/
```

**Note**  
To use this query, control plane logging must be enabled on your EKS cluster. If you haven’t done this yet, please refer to [Send control plane logs to CloudWatch Logs](control-plane-logs.md).
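You can also run the query programmatically with the CloudWatch Logs CLI. This sketch assumes the standard `/aws/eks/<cluster-name>/cluster` log group naming for EKS control plane logs and a GNU `date`; the cluster name is a placeholder:

```
# Start a Logs Insights query against the cluster's control plane log group
# over the last hour, then fetch the results.
CLUSTER_NAME=my-cluster   # placeholder: replace with your cluster name
QUERY_ID=$(aws logs start-query \
  --log-group-name "/aws/eks/$CLUSTER_NAME/cluster" \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string "fields @timestamp, @message | filter @logStream like /kube-apiserver-audit/ | filter @message like 'DisruptionBlocked' | sort @timestamp desc" \
  --query 'queryId' --output text)

# Poll until the query completes, then print the matches.
aws logs get-query-results --query-id "$QUERY_ID"
```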

## Troubleshoot included controllers in Auto Mode
<a name="auto-troubleshoot-controllers"></a>

If you have a problem with a controller, you should research:
+ If the resources associated with that controller are properly formatted and valid.
+ If the AWS IAM and Kubernetes RBAC resources are properly configured for your cluster. For more information, see [Learn about identity and access in EKS Auto Mode](auto-learn-iam.md).

## Related resources
<a name="_related_resources"></a>

Use these articles from AWS re:Post for advanced troubleshooting steps:
+  [How to troubleshoot common scaling issues in EKS Auto-Mode?](https://repost.aws/articles/ARLpQOknr5Rb-w5iAT9sUBpQ) 
+  [How do I troubleshoot custom nodepool and nodeclass provisioning issues in Amazon EKS Auto Mode?](https://repost.aws/articles/ARPcmFS1POTgqPCBdcZFp6BQ) 
+  [How do I troubleshoot EKS Auto Mode built-in node pools with Unknown Status?](https://repost.aws/en/articles/ARLhrdl45TRASGkvViwtBG0Q) 

# Review EKS Auto Mode release notes
<a name="auto-change"></a>

This page documents updates to Amazon EKS Auto Mode. You can periodically check this page for announcements about features, bug fixes, known issues, and deprecated functionality.

To receive notifications of all source file changes to this specific documentation page, you can subscribe to the following URL with an RSS reader:

```
https://github.com/awsdocs/amazon-eks-user-guide/commits/mainline/latest/ug/automode/auto-change.adoc.atom
```

## April 10, 2026
<a name="_april_10_2026"></a>

 **New supported instance types**: p6-b200, p6-b300, p5e, p5en, trn2, hpc8a, x8aedz, x8i. For the full list of supported instances, see [Learn about Amazon EKS Auto Mode Managed instances](automode-learn-instances.md).

## April 2, 2026
<a name="_april_2_2026"></a>

 **Chore**: NodeClass dry run validation will now use dynamically selected instance types based on linked NodePools.

## February 2, 2026
<a name="_february_2_2026"></a>

 **Feature**: Added support to disable v4Egress traffic from IPv6 pods in EKS Auto Mode IPv6 clusters. For more information, see [Disable IPv4 egress from IPv6 pods in IPv6 clusters](create-node-class.md#enableV4Egress).

## December 19, 2025
<a name="_december_19_2025"></a>

 **Feature**: Added support for a secondary IP mode that provisions secondary IP addresses instead of prefixes to Auto Mode nodes. This mode maintains one secondary IP as the `MinimalIPTarget` and saves IP resources for customers who don’t need to warm up more secondary IPs or prefixes. For more information, see [Node Class Specification](create-node-class.md#auto-node-class-spec) and [Secondary IP Mode for Pods](create-node-class.md#secondary-IP-mode).

## November 19, 2025
<a name="_november_19_2025"></a>

 **Feature**: Enabled Seekable OCI (SOCI) parallel pull and unpack for G, P, and Trn family instances with local NVMe storage. SOCI parallel pull and unpack is always used for these instance families with EKS Auto Mode and there are no configuration changes required to enable it. For more information on SOCI, see the [launch blog](https://aws.amazon.com/blogs/containers/introducing-seekable-oci-parallel-pull-mode-for-amazon-eks/).

## November 19, 2025
<a name="_november_19_2025_2"></a>

 **Feature**: Added support for static-capacity node pools that maintain a fixed number of nodes. For more information, see [Static Capacity Node Pools in EKS Auto Mode](auto-static-capacity.md).

## October 23, 2025
<a name="_october_23_2025"></a>

 **Feature:** Users with clusters in US regions can now request to use FIPS compatible AMIs by specifying `spec.advancedSecurity.fips` in their NodeClass definition.

## October 1, 2025
<a name="_october_1_2025"></a>

 **Feature:** EKS Auto Mode now supports deploying nodes to AWS Local Zones. For more information, see [Deploy EKS Auto Mode nodes onto Local Zones](auto-local-zone.md).

## September 30, 2025
<a name="_september_30_2025"></a>

 **Feature:** Added support for an instance profile to the NodeClass through `spec.instanceProfile`, which is mutually exclusive with the `spec.role` field.

## September 29, 2025
<a name="_september_29_2025"></a>

DRA is not currently supported by EKS Auto Mode.

## September 10, 2025
<a name="_september_10_2025"></a>

 **Chore:** Events fired from the Auto Mode Compute controller will now use the name `eks-auto-mode/compute` instead of `karpenter`.

## August 24, 2025
<a name="_august_24_2025"></a>

 **Bug Fix:** VPCs that used a DHCP option set with a custom domain name that contained capital letters would cause Nodes to fail to join the cluster due to generating an invalid hostname. This has been resolved and domain names with capital letters now work correctly.

## August 15, 2025
<a name="_august_15_2025"></a>

 **Bug Fix:** The Pod Identity Agent will now only listen on the IPv4 Link Local address in an IPv4 EKS cluster to avoid issues where the Pod can’t reach the IPv6 address.

## August 6, 2025
<a name="_august_6_2025"></a>

 **Feature:** Added new configuration on the NodeClass `spec.advancedNetworking.associatePublicIPAddress` which can be used to prevent public IP addresses from being assigned to EKS Auto Mode Nodes

## June 30, 2025
<a name="_june_30_2025"></a>

 **Feature:** The Auto Mode NodeClass now uses the configured custom KMS key to encrypt the read-only root volume of the instance, in addition to the read/write data volume. Previously, the custom KMS key was only used to encrypt the data volume.

## June 20, 2025
<a name="_june_20_2025"></a>

 **Feature:** Support for controlling deployment of workloads into EC2 On-Demand Capacity Reservations (ODCRs). This adds the optional key `capacityReservationSelectorTerms` to the NodeClass, allowing you to explicitly control which ODCRs your workloads use. For more information, see [Control deployment of workloads into Capacity Reservations with EKS Auto Mode](auto-odcr.md).

## June 13, 2025
<a name="_june_13_2025"></a>

 **Feature:** Support for separate pod subnets in the `NodeClass`. This adds the optional keys `podSubnetSelectorTerms` and `podSecurityGroupSelectorTerms` to set the subnets and security groups for the pods. For more information, see [Separate subnets and security groups for Pods](create-node-class.md#pod-subnet-selector).

## April 30, 2025
<a name="_april_30_2025"></a>

 **Feature:** Support for forward network proxies in the `NodeClass`. This adds the optional key `advancedNetworking` to set your HTTPS proxy. For more information, see [Node Class Specification](create-node-class.md#auto-node-class-spec).

## April 18, 2025
<a name="_april_18_2025"></a>

 **Feature:** Support for resolving .local domains (typically reserved for Multicast DNS) via unicast DNS.

## April 11, 2025
<a name="_april_11_2025"></a>

 **Feature:** Added `certificateBundles` and `ephemeralStorage.kmsKeyID` to `NodeClass`. For more information, see [Node Class Specification](create-node-class.md#auto-node-class-spec).

 **Feature:** Improved image pull speed, particularly for instance types with local instance storage that can take advantage of the faster image decompression.

 **Bug Fix:** Resolved a race condition that caused `FailedCreatePodSandBox` errors (`Error while dialing: dial tcp 127.0.0.1:50051: connect: connection refused`) to sometimes occur for Pods scheduled to a Node immediately after startup.

## April 4, 2025
<a name="_april_4_2025"></a>

 **Feature:** Increased `registryPullQPS` from 5 to 25 and `registryBurst` from 10 to 50 to reduce client-enforced image pull throttling (`Failed to pull image xyz: pull QPS exceeded`).

## March 31, 2025
<a name="_march_31_2025"></a>

 **Bug Fix:** Fixed an issue where, if a CoreDNS Pod was running on an Auto Mode node, DNS queries from Pods on that node would hit the CoreDNS Pod instead of the node-local DNS server. DNS queries from Pods on an Auto Mode node now always go to the node-local DNS server.

## March 21, 2025
<a name="_march_21_2025"></a>

 **Bug Fix:** Auto Mode nodes now resolve `kube-dns.kube-system.svc.cluster.local` correctly when there isn’t a `kube-dns` service installed in the cluster. Addresses GitHub issue [#2546](https://github.com/aws/containers-roadmap/issues/2546).

## March 14, 2025
<a name="_march_14_2025"></a>

 **Feature**: `IPv4` egress enabled in `IPv6` clusters. `IPv4` traffic egressing from `IPv6` Auto Mode clusters will now be automatically translated to the `v4` address of the node primary ENI.