

# Amazon EKS compute environments
<a name="eks"></a>

[Getting started with AWS Batch on Amazon EKS](getting-started-eks.md) provides a short guide to creating EKS compute environments. This section provides more details on Amazon EKS compute environments.

![\[AWS Batch workflow diagram showing integration with Amazon EKS, ECS, Fargate, and EC2.\]](http://docs.aws.amazon.com/batch/latest/userguide/images/batch-on-eks.png)


AWS Batch simplifies your batch workloads on Amazon EKS clusters by providing managed batch capabilities. This includes queuing, dependency tracking, managed job retries and priorities, pod management, and node scaling. AWS Batch can handle multiple Availability Zones and multiple Amazon EC2 instance types and sizes. AWS Batch integrates several of the Amazon EC2 Spot best practices to run your workloads in a fault-tolerant manner, allowing for fewer interruptions. You can use AWS Batch to run a handful of overnight jobs or millions of mission-critical jobs with confidence.

![\[AWS Batch workflow on Amazon EKS, showing job queue, compute environment, and EC2 instances.\]](http://docs.aws.amazon.com/batch/latest/userguide/images/batch-on-eks-detail.png)


AWS Batch is a managed service that orchestrates batch workloads in your Kubernetes clusters that are managed by Amazon Elastic Kubernetes Service (Amazon EKS). AWS Batch conducts this orchestration external to your clusters using an "overlay" model. Since AWS Batch is a managed service, there are no Kubernetes components (for example, Operators or Custom Resources) to install or manage in your cluster. AWS Batch only needs your cluster to be configured with Role-Based Access Controls (RBAC) that allow AWS Batch to communicate with the Kubernetes API server. AWS Batch calls Kubernetes APIs to create, monitor, and delete Kubernetes pods and nodes.
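As a sketch of what that RBAC wiring can involve (the names here are illustrative; the exact manifests and role ARN to use are shown in [Getting started with AWS Batch on Amazon EKS](getting-started-eks.md)), you can map the AWS Batch service-linked role to a Kubernetes username in the cluster's `aws-auth` ConfigMap, then bind that username to the required RBAC rules:

```
# Illustrative only: map the AWS Batch service-linked role to a Kubernetes
# username. Replace the cluster name and account ID with your own values.
# A ClusterRoleBinding (applied separately) grants that username the
# permissions AWS Batch needs against the Kubernetes API server.
eksctl create iamidentitymapping \
    --cluster my-cluster \
    --arn "arn:aws:iam::123456789012:role/AWSServiceRoleForBatch" \
    --username aws-batch
```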

AWS Batch has built-in scaling logic that scales Kubernetes nodes based on job queue load, with optimizations for job capacity allocation. When the job queue is empty, AWS Batch scales the nodes down to the minimum capacity that you set, which is zero by default. AWS Batch manages the full lifecycle of these nodes, and decorates the nodes with labels and taints. This way, other Kubernetes workloads aren't placed on the nodes managed by AWS Batch. The exception is `DaemonSets`, which can target AWS Batch nodes to provide monitoring and other functionality required for jobs to run properly. Additionally, AWS Batch doesn't run jobs (that is, pods) on nodes in your cluster that it doesn't manage. This way, you can use separate scaling logic and services for other applications on the cluster.
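To see which nodes in your cluster AWS Batch is managing, you can filter on the taint that AWS Batch applies; the `batch.amazonaws.com/batch-node` key shown here is the same key tolerated in [Run a DaemonSet on AWS Batch managed nodes](daemonset-on-batch-eks-nodes.md). A sketch, assuming `kubectl` and `jq` are installed:

```
# List the names of nodes that carry the AWS Batch taint.
kubectl get nodes -o json \
    | jq -r '.items[]
        | select(any(.spec.taints[]?; .key == "batch.amazonaws.com/batch-node"))
        | .metadata.name'
```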

To submit jobs to AWS Batch, you interact directly with the AWS Batch API. AWS Batch translates jobs into `podspecs` and then creates the requests to place pods on nodes managed by AWS Batch in your Amazon EKS cluster. You can use tools such as `kubectl` to view running pods and nodes. When a pod has completed its execution, AWS Batch deletes the pod it created to maintain a lower load on the Kubernetes system.

You can get started by connecting a valid Amazon EKS cluster to AWS Batch. Then attach an AWS Batch job queue to it, and register an Amazon EKS job definition using `podspec` equivalent attributes. Last, submit jobs using the [SubmitJob](https://docs.aws.amazon.com/batch/latest/APIReference/API_SubmitJob.html) API operation, referencing the job definition. For more information, see [Getting started with AWS Batch on Amazon EKS](getting-started-eks.md).
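The workflow above can be sketched with the AWS CLI. The resource names, cluster ARN, namespace, subnets, and security groups below are placeholders, and some request fields are elided; see the getting-started guide for complete request bodies.

```
# 1. Create a compute environment attached to an existing EKS cluster.
aws batch create-compute-environment \
    --compute-environment-name My-Eks-CE1 \
    --type MANAGED \
    --state ENABLED \
    --eks-configuration eksClusterArn=arn:aws:eks:us-east-1:123456789012:cluster/my-cluster,kubernetesNamespace=my-aws-batch-namespace \
    --compute-resources type=EC2,maxvCpus=128,instanceTypes=m5,subnets=subnet-aaaa1111,securityGroupIds=sg-bbbb2222,instanceRole=ecsInstanceRole

# 2. Create a job queue that points at the compute environment.
aws batch create-job-queue \
    --job-queue-name My-Eks-JQ1 \
    --priority 10 \
    --compute-environment-order order=1,computeEnvironment=My-Eks-CE1

# 3. Register an EKS job definition, then submit a job that references it.
aws batch register-job-definition \
    --job-definition-name MyJobOnEks \
    --type container \
    --eks-properties file://eks-job-definition.json
aws batch submit-job \
    --job-name My-First-Eks-Job \
    --job-queue My-Eks-JQ1 \
    --job-definition MyJobOnEks
```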

AWS Batch on Amazon EKS supports Amazon EC2 instances (On-Demand and Spot) as compute resources. To use Fargate with AWS Batch, use an Amazon ECS compute environment instead. For more information, see [Fargate compute environments](fargate.md).

## Amazon EKS
<a name="compute-environments-eks"></a>

**Topics**
+ [Amazon EKS](#compute-environments-eks)
+ [Amazon EKS default AMI](eks-ce-ami-selection.md)
+ [Mixed AMI environments](mixed-ami-environments.md)
+ [Supported Kubernetes versions](supported_kubernetes_version.md)
+ [Update the Kubernetes version of the compute environment](updating-k8s-version-ce.md)
+ [Shared responsibility of the Kubernetes nodes](eks-ce-shared-responsibility.md)
+ [Run a DaemonSet on AWS Batch managed nodes](daemonset-on-batch-eks-nodes.md)
+ [Customize Amazon EKS launch templates](eks-launch-templates.md)
+ [How to upgrade from EKS AL2 to EKS AL2023](eks-migration-2023.md)

# Amazon EKS default AMI
<a name="eks-ce-ami-selection"></a>

When you create an Amazon EKS compute environment, you don't need to specify an Amazon Machine Image (AMI). AWS Batch selects an Amazon EKS optimized AMI based on the Kubernetes version and instance types that are specified in your [CreateComputeEnvironment](https://docs.aws.amazon.com/batch/latest/APIReference/API_CreateComputeEnvironment.html) request. In general, we recommend that you use the default AMI selection. For information about AMI selection precedence, see [AMI selection order](ami-selection-order.md). For more information about Amazon EKS optimized AMIs, see [Amazon EKS optimized Amazon Linux AMIs](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) in the *Amazon EKS User Guide*.

**Important**  
Amazon Linux 2023 AMIs are the default on AWS Batch for Amazon EKS.  
AWS will end support for Amazon EKS AL2-optimized and AL2-accelerated AMIs, starting 11/26/25. You can continue using AWS Batch-provided Amazon EKS optimized Amazon Linux 2 AMIs on your Amazon EKS compute environments beyond the 11/26/25 end-of-support date; however, these compute environments will no longer receive any new software updates, security patches, or bug fixes from AWS. For more information about upgrading from AL2 to AL2023, see [How to upgrade from EKS AL2 to EKS AL2023](eks-migration-2023.md) in the *AWS Batch User Guide*.

Run the following command to see which AMI type AWS Batch selected for your Amazon EKS compute environment. The following example is for a non-GPU instance type.

```
# Non-GPU CE example: indicates that AWS Batch chose the AL2 x86 or ARM EKS 1.32 AMI, depending on instance types
$ aws batch describe-compute-environments --compute-environments My-Eks-CE1 \
    | jq '.computeEnvironments[].computeResources.ec2Configuration'
[
  {
    "imageType": "EKS_AL2",
    "imageKubernetesVersion": "1.32"
  }
]
```

The following example is for a GPU instance type.

```
# GPU CE example: indicates that AWS Batch chose the AL2 x86 EKS Accelerated 1.32 AMI
$ aws batch describe-compute-environments --compute-environments My-Eks-GPU-CE \
    | jq '.computeEnvironments[].computeResources.ec2Configuration'
[
  {
    "imageType": "EKS_AL2_NVIDIA",
    "imageKubernetesVersion": "1.32"
  }
]
```

# Mixed AMI environments
<a name="mixed-ami-environments"></a>

You can use launch template overrides to create compute environments with both Amazon Linux 2 (AL2) and Amazon Linux 2023 (AL2023) AMIs. This is useful for using different AMIs for different architectures or during migration periods when transitioning from AL2 to AL2023.

**Note**  
AWS will end support for Amazon EKS AL2-optimized and AL2-accelerated AMIs, starting 11/26/25. While you can continue using AWS Batch-provided Amazon EKS optimized Amazon Linux 2 AMIs on your Amazon EKS compute environments beyond the 11/26/25 end-of-support date, these compute environments will no longer receive any new software updates, security patches, or bug fixes from AWS. Mixed AMI environments can be useful during the transition period, allowing you to gradually migrate workloads to AL2023 while maintaining compatibility with existing AL2-based workloads.

Example configuration using both AMI types:

```
{
  "computeResources": {
    "launchTemplate": {
      "launchTemplateId": "TemplateId",
      "version": "1",
      "userdataType": "EKS_BOOTSTRAP_SH",
      "overrides": [
        {
          "instanceType": "c5.large",
          "imageId": "ami-al2-custom",
          "userdataType": "EKS_BOOTSTRAP_SH"
        },
        {
          "instanceType": "c6a.large",
          "imageId": "ami-al2023-custom",
          "userdataType": "EKS_NODEADM"
        }
      ]
    },
    "instanceTypes": ["c5.large", "c6a.large"]
  }
}
```

# Supported Kubernetes versions
<a name="supported_kubernetes_version"></a>

AWS Batch on Amazon EKS currently supports the following Kubernetes versions:
+ `1.34`
+ `1.33`
+ `1.32`
+ `1.31`
+ `1.30`
+ `1.29`

You might see an error message that resembles the following when you use the `CreateComputeEnvironment` or `UpdateComputeEnvironment` API operation to create or update a compute environment. This issue occurs if you specify an unsupported Kubernetes version in `EC2Configuration`.

```
At least one imageKubernetesVersion in EC2Configuration is not supported.
```

To resolve this issue, delete the compute environment and then re-create it with a supported Kubernetes version. 
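A compute environment must be disabled (and detached from any job queues) before it can be deleted, so the re-create path looks roughly like the following sketch; the names are placeholders, and the create request body is elided.

```
# Disable, then delete, the compute environment with the unsupported version.
aws batch update-compute-environment --compute-environment My-Eks-CE1 --state DISABLED
aws batch delete-compute-environment --compute-environment My-Eks-CE1

# Re-create it, with imageKubernetesVersion in EC2Configuration set to a
# supported version, such as 1.32.
aws batch create-compute-environment --cli-input-json file://my-eks-ce.json
```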

You can perform a minor version upgrade on your Amazon EKS cluster, for example from `1.xx` to `1.yy`, even if the new minor version isn't supported by AWS Batch.

However, the compute environment status might change to `INVALID` after a major version upgrade, for example from `1.xx` to `2.yy`. If the major version isn't supported by AWS Batch, you see an error message that resembles the following.

```
reason=CLIENT_ERROR - ... EKS Cluster version [2.yy] is unsupported
```
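If a compute environment does become `INVALID`, the full reason is available from `DescribeComputeEnvironments`. For example:

```
# Show the status and the detailed reason for an INVALID compute environment.
$ aws batch describe-compute-environments --compute-environments My-Eks-CE1 \
    | jq '.computeEnvironments[] | {status, statusReason}'
```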

# Update the Kubernetes version of the compute environment
<a name="updating-k8s-version-ce"></a>

With AWS Batch, you can update the Kubernetes version of a compute environment to support Amazon EKS cluster upgrades. The Kubernetes version of a compute environment is the Amazon EKS AMI version for the Kubernetes nodes that AWS Batch launches to run jobs. You can perform a Kubernetes version upgrade on your Amazon EKS nodes before or after you update the version of your Amazon EKS cluster's control plane. We recommend that you update the nodes after upgrading the control plane. For more information, see [Updating an Amazon EKS cluster Kubernetes version](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) in the *Amazon EKS User Guide*.

To upgrade the Kubernetes version of a compute environment, use the [UpdateComputeEnvironment](https://docs.aws.amazon.com/batch/latest/APIReference/API_UpdateComputeEnvironment.html) API operation.

```
$ aws batch update-compute-environment \
    --compute-environment <compute-environment-name> \
    --compute-resources \
      'ec2Configuration=[{imageType=EKS_AL2,imageKubernetesVersion=1.32}]'
```

# Shared responsibility of the Kubernetes nodes
<a name="eks-ce-shared-responsibility"></a>

Maintenance of the compute environments is a shared responsibility.
+ Don't change or remove AWS Batch nodes, labels, taints, namespaces, launch templates, or Auto Scaling groups. Don't add taints to AWS Batch managed nodes. If you make any of these changes, your compute environment can't be supported, and failures, including idle instances, can occur.
+ Don't target your pods to AWS Batch managed nodes. If you do, broken scaling and stuck job queues can result. Run workloads that don't use AWS Batch on self-managed nodes or managed node groups. For more information, see [Managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) in the *Amazon EKS User Guide*.
+ You can target a DaemonSet to run on AWS Batch managed nodes. For more information, see [Run a DaemonSet on AWS Batch managed nodes](daemonset-on-batch-eks-nodes.md).

AWS Batch doesn't automatically update compute environment AMIs. It's your responsibility to update them. Run the following command to update your AMIs to the latest AMI version.

```
$ aws batch update-compute-environment \
    --compute-environment <compute-environment-name> \
    --compute-resources 'updateToLatestImageVersion=true'
```

AWS Batch doesn't automatically upgrade the Kubernetes version. Run the following command to update the Kubernetes version of your compute environment to *1.32*.

```
$ aws batch update-compute-environment \
    --compute-environment <compute-environment-name> \
    --compute-resources \
      'ec2Configuration=[{imageType=EKS_AL2,imageKubernetesVersion=1.32}]'
```

When updating to a more recent AMI or Kubernetes version, you can specify whether to terminate jobs when instances are replaced (`terminateJobsOnUpdate`) and how long to wait before an instance is replaced if running jobs don't finish (`jobExecutionTimeoutMinutes`). For more information, see [Update a compute environment in AWS Batch](updating-compute-environments.md) and the infrastructure update policy ([UpdatePolicy](https://docs.aws.amazon.com/batch/latest/APIReference/API_UpdatePolicy.html)) set in the [UpdateComputeEnvironment](https://docs.aws.amazon.com/batch/latest/APIReference/API_UpdateComputeEnvironment.html) API operation.
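For example, the following sketch replaces instances with the latest AMI while letting running jobs finish, waiting up to an hour before an instance is replaced (the timeout value is illustrative).

```
$ aws batch update-compute-environment \
    --compute-environment <compute-environment-name> \
    --compute-resources 'updateToLatestImageVersion=true' \
    --update-policy 'terminateJobsOnUpdate=false,jobExecutionTimeoutMinutes=60'
```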

# Run a DaemonSet on AWS Batch managed nodes
<a name="daemonset-on-batch-eks-nodes"></a>

AWS Batch sets taints on AWS Batch managed Kubernetes nodes. You can target a DaemonSet to run on AWS Batch managed nodes with the following `tolerations`.

```
tolerations:
  - key: "batch.amazonaws.com/batch-node"
    operator: "Exists"
```

Another way to do this is with the following `tolerations`.

```
tolerations:
  - key: "batch.amazonaws.com/batch-node"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "batch.amazonaws.com/batch-node"
    operator: "Exists"
    effect: "NoExecute"
```
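Putting one of those toleration blocks into context, a minimal DaemonSet that is scheduled onto AWS Batch managed nodes might look like the following. The name and image are placeholders, not an AWS-provided agent.

```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-monitoring-agent
spec:
  selector:
    matchLabels:
      name: my-monitoring-agent
  template:
    metadata:
      labels:
        name: my-monitoring-agent
    spec:
      # Tolerate the AWS Batch taint so the pod can run on Batch managed nodes.
      tolerations:
        - key: "batch.amazonaws.com/batch-node"
          operator: "Exists"
      containers:
        - name: agent
          image: public.ecr.aws/amazonlinux/amazonlinux:2
          command: ["sleep", "infinity"]
```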

# Customize Amazon EKS launch templates
<a name="eks-launch-templates"></a>

AWS Batch on Amazon EKS supports launch templates. There are constraints on what your launch template can do.

**Important**  
For EKS AL2 AMIs, AWS Batch runs `/etc/eks/bootstrap.sh`. Don't run `/etc/eks/bootstrap.sh` in your launch template or cloud-init user-data scripts. To pass additional arguments to the `--kubelet-extra-args` parameter of [bootstrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/main/templates/al2/runtime/bootstrap.sh), set the `AWS_BATCH_KUBELET_EXTRA_ARGS` variable in the `/etc/aws-batch/batch.config` file. See the following example for details.  
For EKS AL2023 AMIs, AWS Batch uses the EKS [NodeConfigSpec](https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/#nodeconfigspec) to join instances to the EKS cluster. AWS Batch populates the [ClusterDetails](https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/#clusterdetails) in the [NodeConfigSpec](https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/#nodeconfigspec) for your EKS cluster, so you don't need to specify them.

**Note**  
We recommend that you don't set any of the following [NodeConfigSpec](https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/#nodeconfigspec) settings in the launch template, because AWS Batch overrides your values. For more information, see [Shared responsibility of the Kubernetes nodes](eks-ce-shared-responsibility.md).  
`Taints`  
`Cluster Name`  
`apiServerEndpoint`  
`certificateAuthority`  
`CIDR`  
Don't create labels with the prefix `batch.amazonaws.com/`.

**Note**  
If the launch template is changed after [CreateComputeEnvironment](https://docs.aws.amazon.com/batch/latest/APIReference/API_CreateComputeEnvironment.html) is called, you must call [UpdateComputeEnvironment](https://docs.aws.amazon.com/batch/latest/APIReference/API_UpdateComputeEnvironment.html) so that the new version of the launch template is evaluated for replacement.

**Topics**
+ [Add `kubelet` extra arguments](#kubelet-extra-args)
+ [Configure the container runtime](#change-container-runtime)
+ [Mount an Amazon EFS volume](#mounting-efs-volume)
+ [IPv6 support](#eks-ipv6-support)

## Add `kubelet` extra arguments
<a name="kubelet-extra-args"></a>

AWS Batch supports adding extra arguments to the `kubelet` command. For the list of supported parameters, see the [kubelet command-line reference](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) in the *Kubernetes documentation*. In the following example for EKS AL2 AMIs, `--node-labels mylabel=helloworld` is added to the `kubelet` command line.

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
mkdir -p /etc/aws-batch

echo AWS_BATCH_KUBELET_EXTRA_ARGS=\"--node-labels mylabel=helloworld\" >> /etc/aws-batch/batch.config

--==MYBOUNDARY==--
```

For EKS AL2023 AMIs, the file format is YAML. For the list of supported parameters, see [NodeConfigSpec](https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/#nodeconfigspec) in the *Amazon EKS AMI documentation*. In the following example for EKS AL2023 AMIs, `--node-labels mylabel=helloworld` is added to the `kubelet` command line.

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: application/node.eks.aws

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  kubelet:
    flags:
    - --node-labels=mylabel=helloworld

--==MYBOUNDARY==--
```

## Configure the container runtime
<a name="change-container-runtime"></a>

You can use the AWS Batch `CONTAINER_RUNTIME` environment variable to configure the container runtime on a managed node. The following example sets the container runtime to `containerd` when `bootstrap.sh` runs. For more information, see [containerd](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd) in the *Kubernetes documentation*.

If you are using an optimized `EKS_AL2023` or `EKS_AL2023_NVIDIA` AMI, you don't need to specify the container runtime because only **containerd** is supported.

**Note**  
The `CONTAINER_RUNTIME` environment variable is equivalent to the `--container-runtime` option of `bootstrap.sh`. For more information, see [kubelet options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options) in the *Kubernetes documentation*.

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
mkdir -p /etc/aws-batch

echo CONTAINER_RUNTIME=containerd >> /etc/aws-batch/batch.config

--==MYBOUNDARY==--
```

## Mount an Amazon EFS volume
<a name="mounting-efs-volume"></a>

You can use launch templates to mount volumes to the node. In the following example, the `cloud-config` `packages` and `runcmd` settings are used. For more information, see [Cloud config examples](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) in the *cloud-init documentation*.

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"

packages:
- amazon-efs-utils

runcmd:
- file_system_id_01=fs-abcdef123
- efs_directory=/mnt/efs

- mkdir -p ${efs_directory}
- echo "${file_system_id_01}:/ ${efs_directory} efs _netdev,noresvport,tls,iam 0 0" >> /etc/fstab
- mount -t efs -o tls ${file_system_id_01}:/ ${efs_directory}

--==MYBOUNDARY==--
```

To use this volume in a job, it must be added to the [eksProperties](https://docs.aws.amazon.com/batch/latest/APIReference/API_EksProperties.html) parameter in [RegisterJobDefinition](https://docs.aws.amazon.com/batch/latest/APIReference/API_RegisterJobDefinition.html). The following example shows the relevant portion of the job definition.

```
{
    "jobDefinitionName": "MyJobOnEks_EFS",
    "type": "container",
    "eksProperties": {
        "podProperties": {
            "containers": [
                {
                    "image": "public.ecr.aws/amazonlinux/amazonlinux:2",
                    "command": ["ls", "-la", "/efs"],
                    "resources": {
                        "limits": {
                            "cpu": "1",
                            "memory": "1024Mi"
                        }
                    },
                    "volumeMounts": [
                        {
                            "name": "efs-volume",
                            "mountPath": "/efs"
                        }
                    ]
                }
            ],
            "volumes": [
                {
                    "name": "efs-volume",
                    "hostPath": {
                        "path": "/mnt/efs"
                    }
                }
            ]
        }
    }
}
```

On the node, the Amazon EFS volume is mounted in the `/mnt/efs` directory. In the container for the Amazon EKS job, the volume is mounted in the `/efs` directory.
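With the `MyJobOnEks_EFS` job definition registered, a quick check that the mount works might look like the following; the job queue name is a placeholder. The job's `ls -la /efs` command lists the contents of the EFS file system in the job's log output.

```
aws batch submit-job \
    --job-name efs-smoke-test \
    --job-queue My-Eks-JQ1 \
    --job-definition MyJobOnEks_EFS
```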

## IPv6 support
<a name="eks-ipv6-support"></a>

AWS Batch supports Amazon EKS clusters that have IPv6 addresses. No customizations are required for AWS Batch support. However, before you begin, we recommend that you review the considerations and conditions that are outlined in [Assigning IPv6 addresses to pods and services](https://docs.aws.amazon.com/eks/latest/userguide/cni-ipv6.html) in the *Amazon EKS User Guide*.

# How to upgrade from EKS AL2 to EKS AL2023
<a name="eks-migration-2023"></a>

The Amazon EKS optimized AMIs are available in two families based on Amazon Linux 2 (AL2) and Amazon Linux 2023 (AL2023). AL2023 is a Linux-based operating system designed to provide a secure, stable, and high-performance environment for your cloud applications. For more information about the differences between AL2 and AL2023 see [Upgrade from Amazon Linux 2 to Amazon Linux 2023](https://docs.aws.amazon.com/eks/latest/userguide/al2023.html) in the *Amazon EKS User Guide*.

**Important**  
AWS ended support for Amazon EKS AL2-optimized and AL2-accelerated AMIs on November 26, 2025. AWS Batch Amazon EKS compute environments using Amazon Linux 2 no longer receive software updates, security patches, or bug fixes from AWS. We recommend migrating AWS Batch Amazon EKS compute environments to Amazon Linux 2023 to maintain optimal performance and security. It is your [responsibility to maintain](eks-ce-shared-responsibility.md) these compute environments on the Amazon EKS optimized Amazon Linux 2 AMI after end-of-life.

Depending on how your compute environment is configured, you can use one of the following upgrade paths from AL2 to AL2023.

**Upgrade using Ec2Configuration.ImageType**
+ If you are not using a launch template or launch template overrides, change [Ec2Configuration.ImageType](https://docs.aws.amazon.com/batch/latest/APIReference/API_Ec2Configuration.html#Batch-Type-Ec2Configuration-imageType) to `EKS_AL2023` or `EKS_AL2023_NVIDIA`, and then run [UpdateComputeEnvironment](https://docs.aws.amazon.com/batch/latest/APIReference/API_UpdateComputeEnvironment.html).
+ If you specify an [Ec2Configuration.ImageIdOverride](https://docs.aws.amazon.com/batch/latest/APIReference/API_Ec2Configuration.html#Batch-Type-Ec2Configuration-imageIdOverride) then [Ec2Configuration.ImageType](https://docs.aws.amazon.com/batch/latest/APIReference/API_Ec2Configuration.html#Batch-Type-Ec2Configuration-imageType) must match the AMI type specified in [Ec2Configuration.ImageIdOverride](https://docs.aws.amazon.com/batch/latest/APIReference/API_Ec2Configuration.html#Batch-Type-Ec2Configuration-imageIdOverride). 

  If you mismatch `ImageIdOverride` and `ImageType` then the node won't join the cluster. 
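The `Ec2Configuration.ImageType` path can be a single call; substitute your compute environment name and Kubernetes version.

```
# Switch the compute environment's AMI family from AL2 to AL2023.
$ aws batch update-compute-environment \
    --compute-environment <compute-environment-name> \
    --compute-resources \
      'ec2Configuration=[{imageType=EKS_AL2023,imageKubernetesVersion=1.32}]'
```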

**Upgrade using launch templates**
+ If you have any `kubelet` extra arguments defined in a launch template or launch template override, they need to be updated to the new [`kubelet` extra arguments format](eks-launch-templates.md#kubelet-extra-args).

  If you mismatch the `kubelet` extra arguments format then the extra arguments aren't applied.
+ For AL2023 AMIs, **containerd** is the only supported container runtime. You do not need to specify container runtime for `EKS_AL2023` in the launch template.

  You can't specify a customized container runtime with `EKS_AL2023`.
+ If you use a launch template or launch template override that specifies an AMI based on `EKS_AL2023` then you need to set [userdataType](https://docs.aws.amazon.com/batch/latest/APIReference/API_LaunchTemplateSpecification.html) to `EKS_NODEADM`. 

  If you mismatch the `userdataType` and AMI then the node won't join the EKS cluster.