# What is Eksctl?
<a name="what-is-eksctl"></a>

eksctl is a command-line tool that automates and simplifies the process of creating, managing, and operating Amazon Elastic Kubernetes Service (Amazon EKS) clusters. Written in Go, eksctl provides a declarative syntax through YAML configurations and CLI commands to handle complex EKS cluster operations that would otherwise require multiple manual steps across different AWS services.

eksctl is particularly valuable for DevOps engineers, platform teams, and Kubernetes administrators who need to consistently deploy and manage EKS clusters at scale. It’s especially useful for organizations transitioning from self-managed Kubernetes to EKS, or those implementing infrastructure as code (IaC) practices, as it can be integrated into existing CI/CD pipelines and automation workflows. The tool abstracts away many of the complex interactions between AWS services required for EKS cluster setup, such as VPC configuration, IAM role creation, and security group management.

Key features of eksctl include the ability to create fully functional EKS clusters with a single command, support for custom networking configurations, automated node group management, and GitOps workflow integration. The tool manages cluster upgrades, scales node groups, and handles add-on management through a declarative approach. eksctl also provides advanced capabilities such as Fargate profile configuration, managed node group customization, and spot instance integration, while maintaining compatibility with other AWS tools and services through native AWS SDK integration.

## Features
<a name="_features"></a>

The features that are currently implemented are:
+ Create, get, list and delete clusters
+ Create, drain and delete nodegroups
+ Scale a nodegroup
+ Update a cluster
+ Use custom AMIs
+ Configure VPC Networking
+ Configure access to API endpoints
+ Support for GPU nodegroups
+ Spot instances and mixed instances
+ IAM Management and Add-on Policies
+ List cluster CloudFormation stacks
+ Install CoreDNS
+ Write kubeconfig file for a cluster
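Most of these features map to a single command. A few illustrative invocations (the cluster and nodegroup names below are placeholders, not defaults):

```
# Create a cluster with default settings in your default region
eksctl create cluster --name=demo

# Scale an existing nodegroup
eksctl scale nodegroup --cluster=demo --name=ng-1 --nodes=5

# Write the cluster's kubeconfig to ~/.kube/config
eksctl utils write-kubeconfig --cluster=demo
```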

# Eksctl FAQ
<a name="faq"></a>

## General
<a name="_general"></a>

**Can I use `eksctl` to manage clusters which weren’t created by `eksctl`?**

Yes. From version `0.40.0` you can run `eksctl` against any cluster, whether it was created by `eksctl` or not. For more information, see [Non eksctl-created clusters](unowned-clusters.md).

## Nodegroups
<a name="nodegroup-faq"></a>

**How can I change the instance type of my nodegroup?**

From the point of view of `eksctl`, nodegroups are immutable. This means that once created, the only thing `eksctl` can do is scale the nodegroup up or down.

To change the instance type, create a new nodegroup with the desired instance type, then drain the old nodegroup so that its workloads move to the new one. Once draining is complete, you can delete the old nodegroup.
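Sketched with hypothetical cluster and nodegroup names, the replacement flow looks like this:

```
# 1. Create a replacement nodegroup with the new instance type
eksctl create nodegroup --cluster=my-cluster --name=ng-m5-xlarge --node-type=m5.xlarge

# 2. Drain the old nodegroup so workloads reschedule onto the new one
eksctl drain nodegroup --cluster=my-cluster --name=ng-old

# 3. Delete the old nodegroup once it is drained
eksctl delete nodegroup --cluster=my-cluster --name=ng-old
```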

**How can I see the generated userdata for a nodegroup?**

First you’ll need the name of the CloudFormation stack that manages the nodegroup:

```
eksctl utils describe-stacks --region=us-west-2 --cluster NAME
```

You’ll see a name similar to `eksctl-CLUSTER_NAME-nodegroup-NODEGROUP_NAME`.

You can execute the following to get the userdata. Note the final line, which decodes the base64 and decompresses the gzipped data.

```
NG_STACK=eksctl-scrumptious-monster-1595247364-nodegroup-ng-29b8862f # your stack here
LAUNCH_TEMPLATE_ID=$(aws cloudformation describe-stack-resources --stack-name "$NG_STACK" \
  | jq -r '.StackResources | map(select(.LogicalResourceId == "NodeGroupLaunchTemplate").PhysicalResourceId)[0]')
aws ec2 describe-launch-template-versions --launch-template-id "$LAUNCH_TEMPLATE_ID" \
  | jq -r '.LaunchTemplateVersions[0].LaunchTemplateData.UserData' \
  | base64 -d | gunzip
```
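If you want to sanity-check that final decoding step locally, the userdata is plain text that has been gzip-compressed and base64-encoded, so the round trip can be reproduced with standard tools (the sample script below is made up):

```
# Encode a sample script the same way the launch template stores userdata
sample='#!/bin/bash
echo "hello from userdata"'
encoded=$(printf '%s' "$sample" | gzip | base64)

# Decode it exactly as in the command above
printf '%s' "$encoded" | base64 -d | gunzip
```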

## Ingress
<a name="_ingress"></a>

**How do I set up ingress with `eksctl`?**

We recommend using the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller). Documentation on how to deploy the controller to your cluster, as well as how to migrate from the old ALB Ingress Controller, can be found [here](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html).

For the Nginx Ingress Controller, setup would be the same as on [any other Kubernetes cluster](https://kubernetes.github.io/ingress-nginx/deploy/#aws).

## Kubectl
<a name="_kubectl"></a>

**I’m using an HTTPS proxy and cluster certificate validation fails, how can I use the system CAs?**

Set the environment variable `KUBECONFIG_USE_SYSTEM_CA` to make `kubeconfig` respect the system certificate authorities.
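For example, assuming a cluster named `my-cluster` behind such a proxy:

```
# Make the written kubeconfig use the system certificate authorities
export KUBECONFIG_USE_SYSTEM_CA=1
eksctl utils write-kubeconfig --cluster=my-cluster
```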

# Dry Run
<a name="dry-run"></a>

The dry-run feature allows you to inspect and change the instances matched by the instance selector before proceeding to create a nodegroup.

When `eksctl create cluster` is called with the instance selector options and `--dry-run`, eksctl outputs a ClusterConfig file containing a nodegroup that represents the supplied CLI options, with the instance types set to those matched by the instance selector resource criteria.

```
eksctl create cluster --name development --dry-run


apiVersion: eksctl.io/v1alpha5
cloudWatch:
  clusterLogging: {}
iam:
  vpcResourceControllerPolicy: true
  withOIDC: false
kind: ClusterConfig
managedNodeGroups:
- amiFamily: AmazonLinux2
  desiredCapacity: 2
  disableIMDSv1: true
  disablePodIMDS: false
  iam:
    withAddonPolicies:
      albIngress: false
      appMesh: false
      appMeshPreview: false
      autoScaler: false
      certManager: false
      cloudWatch: false
      ebs: false
      efs: false
      externalDNS: false
      fsx: false
      imageBuilder: false
      xRay: false
  instanceSelector: {}
  instanceType: m5.large
  labels:
    alpha.eksctl.io/cluster-name: development
    alpha.eksctl.io/nodegroup-name: ng-4aba8a47
  maxSize: 2
  minSize: 2
  name: ng-4aba8a47
  privateNetworking: false
  securityGroups:
    withLocal: null
    withShared: null
  ssh:
    allow: false
    enableSsm: false
    publicKeyPath: ""
  tags:
    alpha.eksctl.io/nodegroup-name: ng-4aba8a47
    alpha.eksctl.io/nodegroup-type: managed
  volumeIOPS: 3000
  volumeSize: 80
  volumeThroughput: 125
  volumeType: gp3
metadata:
  name: development
  region: us-west-2
  version: "1.24"
privateCluster:
  enabled: false
vpc:
  autoAllocateIPv6: false
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: false
    publicAccess: true
  manageSharedNodeSecurityGroupRules: true
  nat:
    gateway: Single
```
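To drive the nodegroup by resource criteria rather than an explicit instance type, pass the instance selector options together with `--dry-run` (the values here are illustrative):

```
# Emit a ClusterConfig whose nodegroup lists every instance type
# matching 2 vCPUs and 4 GiB of memory
eksctl create cluster --name development \
  --instance-selector-vcpus=2 \
  --instance-selector-memory=4 \
  --dry-run
```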

The generated ClusterConfig can then be passed to `eksctl create cluster`:

```
eksctl create cluster -f generated-cluster.yaml
```

When a ClusterConfig file is passed with `--dry-run`, eksctl will output a ClusterConfig file containing the values set in the file.
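This makes `--dry-run` useful as a validation step for an existing config file:

```
# Print the fully-defaulted ClusterConfig without creating anything
eksctl create cluster -f cluster.yaml --dry-run
```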

## One-off Options in eksctl
<a name="_one_off_options_in_eksctl"></a>

There are certain one-off options that cannot be represented in the `ClusterConfig` file, e.g., `--install-vpc-controllers`.

It is expected that:

```
eksctl create cluster --<options...> --dry-run > config.yaml
```

followed by:

```
eksctl create cluster -f config.yaml
```

would be equivalent to running the first command without `--dry-run`.

eksctl therefore disallows passing options that cannot be represented in the config file when `--dry-run` is passed.

**Important**  
If you need to pass an AWS profile, set the `AWS_PROFILE` environment variable, instead of passing the `--profile` CLI option.
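For example, with a hypothetical profile named `dev-account`:

```
# Select the AWS profile via the environment instead of --profile
AWS_PROFILE=dev-account eksctl create cluster -f config.yaml
```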