Troubleshoot EKS Auto Mode
With EKS Auto Mode, AWS assumes more responsibility for EC2 Instances in your AWS account. EKS assumes responsibility for the container runtime on nodes, the operating system on the nodes, and certain controllers. This includes a block storage controller, a load balancing controller, and a compute controller.
You must use AWS and Kubernetes APIs to troubleshoot nodes. You can:
- Use a Kubernetes NodeDiagnostic resource to retrieve node logs by using the node monitoring agent. For more steps, see Retrieve node logs for a managed node using kubectl and S3.
- Use the AWS EC2 CLI command get-console-output to retrieve console output from nodes. For more steps, see Get console output from an EC2 managed instance by using the AWS EC2 CLI.
- Use Kubernetes debugging containers to retrieve node logs. For more steps, see Get node logs by using debug containers and the kubectl CLI.
Note
EKS Auto Mode uses EC2 managed instances. You cannot directly access EC2 managed instances, including by SSH.
You might have the following problems that have solutions specific to EKS Auto Mode components:
- Pods stuck in the Pending state that aren't being scheduled onto Auto Mode nodes. For solutions, see Troubleshoot Pod failing to schedule onto Auto Mode node.
- EC2 managed instances that don't join the cluster as Kubernetes nodes. For solutions, see Troubleshoot node not joining the cluster.
- Errors and issues with the NodePools, PersistentVolumes, and Services that use the controllers that are included in EKS Auto Mode. For solutions, see Troubleshoot included controllers in Auto Mode.
- Enhanced Pod security prevents sharing volumes across Pods. For solutions, see Sharing Volumes Across Pods.
You can use the following methods to troubleshoot EKS Auto Mode components:
Node monitoring agent
EKS Auto Mode includes the Amazon EKS node monitoring agent. You can use this agent to view troubleshooting and debugging information about nodes. The node monitoring agent publishes Kubernetes events and node conditions. For more information, see Enable node auto repair and investigate node health issues.
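For example, you can view the conditions and recent events that the agent publishes for a node with standard kubectl commands. This is a minimal sketch; the instance ID is a placeholder for one of your node names.

# Show the conditions reported for a node, including those set by the node monitoring agent
kubectl describe node i-01234567890123456

# List recent events associated with that node
kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=i-01234567890123456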
Get console output from an EC2 managed instance by using the AWS EC2 CLI
This procedure helps with troubleshooting boot-time or kernel-level issues.
First, you need to determine the EC2 instance ID of the instance associated with your workload. Second, use the AWS CLI to retrieve the console output.
- Confirm you have kubectl installed and connected to your cluster.
- (Optional) Use the name of a Kubernetes Deployment to list the associated Pods.
  kubectl get pods -l app=<deployment-name>
- Use the name of the Kubernetes Pod to determine the EC2 instance ID of the associated node.
  kubectl get pod <pod-name> -o wide
- Use the EC2 instance ID to retrieve the console output.
  aws ec2 get-console-output --instance-id <instance id> --latest --output text
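You can also combine the two lookups. In EKS Auto Mode, the Kubernetes node name is the EC2 instance ID (as shown in the NodeClaim output later on this page), so the following sketch captures it directly; the Pod name is a placeholder.

# Read the node name (the EC2 instance ID) from the Pod, then fetch the console output
INSTANCE_ID=$(kubectl get pod <pod-name> -o jsonpath='{.spec.nodeName}')
aws ec2 get-console-output --instance-id "$INSTANCE_ID" --latest --output text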
Get node logs by using debug containers and the kubectl CLI
The recommended way to retrieve logs from an EKS Auto Mode node is to use the NodeDiagnostic resource. For these steps, see Retrieve node logs for a managed node using kubectl and S3.
However, you can stream logs live from an instance by using the kubectl debug node command. This command launches a new Pod on the node that you want to debug, which you can then use interactively.
- Launch a debug container. The following command uses i-01234567890123456 for the instance ID of the node, -it allocates a tty and attaches stdin for interactive usage, and uses the sysadmin profile from the kubeconfig file.
  kubectl debug node/i-01234567890123456 -it --profile=sysadmin --image=public.ecr.aws/amazonlinux/amazonlinux:2023
  An example output is as follows.
  Creating debugging pod node-debugger-i-01234567890123456-nxb9c with container debugger on node i-01234567890123456.
  If you don't see a command prompt, try pressing enter.
  bash-5.2#
- From the shell, you can now install util-linux-core, which provides the nsenter command. Use nsenter to enter the mount namespace of PID 1 (init) on the host, and run the journalctl command to stream logs from the kubelet:
  yum install -y util-linux-core
  nsenter -t 1 -m journalctl -f -u kubelet
For security, the Amazon Linux container image doesn't install many binaries by default. You can use the yum whatprovides command to identify the package that must be installed to provide a given binary.
yum whatprovides ps
Last metadata expiration check: 0:03:36 ago on Thu Jan 16 14:49:17 2025.
procps-ng-3.3.17-1.amzn2023.0.2.x86_64 : System and process monitoring utilities
Repo         : @System
Matched from:
Filename     : /usr/bin/ps
Provide      : /bin/ps

procps-ng-3.3.17-1.amzn2023.0.2.x86_64 : System and process monitoring utilities
Repo         : amazonlinux
Matched from:
Filename     : /usr/bin/ps
Provide      : /bin/ps
View resources associated with EKS Auto Mode in the AWS Console
You can use the AWS console to view the status of resources associated with your EKS Auto Mode cluster.
- View EKS Auto Mode volumes by searching for the tag key eks:eks-cluster-name.
- View EKS Auto Mode load balancers by searching for the tag key eks:eks-cluster-name.
- View EKS Auto Mode instances by searching for the tag key eks:eks-cluster-name.
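If you prefer the AWS CLI, you can list the same resources by filtering on that tag key. This is a sketch that assumes the resources carry the eks:eks-cluster-name tag as described above; the ELB describe calls don't support tag filters directly, so the last command uses the Resource Groups Tagging API instead.

# EC2 instances created by EKS Auto Mode
aws ec2 describe-instances --filters "Name=tag-key,Values=eks:eks-cluster-name"

# EBS volumes created by EKS Auto Mode
aws ec2 describe-volumes --filters "Name=tag-key,Values=eks:eks-cluster-name"

# Load balancers created by EKS Auto Mode
aws resourcegroupstaggingapi get-resources --tag-filters Key=eks:eks-cluster-name --resource-type-filters elasticloadbalancing:loadbalancer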
View IAM Errors in your AWS account
- Navigate to the CloudTrail console.
- Select "Event history" from the left navigation pane.
- Apply error code filters:
  - AccessDenied
  - UnauthorizedOperation
  - InvalidClientTokenId
- Look for errors related to your EKS cluster. Use the error messages to update your EKS access entries, cluster IAM role, or node IAM role. You might need to attach a new policy to these roles with permissions for EKS Auto Mode.
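The same check can be scripted with the AWS CLI. This sketch assumes the jq tool is available for filtering and looks only at recent events from the EKS service; adjust the event source or add lookups for other services as needed.

# Print recent EKS API calls that returned an error code
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=eks.amazonaws.com \
  --max-results 50 --output json \
  | jq -r '.Events[].CloudTrailEvent | fromjson | select(.errorCode != null) | "\(.eventTime) \(.eventName) \(.errorCode)"'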
Troubleshoot Pod failing to schedule onto Auto Mode node
If Pods are stuck in the Pending state and aren't being scheduled onto an Auto Mode node, verify whether your Pod or Deployment manifest has a nodeSelector. If a nodeSelector is present, ensure that it uses eks.amazonaws.com/compute-type: auto so that the Pods are scheduled onto nodes that are made by EKS Auto Mode. For more information about the node labels that are used by EKS Auto Mode, see Control if a workload is deployed on EKS Auto Mode nodes.
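For example, you can confirm what the Pod is requesting with the following commands; the Pod name is a placeholder.

# Print the nodeSelector of a pending Pod (empty output means no nodeSelector is set)
kubectl get pod <pod-name> -o jsonpath='{.spec.nodeSelector}'

# Check the scheduler's reason for the Pending state in the Events section
kubectl describe pod <pod-name>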
Troubleshoot node not joining the cluster
EKS Auto Mode automatically configures new EC2 instances with the correct information to join the cluster, including the cluster endpoint and cluster certificate authority (CA). However, these instances can still fail to join the EKS cluster as a node. Run the following commands to identify instances that didn’t join the cluster:
- Run kubectl get nodeclaim to check for NodeClaims that are Ready = False.
  kubectl get nodeclaim
- Run kubectl describe nodeclaim <node_claim> and look under Status to find any issues preventing the node from joining the cluster.
  kubectl describe nodeclaim <node_claim>
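To list only the NodeClaims whose Ready condition is not True, the following sketch assumes the jq tool is installed.

# Print the names of NodeClaims that are not Ready
kubectl get nodeclaim -o json \
  | jq -r '.items[] | select(any(.status.conditions[]?; .type == "Ready" and .status != "True")) | .metadata.name'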
Common error messages:
- Error getting launch template configs
  You might receive this error if you are setting custom tags in the NodeClass with the default cluster IAM role permissions. See Learn about identity and access in EKS Auto Mode.
- Error creating fleet
  There might be an authorization issue with the RunInstances call to the EC2 API. Check AWS CloudTrail for errors, and see Amazon EKS Auto Mode cluster IAM role for the required IAM permissions.
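Two quick checks from the command line, sketched under the assumption that jq is available and that your NodeClass is the built-in one named default (yours may differ):

# Inspect the NodeClass for custom tags that require extra IAM permissions
kubectl get nodeclass default -o yaml

# Show recent RunInstances calls that returned an error code
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
  --max-results 50 --output json \
  | jq -r '.Events[].CloudTrailEvent | fromjson | select(.errorCode != null) | "\(.eventTime) \(.errorCode): \(.errorMessage)"'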
Detect node connectivity issues with the VPC Reachability Analyzer
Note
You are charged for each analysis that is run in the VPC Reachability Analyzer. For pricing details, see Amazon VPC Pricing.
One reason that an instance might fail to join the cluster is a network connectivity issue that prevents it from reaching the API server. To diagnose this issue, you can use the VPC Reachability Analyzer to perform an analysis of the connectivity between a node that is failing to join the cluster and the API server. You will need two pieces of information:
- The instance ID of a node that can't join the cluster
- The IP address of the Kubernetes API server endpoint
To get the instance ID, you will need to create a workload on the cluster to cause EKS Auto Mode to launch an EC2 instance. This also creates a NodeClaim object in your cluster that will have the instance ID. Run kubectl get nodeclaim -o yaml to print all of the NodeClaims in your cluster. Each NodeClaim contains the instance ID as a field and again in the providerID:
kubectl get nodeclaim -o yaml
An example output is as follows.
nodeName: i-01234567890123456
providerID: aws:///us-west-2a/i-01234567890123456
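If you only need the instance IDs, you can pull them with a jsonpath query. This sketch assumes the status.nodeName field shown in the output above.

# Print each NodeClaim name with the node name (the EC2 instance ID) it produced
kubectl get nodeclaim -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeName}{"\n"}{end}'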
You can determine your Kubernetes API server endpoint by running kubectl get endpoints kubernetes -o yaml. The addresses are in the addresses field:
kubectl get endpoints kubernetes -o yaml
An example output is as follows.
apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 10.0.143.233
  - ip: 10.0.152.17
  ports:
  - name: https
    port: 443
    protocol: TCP
With these two pieces of information, you can perform the analysis (an AWS CLI alternative is sketched after these steps). First, navigate to the VPC Reachability Analyzer in the AWS Management Console.
- Click "Create and Analyze Path".
- Provide a name for the analysis (for example, "Node Join Failure").
- For "Source Type", select "Instances".
- Enter the instance ID of the failing node as the "Source".
- For the "Path Destination", select "IP Address".
- Enter one of the IP addresses for the API server as the "Destination Address".
- Expand the "Additional Packet Header Configuration" section.
- Enter a "Destination Port" of 443.
- Select "Protocol" as TCP if it is not already selected.
- Click "Create and Analyze Path".
- The analysis might take a few minutes to complete. If the analysis results indicate failed reachability, they show where in the network path the failure occurred so that you can resolve the issue.
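A rough AWS CLI equivalent of these steps follows. This is a sketch; the instance ID and IP address are placeholders, and it assumes the EC2 Network Insights commands are available in your AWS CLI version.

# Create a path from the failing node to the API server endpoint on TCP/443
PATH_ID=$(aws ec2 create-network-insights-path \
  --source i-01234567890123456 \
  --destination-ip 10.0.143.233 \
  --destination-port 443 \
  --protocol tcp \
  --query 'NetworkInsightsPath.NetworkInsightsPathId' --output text)

# Start the analysis and capture its ID
ANALYSIS_ID=$(aws ec2 start-network-insights-analysis \
  --network-insights-path-id "$PATH_ID" \
  --query 'NetworkInsightsAnalysis.NetworkInsightsAnalysisId' --output text)

# Check the result once the analysis completes
aws ec2 describe-network-insights-analyses \
  --network-insights-analysis-ids "$ANALYSIS_ID" \
  --query 'NetworkInsightsAnalyses[0].{Status:Status,Reachable:NetworkPathFound}'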
Sharing Volumes Across Pods
EKS Auto Mode nodes are configured with SELinux in enforcing mode, which provides more isolation between Pods that are running on the same node. When SELinux is enabled, most non-privileged Pods will automatically have their own multi-category security (MCS) label applied to them. This MCS label is unique per Pod, and is designed to ensure that a process in one Pod cannot manipulate a process in any other Pod or on the host. Even if a labeled Pod runs as root and has access to the host filesystem, it will be unable to manipulate files, make sensitive system calls on the host, access the container runtime, or obtain the kubelet's secret key material.
Due to this, you may experience issues when trying to share data between Pods. For example, a PersistentVolumeClaim with an access mode of ReadWriteOnce will still not allow multiple Pods to access the volume concurrently.
To enable this sharing between Pods, you can use the Pod's seLinuxOptions to configure the same MCS label on those Pods. In this example, we assign the three categories c123,c456,c789 to the Pod. This will not conflict with any categories assigned to Pods on the node automatically, as they will only be assigned two categories.
securityContext:
  seLinuxOptions:
    level: "s0:c123,c456,c789"
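As a minimal sketch of how this fits into full Pod specs, the following creates two Pods that mount the same PersistentVolumeClaim and share the MCS label from the example above. The PVC name shared-data is hypothetical; substitute your own claim.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: writer
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456,c789"
  containers:
  - name: app
    image: public.ecr.aws/amazonlinux/amazonlinux:2023
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-data   # hypothetical PVC name
---
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456,c789"   # same MCS label as the writer Pod
  containers:
  - name: app
    image: public.ecr.aws/amazonlinux/amazonlinux:2023
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-data
EOF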
Troubleshoot included controllers in Auto Mode
If you have a problem with a controller, you should check:

- Whether the resources associated with that controller are properly formatted and valid (a quick way to check is sketched after this list).
- Whether the AWS IAM and Kubernetes RBAC resources are properly configured for your cluster. For more information, see Learn about identity and access in EKS Auto Mode.
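For the first check, describing the resource and reading its events usually shows whether the included controller accepted or rejected it. The resource names below are placeholders.

# Inspect a Service, PersistentVolumeClaim, or NodePool and check the Events section at the end
kubectl describe service <service-name>
kubectl describe pvc <pvc-name>
kubectl describe nodepool <nodepool-name>

# List warning events in the current namespace
kubectl get events --field-selector type=Warning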