Prepare operating system for hybrid nodes

Amazon Linux 2023 (AL2023), Ubuntu, and Red Hat Enterprise Linux (RHEL) are validated on an ongoing basis for use as the node operating system for hybrid nodes. AWS supports the hybrid nodes integration with these operating systems but does not provide support for the operating systems themselves. AL2023 is not covered by AWS Support Plans when run outside of Amazon EC2, and it can only be used in on-premises virtualized environments. For more information, see the Amazon Linux 2023 User Guide.

You are responsible for operating system provisioning and management. When you are testing hybrid nodes for the first time, it is easiest to run the Amazon EKS Hybrid Nodes CLI (nodeadm) on an already provisioned host. For production deployments, include nodeadm in your operating system images and configure it to run as a systemd service so hosts automatically join your Amazon EKS cluster at startup.
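For a first test on an already provisioned host, the flow looks roughly like the following, run as root. The Kubernetes version, credential provider, and configuration path here are placeholders for your own values; see the Hybrid nodes nodeadm reference for the full set of options.

# Install the hybrid nodes dependencies (kubelet, containerd, and related components)
nodeadm install 1.31 --credential-provider ssm

# Join the host to your EKS cluster using your nodeadm configuration
nodeadm init -c file:///path/to/nodeConfig.yaml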

Version compatibility

The following table lists the operating system versions that are compatible and validated for use as the node operating system for hybrid nodes. If you are using other operating system variants or versions that are not included in this table, the compatibility of hybrid nodes with your operating system variant or version is not covered by AWS Support. Hybrid nodes are agnostic to the underlying infrastructure and support x86 and ARM architectures.

Operating System          Versions

Amazon Linux              Amazon Linux 2023 (AL2023)

Ubuntu                    Ubuntu 20.04, Ubuntu 22.04, Ubuntu 24.04

Red Hat Enterprise Linux  RHEL 8, RHEL 9

Operating system considerations

General

  • The Amazon EKS Hybrid Nodes CLI (nodeadm) can be used to simplify the installation and configuration of the hybrid nodes components and dependencies. You can run the nodeadm install process during your operating system image build pipelines or at runtime on each on-premises host. For more information on the components that nodeadm installs, see the Hybrid nodes nodeadm reference.

  • If you are using a proxy in your on-premises environment to reach the internet, there is additional operating system configuration required for the install and upgrade processes to configure your package manager to use the proxy. See Configure proxy for hybrid nodes for instructions.
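As an illustration, on Ubuntu you can point apt at a proxy with a drop-in file like the following. The proxy address is a placeholder, and the linked page covers the full, supported procedure for each operating system.

# Example apt proxy configuration; replace the address with your proxy
sudo tee /etc/apt/apt.conf.d/proxy.conf <<'EOF'
Acquire::http::Proxy "http://proxy.example.com:3128";
Acquire::https::Proxy "http://proxy.example.com:3128";
EOF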

Containerd

  • Containerd is the standard Kubernetes container runtime and is a dependency for hybrid nodes, as well as for all Amazon EKS node compute types. The Amazon EKS Hybrid Nodes CLI (nodeadm) attempts to install containerd during the nodeadm install process. You can configure the containerd installation at nodeadm install runtime with the --containerd-source command line option. Valid options are none, distro, and docker. If you are using RHEL, distro is not a valid option; you can either configure nodeadm to install the containerd build from Docker's repos or manually install containerd yourself. When using AL2023 or Ubuntu, nodeadm defaults to installing containerd from the operating system distribution. If you do not want nodeadm to install containerd, use the --containerd-source none option.
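For example, on RHEL you might have nodeadm pull the containerd build from Docker's repos; the Kubernetes version and credential provider shown here are placeholders.

nodeadm install 1.31 --credential-provider ssm --containerd-source docker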

Ubuntu

  • If you are using Ubuntu 20.04, you must use AWS Systems Manager hybrid activations as your credential provider. AWS IAM Roles Anywhere is not supported on Ubuntu 20.04.

  • If you are using Ubuntu 24.04, you may need to update your version of containerd or change your AppArmor configuration to pick up a fix that allows pods to terminate properly, see Ubuntu #2065423. A reboot is required to apply changes to the AppArmor profile. The latest version of Ubuntu 24.04 has an updated containerd version in its package manager with the fix (containerd version 1.7.19 or later).
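To check whether a host already has a fixed version, inspect the installed containerd and upgrade it through apt if needed. This is a sketch that assumes containerd came from Ubuntu's package manager; the package name differs if you installed the build from Docker's repos.

containerd --version
sudo apt update && sudo apt install --only-upgrade containerd
sudo reboot  # required for the AppArmor profile change to take effect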

RHEL

  • If you are using RHEL 8, you must use AWS Systems Manager hybrid activations as your credential provider. AWS IAM Roles Anywhere isn’t supported on RHEL 8.

Building operating system images

Amazon EKS provides example Packer templates you can use to create operating system images that include nodeadm and configure it to run at host startup. This process is recommended to avoid pulling the hybrid nodes dependencies individually on each host and to automate the hybrid nodes bootstrap process. You can use the example Packer templates with an Ubuntu 22.04, Ubuntu 24.04, RHEL 8, or RHEL 9 ISO image, and can output images in these formats: OVA, Qcow2, or raw.
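The example templates are maintained in the EKS Hybrid Nodes GitHub repository referenced later on this page. Assuming you work from a clone of that repository (the exact location of the templates within it may change), a typical starting point is:

git clone https://github.com/aws/eks-hybrid.git
cd eks-hybrid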

Prerequisites

Before using the example Packer templates, you must have the following installed on the machine from which you run Packer.

  • Packer version 1.11.0 or higher. For instructions on installing Packer, see Install Packer in the Packer documentation.

  • If building OVAs, VMware vSphere plugin 1.4.0 or higher

  • If building Qcow2 or raw images, QEMU plugin version 1.x

Set environment variables

Before running the Packer build, set the following environment variables on the machine from which you run Packer.

General

The following environment variables must be set for building images with all operating systems and output formats.

PKR_SSH_PASSWORD (String)

Packer uses the ssh_username and ssh_password variables to SSH into the created machine when provisioning. This needs to match the password used to create the initial user within the respective OS's kickstart or user-data files. The default is "builder" or "ubuntu", depending on the OS. When setting your password, make sure to change it within the corresponding ks.cfg or user-data file to match.

ISO_URL (String)

URL of the ISO to use. This can be a web link to download from a server or an absolute path to a local file.

ISO_CHECKSUM (String)

Checksum for the supplied ISO.

CREDENTIAL_PROVIDER (String)

Credential provider for hybrid nodes. Valid values are ssm (default) for AWS SSM hybrid activations and iam for AWS IAM Roles Anywhere.

K8S_VERSION (String)

Kubernetes version for hybrid nodes (for example, 1.31). For supported Kubernetes versions, see Understand the Kubernetes version lifecycle on EKS.

NODEADM_ARCH (String)

Architecture for the nodeadm install. Valid values are amd and arm.
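For example, the following exports set up an Ubuntu build; every value here is a placeholder, so substitute your own ISO location, checksum, and versions. The RHEL-, vSphere-, and QEMU-specific variables in the following sections are exported the same way.

export PKR_SSH_PASSWORD='ubuntu'
export ISO_URL='https://releases.ubuntu.com/22.04/ubuntu-22.04.5-live-server-amd64.iso'
export ISO_CHECKSUM='sha256:REPLACE_WITH_ISO_CHECKSUM'
export CREDENTIAL_PROVIDER='ssm'
export K8S_VERSION='1.31'
export NODEADM_ARCH='amd'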

RHEL

If you are using RHEL, the following environment variables must be set.

RH_USERNAME (String)

RHEL subscription manager username.

RH_PASSWORD (String)

RHEL subscription manager password.

RHEL_VERSION (String)

RHEL ISO version being used. Valid values are 8 and 9.

Ubuntu

There are no Ubuntu-specific environment variables required.

vSphere

If you are building a VMware vSphere OVA, the following environment variables must be set.

VSPHERE_SERVER (String)

vSphere server address.

VSPHERE_USER (String)

vSphere username.

VSPHERE_PASSWORD (String)

vSphere password.

VSPHERE_DATACENTER (String)

vSphere datacenter name.

VSPHERE_CLUSTER (String)

vSphere cluster name.

VSPHERE_DATASTORE (String)

vSphere datastore name.

VSPHERE_NETWORK (String)

vSphere network name.

VSPHERE_OUTPUT_FOLDER (String)

vSphere output folder for the templates.

QEMU

If you are building Qcow2 or raw images, the following environment variable must be set.

PACKER_OUTPUT_FORMAT (String)

Output format for the QEMU builder. Valid values are qcow2 and raw.

Validate template

Before running your build, validate your template with the following command after setting your environment variables. Replace template.pkr.hcl if you are using a different name for your template.

packer validate template.pkr.hcl

Build images

Build your images with the following commands and use the -only flag to specify the target and operating system for your images. Replace template.pkr.hcl if you are using a different name for your template.

vSphere OVAs

Note

If you are using RHEL with vSphere you need to convert the kickstart files to an OEMDRV image and pass it as an ISO to boot from. For more information, see the Packer Readme in the EKS Hybrid Nodes GitHub Repository.
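As a rough sketch of that conversion, you can generate an ISO whose volume label is OEMDRV, which the RHEL installer scans for a kickstart file named ks.cfg; see the linked Readme for the exact supported steps.

genisoimage -output oemdrv.iso -volid OEMDRV ks.cfg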

Ubuntu 22.04 OVA

packer build -only=general-build.vsphere-iso.ubuntu22 template.pkr.hcl

Ubuntu 24.04 OVA

packer build -only=general-build.vsphere-iso.ubuntu24 template.pkr.hcl

RHEL 8 OVA

packer build -only=general-build.vsphere-iso.rhel8 template.pkr.hcl

RHEL 9 OVA

packer build -only=general-build.vsphere-iso.rhel9 template.pkr.hcl

QEMU

Note

If you are building an image for a host CPU that does not match your builder host, see the QEMU documentation for the CPU model name that matches your target host and pass it with the -cpu flag when you run the following commands.

Ubuntu 22.04 Qcow2 / Raw

packer build -only=general-build.qemu.ubuntu22 template.pkr.hcl

Ubuntu 24.04 Qcow2 / Raw

packer build -only=general-build.qemu.ubuntu24 template.pkr.hcl

RHEL 8 Qcow2 / Raw

packer build -only=general-build.qemu.rhel8 template.pkr.hcl

RHEL 9 Qcow2 / Raw

packer build -only=general-build.qemu.rhel9 template.pkr.hcl
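After a QEMU build completes, you can sanity-check the resulting disk image with qemu-img; the path below is a placeholder for wherever your template writes its output.

qemu-img info path/to/output-image.qcow2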

Pass nodeadm configuration through user-data

You can pass configuration for nodeadm in your user-data through cloud-init to configure hybrid nodes and automatically connect them to your EKS cluster at host startup. The following example shows how to accomplish this when using VMware vSphere as the infrastructure for your hybrid nodes.

  1. Install the govc CLI following the instructions in the govc readme on GitHub.

  2. After running the Packer build in the previous section and provisioning your template, you can clone it to create multiple nodes with the following command. You must clone the template for each new VM you are creating for hybrid nodes. Replace the variables in the command with the values for your environment. The VM_NAME is used as your NODE_NAME when you inject the names for your VMs via your metadata.yaml file.

    govc vm.clone -vm "/PATH/TO/TEMPLATE" -ds="YOUR_DATASTORE" \
      -on=false -template=false -folder=/FOLDER/TO/SAVE/VM "VM_NAME"
  3. After cloning the template for each of your new VMs, create a userdata.yaml and metadata.yaml for your VMs. Your VMs can share the same userdata.yaml and metadata.yaml; you populate them on a per-VM basis in the steps below. The nodeadm configuration is defined in the write_files section of your userdata.yaml. The example below uses AWS SSM hybrid activations as the on-premises credential provider for hybrid nodes. For more information on nodeadm configuration, see the Hybrid nodes nodeadm reference.

    userdata.yaml:

    #cloud-config
    users:
      - name: # username for login. Use 'builder' for RHEL or 'ubuntu' for Ubuntu.
        passwd: # password to login. Default is 'builder' for RHEL.
        groups: [adm, cdrom, dip, plugdev, lxd, sudo]
        lock-passwd: false
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/bash
    write_files:
      - path: /usr/local/bin/nodeConfig.yaml
        permissions: '0644'
        content: |
          apiVersion: node.eks.aws/v1alpha1
          kind: NodeConfig
          spec:
            cluster:
              name: # Cluster Name
              region: # AWS region
            hybrid:
              ssm:
                activationCode: # Your ssm activation code
                activationId: # Your ssm activation id
    runcmd:
      - /usr/local/bin/nodeadm init -c file:///usr/local/bin/nodeConfig.yaml >> /var/log/nodeadm-init.log 2>&1

    metadata.yaml:

    Create a metadata.yaml for your environment. Keep the "$NODE_NAME" variable format in the file as this will be populated with values in a subsequent step.

    instance-id: "$NODE_NAME"
    local-hostname: "$NODE_NAME"
    network:
      version: 2
      ethernets:
        nics:
          match:
            name: ens*
          dhcp4: yes
  4. Add the userdata.yaml and metadata.yaml files to your VMs as gzip+base64 strings with the following commands. Run the commands for each of the VMs you are creating, replacing VM_NAME with the name of the VM you are updating.

    export NODE_NAME="VM_NAME"

    export USER_DATA=$(gzip -c9 <userdata.yaml | base64)
    govc vm.change -dc="YOUR_DATACENTER" -vm "$NODE_NAME" -e guestinfo.userdata="${USER_DATA}"
    govc vm.change -dc="YOUR_DATACENTER" -vm "$NODE_NAME" -e guestinfo.userdata.encoding=gzip+base64

    envsubst '$NODE_NAME' < metadata.yaml > metadata.yaml.tmp
    export METADATA=$(gzip -c9 <metadata.yaml.tmp | base64)
    govc vm.change -dc="YOUR_DATACENTER" -vm "$NODE_NAME" -e guestinfo.metadata="${METADATA}"
    govc vm.change -dc="YOUR_DATACENTER" -vm "$NODE_NAME" -e guestinfo.metadata.encoding=gzip+base64
  5. Power on your new VMs, which should automatically connect to the EKS cluster you configured.

    govc vm.power -on "${NODE_NAME}"
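After the VMs power on and nodeadm init completes, the nodes register with your EKS cluster. As a quick check from any machine with kubectl access to the cluster, list the nodes; newly joined hybrid nodes typically report NotReady until a CNI is installed on the cluster.

kubectl get nodes -o wide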