This topic provides an overview of the networking setup you must have configured before creating your Amazon EKS cluster and attaching hybrid nodes. This guide assumes you have met the prerequisite requirements for hybrid network connectivity using AWS Site-to-Site VPN, AWS Direct Connect, or your own VPN solution.
On-premises networking configuration
Minimum network requirements
For an optimal experience, AWS recommends reliable network connectivity of at least 100 Mbps and a maximum of 200ms round trip latency for the hybrid nodes connection to the AWS Region. The bandwidth and latency requirements can vary depending on the number of hybrid nodes and your workload characteristics such as application image size, application elasticity, monitoring and logging configurations, and application dependencies on accessing data stored in other AWS services.
On-premises node and pod CIDRs
Identify the node and pod CIDRs you will use for your hybrid nodes and the workloads running on them. The node CIDR is allocated from your on-premises network, and the pod CIDR is allocated from your Container Network Interface (CNI) if you are using an overlay network for your CNI. You pass your on-premises node CIDRs, and optionally your pod CIDRs, as inputs when you create your Amazon EKS cluster with the `RemoteNodeNetwork` and `RemotePodNetwork` fields.
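For reference, the following is a minimal sketch of how these fields can be supplied at cluster creation with the AWS CLI, assuming a CLI version that supports the remote network configuration for hybrid nodes. The cluster name, role ARN, subnet and security group IDs, and CIDR values are illustrative placeholders.

```bash
# Minimal sketch: supply on-premises node and pod CIDRs at cluster creation.
# All values (cluster name, role ARN, IDs, CIDRs) are illustrative placeholders.
aws eks create-cluster \
    --name my-hybrid-cluster \
    --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
    --resources-vpc-config subnetIds=SUBNET_ID1,SUBNET_ID2,securityGroupIds=SG_ID \
    --remote-network-config '{"remoteNodeNetworks":[{"cidrs":["REMOTE_NODE_CIDR"]}],"remotePodNetworks":[{"cidrs":["REMOTE_POD_CIDR"]}]}'
```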
The on-premises node and pod CIDR blocks must meet the following requirements:

- Be within one of the following IPv4 RFC-1918 ranges: `10.0.0.0/8`, `172.16.0.0/12`, or `192.168.0.0/16`.
- Not overlap with each other, with the VPC CIDR for your Amazon EKS cluster, or with your Kubernetes service IPv4 CIDR.
If your CNI performs Network Address Translation (NAT) for pod traffic as it leaves your on-premises hosts, you do not need to advertise your pod CIDR to your on-premises network or configure your Amazon EKS cluster with your remote pod network for hybrid nodes to become ready to run workloads. If your CNI does not use NAT for pod traffic as it leaves your on-premises hosts, you must advertise your pod CIDR to your on-premises network and configure your Amazon EKS cluster with your remote pod network. If you are running webhooks on your hybrid nodes, you must advertise your pod CIDR to your on-premises network and configure your Amazon EKS cluster with your remote pod network so the Amazon EKS control plane can directly connect to the webhooks running on hybrid nodes.
Access required during hybrid node installation and upgrade
You must have access to the following domains during the installation process, when you install the hybrid nodes dependencies on your hosts. This can be done once when you build your operating system images, or on each host at runtime. Access is required for the initial installation and when you upgrade the Kubernetes version of your hybrid nodes.
| Component | URL | Protocol | Port |
| --- | --- | --- | --- |
| EKS node artifacts (S3) | https://hybrid-assets.eks.amazonaws.com | HTTPS | 443 |
| EKS service endpoint | https://eks.*region*.amazonaws.com | HTTPS | 443 |
| EKS ECR endpoints | See View Amazon container image registries for Amazon EKS add-ons for regional endpoints. | HTTPS | 443 |
| SSM binary endpoint 1 | https://amazon-ssm-*region*.s3.*region*.amazonaws.com | HTTPS | 443 |
| SSM service endpoint 1 | https://ssm.*region*.amazonaws.com | HTTPS | 443 |
| IAM Anywhere binary endpoint 2 | https://rolesanywhere.amazonaws.com | HTTPS | 443 |
| IAM Anywhere service endpoint 2 | https://rolesanywhere.*region*.amazonaws.com | HTTPS | 443 |
Note
1 Access to the AWS SSM endpoints is required only if you are using AWS SSM hybrid activations for your on-premises IAM credential provider.
2 Access to the AWS IAM Roles Anywhere endpoints is required only if you are using AWS IAM Roles Anywhere for your on-premises IAM credential provider.
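Before installing the hybrid nodes dependencies, you can verify from a host that the endpoints above are reachable. The following is a sketch; `us-west-2` is an example Region, and any HTTP response code (even an error such as 403) indicates the endpoint is reachable on port 443.

```bash
# Verify outbound HTTPS (443) reachability to the install-time endpoints.
# Substitute your Region for us-west-2; hostnames follow the table above.
for host in \
    hybrid-assets.eks.amazonaws.com \
    eks.us-west-2.amazonaws.com \
    ssm.us-west-2.amazonaws.com; do
    curl -sS -o /dev/null --connect-timeout 5 -w "$host: %{http_code}\n" "https://$host" \
        || echo "$host: unreachable"
done
```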
Access required for ongoing cluster operations
The following network access for your on-premises firewall is required for ongoing cluster operations.
Important
Depending on your choice of CNI, you need to configure additional network access rules for the CNI ports. See the Cilium documentation for details on the ports your CNI requires.
| Type | Protocol | Direction | Port | Source | Destination | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | EKS cluster IPs 1 | kubelet to Kubernetes API server |
| HTTPS | TCP | Outbound | 443 | Remote Pod CIDR(s) | EKS cluster IPs 1 | Pod to Kubernetes API server |
| HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | SSM service endpoints | SSM hybrid activations credential refresh and SSM heartbeats every 5 minutes |
| HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | IAM Roles Anywhere service endpoints | IAM Roles Anywhere credential refresh |
| HTTPS | TCP | Outbound | 443 | Remote Pod CIDR(s) | STS regional endpoint | Pod to STS endpoint, only required for IRSA |
| HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | Amazon EKS Auth endpoint | Node to Amazon EKS Auth endpoint, only required for Amazon EKS Pod Identity |
| HTTPS | TCP | Inbound | 10250 | EKS cluster IPs 1 | Remote Node CIDR(s) | Kubernetes API server to kubelet |
| HTTPS | TCP | Inbound | Webhook ports | EKS cluster IPs 1 | Remote Pod CIDR(s) | Kubernetes API server to webhooks |
| HTTPS | TCP,UDP | Inbound,Outbound | 53 | Remote Pod CIDR(s) | Remote Pod CIDR(s) | Pod to CoreDNS. If you run at least 1 replica of CoreDNS in the cloud, you must allow DNS traffic to the VPC where CoreDNS is running. |
| User-defined | User-defined | Inbound,Outbound | App ports | Remote Pod CIDR(s) | Remote Pod CIDR(s) | Pod to Pod |
Note
1 The IPs of the Amazon EKS cluster. See the following section on Amazon EKS elastic network interfaces.
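As a basic check of the outbound kubelet-to-API-server path in the table above, you can probe your cluster's API server endpoint from an on-premises host. This is a sketch; replace `CLUSTER_NAME` with your cluster name, and note that any HTTP status (for example 401) confirms the outbound 443 path works.

```bash
# Probe the Kubernetes API server endpoint from an on-premises host.
API_ENDPOINT=$(aws eks describe-cluster --name CLUSTER_NAME \
    --query 'cluster.endpoint' --output text)
curl -sk -o /dev/null -w "%{http_code}\n" --connect-timeout 5 "$API_ENDPOINT/healthz"
```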
Amazon EKS network interfaces
Amazon EKS attaches network interfaces to the subnets in the VPC you pass during cluster creation to enable the communication between the Amazon EKS control plane and your VPC. The network interfaces that Amazon EKS creates can be found after cluster creation in the Amazon EC2 console or with the AWS CLI. The original network interfaces are deleted and new network interfaces are created when changes are applied on your Amazon EKS cluster, such as Kubernetes version upgrades. You can restrict the IP range for the Amazon EKS network interfaces by using constrained subnet sizes for the subnets you pass during cluster creation, which makes it easier to configure your on-premises firewall to allow inbound/outbound connectivity to this known, constrained set of IPs. To control which subnets network interfaces are created in, you can limit the number of subnets you specify when you create a cluster or you can update the subnets after creating the cluster.
The network interfaces provisioned by Amazon EKS have a description of the format `Amazon EKS your-cluster-name`. See the example below for an AWS CLI command you can use to find the IP addresses of the network interfaces that Amazon EKS provisions. Replace `VPC_ID` with the ID of the VPC you pass during cluster creation.

```bash
aws ec2 describe-network-interfaces \
    --query 'NetworkInterfaces[?(VpcId == `VPC_ID` && contains(Description,`Amazon EKS`))].PrivateIpAddress'
```
AWS VPC and subnet setup
The existing VPC and subnet requirements for Amazon EKS apply to clusters with hybrid nodes. Additionally, your VPC CIDR can’t overlap with your on-premises node and pod CIDRs. You must configure routes in your VPC route table for your on-premises node and optionally pod CIDRs. These routes must be set up to route traffic to the gateway you are using for your hybrid network connectivity, which is commonly a virtual private gateway (VGW) or transit gateway (TGW). If you are using a TGW or VGW to connect your VPC with your on-premises environment, you must create a TGW or VGW attachment for your VPC. Your VPC must have DNS hostname and DNS resolution support.
The following steps use the AWS CLI. You can also create these resources in the AWS Management Console or with other interfaces such as AWS CloudFormation, AWS CDK, or Terraform.
Step 1: Create VPC
- Run the following command to create a VPC. Replace `VPC_CIDR` with an IPv4 RFC-1918 (private) or non-RFC-1918 (public) CIDR range (for example `10.0.0.0/16`). Note: DNS resolution, which is an EKS requirement, is enabled for the VPC by default.

  ```bash
  aws ec2 create-vpc --cidr-block VPC_CIDR
  ```

- Enable DNS hostnames for your VPC. Replace `VPC_ID` with the ID of the VPC you created in the previous step.

  ```bash
  aws ec2 modify-vpc-attribute --vpc-id VPC_ID --enable-dns-hostnames
  ```
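If you are scripting the setup, you can capture the VPC ID into a shell variable for the later steps. The following is a sketch using an example CIDR.

```bash
# Create the VPC, capture its ID, and enable DNS hostnames in one pass.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
    --query 'Vpc.VpcId' --output text)
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames
echo "$VPC_ID"
```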
Step 2: Create subnets
Create at least 2 subnets. Amazon EKS uses these subnets for the cluster network interfaces. For more information, see Subnet requirements and considerations.

- You can find the availability zones for an AWS Region with the following command. Replace `us-west-2` with your region.

  ```bash
  aws ec2 describe-availability-zones \
      --query 'AvailabilityZones[?(RegionName == `us-west-2`)].ZoneName'
  ```

- Create a subnet. Replace `VPC_ID` with the ID of the VPC. Replace `SUBNET_CIDR` with the CIDR block for your subnet (for example `10.0.1.0/24`). Replace `AZ` with the availability zone where the subnet will be created (for example `us-west-2a`). The subnets you create must be in at least 2 different availability zones.

  ```bash
  aws ec2 create-subnet \
      --vpc-id VPC_ID \
      --cidr-block SUBNET_CIDR \
      --availability-zone AZ
  ```
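Continuing the scripted example, the following sketch creates two subnets in different availability zones and captures their IDs; the CIDRs and AZs are examples.

```bash
# Create two subnets in different availability zones and capture their IDs.
SUBNET_ID1=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
    --cidr-block 10.0.1.0/24 --availability-zone us-west-2a \
    --query 'Subnet.SubnetId' --output text)
SUBNET_ID2=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
    --cidr-block 10.0.2.0/24 --availability-zone us-west-2b \
    --query 'Subnet.SubnetId' --output text)
```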
(Optional) Step 3: Attach VPC with Amazon VPC Transit Gateway (TGW) or AWS Direct Connect virtual private gateway (VGW)
If you are using a TGW or VGW, attach your VPC to the TGW or VGW. For more information, see Amazon VPC attachments in Amazon VPC Transit Gateways or AWS Direct Connect virtual private gateway associations.
Transit Gateway
Run the following command to attach a Transit Gateway. Replace `VPC_ID` with the ID of the VPC. Replace `SUBNET_ID1` and `SUBNET_ID2` with the IDs of the subnets you created in the previous step. Replace `TGW_ID` with the ID of your TGW.

```bash
aws ec2 create-transit-gateway-vpc-attachment \
    --vpc-id VPC_ID \
    --subnet-ids SUBNET_ID1 SUBNET_ID2 \
    --transit-gateway-id TGW_ID
```
Virtual Private Gateway
Run the following command to attach a virtual private gateway. Replace `VPN_ID` with the ID of your VGW. Replace `VPC_ID` with the ID of the VPC.

```bash
aws ec2 attach-vpn-gateway \
    --vpn-gateway-id VPN_ID \
    --vpc-id VPC_ID
```
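A TGW attachment takes a short time to become available. If you are using the TGW path, you can check its state before creating routes; a sketch:

```bash
# Check the TGW attachment state; proceed once it reports "available".
aws ec2 describe-transit-gateway-vpc-attachments \
    --filters Name=vpc-id,Values=VPC_ID \
    --query 'TransitGatewayVpcAttachments[].State'
```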
(Optional) Step 4: Create route table
You can modify the main route table for the VPC or you can create a custom route table. The following steps create a custom route table with the routes to on-premises node and pod CIDRs. For more information, see Subnet route tables. Replace `VPC_ID` with the ID of the VPC.

```bash
aws ec2 create-route-table --vpc-id VPC_ID
```
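As with the earlier steps, you can capture the route table ID for the routing and association steps that follow; a sketch:

```bash
# Create the custom route table and capture its ID.
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
    --query 'RouteTable.RouteTableId' --output text)
```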
Step 5: Create routes for on-premises nodes and pods
Create routes in the route table for each of your on-premises node and optionally pod CIDRs. You can modify the main route table for the VPC or use the custom route table you created in the previous step.
The examples below show how to create routes for your on-premises node and pod CIDRs. In the examples, a transit gateway (TGW) is used to connect the VPC with the on-premises environment. If you have multiple on-premises node and pod CIDRs, repeat the steps for each CIDR.
- If you are using an internet gateway or a virtual private gateway (VGW), replace `--transit-gateway-id` with `--gateway-id`.
- Replace `RT_ID` with the ID of the route table you created in the previous step.
- Replace `REMOTE_NODE_CIDR` with the CIDR range you will use for your hybrid nodes.
- Replace `REMOTE_POD_CIDR` with the CIDR range you will use for the pods running on hybrid nodes. The pod CIDR range corresponds to the Container Network Interface (CNI) configuration, which most commonly uses an overlay network on-premises. For more information, see Configure a CNI for hybrid nodes.
- Replace `TGW_ID` with the ID of your TGW.
Remote node network

```bash
aws ec2 create-route \
    --route-table-id RT_ID \
    --destination-cidr-block REMOTE_NODE_CIDR \
    --transit-gateway-id TGW_ID
```

Remote pod network

```bash
aws ec2 create-route \
    --route-table-id RT_ID \
    --destination-cidr-block REMOTE_POD_CIDR \
    --transit-gateway-id TGW_ID
```
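You can confirm the routes were created and point at your gateway; a sketch for the TGW case:

```bash
# Confirm the on-premises routes exist and are active.
aws ec2 describe-route-tables --route-table-ids RT_ID \
    --query 'RouteTables[0].Routes[].[DestinationCidrBlock,TransitGatewayId,State]'
```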
(Optional) Step 6: Associate subnets with route table
If you created a custom route table in Step 4, associate each of the subnets you created in Step 2 with it. If you are modifying the VPC main route table, the subnets are automatically associated with the main route table of the VPC and you can skip this step.
Run the following command for each of the subnets you created in the previous steps. Replace `RT_ID` with the ID of the route table you created in the previous step. Replace `SUBNET_ID` with the ID of a subnet.

```bash
aws ec2 associate-route-table --route-table-id RT_ID --subnet-id SUBNET_ID
```
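Continuing the scripted example with the variables captured in the earlier steps:

```bash
# Associate both subnets with the custom route table.
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID1"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID2"
```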
Cluster security group configuration
The following access for your Amazon EKS cluster security group is required for ongoing cluster operations.
| Type | Protocol | Direction | Port | Source | Destination | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| HTTPS | TCP | Inbound | 443 | Remote Node CIDR(s) | N/A | kubelet to Kubernetes API server |
| HTTPS | TCP | Inbound | 443 | Remote Pod CIDR(s) | N/A | Pods requiring access to the Kubernetes API server when the CNI is not using NAT for the pod traffic |
| HTTPS | TCP | Outbound | 10250 | N/A | Remote Node CIDR(s) | Kubernetes API server to kubelet |
| HTTPS | TCP | Outbound | Webhook ports | N/A | Remote Pod CIDR(s) | Kubernetes API server to webhooks (if running webhooks on hybrid nodes) |
To create a security group with the inbound access rules, run the following commands. This security group must be passed when you create your Amazon EKS cluster. By default, the command below creates a security group that allows all outbound access. You can restrict outbound access to include only the rules above. If you’re considering limiting the outbound rules, we recommend that you thoroughly test all of your applications and pod connectivity before you apply your changed rules to a production cluster.
- In the first command, replace `SG_NAME` with a name for your security group.
- In the first command, replace `VPC_ID` with the ID of the VPC you created in the previous step.
- In the second command, replace `SG_ID` with the ID of the security group you create in the first command.
- In the second command, replace `REMOTE_NODE_CIDR` and `REMOTE_POD_CIDR` with the values for your hybrid nodes and on-premises network.
```bash
aws ec2 create-security-group \
    --group-name SG_NAME \
    --description "security group for hybrid nodes" \
    --vpc-id VPC_ID
```

```bash
aws ec2 authorize-security-group-ingress \
    --group-id SG_ID \
    --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "REMOTE_NODE_CIDR"}, {"CidrIp": "REMOTE_POD_CIDR"}]}]'
```
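To confirm the ingress rules were applied as intended before you create your cluster, you can inspect the security group; a sketch:

```bash
# Review the inbound rules on the hybrid nodes security group.
aws ec2 describe-security-groups --group-ids SG_ID \
    --query 'SecurityGroups[0].IpPermissions'
```

Pass this security group ID in the security group IDs of your cluster's VPC configuration when you create your Amazon EKS cluster.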