Prepare networking for hybrid nodes

This topic provides an overview of the networking setup you must have configured before creating your Amazon EKS cluster and attaching hybrid nodes. This guide assumes you have met the prerequisite requirements for hybrid network connectivity using AWS Site-to-Site VPN, AWS Direct Connect, or your own VPN solution.

Figure: Hybrid node network connectivity.

On-premises networking configuration

Minimum network requirements

For an optimal experience, AWS recommends reliable network connectivity of at least 100 Mbps and at most 200 ms round-trip latency between the hybrid nodes and the AWS Region. The bandwidth and latency requirements vary depending on the number of hybrid nodes and your workload characteristics, such as application image size, application elasticity, monitoring and logging configurations, and application dependencies on data stored in other AWS services.

On-premises node and pod CIDRs

Identify the node and pod CIDRs you will use for your hybrid nodes and the workloads running on them. The node CIDR is allocated from your on-premises network, and the pod CIDR is allocated by your Container Network Interface (CNI) if you are using an overlay network for your CNI. You pass your on-premises node CIDRs, and optionally pod CIDRs, as inputs when you create your Amazon EKS cluster with the RemoteNodeNetwork and RemotePodNetwork fields.
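
For illustration, the following is a minimal sketch of passing these networks at cluster creation with the AWS CLI, assuming the remote networks are supplied through the remote-network-config parameter. The CIDR values, cluster name, role ARN, subnet IDs, and security group ID are placeholder assumptions; replace them with your own.

aws eks create-cluster \
  --name CLUSTER_NAME \
  --role-arn CLUSTER_ROLE_ARN \
  --resources-vpc-config subnetIds=SUBNET_ID1,SUBNET_ID2,securityGroupIds=SG_ID \
  --remote-network-config '{"remoteNodeNetworks":[{"cidrs":["10.200.0.0/16"]}],"remotePodNetworks":[{"cidrs":["10.201.0.0/16"]}]}'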

The on-premises node and pod CIDR blocks must meet the following requirements:

  1. Be within one of the following IPv4 RFC-1918 ranges: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.

  2. Not overlap with each other, the VPC CIDR for your Amazon EKS cluster, or your Kubernetes service IPv4 CIDR.

If your CNI performs Network Address Translation (NAT) for pod traffic as it leaves your on-premises hosts, you do not need to advertise your pod CIDR to your on-premises network or configure your Amazon EKS cluster with your remote pod network for hybrid nodes to become ready to run workloads. If your CNI does not use NAT for pod traffic as it leaves your on-premises hosts, you must advertise your pod CIDR to your on-premises network and configure your Amazon EKS cluster with your remote pod network. If you are running webhooks on your hybrid nodes, you must advertise your pod CIDR to your on-premises network and configure your Amazon EKS cluster with your remote pod network so the Amazon EKS control plane can connect directly to the webhooks running on hybrid nodes.

Access required during hybrid node installation and upgrade

You must have access to the following domains during the installation process, when you install the hybrid nodes dependencies on your hosts. This can be done once when you build your operating system images, or on each host at runtime. It applies to the initial installation and to upgrades of the Kubernetes version of your hybrid nodes.

| Component | URL | Protocol | Port |
| --- | --- | --- | --- |
| EKS node artifacts (S3) | https://hybrid-assets.eks.amazonaws.com | HTTPS | 443 |
| EKS service endpoints | https://eks.region.amazonaws.com | HTTPS | 443 |
| EKS ECR endpoints | See View Amazon container image registries for Amazon EKS add-ons for regional endpoints | HTTPS | 443 |
| SSM binary endpoint 1 | https://amazon-ssm-region.s3.region.amazonaws.com | HTTPS | 443 |
| SSM service endpoint 1 | https://ssm.region.amazonaws.com | HTTPS | 443 |
| IAM Anywhere binary endpoint 2 | https://rolesanywhere.amazonaws.com | HTTPS | 443 |
| IAM Anywhere service endpoint 2 | https://rolesanywhere.region.amazonaws.com | HTTPS | 443 |

Note

1 Access to the AWS SSM endpoints is only required if you are using AWS SSM hybrid activations for your on-premises IAM credential provider.

2 Access to the AWS IAM endpoints is only required if you are using AWS IAM Roles Anywhere for your on-premises IAM credential provider.
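
As a quick sanity check, you can verify that these domains are reachable from an on-premises host before installation. The following loop is an illustrative sketch, not part of the official setup; replace us-west-2 with your region. Any HTTP status code indicates the endpoint is reachable over TLS; connection failures print 000.

for url in \
  https://hybrid-assets.eks.amazonaws.com \
  https://eks.us-west-2.amazonaws.com \
  https://ssm.us-west-2.amazonaws.com \
  https://rolesanywhere.us-west-2.amazonaws.com; do
  # Print the URL followed by the HTTP status code returned by the endpoint.
  printf '%s -> ' "$url"
  curl -s -o /dev/null -w '%{http_code}\n' --max-time 10 "$url"
done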

Access required for ongoing cluster operations

Your on-premises firewall must allow the following network access for ongoing cluster operations.

Important

Depending on your choice of CNI, you need to configure additional network access rules for the CNI ports. See the Cilium documentation and the Calico documentation for details.

| Type | Protocol | Direction | Port | Source | Destination | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | EKS cluster IPs 1 | kubelet to Kubernetes API server |
| HTTPS | TCP | Outbound | 443 | Remote Pod CIDR(s) | EKS cluster IPs 1 | Pod to Kubernetes API server |
| HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | SSM service endpoint | SSM hybrid activations credential refresh and SSM heartbeats every 5 minutes |
| HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | IAM Anywhere service endpoint | IAM Roles Anywhere credential refresh |
| HTTPS | TCP | Outbound | 443 | Remote Pod CIDR(s) | STS Regional Endpoint | Pod to STS endpoint, only required for IRSA |
| HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | Amazon EKS Auth service endpoint | Node to Amazon EKS Auth endpoint, only required for Amazon EKS Pod Identity |
| HTTPS | TCP | Inbound | 10250 | EKS cluster IPs 1 | Remote Node CIDR(s) | Kubernetes API server to kubelet |
| HTTPS | TCP | Inbound | Webhook ports | EKS cluster IPs 1 | Remote Pod CIDR(s) | Kubernetes API server to webhooks |
| DNS | TCP,UDP | Inbound,Outbound | 53 | Remote Pod CIDR(s) | Remote Pod CIDR(s) | Pod to CoreDNS. If you run at least 1 replica of CoreDNS in the cloud, you must allow DNS traffic to the VPC where CoreDNS is running. |
| User-defined | User-defined | Inbound,Outbound | App ports | Remote Pod CIDR(s) | Remote Pod CIDR(s) | Pod to Pod |

Note

1 The IPs of the Amazon EKS cluster. See the following section on Amazon EKS elastic network interfaces.

Amazon EKS network interfaces

Amazon EKS attaches network interfaces to the subnets in the VPC you pass during cluster creation to enable communication between the Amazon EKS control plane and your VPC. The network interfaces that Amazon EKS creates can be found after cluster creation in the Amazon EC2 console or with the AWS CLI. The original network interfaces are deleted and new network interfaces are created when changes are applied to your Amazon EKS cluster, such as Kubernetes version upgrades.

You can restrict the IP range for the Amazon EKS network interfaces by using constrained subnet sizes for the subnets you pass during cluster creation, which makes it easier to configure your on-premises firewall to allow inbound/outbound connectivity to this known, constrained set of IPs. To control which subnets network interfaces are created in, you can limit the number of subnets you specify when you create a cluster, or you can update the subnets after creating the cluster.
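
For example, to change which subnets the network interfaces can be created in after cluster creation, you can update the cluster's VPC configuration. This is a sketch; CLUSTER_NAME, SUBNET_ID1, and SUBNET_ID2 are placeholders for your cluster name and the full set of subnet IDs you want the cluster to use.

aws eks update-cluster-config \
  --name CLUSTER_NAME \
  --resources-vpc-config subnetIds=SUBNET_ID1,SUBNET_ID2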

The network interfaces provisioned by Amazon EKS have a description of the format Amazon EKS your-cluster-name. See the example below for an AWS CLI command you can use to find the IP addresses of the network interfaces that Amazon EKS provisions. Replace VPC_ID with the ID of the VPC you pass during cluster creation.

aws ec2 describe-network-interfaces \
  --query 'NetworkInterfaces[?(VpcId == `VPC_ID` && contains(Description,`Amazon EKS`))].PrivateIpAddress'
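
The command returns a JSON list of private IP addresses, for example (illustrative values):

[
    "10.0.1.25",
    "10.0.2.14"
]

These are the IPs to allow in your on-premises firewall rules wherever the tables above reference the EKS cluster IPs.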

AWS VPC and subnet setup

The existing VPC and subnet requirements for Amazon EKS apply to clusters with hybrid nodes. Additionally, your VPC CIDR can’t overlap with your on-premises node and pod CIDRs. You must configure routes in your VPC routing table for your on-premises node and optionally pod CIDRs. These routes must be set up to route traffic to the gateway you are using for your hybrid network connectivity, which is commonly a virtual private gateway (VGW) or transit gateway (TGW). If you are using a TGW or VGW to connect your VPC with your on-premises environment, you must create a TGW or VGW attachment for your VPC. Your VPC must have DNS hostname and DNS resolution support.

The following steps use the AWS CLI. You can also create these resources in the AWS Management Console or with other interfaces such as AWS CloudFormation, AWS CDK, or Terraform.

Step 1: Create VPC

  1. Run the following command to create a VPC. Replace VPC_CIDR with an IPv4 RFC-1918 (private) or non-RFC-1918 (public) CIDR range (for example 10.0.0.0/16). Note: DNS resolution, which is an EKS requirement, is enabled for the VPC by default.

    aws ec2 create-vpc --cidr-block VPC_CIDR
  2. Enable DNS hostnames for your VPC. (DNS resolution is enabled for the VPC by default.) Replace VPC_ID with the ID of the VPC you created in the previous step.

    aws ec2 modify-vpc-attribute --vpc-id VPC_ID --enable-dns-hostnames
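
Optionally, you can capture the VPC ID in a shell variable so the later steps can reference it. This is an illustrative sketch using an example CIDR:

# Create the VPC and store its ID for reuse in the following steps.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames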

Step 2: Create subnets

Create at least 2 subnets. Amazon EKS uses these subnets for the cluster network interfaces. For more information, see Subnet requirements and considerations.

  1. You can find the availability zones for an AWS Region with the following command. Replace us-west-2 with your region.

    aws ec2 describe-availability-zones \
      --query 'AvailabilityZones[?(RegionName == `us-west-2`)].ZoneName'
  2. Create a subnet. Replace VPC_ID with the ID of the VPC. Replace SUBNET_CIDR with the CIDR block for your subnet (for example 10.0.1.0/24). Replace AZ with the availability zone where the subnet will be created (for example us-west-2a). The subnets you create must be in at least 2 different availability zones.

    aws ec2 create-subnet \
      --vpc-id VPC_ID \
      --cidr-block SUBNET_CIDR \
      --availability-zone AZ
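
Putting it together, the following sketch creates two subnets in two availability zones and captures their IDs for later steps. The CIDRs and zones are example values:

# Create one subnet per availability zone and store the IDs.
SUBNET_ID1=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.1.0/24 --availability-zone us-west-2a \
  --query 'Subnet.SubnetId' --output text)
SUBNET_ID2=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.2.0/24 --availability-zone us-west-2b \
  --query 'Subnet.SubnetId' --output text)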

(Optional) Step 3: Attach VPC with Amazon VPC Transit Gateway (TGW) or AWS Direct Connect virtual private gateway (VGW)

If you are using a TGW or VGW, attach your VPC to the TGW or VGW. For more information, see Amazon VPC attachments in Amazon VPC Transit Gateways or AWS Direct Connect virtual private gateway associations.

Transit Gateway

Run the following command to create a transit gateway attachment for your VPC. Replace VPC_ID with the ID of the VPC. Replace SUBNET_ID1 and SUBNET_ID2 with the IDs of the subnets you created in the previous step. Replace TGW_ID with the ID of your TGW.

aws ec2 create-transit-gateway-vpc-attachment \
  --vpc-id VPC_ID \
  --subnet-ids SUBNET_ID1 SUBNET_ID2 \
  --transit-gateway-id TGW_ID
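
You can confirm the attachment has reached the available state before moving on; the following check is illustrative:

aws ec2 describe-transit-gateway-vpc-attachments \
  --filters Name=vpc-id,Values=VPC_ID \
  --query 'TransitGatewayVpcAttachments[].State'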

Virtual Private Gateway

Run the following command to attach a virtual private gateway. Replace VPN_ID with the ID of your VGW. Replace VPC_ID with the ID of the VPC.

aws ec2 attach-vpn-gateway \
  --vpn-gateway-id VPN_ID \
  --vpc-id VPC_ID

(Optional) Step 4: Create route table

You can modify the main route table for the VPC or you can create a custom route table. The following steps create a custom route table with the routes to on-premises node and pod CIDRs. For more information, see Subnet route tables. Replace VPC_ID with the ID of the VPC.

aws ec2 create-route-table --vpc-id VPC_ID
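
As with the earlier steps, you can capture the route table ID for the commands that follow (illustrative):

# Create the route table and store its ID for Step 5 and Step 6.
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)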

Step 5: Create routes for on-premises nodes and pods

Create routes in the route table for each of your on-premises node and optionally pod CIDRs. You can modify the main route table for the VPC or use the custom route table you created in the previous step.

The examples below show how to create routes for your on-premises node and pod CIDRs. In the examples, a transit gateway (TGW) is used to connect the VPC with the on-premises environment. If you have multiple on-premises node and pod CIDRs, repeat the steps for each CIDR.

  • If you are using an internet gateway or a virtual private gateway (VGW), replace --transit-gateway-id with --gateway-id.

  • Replace RT_ID with the ID of the route table you created in the previous step.

  • Replace REMOTE_NODE_CIDR with the CIDR range you will use for your hybrid nodes.

  • Replace REMOTE_POD_CIDR with the CIDR range you will use for the pods running on hybrid nodes. The pod CIDR range corresponds to the Container Network Interface (CNI) configuration, which most commonly uses an overlay network on-premises. For more information, see Configure a CNI for hybrid nodes.

  • Replace TGW_ID with the ID of your TGW.

Remote node network

aws ec2 create-route \
  --route-table-id RT_ID \
  --destination-cidr-block REMOTE_NODE_CIDR \
  --transit-gateway-id TGW_ID

Remote Pod network

aws ec2 create-route \
  --route-table-id RT_ID \
  --destination-cidr-block REMOTE_POD_CIDR \
  --transit-gateway-id TGW_ID
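
To verify the routes were created, you can inspect the route table; this check is illustrative:

aws ec2 describe-route-tables --route-table-ids RT_ID \
  --query 'RouteTables[].Routes[]'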

(Optional) Step 6: Associate subnets with route table

If you created a custom route table in the previous step, associate each of the subnets you created in Step 2 with your custom route table. If you are modifying the VPC main route table, the subnets are automatically associated with it and you can skip this step.

Run the following command for each of the subnets you created in the previous steps. Replace RT_ID with the ID of the route table you created in the previous step. Replace SUBNET_ID with the ID of a subnet.

aws ec2 associate-route-table --route-table-id RT_ID --subnet-id SUBNET_ID

Cluster security group configuration

The following access for your Amazon EKS cluster security group is required for ongoing cluster operations.

| Type | Protocol | Direction | Port | Source | Destination | Usage |
| --- | --- | --- | --- | --- | --- | --- |
| HTTPS | TCP | Inbound | 443 | Remote Node CIDR(s) | N/A | Kubelet to Kubernetes API server |
| HTTPS | TCP | Inbound | 443 | Remote Pod CIDR(s) | N/A | Pods requiring access to the Kubernetes API server when the CNI is not using NAT for the pod traffic |
| HTTPS | TCP | Outbound | 10250 | N/A | Remote Node CIDR(s) | Kubernetes API server to Kubelet |
| HTTPS | TCP | Outbound | Webhook ports | N/A | Remote Pod CIDR(s) | Kubernetes API server to webhooks (if running webhooks on hybrid nodes) |

To create a security group with the inbound access rules, run the following commands. This security group must be passed when you create your Amazon EKS cluster. By default, the command below creates a security group that allows all outbound access. You can restrict outbound access to include only the rules above. If you’re considering limiting the outbound rules, we recommend that you thoroughly test all of your applications and pod connectivity before you apply your changed rules to a production cluster.

  • In the first command, replace SG_NAME with a name for your security group.

  • In the first command, replace VPC_ID with the ID of the VPC you created in the previous step.

  • In the second command, replace SG_ID with the ID of the security group you created in the first command.

  • In the second command, replace REMOTE_NODE_CIDR and REMOTE_POD_CIDR with the values for your hybrid nodes and on-premises network.

aws ec2 create-security-group \
  --group-name SG_NAME \
  --description "security group for hybrid nodes" \
  --vpc-id VPC_ID

aws ec2 authorize-security-group-ingress \
  --group-id SG_ID \
  --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "REMOTE_NODE_CIDR"}, {"CidrIp": "REMOTE_POD_CIDR"}]}]'
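
The following sketch shows the same two commands with the security group ID captured for reuse, for example when you pass it in securityGroupIds during cluster creation; the placeholder names match those above:

# Create the security group and store its ID for the ingress rule and cluster creation.
SG_ID=$(aws ec2 create-security-group \
  --group-name SG_NAME \
  --description "security group for hybrid nodes" \
  --vpc-id VPC_ID \
  --query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "REMOTE_NODE_CIDR"}, {"CidrIp": "REMOTE_POD_CIDR"}]}]'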