


# Configure networking for Amazon EKS clusters
<a name="eks-networking"></a>

Your Amazon EKS cluster is created in a VPC. Pod networking is provided by the Amazon VPC Container Network Interface (CNI) plugin for nodes that run on AWS infrastructure. If you are running nodes on your own infrastructure, see [Configure CNI for hybrid nodes](hybrid-nodes-cni.md). This chapter includes the following topics for learning more about networking for your cluster.

**Topics**
+ [Add an existing VPC Subnet to an Amazon EKS cluster from the management console](#add-existing-subnet)
+ [View Amazon EKS networking requirements for VPC and subnets](network-reqs.md)
+ [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md)
+ [View Amazon EKS security group requirements for clusters](sec-group-reqs.md)
+ [Manage networking add-ons for Amazon EKS clusters](eks-networking-add-ons.md)

## Add an existing VPC Subnet to an Amazon EKS cluster from the management console
<a name="add-existing-subnet"></a>

1. Navigate to your cluster in the management console.

1. From the **Networking** tab, select **Manage VPC Resources**.

1. From the **Subnets** dropdown, select additional subnets from the VPC of your cluster.

To create a new VPC Subnet:
+ Review the [Amazon EKS subnet requirements](network-reqs.md#network-requirements-subnets).
+ See [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the Amazon VPC User Guide.
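When you create a new subnet, pick a CIDR block inside the VPC that doesn't collide with the subnets the VPC already contains. A minimal sketch of that check using Python's standard `ipaddress` module; every CIDR range below is a hypothetical example:

```python
import ipaddress

# Hypothetical VPC and existing subnet CIDRs; substitute your own.
vpc = ipaddress.ip_network("10.0.0.0/16")
existing = [ipaddress.ip_network(c) for c in ("10.0.0.0/24", "10.0.1.0/24")]

# Find the first /24 inside the VPC block that doesn't overlap an existing subnet.
candidate = next(
    block
    for block in vpc.subnets(new_prefix=24)
    if not any(block.overlaps(e) for e in existing)
)
print("next free /24:", candidate)
```

The chosen block can then be supplied when you create the subnet in the console or CLI.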

# View Amazon EKS networking requirements for VPC and subnets
<a name="network-reqs"></a>

When you create a cluster, you specify a [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html) and at least two subnets that are in different Availability Zones. This topic provides an overview of Amazon EKS specific requirements and considerations for the VPC and subnets that you use with your cluster. If you don’t have a VPC to use with Amazon EKS, see [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md). If you’re creating a local or extended cluster on AWS Outposts, see [Create a VPC and subnets for Amazon EKS clusters on AWS Outposts](eks-outposts-vpc-subnet-requirements.md) instead of this topic. The content in this topic applies for Amazon EKS clusters with hybrid nodes. For additional networking requirements for hybrid nodes, see [Prepare networking for hybrid nodes](hybrid-nodes-networking.md).

## VPC requirements and considerations
<a name="network-requirements-vpc"></a>

When you create a cluster, the VPC that you specify must meet the following requirements and considerations:
+ The VPC must have a sufficient number of IP addresses available for the cluster, any nodes, and other Kubernetes resources that you want to create. If the VPC that you want to use doesn’t have a sufficient number of IP addresses, try to increase the number of available IP addresses.

  You can do this by updating the cluster configuration to change which subnets and security groups the cluster uses. You can update from the AWS Management Console, the latest version of the AWS CLI, AWS CloudFormation, or `eksctl` version `v0.164.0-rc.0` or later. You might need to do this to provide subnets with more available IP addresses so that a cluster version upgrade succeeds.
**Important**  
All subnets that you add must be in the same set of Availability Zones as originally provided when you created the cluster. New subnets must satisfy all of the other requirements; for example, they must have sufficient IP addresses.  
For example, assume that you made a cluster and specified four subnets. In the order that you specified them, the first subnet is in the `us-west-2a` Availability Zone, the second and third subnets are in the `us-west-2b` Availability Zone, and the fourth subnet is in the `us-west-2c` Availability Zone. If you want to change the subnets, you must provide at least one subnet in each of the three Availability Zones, and the subnets must be in the same VPC as the original subnets.

  If you need more IP addresses than the CIDR blocks in the VPC have, you can add additional CIDR blocks by [associating additional Classless Inter-Domain Routing (CIDR) blocks](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#add-ipv4-cidr) with your VPC. You can associate private (RFC 1918) and public (non-RFC 1918) CIDR blocks to your VPC either before or after you create your cluster.

  You can add nodes that use the new CIDR block immediately after you add it. However, because the control plane recognizes the new CIDR block only after reconciliation is complete, it can take up to one hour for a CIDR block that you associate with a VPC to be recognized. Then you can run the `kubectl attach`, `kubectl cp`, `kubectl exec`, `kubectl logs`, and `kubectl port-forward` commands (these commands use the `kubelet` API) for nodes and Pods in the new CIDR block. Also, if you have Pods that operate as a webhook backend, you must wait for the control plane reconciliation to complete.
+ Avoid IP address range overlaps when you connect your EKS cluster to other VPCs through Transit Gateway, VPC peering, or other networking configurations. CIDR conflicts occur when your EKS cluster’s service CIDR overlaps with the CIDR of a connected VPC. In these scenarios, Service IP addresses take priority over resources in connected VPCs with the same IP address, although traffic routing can become unpredictable and applications may fail to connect to intended resources.

  To avoid CIDR conflicts, ensure your EKS service CIDR doesn’t overlap with any connected VPC CIDRs and maintain a centralized record of all CIDR assignments. If you encounter CIDR overlaps, you can use a transit gateway with a shared services VPC. For more information, see [Isolated VPCs with shared services](https://docs.aws.amazon.com/vpc/latest/tgw/transit-gateway-isolated-shared.html) and [Amazon EKS VPC routable IP address conservation patterns in a hybrid network](https://aws.amazon.com/blogs/containers/eks-vpc-routable-ip-address-conservation). Also, refer to the Communication across VPCs section of the [VPC and Subnet Considerations](https://docs.aws.amazon.com/eks/latest/best-practices/subnets.html) page in the EKS Best Practices Guide.
+ If you want Kubernetes to assign `IPv6` addresses to Pods and services, associate an `IPv6` CIDR block with your VPC. For more information, see [Associate an IPv6 CIDR block with your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#vpc-associate-ipv6-cidr) in the Amazon VPC User Guide. You cannot use `IPv6` addresses with Pods and services running on hybrid nodes and you cannot use hybrid nodes with clusters configured with the `IPv6` IP address family.
+ The VPC must have `DNS` hostname and `DNS` resolution support. Otherwise, nodes can’t register to your cluster. For more information, see [DNS attributes for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the Amazon VPC User Guide.
+ The VPC might require VPC endpoints using AWS PrivateLink. For more information, see [Subnet requirements and considerations](#network-requirements-subnets).
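The service-CIDR overlap check described above is straightforward to script. A minimal sketch using Python's standard `ipaddress` module; `172.20.0.0/16` is a commonly used EKS service CIDR, and the peer VPC ranges are hypothetical:

```python
import ipaddress

def conflicts(service_cidr: str, peer_cidrs: list[str]) -> list[str]:
    """Return the peer VPC CIDRs that overlap the cluster's service CIDR."""
    svc = ipaddress.ip_network(service_cidr)
    return [c for c in peer_cidrs if svc.overlaps(ipaddress.ip_network(c))]

# The second peer range falls inside the service CIDR and is flagged.
print(conflicts("172.20.0.0/16", ["10.0.0.0/16", "172.20.64.0/18"]))
```

Running a check like this against your centralized CIDR record before peering VPCs or attaching them to a transit gateway catches conflicts before traffic becomes unpredictable.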

If you created a cluster with Kubernetes `1.14` or earlier, Amazon EKS added the following tag to your VPC:


| Key | Value |
| --- | --- |
| `kubernetes.io/cluster/my-cluster` | `owned` |

This tag was only used by Amazon EKS. You can remove the tag without impacting your services. It’s not used with clusters that are version `1.15` or later.

## Subnet requirements and considerations
<a name="network-requirements-subnets"></a>

When you create a cluster, Amazon EKS creates 2–4 [elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) in the subnets that you specify. These network interfaces enable communication between your cluster and your VPC. They also enable Kubernetes features that use the `kubelet` API, such as the `kubectl attach`, `kubectl cp`, `kubectl exec`, `kubectl logs`, and `kubectl port-forward` commands. Each network interface that Amazon EKS creates has the text `Amazon EKS cluster-name` in its description.

Amazon EKS can create its network interfaces in any subnet that you specify when you create a cluster. You can change which subnets Amazon EKS creates its network interfaces in after your cluster is created. When you update the Kubernetes version of a cluster, Amazon EKS deletes the original network interfaces that it created, and creates new network interfaces. These network interfaces might be created in the same subnets as the original network interfaces or in different subnets than the original network interfaces. To control which subnets network interfaces are created in, you can limit the number of subnets you specify to only two when you create a cluster or update the subnets after creating the cluster.

### Subnet requirements for clusters
<a name="cluster-subnets"></a>

The [subnets](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-types) that you specify when you create or update a cluster must meet the following requirements:
+ The subnets must each have at least six IP addresses for use by Amazon EKS. However, we recommend at least 16 IP addresses.
+ The subnets must be in at least two different Availability Zones.
+ The subnets can’t reside in AWS Outposts or AWS Wavelength. However, if you have them in your VPC, you can deploy self-managed nodes and Kubernetes resources to these types of subnets. For more information about self-managed nodes, see [Maintain nodes yourself with self-managed nodes](worker.md).
+ The subnets can be public or private. However, we recommend that you specify private subnets, if possible. A public subnet is a subnet with a route table that includes a route to an [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html), whereas a private subnet is a subnet with a route table that doesn’t include a route to an internet gateway.
+ The subnets can’t reside in the following Availability Zones:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/network-reqs.html)
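To put the six-address minimum in context: AWS reserves the first four and the last IP address in every subnet, so a subnet's usable capacity is its total size minus five. A quick sketch with Python's `ipaddress` module:

```python
import ipaddress

AWS_RESERVED = 5  # the first four addresses plus the last one in each AWS subnet

def usable_addresses(cidr: str) -> int:
    """IP addresses in an AWS subnet that are actually available to resources."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED

for cidr in ("10.0.0.0/28", "10.0.0.0/27", "10.0.0.0/24"):
    print(cidr, "->", usable_addresses(cidr), "usable addresses")
```

A `/28` leaves 11 usable addresses, which meets the six-address minimum but falls short of the recommended 16; a `/27` or larger clears both bars.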

### IP address family usage by component
<a name="network-requirements-ip-table"></a>

The following table shows the IP address family used by each component of Amazon EKS. You can use network address translation (NAT) or another compatibility system to connect to these components from source IP addresses in families marked "No" in the table.

Functionality can differ depending on the IP family (`ipFamily`) setting of the cluster. This setting changes the type of IP addresses used for the CIDR block that Kubernetes assigns to Services. A cluster with the setting value of IPv4 is referred to as an *IPv4 cluster*, and a cluster with the setting value of IPv6 is referred to as an *IPv6 cluster*.


| Component | IPv4 addresses | IPv6 addresses | Dual stack addresses |
| --- | --- | --- | --- |
| EKS API public endpoint | Yes¹,³ | Yes¹,³ | Yes¹,³ |
| EKS API VPC endpoint | Yes | No | No |
| EKS Auth API public endpoint (EKS Pod Identity) | Yes¹ | Yes¹ | Yes¹ |
| EKS Auth API VPC endpoint (EKS Pod Identity) | Yes¹ | Yes¹ | Yes¹ |
| `IPv4` Kubernetes cluster public endpoint² | Yes | No | No |
| `IPv4` Kubernetes cluster private endpoint² | Yes | No | No |
| `IPv6` Kubernetes cluster public endpoint² | Yes¹,⁴ | Yes¹,⁴ | Yes⁴ |
| `IPv6` Kubernetes cluster private endpoint² | Yes¹,⁴ | Yes¹,⁴ | Yes⁴ |
| Kubernetes cluster subnets | Yes² | No | Yes² |
| Node primary IP addresses | Yes² | No | Yes² |
| Cluster CIDR range for Service IP addresses | Yes² | Yes² | No |
| Pod IP addresses from the VPC CNI | Yes² | Yes² | No |
| IRSA OIDC Issuer URLs | Yes¹,³ | Yes¹,³ | Yes¹,³ |

**Note**  
¹ The endpoint is dual stack with both `IPv4` and `IPv6` addresses. Your applications outside of AWS, your nodes for the cluster, and your Pods inside the cluster can reach this endpoint by either `IPv4` or `IPv6`.  
² You choose between an `IPv4` cluster and an `IPv6` cluster in the IP family (`ipFamily`) setting of the cluster when you create it, and this can’t be changed. Instead, you must choose a different setting when you create another cluster and migrate your workloads.  
³ The dual-stack endpoint was introduced in August 2024. To use the dual-stack endpoints with the AWS CLI, see the [Dual-stack and FIPS endpoints](https://docs.aws.amazon.com/sdkref/latest/guide/feature-endpoints.html) configuration in the *AWS SDKs and Tools Reference Guide*. The new endpoints are:

**EKS API public endpoint**  
`eks.region.api.aws`

**IRSA OIDC Issuer URLs**  
`oidc-eks.region.api.aws`

⁴ The dual-stack cluster endpoint was introduced in October 2024. EKS creates the following endpoint for new clusters that are made after this date and that select `IPv6` in the IP family (`ipFamily`) setting of the cluster:

**EKS cluster public/private endpoint**  
`eks-cluster.region.api.aws`

### Subnet requirements for nodes
<a name="node-subnet-reqs"></a>

You can deploy nodes and Kubernetes resources to the same subnets that you specify when you create your cluster. However, this isn’t necessary, because you can also deploy nodes and Kubernetes resources to subnets that you didn’t specify when you created the cluster. If you deploy nodes to different subnets, Amazon EKS doesn’t create cluster network interfaces in those subnets. Any subnet that you deploy nodes and Kubernetes resources to must meet the following requirements:
+ The subnets must have enough available IP addresses to deploy all of your nodes and Kubernetes resources to.
+ If you want Kubernetes to assign `IPv6` addresses to Pods and services, then you must have one `IPv6` CIDR block and one `IPv4` CIDR block that are associated with your subnet. For more information, see [Associate an IPv6 CIDR block with your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-subnets.html#subnet-associate-ipv6-cidr) in the Amazon VPC User Guide. The route tables that are associated with the subnets must include routes to `IPv4` and `IPv6` addresses. For more information, see [Routes](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html#route-table-routes) in the Amazon VPC User Guide. Pods are assigned only an `IPv6` address. However the network interfaces that Amazon EKS creates for your cluster and your nodes are assigned an `IPv4` and an `IPv6` address.
+ If you need inbound access from the internet to your Pods, make sure to have at least one public subnet with enough available IP addresses to deploy load balancers and ingresses to. You can deploy load balancers to public subnets. Load balancers can load balance to Pods in private or public subnets. We recommend deploying your nodes to private subnets, if possible.
+ If you plan to deploy nodes to a public subnet, the subnet must auto-assign `IPv4` public addresses or `IPv6` addresses. If you deploy nodes to a private subnet that has an associated `IPv6` CIDR block, the private subnet must also auto-assign `IPv6` addresses. If you used the AWS CloudFormation template provided by Amazon EKS to deploy your VPC after March 26, 2020, this setting is enabled. If you used the templates to deploy your VPC before this date or you use your own VPC, you must enable this setting manually. For the template, see [Create an Amazon VPC for your Amazon EKS cluster](creating-a-vpc.md). For more information, see [Modify the public IPv4 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-subnets.html#subnet-public-ip) and [Modify the IPv6 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-subnets.html#subnet-ipv6) in the [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/).
+ If the subnet that you deploy a node to is a private subnet and its route table doesn’t include a route to a network address translation [(NAT) device](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.html) (`IPv4`) or an [egress-only gateway](https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html) (`IPv6`), add VPC endpoints using AWS PrivateLink to your VPC. VPC endpoints are needed for all the AWS services that your nodes and Pods need to communicate with. Examples include Amazon ECR, Elastic Load Balancing, Amazon CloudWatch, AWS Security Token Service, and Amazon Simple Storage Service (Amazon S3). The endpoint must include the subnet that the nodes are in. Not all AWS services support VPC endpoints. For more information, see [What is AWS PrivateLink?](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) and [AWS services that integrate with AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-privatelink-support.html). For a list of more Amazon EKS requirements, see [Deploy private clusters with limited internet access](private-clusters.md).
+ If you want to deploy load balancers to a subnet, the subnet must have the following tag:
  + Private subnets    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/network-reqs.html)
  + Public subnets    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/network-reqs.html)
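As a sketch of how those role tags are consumed, the snippet below filters subnets by tag the way a load balancer controller might. The tag keys `kubernetes.io/role/elb` (public subnets) and `kubernetes.io/role/internal-elb` (private subnets) are the ones used by the AWS Load Balancer Controller; the subnet records themselves are hypothetical:

```python
# Hypothetical subnet inventory; in practice this would come from the EC2 API.
SUBNETS = [
    {"id": "subnet-aaa", "tags": {"kubernetes.io/role/elb": "1"}},
    {"id": "subnet-bbb", "tags": {"kubernetes.io/role/internal-elb": "1"}},
    {"id": "subnet-ccc", "tags": {}},  # untagged: not used for load balancers
]

def subnets_with_role(role_tag: str) -> list[str]:
    """Return the IDs of subnets carrying the given load balancer role tag."""
    return [s["id"] for s in SUBNETS if s["tags"].get(role_tag) == "1"]

print("internet-facing:", subnets_with_role("kubernetes.io/role/elb"))
print("internal:", subnets_with_role("kubernetes.io/role/internal-elb"))
```

If a subnet is missing its role tag, it is simply skipped during discovery, which is why untagged subnets can cause load balancer provisioning to fail silently.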

When a cluster running Kubernetes version `1.18` or earlier was created, Amazon EKS added the following tag to all of the subnets that were specified.


| Key | Value |
| --- | --- |
| `kubernetes.io/cluster/my-cluster` | `shared` |

When you create a new Kubernetes cluster now, Amazon EKS doesn’t add the tag to your subnets. If the tag was on subnets used by a cluster running a version earlier than `1.19`, the tag wasn’t automatically removed from the subnets when the cluster was updated to a newer version. Version `2.1.1` or earlier of the AWS Load Balancer Controller requires this tag. If you are using a newer version of the Load Balancer Controller, you can remove the tag without interrupting your services. For more information about the controller, see [Route internet traffic with AWS Load Balancer Controller](aws-load-balancer-controller.md).

If you deployed a VPC by using `eksctl` or any of the Amazon EKS AWS CloudFormation VPC templates, the following applies:
+  **On or after March 26, 2020** – Public subnets automatically assign public `IPv4` addresses to new nodes that are deployed to them.
+  **Before March 26, 2020** – Public subnets don’t automatically assign public `IPv4` addresses to new nodes that are deployed to them.

This change impacts new node groups that are deployed to public subnets in the following ways:
+  ** [Managed node groups](create-managed-node-group.md) ** – If the node group is deployed to a public subnet on or after April 22, 2020, automatic assignment of public IP addresses must be enabled for the public subnet. For more information, see [Modifying the public IPv4 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-public-ip).
+  ** [Linux](launch-workers.md), [Windows](launch-windows-workers.md), or [Arm](eks-optimized-ami.md#arm-ami) self-managed node groups** – If the node group is deployed to a public subnet on or after March 26, 2020, automatic assignment of public IP addresses must be enabled for the public subnet. Otherwise, the nodes must be launched with a public IP address instead. For more information, see [Modifying the public IPv4 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-public-ip) or [Assigning a public IPv4 address during instance launch](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#vpc-public-ip).

## Shared subnet requirements and considerations
<a name="network-requirements-shared"></a>

You can use *VPC sharing* to share subnets with other AWS accounts within the same organization in AWS Organizations. You can create Amazon EKS clusters in shared subnets, with the following considerations:
+ The owner of the VPC subnet must share a subnet with a participant account before that account can create an Amazon EKS cluster in it.
+ You can’t launch resources using the default security group for the VPC because it belongs to the owner. Additionally, participants can’t launch resources using security groups that are owned by other participants or the owner.
+ In a shared subnet, the participant and the owner separately control the security groups within their respective accounts. The subnet owner can see security groups that are created by the participants but cannot perform any actions on them. If the subnet owner wants to remove or modify these security groups, the participant that created the security group must take the action.
+ If a cluster is created by a participant, the following considerations apply:
  + The cluster IAM role and node IAM roles must be created in that account. For more information, see [Amazon EKS cluster IAM role](cluster-iam-role.md) and [Amazon EKS node IAM role](create-node-role.md).
  + All nodes must be created by the same participant, including managed node groups.
+ The shared VPC owner cannot view, update, or delete a cluster that a participant creates in the shared subnet. This is in addition to the differences in access that each account has to VPC resources. For more information, see [Responsibilities and permissions for owners and participants](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html#vpc-share-limitations) in the *Amazon VPC User Guide*.
+ If you use the *custom networking* feature of the Amazon VPC CNI plugin for Kubernetes, you need to use the Availability Zone ID mappings listed in the owner account to create each `ENIConfig`. For more information, see [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md).

For more information about VPC subnet sharing, see [Share your VPC with other accounts](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html#vpc-share-limitations) in the *Amazon VPC User Guide*.

# Create an Amazon VPC for your Amazon EKS cluster
<a name="creating-a-vpc"></a>

You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you might operate in your own data center. However, it comes with the benefits of using the scalable infrastructure of Amazon Web Services. We recommend that you have a thorough understanding of the Amazon VPC service before deploying production Amazon EKS clusters. For more information, see the [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/).

An Amazon EKS cluster, nodes, and Kubernetes resources are deployed to a VPC. If you want to use an existing VPC with Amazon EKS, that VPC must meet the requirements that are described in [View Amazon EKS networking requirements for VPC and subnets](network-reqs.md). This topic describes how to create a VPC that meets Amazon EKS requirements using an Amazon EKS provided AWS CloudFormation template. Once you’ve deployed a template, you can view the resources created by the template to know exactly what resources it created, and the configuration of those resources. If you are using hybrid nodes, your VPC must have routes in its route table for your on-premises network. For more information about the network requirements for hybrid nodes, see [Prepare networking for hybrid nodes](hybrid-nodes-networking.md).

## Prerequisites
<a name="_prerequisites"></a>

To create a VPC for Amazon EKS, you must have the necessary IAM permissions to create Amazon VPC resources. These resources are VPCs, subnets, security groups, route tables and routes, and internet and NAT gateways. For more information, see [Create a VPC with a public subnet example policy](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-policy-examples.html#vpc-public-subnet-iam) in the Amazon VPC User Guide and the full list of [Actions](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonec2.html#amazonec2-actions-as-permissions) in the [Service Authorization Reference](https://docs.aws.amazon.com/service-authorization/latest/reference/reference.html).

You can create a VPC with public and private subnets, only public subnets, or only private subnets.

## Public and private subnets
<a name="_public_and_private_subnets"></a>

This VPC has two public and two private subnets. A public subnet’s associated route table has a route to an internet gateway. However, the route table of a private subnet doesn’t have a route to an internet gateway. One public and one private subnet are deployed to the same Availability Zone. The other public and private subnets are deployed to a second Availability Zone in the same AWS Region. We recommend this option for most deployments.

With this option, you can deploy your nodes to private subnets. This option allows Kubernetes to deploy load balancers to the public subnets that can load balance traffic to Pods that run on nodes in the private subnets. Public `IPv4` addresses are automatically assigned to nodes that are deployed to public subnets, but public `IPv4` addresses aren’t assigned to nodes deployed to private subnets.

You can also assign `IPv6` addresses to nodes in public and private subnets. The nodes in private subnets can communicate with the cluster and other AWS services. Pods can communicate with the internet through a NAT gateway using `IPv4` addresses or an egress-only internet gateway using `IPv6` addresses deployed in each Availability Zone. A security group is deployed that has rules that deny all inbound traffic from sources other than the cluster or nodes but allows all outbound traffic. The subnets are tagged so that Kubernetes can deploy load balancers to them.

1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).

1. From the navigation bar, select an AWS Region that supports Amazon EKS.

1. Choose **Create stack**, **With new resources (standard)**.

1. Under **Prerequisite - Prepare template**, make sure that **Template is ready** is selected and then under **Specify template**, select **Amazon S3 URL**.

1. You can create a VPC that supports only `IPv4`, or a VPC that supports `IPv4` and `IPv6`. Paste one of the following URLs into the text area under **Amazon S3 URL** and choose **Next**:
   +  `IPv4` 

```
https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
```
+  `IPv4` and `IPv6` 

```
https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-ipv6-vpc-public-private-subnets.yaml
```

1. On the **Specify stack details** page, enter the parameters, and then choose **Next**.
   +  **Stack name**: Choose a stack name for your AWS CloudFormation stack. For example, you can use the template name you used in the previous step. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   +  **VpcBlock**: Choose an `IPv4` CIDR range for your VPC. Each node, Pod, and load balancer that you deploy is assigned an `IPv4` address from this block. The default `IPv4` values provide enough IP addresses for most implementations, but if they don’t, you can change them. For more information, see [VPC and subnet sizing](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#VPC_Sizing) in the Amazon VPC User Guide. You can also add additional CIDR blocks to the VPC once it’s created. If you’re creating an `IPv6` VPC, `IPv6` CIDR ranges are automatically assigned for you from Amazon’s Global Unicast Address space.
   +  **PublicSubnet01Block**: Specify an `IPv4` CIDR block for public subnet 1. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it. If you’re creating an `IPv6` VPC, this block is specified for you within the template.
   +  **PublicSubnet02Block**: Specify an `IPv4` CIDR block for public subnet 2. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it. If you’re creating an `IPv6` VPC, this block is specified for you within the template.
   +  **PrivateSubnet01Block**: Specify an `IPv4` CIDR block for private subnet 1. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it. If you’re creating an `IPv6` VPC, this block is specified for you within the template.
   +  **PrivateSubnet02Block**: Specify an `IPv4` CIDR block for private subnet 2. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it. If you’re creating an `IPv6` VPC, this block is specified for you within the template.
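The four subnet parameters above carve up the **VpcBlock**. As a sketch of how the address space divides, the snippet below splits a VPC block into four equal subnets with Python's `ipaddress` module; the `192.168.0.0/16` block and the `/18` split are illustrative, not necessarily the template's exact defaults:

```python
import ipaddress

vpc = ipaddress.ip_network("192.168.0.0/16")  # illustrative VpcBlock
names = ["PublicSubnet01", "PublicSubnet02", "PrivateSubnet01", "PrivateSubnet02"]

# Four equal /18 blocks, one per subnet parameter.
plan = dict(zip(names, (str(b) for b in vpc.subnets(new_prefix=18))))
for name, block in plan.items():
    print(f"{name}: {block}")
```

Keeping the four blocks equal and non-overlapping like this is what lets you size each subnet up front and avoid collisions when you later add CIDR blocks to the VPC.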

1. (Optional) On the **Configure stack options** page, tag your stack resources and then choose **Next**.

1. On the **Review** page, choose **Create stack**.

1. When your stack is created, select it in the console and choose **Outputs**.

1. Record the **VpcId** for the VPC that was created. You need this when you create your cluster and nodes.

1. Record the **SubnetIds** for the subnets that were created and whether you created them as public or private subnets. You need at least two of these when you create your cluster and nodes.

1. If you created an `IPv4` VPC, skip this step. If you created an `IPv6` VPC, you must enable the auto-assign `IPv6` address option for the public subnets that were created by the template. That setting is already enabled for the private subnets. To enable the setting, complete the following steps:

   1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.

   1. In the left navigation pane, choose **Subnets** 

   1. Select one of your public subnets (**_stack-name_/SubnetPublic01** or **_stack-name_/SubnetPublic02**; the name contains the word **Public**) and choose **Actions**, **Edit subnet settings**.

   1. Choose the **Enable auto-assign IPv6 address** check box and then choose **Save**.

   1. Complete the previous steps again for your other public subnet.

## Only public subnets
<a name="_only_public_subnets"></a>

This VPC has three public subnets that are deployed into different Availability Zones in an AWS Region. All nodes are automatically assigned public `IPv4` addresses and can send and receive internet traffic through an [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html). A [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) is deployed that denies all inbound traffic and allows all outbound traffic. The subnets are tagged so that Kubernetes can deploy load balancers to them.

1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).

1. From the navigation bar, select an AWS Region that supports Amazon EKS.

1. Choose **Create stack**, **With new resources (standard)**.

1. Under **Prepare template**, make sure that **Template is ready** is selected and then under **Template source**, select **Amazon S3 URL**.

1. Paste the following URL into the text area under **Amazon S3 URL** and choose **Next**:

```
https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-sample.yaml
```

1. On the **Specify Details** page, enter the parameters, and then choose **Next**.
   +  **Stack name**: Choose a stack name for your AWS CloudFormation stack. For example, you can call it *amazon-eks-vpc-sample*. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   +  **VpcBlock**: Choose a CIDR block for your VPC. Each node, Pod, and load balancer that you deploy is assigned an `IPv4` address from this block. The default `IPv4` value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it. For more information, see [VPC and subnet sizing](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#VPC_Sizing) in the Amazon VPC User Guide. You can also add additional CIDR blocks to the VPC once it’s created.
   +  **Subnet01Block**: Specify a CIDR block for subnet 1. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it.
   +  **Subnet02Block**: Specify a CIDR block for subnet 2. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it.
   +  **Subnet03Block**: Specify a CIDR block for subnet 3. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it.

1. (Optional) On the **Options** page, tag your stack resources. Choose **Next**.

1. On the **Review** page, choose **Create**.

1. When your stack is created, select it in the console and choose **Outputs**.

1. Record the **VpcId** for the VPC that was created. You need this when you create your cluster and nodes.

1. Record the **SubnetIds** for the subnets that were created. You need at least two of these when you create your cluster and nodes.

1. (Optional) Any cluster that you deploy to this VPC can assign private `IPv4` addresses to your Pods and services. If you want clusters that you deploy to this VPC to assign private `IPv6` addresses to your Pods and services, update your VPC, subnets, route tables, and security groups. For more information, see [Migrate existing VPCs from IPv4 to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the Amazon VPC User Guide. Amazon EKS requires that your subnets have the **Auto-assign IPv6 address** option enabled. By default, it’s disabled.
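You can also read the **VpcId** and **SubnetIds** values with the AWS CLI instead of copying them from the console. A sketch, assuming the example stack name *amazon-eks-vpc-sample* used in this procedure (the `echo` keeps each query a dry run; remove it to execute):

```
stack="amazon-eks-vpc-sample"   # the example stack name from this procedure

# Dry run: print the queries for VpcId and SubnetIds. Remove "echo" to run them.
for key in VpcId SubnetIds; do
  echo aws cloudformation describe-stacks --stack-name "$stack" \
    --query "Stacks[0].Outputs[?OutputKey=='$key'].OutputValue" --output text
done
```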

## Only private subnets
<a name="_only_private_subnets"></a>

This VPC has three private subnets that are deployed into different Availability Zones in the AWS Region. Resources that are deployed to the subnets can’t access the internet, nor can the internet access resources in the subnets. The template creates [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-aws-services.html) using AWS PrivateLink for several AWS services that nodes typically need to access. If your nodes need outbound internet access, you can add a public [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the Availability Zone of each subnet after the VPC is created. A [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) is created that denies all inbound traffic, except from resources deployed into the subnets. A security group also allows all outbound traffic. The subnets are tagged so that Kubernetes can deploy internal load balancers to them. If you’re creating a VPC with this configuration, see [Deploy private clusters with limited internet access](private-clusters.md) for additional requirements and considerations.

1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).

1. From the navigation bar, select an AWS Region that supports Amazon EKS.

1. Choose **Create stack**, **With new resources (standard)**.

1. Under **Prepare template**, make sure that **Template is ready** is selected and then under **Template source**, select **Amazon S3 URL**.

1. Paste the following URL into the text area under **Amazon S3 URL** and choose **Next**:

```
https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-fully-private-vpc.yaml
```

1. On the **Specify Details** page, enter the parameters and then choose **Next**.
   +  **Stack name**: Choose a stack name for your AWS CloudFormation stack. For example, you can call it *amazon-eks-fully-private-vpc*. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   +  **VpcBlock**: Choose a CIDR block for your VPC. Each node, Pod, and load balancer that you deploy is assigned an `IPv4` address from this block. The default `IPv4` value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it. For more information, see [VPC and subnet sizing](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#VPC_Sizing) in the Amazon VPC User Guide. You can also add additional CIDR blocks to the VPC once it’s created.
   +  **PrivateSubnet01Block**: Specify a CIDR block for subnet 1. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it.
   +  **PrivateSubnet02Block**: Specify a CIDR block for subnet 2. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it.
   +  **PrivateSubnet03Block**: Specify a CIDR block for subnet 3. The default value provides enough IP addresses for most implementations, but if it doesn’t, then you can change it.

1. (Optional) On the **Options** page, tag your stack resources. Choose **Next**.

1. On the **Review** page, choose **Create**.

1. When your stack is created, select it in the console and choose **Outputs**.

1. Record the **VpcId** for the VPC that was created. You need this when you create your cluster and nodes.

1. Record the **SubnetIds** for the subnets that were created. You need at least two of these when you create your cluster and nodes.

1. (Optional) Any cluster that you deploy to this VPC can assign private `IPv4` addresses to your Pods and services. If you want clusters that you deploy to this VPC to assign private `IPv6` addresses to your Pods and services, update your VPC, subnets, route tables, and security groups. For more information, see [Migrate existing VPCs from IPv4 to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the Amazon VPC User Guide. Amazon EKS requires that your subnets have the **Auto-assign IPv6 address** option enabled (it’s disabled by default).
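If you later decide that nodes in these private subnets need outbound internet access, the NAT gateway mentioned earlier can be sketched as two CLI calls. All IDs below are placeholders, the public subnet that hosts the gateway must already exist, and the `echo` keeps each command a dry run (remove it to execute):

```
# Placeholder IDs; replace every value with real resources in your VPC.
public_subnet="subnet-pub11111"   # a public subnet to host the NAT gateway
eip_alloc="eipalloc-22222222"     # from: aws ec2 allocate-address --domain vpc
route_table="rtb-33333333"        # the route table of a private subnet
nat_gw="nat-44444444"             # the NatGatewayId returned by create-nat-gateway

# Dry run: print the commands. Remove "echo" to run them for real.
echo aws ec2 create-nat-gateway --subnet-id "$public_subnet" --allocation-id "$eip_alloc"
echo aws ec2 create-route --route-table-id "$route_table" \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$nat_gw"
```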

# View Amazon EKS security group requirements for clusters
<a name="sec-group-reqs"></a>

This topic describes the security group requirements of an Amazon EKS cluster.

## Default cluster security group
<a name="security-group-default-rules"></a>

When you create a cluster, Amazon EKS creates a security group that’s named `eks-cluster-sg-my-cluster-uniqueID`. This security group has the following default rules:


| Rule type | Protocol | Ports | Source | Destination | 
| --- | --- | --- | --- | --- | 
|  Inbound  |  All  |  All  |  Self  |  | 
|  Outbound  |  All  |  All  |  |  0.0.0.0/0(`IPv4`) or ::/0 (`IPv6`)  | 
|  Outbound  |  All  |  All  |  |  Self (for EFA traffic)  | 

The default security group includes an outbound rule that allows Elastic Fabric Adapter (EFA) traffic with the destination of the same security group. This enables EFA traffic within the cluster, which is beneficial for AI/ML and High Performance Computing (HPC) workloads. For more information, see [Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html) in the *Amazon Elastic Compute Cloud User Guide*.

**Important**  
If your cluster doesn’t need the outbound rule to `0.0.0.0/0` (`IPv4`) or `::/0` (`IPv6`), you can remove it. If you remove it, you must still have the minimum rules listed in [Restricting cluster traffic](#security-group-restricting-cluster-traffic). If you remove the inbound or outbound rules that allow traffic to and from the cluster security group itself, Amazon EKS recreates them whenever the cluster is updated.

Amazon EKS adds the following tags to the security group. If you remove the tags, Amazon EKS adds them back to the security group whenever your cluster is updated.


| Key | Value | 
| --- | --- | 
|   `kubernetes.io/cluster/my-cluster`   |   `owned`   | 
|   `aws:eks:cluster-name`   |   *my-cluster*   | 
|   `Name`   |   `eks-cluster-sg-my-cluster-uniqueid`   | 

Amazon EKS automatically associates this security group to the following resources that it also creates:
+ 2–4 elastic network interfaces (referred to in the rest of this document as *network interfaces*) that are created when you create your cluster.
+ Network interfaces of the nodes in any managed node group that you create.

The default rules allow all traffic to flow freely between your cluster and nodes, and allow all outbound traffic to any destination. When you create a cluster, you can (optionally) specify your own security groups. If you do, then Amazon EKS also associates the security groups that you specify to the network interfaces that it creates for your cluster. However, it doesn’t associate them to any node groups that you create.

You can determine the ID of your cluster security group in the AWS Management Console under the cluster’s **Networking** section. Or, you can do so by running the following AWS CLI command.

```
aws eks describe-cluster --name my-cluster --query cluster.resourcesVpcConfig.clusterSecurityGroupId
```
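Once you have the security group ID, you can inspect its current rules with `aws ec2 describe-security-groups`. A dry-run sketch that chains the two commands; the group ID is a placeholder, and in real use you’d capture the first command’s output instead (remove the `echo` lines to execute):

```
cluster="my-cluster"   # replace with your cluster name

# Dry run: print the commands. Remove "echo" to run them, and capture the
# first command's output into sg_id instead of using the placeholder below.
echo aws eks describe-cluster --name "$cluster" \
  --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text
sg_id="sg-0123456789abcdef0"   # placeholder for the returned security group ID
echo aws ec2 describe-security-groups --group-ids "$sg_id" \
  --query "SecurityGroups[0].[IpPermissions,IpPermissionsEgress]"
```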

## Restricting cluster traffic
<a name="security-group-restricting-cluster-traffic"></a>

If you need to limit the open ports between the EKS control plane and nodes, you can remove the [default outbound rule](#security-group-default-rules) to `0.0.0.0/0` (IPv4)/`::/0` (IPv6) and add the following minimum rules that are required for the cluster.

If you remove the [default inbound rule](#security-group-default-rules) that allows all traffic for source self (traffic from the cluster security group), Amazon EKS recreates it when the cluster is updated.

If you remove the [default outbound rule](#security-group-default-rules) that allows all traffic for destination self (traffic to the cluster security group), Amazon EKS recreates it when the cluster is updated.


| Rule type | Protocol | Port | Destination | 
| --- | --- | --- | --- | 
|  Outbound  |  TCP  |  443  |  Cluster security group  | 
|  Outbound  |  TCP  |  10250  |  Cluster security group  | 
|  Outbound (DNS)  |  TCP and UDP  |  53  |  Cluster security group  | 

You must also add rules for the following traffic:
+ Any protocol and ports that you expect your nodes to use for inter-node communication.
+ Outbound internet access so that nodes can access the Amazon EKS APIs for cluster introspection and node registration at launch time. If your nodes don’t have internet access, review [Deploy private clusters with limited internet access](private-clusters.md) for additional considerations.
+ Node access to the Amazon ECR APIs, or to other container registry APIs that nodes need to pull images from, such as Docker Hub. For more information, see [AWS IP address ranges](https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html) in the AWS General Reference.
+ Node access to Amazon S3.
+ Separate rules are required for `IPv4` and `IPv6` addresses.
+ If you are using hybrid nodes, you must add an additional security group to your cluster to allow communication with your on-premises nodes and pods. For more information, see [Prepare networking for hybrid nodes](hybrid-nodes-networking.md).

If you’re considering limiting the rules, we recommend that you thoroughly test all of your Pods before you apply your changed rules to a production cluster.
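The three minimum outbound rules in the table above can be added with `aws ec2 authorize-security-group-egress`, using the cluster security group as its own destination. A dry-run sketch with a placeholder group ID (remove the `echo` to apply the rules):

```
sg="sg-0123456789abcdef0"   # placeholder: your cluster security group ID

# HTTPS (443) and kubelet (10250) over TCP, plus DNS (53) over TCP and UDP,
# each with the cluster security group itself as the destination.
add_rule() {
  # Dry run: print the command. Remove "echo" to run it for real.
  echo aws ec2 authorize-security-group-egress --group-id "$sg" --ip-permissions \
    "IpProtocol=$1,FromPort=$2,ToPort=$2,UserIdGroupPairs=[{GroupId=$sg}]"
}
add_rule tcp 443
add_rule tcp 10250
add_rule tcp 53
add_rule udp 53
```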

If you originally deployed a cluster with Kubernetes `1.14` and a platform version of `eks.3` or earlier, then consider the following:
+ You might also have control plane and node security groups. When these groups were created, they included the restricted rules listed in the previous table. These security groups are no longer required and can be removed. However, you need to make sure your cluster security group contains the rules that those groups contain.
+ If you deployed the cluster using the API directly or you used a tool such as the AWS CLI or AWS CloudFormation to create the cluster and you didn’t specify a security group at cluster creation, then the default security group for the VPC was applied to the cluster network interfaces that Amazon EKS created.

## Shared security groups
<a name="_shared_security_groups"></a>

Amazon EKS supports shared security groups.
+  **Security Group VPC Associations** associate security groups with multiple VPCs in the same AWS account and AWS Region.
  + Learn how to [Associate security groups with multiple VPCs](https://docs.aws.amazon.com/vpc/latest/userguide/security-group-assoc.html) in the *Amazon VPC User Guide*.
+  **Shared security groups** enable you to share security groups with other AWS accounts. The accounts must be in the same AWS organization.
  + Learn how to [Share security groups with organizations](https://docs.aws.amazon.com/vpc/latest/userguide/security-group-sharing.html) in the *Amazon VPC User Guide*.
+ Security groups are always limited to a single AWS region.

### Considerations for Amazon EKS
<a name="_considerations_for_amazon_eks"></a>
+ EKS applies the same requirements to shared and multi-VPC security groups as to standard security groups.

# Manage networking add-ons for Amazon EKS clusters
<a name="eks-networking-add-ons"></a>

Several networking add-ons are available for your Amazon EKS cluster.

## Built-in add-ons
<a name="eks-networking-add-ons-built-in"></a>

**Note**  
 **When you create an EKS cluster:**   
 **Using the AWS Console**: The built-in add-ons (like CoreDNS, kube-proxy, etc.) are automatically installed as Amazon EKS Add-ons. These can be easily configured and updated through the AWS Console, CLI, or SDKs.
 **Using other methods** (CLI, SDKs, etc.): The same built-in add-ons are installed as self-managed versions that run as regular Kubernetes deployments. These require manual configuration and updates since they can’t be managed through AWS tools.
We recommend using Amazon EKS Add-ons rather than self-managed versions to simplify add-on management and enable centralized configuration and updates through AWS services.

 **Amazon VPC CNI plugin for Kubernetes**   
This CNI add-on creates elastic network interfaces and attaches them to your Amazon EC2 nodes. The add-on also assigns a private `IPv4` or `IPv6` address from your VPC to each Pod and service. This add-on is installed, by default, on your cluster. For more information, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md). If you are using hybrid nodes, the VPC CNI is still installed by default but it is prevented from running on your hybrid nodes with an anti-affinity rule. For more information about your CNI options for hybrid nodes, see [Configure CNI for hybrid nodes](hybrid-nodes-cni.md).

 **CoreDNS**   
CoreDNS is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS. CoreDNS provides name resolution for all Pods in the cluster. This add-on is installed, by default, on your cluster. For more information, see [Manage CoreDNS for DNS in Amazon EKS clusters](managing-coredns.md).

 ** `kube-proxy` **   
This add-on maintains network rules on your Amazon EC2 nodes and enables network communication to your Pods. This add-on is installed, by default, on your cluster. For more information, see [Manage `kube-proxy` in Amazon EKS clusters](managing-kube-proxy.md).

## Optional AWS networking add-ons
<a name="eks-networking-add-ons-optional"></a>

 ** AWS Load Balancer Controller**   
When you deploy Kubernetes Service objects of type `LoadBalancer`, the controller creates AWS Network Load Balancers. When you create Kubernetes Ingress objects, the controller creates AWS Application Load Balancers. We recommend using this controller to provision Network Load Balancers, rather than the [legacy Cloud Provider](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/#legacy-cloud-provider) controller built in to Kubernetes. For more information, see the [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller) documentation.
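As an illustration, a Service that the controller would reconcile into a Network Load Balancer might look like the following sketch. The name and ports are hypothetical, and the annotation values shown are the controller’s documented opt-in mechanism; check the controller documentation for the version you install:

```
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Hand this Service to the AWS Load Balancer Controller instead of the
    # legacy in-tree cloud provider.
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```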

 ** AWS Gateway API Controller**   
This controller lets you connect services across multiple Kubernetes clusters using the [Kubernetes gateway API](https://gateway-api.sigs.k8s.io/). The controller connects Kubernetes services running on Amazon EC2 instances, containers, and serverless functions by using the [Amazon VPC Lattice](https://docs.aws.amazon.com/vpc-lattice/latest/ug/what-is-vpc-service-network.html) service. For more information, see the [AWS Gateway API Controller](https://www.gateway-api-controller.eks.aws.dev/) documentation.

For more information about add-ons, see [Amazon EKS add-ons](eks-add-ons.md).

# Assign IPs to Pods with the Amazon VPC CNI
<a name="managing-vpc-cni"></a>

**Tip**  
 [Register](https://aws-experience.com/emea/smb/events/series/get-hands-on-with-amazon-eks?trk=4a9b4147-2490-4c63-bc9f-f8a84b122c8c&sc_channel=el) for upcoming Amazon EKS workshops.

**Tip**  
With Amazon EKS Auto Mode, you don’t need to install or upgrade networking add-ons. Auto Mode includes pod networking and load balancing capabilities.  
For more information, see [Automate cluster infrastructure with EKS Auto Mode](automode.md).

The Amazon VPC CNI plugin for Kubernetes add-on is deployed on each Amazon EC2 node in your Amazon EKS cluster. The add-on creates [elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) and attaches them to your Amazon EC2 nodes. The add-on also assigns a private `IPv4` or `IPv6` address from your VPC to each Pod.

A version of the add-on is deployed with each Fargate node in your cluster, but you don’t update it on Fargate nodes. Other compatible CNI plugins are available for use on Amazon EKS clusters, but this is the only CNI plugin supported by Amazon EKS for nodes that run on AWS infrastructure. For more information about the other compatible CNI plugins, see [Alternate CNI plugins for Amazon EKS clusters](alternate-cni-plugins.md). The VPC CNI isn’t supported for use with hybrid nodes. For more information about your CNI options for hybrid nodes, see [Configure CNI for hybrid nodes](hybrid-nodes-cni.md).

The following table lists the latest available version of the Amazon EKS add-on type for each Kubernetes version.

## Amazon VPC CNI versions
<a name="vpc-cni-latest-available-version"></a>


| Kubernetes version | Amazon EKS type of VPC CNI version | 
| --- | --- | 
|  1.35  |  v1.21.1-eksbuild.7  | 
|  1.34  |  v1.21.1-eksbuild.7  | 
|  1.33  |  v1.21.1-eksbuild.7  | 
|  1.32  |  v1.21.1-eksbuild.7  | 
|  1.31  |  v1.21.1-eksbuild.7  | 
|  1.30  |  v1.21.1-eksbuild.7  | 
|  1.29  |  v1.21.1-eksbuild.7  | 

**Important**  
If you’re self-managing this add-on, the versions in the table might not be the same as the available self-managed versions. For more information about updating the self-managed type of this add-on, see [Update the Amazon VPC CNI (self-managed add-on)](vpc-add-on-self-managed-update.md).

**Important**  
To upgrade to VPC CNI v1.12.0 or later, you must upgrade to VPC CNI v1.7.0 first. We recommend that you update one minor version at a time.

## Considerations
<a name="manage-vpc-cni-add-on-on-considerations"></a>

The following are considerations for using the feature.
+ Versions are specified as `major-version.minor-version.patch-version-eksbuild.build-number`.
+ Check version compatibility for each feature. Some features of each release of the Amazon VPC CNI plugin for Kubernetes require certain Kubernetes versions. When using different Amazon EKS features, if a specific version of the add-on is required, then it’s noted in the feature documentation. Unless you have a specific reason for running an earlier version, we recommend running the latest version.
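The version format above can be split with plain shell parameter expansion, which is handy when scripting upgrade checks. A small sketch using an example version string:

```
# Example add-on version string in major.minor.patch-eksbuild.build form.
ver="v1.21.1-eksbuild.7"

semver="${ver%%-*}"          # the upstream part, e.g. "v1.21.1"
build="${ver##*eksbuild.}"   # the EKS build number, e.g. "7"
echo "upstream $semver, eksbuild $build"
```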

# Create the Amazon VPC CNI (Amazon EKS add-on)
<a name="vpc-add-on-create"></a>

Use the following steps to create the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on.

Before you begin, review the considerations. For more information, see [Considerations](managing-vpc-cni.md#manage-vpc-cni-add-on-on-considerations).

## Prerequisites
<a name="vpc-add-on-create-prerequisites"></a>

The following are prerequisites for the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on.
+ An existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md).
+ An existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. To determine whether you already have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
+ An IAM role with the [AmazonEKS_CNI_Policy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEKS_CNI_Policy.html) IAM policy (if your cluster uses the `IPv4` family) or an IPv6 policy (if your cluster uses the `IPv6` family) attached to it. For more information about the VPC CNI role, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md). For information about the IPv6 policy, see [Create IAM policy for clusters that use the `IPv6` family](cni-iam-role.md#cni-iam-role-create-ipv6-policy).

**Important**  
Amazon VPC CNI plugin for Kubernetes versions `v1.16.0` to `v1.16.1` implement CNI specification version `v1.0.0`. For more information about `v1.0.0` of the CNI spec, see [Container Network Interface (CNI) Specification](https://github.com/containernetworking/cni/blob/spec-v1.0.0/SPEC.md) on GitHub.

## Procedure
<a name="vpc-add-on-create-procedure"></a>

After you complete the prerequisites, use the following steps to create the add-on.

1. See which version of the add-on is installed on your cluster.

   ```
   kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
   ```

   An example output is as follows.

   ```
   v1.16.4-eksbuild.2
   ```

1. See which type of the add-on is installed on your cluster. Depending on the tool that you created your cluster with, you might not currently have the Amazon EKS add-on type installed on your cluster. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query addon.addonVersion --output text
   ```

   If a version number is returned, you have the Amazon EKS type of the add-on installed on your cluster and don’t need to complete the remaining steps in this procedure. If an error is returned, you don’t have the Amazon EKS type of the add-on installed on your cluster. Complete the remaining steps of this procedure to install it.

1. Save the configuration of your currently installed add-on.

   ```
   kubectl get daemonset aws-node -n kube-system -o yaml > aws-k8s-cni-old.yaml
   ```

1. Create the add-on using the AWS CLI. If you want to use the AWS Management Console or `eksctl` to create the add-on, see [Create an Amazon EKS add-on](creating-an-add-on.md) and specify `vpc-cni` for the add-on name. Copy the command that follows to your device. Make the following modifications to the command, as needed, and then run the modified command.
   + Replace *my-cluster* with the name of your cluster.
   + Replace *v1.20.3-eksbuild.1* with the latest version listed in the latest version table for your cluster version. For the latest version table, see [Amazon VPC CNI versions](managing-vpc-cni.md#vpc-cni-latest-available-version).
   + Replace *111122223333* with your account ID and *AmazonEKSVPCCNIRole* with the name of an [existing IAM role](cni-iam-role.md#cni-iam-role-create-role) that you’ve created. Specifying a role requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one for your cluster, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

     ```
     aws eks create-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.20.3-eksbuild.1 \
         --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKSVPCCNIRole
     ```

     If you’ve applied custom settings to your current add-on that conflict with the default settings of the Amazon EKS add-on, creation might fail. If creation fails, you receive an error that can help you resolve the issue. Alternatively, you can add `--resolve-conflicts OVERWRITE` to the previous command. This allows the add-on to overwrite any existing custom settings. Once you’ve created the add-on, you can update it with your custom settings.

1. Confirm that the latest version of the add-on for your cluster’s Kubernetes version was added to your cluster. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query addon.addonVersion --output text
   ```

   It might take several seconds for add-on creation to complete.

   An example output is as follows.

   ```
   v1.20.3-eksbuild.1
   ```

1. If you made custom settings to your original add-on, before you created the Amazon EKS add-on, use the configuration that you saved in a previous step to update the EKS add-on with your custom settings. Follow the steps in [Update the Amazon VPC CNI (Amazon EKS add-on)](vpc-add-on-update.md).

1. (Optional) Install the `cni-metrics-helper` to your cluster. It scrapes elastic network interface and IP address information, aggregates it at a cluster level, and publishes the metrics to Amazon CloudWatch. For more information, see [cni-metrics-helper](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/cni-metrics-helper/README.md) on GitHub.

# Update the Amazon VPC CNI (Amazon EKS add-on)
<a name="vpc-add-on-update"></a>

Update the Amazon EKS type of the Amazon VPC CNI plugin for Kubernetes add-on. If you haven’t added the Amazon EKS type of the add-on to your cluster, you can install it by following [Create the Amazon VPC CNI (Amazon EKS add-on)](vpc-add-on-create.md). Or, update the other type of VPC CNI installation by following [Update the Amazon VPC CNI (self-managed add-on)](vpc-add-on-self-managed-update.md).

1. See which version of the add-on is installed on your cluster. Replace *my-cluster* with your cluster name.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query "addon.addonVersion" --output text
   ```

   An example output is as follows.

   ```
   v1.20.0-eksbuild.1
   ```

   Compare the version with the table of latest versions at [Amazon VPC CNI versions](managing-vpc-cni.md#vpc-cni-latest-available-version). If the version returned is the same as the version for your cluster’s Kubernetes version in the latest version table, then you already have the latest version installed on your cluster and don’t need to complete the rest of this procedure. If you receive an error instead of a version number in your output, then you don’t have the Amazon EKS type of the add-on installed on your cluster. You need to create the add-on before you can update it with this procedure. To create the Amazon EKS type of the VPC CNI add-on, follow [Create the Amazon VPC CNI (Amazon EKS add-on)](vpc-add-on-create.md).

1. Save the configuration of your currently installed add-on.

   ```
   kubectl get daemonset aws-node -n kube-system -o yaml > aws-k8s-cni-old.yaml
   ```

1. Update your add-on using the AWS CLI. If you want to use the AWS Management Console or `eksctl` to update the add-on, see [Update an Amazon EKS add-on](updating-an-add-on.md). Copy the command that follows to your device. Make the following modifications to the command, as needed, and then run the modified command.
   + Replace *my-cluster* with the name of your cluster.
   + Replace *v1.20.0-eksbuild.1* with the latest version listed in the latest version table for your cluster version.
   + Replace *111122223333* with your account ID and *AmazonEKSVPCCNIRole* with the name of an existing IAM role that you’ve created. To create an IAM role for the VPC CNI, see [Step 1: Create the Amazon VPC CNI plugin for Kubernetes IAM role](cni-iam-role.md#cni-iam-role-create-role). Specifying a role requires that you have an IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you have one for your cluster, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
   + The `--resolve-conflicts PRESERVE` option preserves existing configuration values for the add-on. If you’ve set custom values for add-on settings, and you don’t use this option, Amazon EKS overwrites your values with its default values. If you use this option, then we recommend testing any field and value changes on a non-production cluster before updating the add-on on your production cluster. If you change this value to `OVERWRITE`, all settings are changed to Amazon EKS default values. If you’ve set custom values for any settings, they might be overwritten with Amazon EKS default values. If you change this value to `none`, Amazon EKS doesn’t change the value of any settings, but the update might fail. If the update fails, you receive an error message to help you resolve the conflict.
   + If you’re not updating a configuration setting, remove `--configuration-values '{"env":{"AWS_VPC_K8S_CNI_EXTERNALSNAT":"true"}}'` from the command. If you’re updating a configuration setting, replace `{"env":{"AWS_VPC_K8S_CNI_EXTERNALSNAT":"true"}}` with the setting that you want to set. In this example, the `AWS_VPC_K8S_CNI_EXTERNALSNAT` environment variable is set to `true`. The value that you specify must be valid for the configuration schema. If you don’t know the configuration schema, run `aws eks describe-addon-configuration --addon-name vpc-cni --addon-version v1.20.0-eksbuild.1`, replacing *v1.20.0-eksbuild.1* with the version number of the add-on that you want to see the configuration for. The schema is returned in the output. If you have any existing custom configuration, want to remove it all, and set the values for all settings back to Amazon EKS defaults, set the configuration values to empty (`{}`). For an explanation of each setting, see [CNI Configuration Variables](https://github.com/aws/amazon-vpc-cni-k8s#cni-configuration-variables) on GitHub.

     ```
     aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.20.3-eksbuild.1 \
         --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKSVPCCNIRole \
         --resolve-conflicts PRESERVE --configuration-values '{"env":{"AWS_VPC_K8S_CNI_EXTERNALSNAT":"true"}}'
     ```

     It might take several seconds for the update to complete.
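      Because a typo in the `--configuration-values` JSON causes the update call to fail, it can help to validate the string locally first. This is an optional sketch, assuming only a POSIX shell and Python 3 on your workstation:

      ```shell
      # Validate the JSON you plan to pass to --configuration-values.
      # A typo fails fast here instead of during the add-on update.
      CONFIG='{"env":{"AWS_VPC_K8S_CNI_EXTERNALSNAT":"true"}}'
      echo "$CONFIG" | python3 -m json.tool > /dev/null && echo "configuration values are valid JSON"
      ```

      If the string is malformed, `json.tool` prints the parse error instead.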

1. Confirm that the add-on version was updated. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni
   ```


   An example output is as follows.

   ```
   {
       "addon": {
           "addonName": "vpc-cni",
           "clusterName": "my-cluster",
           "status": "ACTIVE",
           "addonVersion": "v1.20.3-eksbuild.1",
           "health": {
               "issues": []
           },
           "addonArn": "arn:aws:eks:region:111122223333:addon/my-cluster/vpc-cni/74c33d2f-b4dc-8718-56e7-9fdfa65d14a9",
           "createdAt": "2023-04-12T18:25:19.319000+00:00",
           "modifiedAt": "2023-04-12T18:40:28.683000+00:00",
           "serviceAccountRoleArn": "arn:aws:iam::111122223333:role/AmazonEKSVPCCNIRole",
           "tags": {},
           "configurationValues": "{\"env\":{\"AWS_VPC_K8S_CNI_EXTERNALSNAT\":\"true\"}}"
       }
   }
   ```

## Troubleshooting
<a name="_troubleshooting"></a>

When upgrading the VPC CNI from a version older than v1.13.2, you must replace all nodes in the cluster after the update. Versions prior to v1.13.2 use the iptables-legacy backend to insert iptables rules necessary for proper functionality, such as source NAT (SNAT).

Version v1.13.2 was a significant release that [introduced iptables-wrapper](https://github.com/aws/amazon-vpc-cni-k8s/pull/2402), which automatically detects the appropriate iptables backend (iptables-legacy or iptables-nft) for inserting chains and rules. This change aligned with the upstream Kubernetes decision to move away from the legacy backend due to performance limitations.

Replacing nodes following an upgrade from a version older than v1.13.2 of the VPC CNI is required because introducing rules into both the iptables-legacy and iptables-nft backends can lead to unexpected behavior for traffic originating from non-primary ENIs.
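If you’re unsure which backend a node is currently using, iptables v1.8 and later reports it in the version string. The following sketch parses a sample string; on a node you would capture the live output instead (the sample value shown here is illustrative only):

```shell
# The backend appears in parentheses in the version string on iptables v1.8+.
# On a node, capture it live instead: VERSION_STRING="$(iptables --version)"
VERSION_STRING="iptables v1.8.7 (nf_tables)"   # sample output for illustration
case "$VERSION_STRING" in
  *nf_tables*) echo "backend: nft" ;;
  *legacy*)    echo "backend: legacy" ;;
  *)           echo "backend: unknown (pre-1.8 iptables?)" ;;
esac
```

For the sample string above, this prints `backend: nft`.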

# Update the Amazon VPC CNI (self-managed add-on)
<a name="vpc-add-on-self-managed-update"></a>

**Important**  
We recommend adding the Amazon EKS type of the add-on to your cluster instead of using the self-managed type of the add-on. If you’re not familiar with the difference between the types, see [Amazon EKS add-ons](eks-add-ons.md). For more information about adding an Amazon EKS add-on to your cluster, see [Create an Amazon EKS add-on](creating-an-add-on.md). If you’re unable to use the Amazon EKS add-on, we encourage you to submit an issue about why you can’t to the [Containers roadmap GitHub repository](https://github.com/aws/containers-roadmap/issues).

1. Confirm that you don’t have the Amazon EKS type of the add-on installed on your cluster. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query addon.addonVersion --output text
   ```

   If an error message is returned, you don’t have the Amazon EKS type of the add-on installed on your cluster. To self-manage the add-on, complete the remaining steps in this procedure to update the add-on. If a version number is returned, you have the Amazon EKS type of the add-on installed on your cluster. To update it, use the procedure in [Update an Amazon EKS add-on](updating-an-add-on.md), rather than using this procedure. If you’re not familiar with the differences between the add-on types, see [Amazon EKS add-ons](eks-add-ons.md).

1. See which version of the container image is currently installed on your cluster.

   ```
   kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
   ```

   An example output is as follows.

   ```
   v1.20.0-eksbuild.1
   ```

   Your output might not include the build number.

1. Back up your current settings so that you can apply the same settings after you update your version.

   ```
   kubectl get daemonset aws-node -n kube-system -o yaml > aws-k8s-cni-old.yaml
   ```

   To review the available versions and familiarize yourself with the changes in the version that you want to update to, see [releases](https://github.com/aws/amazon-vpc-cni-k8s/releases) on GitHub. Note that we recommend updating to the same `major`.`minor`.`patch` version listed in the latest available versions table, even if later versions are available on GitHub. For the latest available version table, see [Amazon VPC CNI versions](managing-vpc-cni.md#vpc-cni-latest-available-version). The build versions listed in the table aren’t specified in the self-managed versions listed on GitHub. Update your version by completing the tasks in one of the following options:
   + If you don’t have any custom settings for the add-on, then run the command under the `To apply this release:` heading on GitHub for the [release](https://github.com/aws/amazon-vpc-cni-k8s/releases) that you’re updating to.
   + If you have custom settings, download the manifest file with the following command. Change *https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.20.3/config/master/aws-k8s-cni.yaml* to the URL for the release on GitHub that you’re updating to.

     ```
     curl -O https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.20.3/config/master/aws-k8s-cni.yaml
     ```

     If necessary, modify the manifest with the custom settings from the backup you made in a previous step and then apply the modified manifest to your cluster. If your nodes don’t have access to the private Amazon EKS Amazon ECR repositories that the images are pulled from (see the lines that start with `image:` in the manifest), then you’ll have to download the images, copy them to your own repository, and modify the manifest to pull the images from your repository. For more information, see [Copy a container image from one repository to another repository](copy-image-to-repository.md).

     ```
     kubectl apply -f aws-k8s-cni.yaml
     ```

1. Confirm that the new version is now installed on your cluster.

   ```
   kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
   ```

   An example output is as follows.

   ```
   v1.20.3
   ```

1. (Optional) Install the `cni-metrics-helper` to your cluster. It scrapes elastic network interface and IP address information, aggregates it at a cluster level, and publishes the metrics to Amazon CloudWatch. For more information, see [cni-metrics-helper](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/cni-metrics-helper/README.md) on GitHub.

# Configure Amazon VPC CNI plugin to use IRSA
<a name="cni-iam-role"></a>

The [Amazon VPC CNI plugin for Kubernetes](https://github.com/aws/amazon-vpc-cni-k8s) is the networking plugin for Pod networking in Amazon EKS clusters. The plugin is responsible for allocating VPC IP addresses to Kubernetes Pods and configuring the necessary networking for Pods on each node.

**Note**  
The Amazon VPC CNI plugin also supports Amazon EKS Pod Identities. For more information, see [Assign an IAM role to a Kubernetes service account](pod-id-association.md).

The plugin:
+ Requires AWS Identity and Access Management (IAM) permissions. If your cluster uses the `IPv4` family, the permissions are specified in the [`AmazonEKS_CNI_Policy`](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEKS_CNI_Policy.html) AWS managed policy. If your cluster uses the `IPv6` family, then the permissions must be added to an IAM policy that you create; for instructions, see [Create IAM policy for clusters that use the `IPv6` family](#cni-iam-role-create-ipv6-policy). You can attach the policy to the Amazon EKS node IAM role, or to a separate IAM role. For instructions to attach the policy to the Amazon EKS node IAM role, see [Amazon EKS node IAM role](create-node-role.md). We recommend that you assign it to a separate role, as detailed in this topic.
+ Creates and is configured to use a Kubernetes service account named `aws-node` when it’s deployed. The service account is bound to a Kubernetes `clusterrole` named `aws-node`, which is assigned the required Kubernetes permissions.

**Note**  
The Pods for the Amazon VPC CNI plugin for Kubernetes have access to the permissions assigned to the [Amazon EKS node IAM role](create-node-role.md), unless you block access to IMDS. For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
+ Requires an existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md).
+ Requires an existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. To determine whether you already have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

## Step 1: Create the Amazon VPC CNI plugin for Kubernetes IAM role
<a name="cni-iam-role-create-role"></a>

1. Determine the IP family of your cluster.

   ```
   aws eks describe-cluster --name my-cluster | grep ipFamily
   ```

   An example output is as follows.

   ```
   "ipFamily": "ipv4"
   ```

   The output may return `ipv6` instead.

1. Create the IAM role. You can use `eksctl` or `kubectl` and the AWS CLI to create your IAM role.  
eksctl  
   + Create an IAM role and attach the IAM policy to the role with the command that matches the IP family of your cluster. The command creates and deploys an AWS CloudFormation stack that creates an IAM role, attaches the policy that you specify to it, and annotates the existing `aws-node` Kubernetes service account with the ARN of the IAM role that is created.
     +  `IPv4` 

       Replace *my-cluster* with your own value.

       ```
       eksctl create iamserviceaccount \
           --name aws-node \
           --namespace kube-system \
           --cluster my-cluster \
           --role-name AmazonEKSVPCCNIRole \
           --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
           --override-existing-serviceaccounts \
           --approve
       ```
     +  `IPv6` 

        Replace *my-cluster* with your own value. Replace *111122223333* with your account ID and replace *AmazonEKS_CNI_IPv6_Policy* with the name of your `IPv6` policy. If you don’t have an `IPv6` policy, see [Create IAM policy for clusters that use the `IPv6` family](#cni-iam-role-create-ipv6-policy) to create one. To use `IPv6` with your cluster, it must meet several requirements. For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).

       ```
       eksctl create iamserviceaccount \
           --name aws-node \
           --namespace kube-system \
           --cluster my-cluster \
           --role-name AmazonEKSVPCCNIRole \
           --attach-policy-arn arn:aws:iam::111122223333:policy/AmazonEKS_CNI_IPv6_Policy \
           --override-existing-serviceaccounts \
           --approve
       ```  
kubectl and the AWS CLI  

   1. View your cluster’s OIDC provider URL.

      ```
      aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text
      ```

      An example output is as follows.

      ```
      https://oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
      ```

      If no output is returned, then you must [create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

   1. Copy the following contents to a file named *vpc-cni-trust-policy.json*. Replace *111122223333* with your account ID and *EXAMPLED539D4633E53DE1B71EXAMPLE* with the output returned in the previous step. If your cluster isn’t in `us-east-1`, also replace *us-east-1* with the AWS Region that your cluster is in.

      ```
      {
          "Version":"2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
                  },
                  "Action": "sts:AssumeRoleWithWebIdentity",
                  "Condition": {
                      "StringEquals": {
                          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com",
                          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:aws-node"
                      }
                  }
              }
          ]
      }
      ```
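      Optionally, before creating the role, you can sanity-check the two values in this file that are most often mistyped when editing it by hand: the action and the service account subject. This sketch assumes only the file name used above, and prints nothing if the file doesn’t exist yet:

      ```shell
      # Sanity-check the trust policy before passing it to `aws iam create-role`.
      if [ -f vpc-cni-trust-policy.json ]; then
          grep -q 'sts:AssumeRoleWithWebIdentity' vpc-cni-trust-policy.json \
              && grep -q 'system:serviceaccount:kube-system:aws-node' vpc-cni-trust-policy.json \
              && echo "trust policy looks right"
      fi
      ```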

   1. Create the role. You can replace *AmazonEKSVPCCNIRole* with any name that you choose.

      ```
      aws iam create-role \
        --role-name AmazonEKSVPCCNIRole \
        --assume-role-policy-document file://"vpc-cni-trust-policy.json"
      ```

   1. Attach the required IAM policy to the role. Run the command that matches the IP family of your cluster.
      +  `IPv4` 

        ```
        aws iam attach-role-policy \
          --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
          --role-name AmazonEKSVPCCNIRole
        ```
      +  `IPv6` 

         Replace *111122223333* with your account ID and *AmazonEKS_CNI_IPv6_Policy* with the name of your `IPv6` policy. If you don’t have an `IPv6` policy, see [Create IAM policy for clusters that use the `IPv6` family](#cni-iam-role-create-ipv6-policy) to create one. To use `IPv6` with your cluster, it must meet several requirements. For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).

        ```
        aws iam attach-role-policy \
          --policy-arn arn:aws:iam::111122223333:policy/AmazonEKS_CNI_IPv6_Policy \
          --role-name AmazonEKSVPCCNIRole
        ```

   1. Run the following command to annotate the `aws-node` service account with the ARN of the IAM role that you created previously. Replace the example values with your own values.

      ```
      kubectl annotate serviceaccount \
          -n kube-system aws-node \
          eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/AmazonEKSVPCCNIRole
      ```

1. (Optional) Configure the AWS Security Token Service endpoint type used by your Kubernetes service account. For more information, see [Configure the AWS Security Token Service endpoint for a service account](configure-sts-endpoint.md).

## Step 2: Re-deploy Amazon VPC CNI plugin for Kubernetes Pods
<a name="cni-iam-role-redeploy-pods"></a>

1. Delete and re-create any existing Pods that are associated with the service account to apply the credential environment variables. The annotation is not applied to Pods that are currently running without the annotation. The following command deletes the existing `aws-node` DaemonSet Pods and deploys them with the service account annotation.

   ```
   kubectl delete pods -n kube-system -l k8s-app=aws-node
   ```

1. Confirm that the Pods all restarted.

   ```
   kubectl get pods -n kube-system -l k8s-app=aws-node
   ```

1. Describe one of the Pods and verify that the `AWS_WEB_IDENTITY_TOKEN_FILE` and `AWS_ROLE_ARN` environment variables exist. Replace *cpjw7* with the name of one of your Pods returned in the output of the previous step.

   ```
   kubectl describe pod -n kube-system aws-node-cpjw7 | grep 'AWS_ROLE_ARN:\|AWS_WEB_IDENTITY_TOKEN_FILE:'
   ```

   An example output is as follows.

   ```
   AWS_ROLE_ARN:                 arn:aws:iam::111122223333:role/AmazonEKSVPCCNIRole
         AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
         AWS_ROLE_ARN:                           arn:aws:iam::111122223333:role/AmazonEKSVPCCNIRole
         AWS_WEB_IDENTITY_TOKEN_FILE:            /var/run/secrets/eks.amazonaws.com/serviceaccount/token
   ```

   Two sets of duplicate results are returned because the Pod contains two containers. Both containers have the same values.

   If your Pod is using the AWS Regional endpoint, then the following line is also returned in the previous output.

   ```
   AWS_STS_REGIONAL_ENDPOINTS=regional
   ```

## Step 3: Remove the CNI policy from the node IAM role
<a name="remove-cni-policy-node-iam-role"></a>

If your [Amazon EKS node IAM role](create-node-role.md) currently has the `AmazonEKS_CNI_Policy` IAM (`IPv4`) policy or an [IPv6 policy](#cni-iam-role-create-ipv6-policy) attached to it, and you’ve created a separate IAM role, attached the policy to it instead, and assigned it to the `aws-node` Kubernetes service account, then we recommend that you remove the policy from your node role with the AWS CLI command that matches the IP family of your cluster. Replace *AmazonEKSNodeRole* with the name of your node role.
+  `IPv4` 

  ```
  aws iam detach-role-policy --role-name AmazonEKSNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
  ```
+  `IPv6` 

  Replace *111122223333* with your account ID and *AmazonEKS_CNI_IPv6_Policy* with the name of your `IPv6` policy.

  ```
  aws iam detach-role-policy --role-name AmazonEKSNodeRole --policy-arn arn:aws:iam::111122223333:policy/AmazonEKS_CNI_IPv6_Policy
  ```

## Create IAM policy for clusters that use the `IPv6` family
<a name="cni-iam-role-create-ipv6-policy"></a>

If you created a cluster that uses the `IPv6` family and the cluster has version `1.10.1` or later of the Amazon VPC CNI plugin for Kubernetes add-on configured, then you need to create an IAM policy that you can assign to an IAM role. If you have an existing cluster that you didn’t configure with the `IPv6` family when you created it, then to use `IPv6`, you must create a new cluster. For more information about using `IPv6` with your cluster, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).

1. Copy the following text and save it to a file named `vpc-cni-ipv6-policy.json`.

   ```
   {
       "Version":"2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ec2:AssignIpv6Addresses",
                   "ec2:DescribeInstances",
                   "ec2:DescribeTags",
                   "ec2:DescribeNetworkInterfaces",
                   "ec2:DescribeInstanceTypes"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "ec2:CreateTags"
               ],
               "Resource": [
                   "arn:aws:ec2:*:*:network-interface/*"
               ]
           }
       ]
   }
   ```

1. Create the IAM policy.

   ```
   aws iam create-policy --policy-name AmazonEKS_CNI_IPv6_Policy --policy-document file://vpc-cni-ipv6-policy.json
   ```

# Learn about VPC CNI modes and configuration
<a name="pod-networking-use-cases"></a>

The Amazon VPC CNI plugin for Kubernetes provides networking for Pods. Use the following table to learn more about the available networking features.


| Networking feature | Learn more | 
| --- | --- | 
|  Configure your cluster to assign IPv6 addresses to clusters, Pods, and services  |   [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md)   | 
|  Use IPv4 Source Network Address Translation for Pods  |   [Enable outbound internet access for Pods](external-snat.md)   | 
|  Restrict network traffic to and from your Pods  |   [Restrict Pod network traffic with Kubernetes network policies](cni-network-policy-configure.md)   | 
|  Customize the secondary network interface in nodes  |   [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md)   | 
|  Increase IP addresses for your node  |   [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md)   | 
|  Use security groups for Pod network traffic  |   [Assign security groups to individual Pods](security-groups-for-pods.md)   | 
|  Use multiple network interfaces for Pods  |   [Attach multiple network interfaces to Pods](pod-multiple-network-interfaces.md)   | 

# Learn about IPv6 addresses to clusters, Pods, and services
<a name="cni-ipv6"></a>

 **Applies to**: Pods with Amazon EC2 instances and Fargate Pods

By default, Kubernetes assigns `IPv4` addresses to your Pods and services. Instead of assigning `IPv4` addresses to your Pods and services, you can configure your cluster to assign `IPv6` addresses to them. Amazon EKS doesn’t support dual-stacked Pods or services, even though Kubernetes does. As a result, you can’t assign both `IPv4` and `IPv6` addresses to your Pods and services.

You select which IP family you want to use for your cluster when you create it. You can’t change the family after you create the cluster.

For a tutorial to deploy an Amazon EKS `IPv6` cluster, see [Deploying an Amazon EKS `IPv6` cluster and managed Amazon Linux nodes](deploy-ipv6-cluster.md).

The following are considerations for using the feature:

## `IPv6` Feature support
<a name="_ipv6_feature_support"></a>
+  **No Windows support**: Windows Pods and services aren’t supported.
+  **Nitro-based EC2 nodes required**: You can only use `IPv6` with AWS Nitro-based Amazon EC2 or Fargate nodes.
+  **EC2 and Fargate nodes supported**: You can use `IPv6` with [Assign security groups to individual Pods](security-groups-for-pods.md) with Amazon EC2 nodes and Fargate nodes.
+  **Outposts not supported**: You can’t use `IPv6` with [Deploy Amazon EKS on-premises with AWS Outposts](eks-outposts.md).
+  **FSx for Lustre not supported**: The Amazon FSx for Lustre CSI driver isn’t supported. For more information, see [Use high-performance app storage with Amazon FSx for Lustre](fsx-csi.md).
+  **Custom networking not supported**: If you previously used [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md) to help alleviate IP address exhaustion, you can use `IPv6` instead. You can’t use custom networking with `IPv6`. If you use custom networking for network isolation, then you might need to continue to use custom networking and the `IPv4` family for your clusters.

## IP address assignments
<a name="_ip_address_assignments"></a>
+  **Kubernetes services**: Kubernetes services are assigned only an `IPv6` address. They aren’t assigned `IPv4` addresses.
+  **Pods**: Pods are assigned an `IPv6` address and a host-local `IPv4` address. The host-local `IPv4` address is assigned by a host-local CNI plugin chained with the VPC CNI, and the address isn’t reported to the Kubernetes control plane. It’s used only when a Pod needs to communicate with external `IPv4` resources in another Amazon VPC or the internet. The host-local `IPv4` address is SNATed (by the VPC CNI) to the primary `IPv4` address of the primary ENI of the worker node.
+  **Pods and services**: From the cluster’s perspective, Pods and services receive only `IPv6` addresses, not `IPv4` addresses. When Pods need to communicate with external `IPv4` endpoints, they use NAT on the node itself. This built-in NAT capability eliminates the need for [DNS64 and NAT64](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html#nat-gateway-nat64-dns64). For traffic requiring public internet access, the Pod’s traffic is source network address translated to a public IP address.
+  **Routing addresses**: When a Pod communicates outside the VPC, its original `IPv6` address is preserved (not translated to the node’s `IPv6` address). This traffic is routed directly through an internet gateway or egress-only internet gateway.
+  **Nodes**: All nodes are assigned an `IPv4` and `IPv6` address.
+  **Fargate Pods**: Each Fargate Pod receives an `IPv6` address from the CIDR that’s specified for the subnet that it’s deployed in. The underlying hardware unit that runs Fargate Pods gets a unique `IPv4` and `IPv6` address from the CIDRs that are assigned to the subnet that the hardware unit is deployed in.

## How to use `IPv6` with EKS
<a name="_how_to_use_ipv6_with_eks"></a>
+  **Create new cluster**: You must create a new cluster and specify that you want to use the `IPv6` family for that cluster. You can’t enable the `IPv6` family for a cluster that you updated from a previous version. For an example of creating a new `IPv6` cluster, see [Deploying an Amazon EKS `IPv6` cluster and managed Amazon Linux nodes](deploy-ipv6-cluster.md).
+  **Use recent VPC CNI**: Deploy Amazon VPC CNI version `1.10.1` or later. This version or later is deployed by default. After you deploy the add-on, you can’t downgrade your Amazon VPC CNI add-on to a version lower than `1.10.1` without first removing all nodes in all node groups in your cluster.
+  **Configure VPC CNI for `IPv6` **: If you use Amazon EC2 nodes, you must configure the Amazon VPC CNI add-on with IP prefix delegation and `IPv6`. If you choose the `IPv6` family when creating your cluster, the `1.10.1` version of the add-on defaults to this configuration. This is the case for both the self-managed and Amazon EKS add-on types. For more information about IP prefix delegation, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md).
+  **Configure `IPv4` and `IPv6` addresses**: When you create a cluster, the VPC and subnets that you specify must have an `IPv6` CIDR block that’s assigned to the VPC and subnets that you specify. They must also have an `IPv4` CIDR block assigned to them. This is because, even if you only want to use `IPv6`, a VPC still requires an `IPv4` CIDR block to function. For more information, see [Associate an IPv6 CIDR block with your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#vpc-associate-ipv6-cidr) in the Amazon VPC User Guide.
+  **Auto-assign IPv6 addresses to nodes:** When you create your nodes, you must specify subnets that are configured to auto-assign `IPv6` addresses. Otherwise, you can’t deploy your nodes. By default, this configuration is disabled. For more information, see [Modify the IPv6 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-ipv6) in the Amazon VPC User Guide.
+  **Set route tables to use `IPv6` **: The route tables that are assigned to your subnets must have routes for `IPv6` addresses. For more information, see [Migrate to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the Amazon VPC User Guide.
+  **Set security groups for `IPv6` **: Your security groups must allow `IPv6` addresses. For more information, see [Migrate to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the Amazon VPC User Guide.
+  **Set up load balancer**: Use version `2.3.1` or later of the AWS Load Balancer Controller to load balance HTTP applications using the [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md) or network traffic using the [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md) to `IPv6` Pods with either load balancer in IP mode, but not instance mode. For more information, see [Route internet traffic with AWS Load Balancer Controller](aws-load-balancer-controller.md).
+  **Add `IPv6` IAM policy**: You must attach an `IPv6` IAM policy to your node IAM or CNI IAM role. Between the two, we recommend that you attach it to a CNI IAM role. For more information, see [Create IAM policy for clusters that use the `IPv6` family](cni-iam-role.md#cni-iam-role-create-ipv6-policy) and [Step 1: Create the Amazon VPC CNI plugin for Kubernetes IAM role](cni-iam-role.md#cni-iam-role-create-role).
+  **Evaluate all components**: Perform a thorough evaluation of your applications, Amazon EKS add-ons, and AWS services that you integrate with before deploying `IPv6` clusters. This is to ensure that everything works as expected with `IPv6`.
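To verify the subnet requirements above mechanically, you can inspect `describe-subnets` output for an `IPv6` CIDR association and the auto-assign attribute. The JSON below is illustrative sample output; in practice you would capture the real data with `aws ec2 describe-subnets --subnet-ids <your-subnet-ids>`:

```shell
# Sample describe-subnets output (illustrative values); in practice capture it:
#   aws ec2 describe-subnets --subnet-ids subnet-... > subnets.json
cat > subnets.json <<'EOF'
{
  "Subnets": [
    {
      "SubnetId": "subnet-0123456789abcdef0",
      "AssignIpv6AddressOnCreation": true,
      "Ipv6CidrBlockAssociationSet": [
        {"Ipv6CidrBlock": "2600:1f13:a12:3400::/64"}
      ]
    }
  ]
}
EOF

# A subnet is usable for IPv6 nodes only if it has an IPv6 CIDR association
# and auto-assigns IPv6 addresses.
python3 - <<'EOF'
import json
for s in json.load(open("subnets.json"))["Subnets"]:
    ok = bool(s.get("AssignIpv6AddressOnCreation")) and bool(s.get("Ipv6CidrBlockAssociationSet"))
    print(s["SubnetId"], "ready for IPv6 nodes:", ok)
EOF
```

For the sample data above, this prints `subnet-0123456789abcdef0 ready for IPv6 nodes: True`.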

# Deploying an Amazon EKS `IPv6` cluster and managed Amazon Linux nodes
<a name="deploy-ipv6-cluster"></a>

In this tutorial, you deploy an `IPv6` Amazon VPC, an Amazon EKS cluster with the `IPv6` family, and a managed node group with Amazon EC2 Amazon Linux nodes. You can’t deploy Amazon EC2 Windows nodes in an `IPv6` cluster. You can also deploy Fargate nodes to your cluster, though those instructions aren’t provided in this topic for simplicity.

## Prerequisites
<a name="_prerequisites"></a>

Complete the following before you start the tutorial:

Install and configure the following tools and resources that you need to create and manage an Amazon EKS cluster.
+ We recommend that you familiarize yourself with all settings and deploy a cluster with the settings that meet your requirements. For more information, see [Create an Amazon EKS cluster](create-cluster.md), [Simplify node lifecycle with managed node groups](managed-node-groups.md), and the [Considerations](cni-ipv6.md) for this topic. You can only enable some settings when creating your cluster.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ The IAM security principal that you’re using must have permissions to work with Amazon EKS IAM roles, service linked roles, AWS CloudFormation, a VPC, and related resources. For more information, see [Actions](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html) and [Using service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) in the IAM User Guide.
+ If you use eksctl, install version `0.215.0` or later on your computer. To install or update it, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
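  One way to check the AWS CLI minimum mechanically is a `sort -V` comparison. The version value below is a sample standing in for the output of the `aws --version` pipeline shown above:

  ```shell
  # Sample value; in practice: CLI_VERSION="$(aws --version | cut -d / -f2 | cut -d ' ' -f1)"
  CLI_VERSION="2.15.0"
  MIN="2.12.3"
  # sort -V (GNU coreutils) orders version strings numerically;
  # the minimum must sort first for the check to pass.
  if [ "$(printf '%s\n%s\n' "$MIN" "$CLI_VERSION" | sort -V | head -n1)" = "$MIN" ]; then
      echo "AWS CLI $CLI_VERSION meets the $MIN minimum"
  fi
  ```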

You can use eksctl or the AWS CLI to deploy an `IPv6` cluster.

## Deploy an IPv6 cluster with eksctl
<a name="_deploy_an_ipv6_cluster_with_eksctl"></a>

1. Create the `ipv6-cluster.yaml` file. Copy the command that follows to your device. Make the following modifications to the command as needed and then run the modified command:
   + Replace *my-cluster* with a name for your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   + Replace *region-code* with any AWS Region that is supported by Amazon EKS. For a list of AWS Regions, see [Amazon EKS endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/eks.html) in the AWS General Reference guide.
   + Replace the value for `version` with a supported Amazon EKS version for your cluster. For more information, see [Amazon EKS supported versions](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html).
   + Replace *my-nodegroup* with a name for your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
   + Replace *t3.medium* with any [AWS Nitro System instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances).

     ```
     cat >ipv6-cluster.yaml <<EOF
     ---
     apiVersion: eksctl.io/v1alpha5
     kind: ClusterConfig
     
     metadata:
       name: my-cluster
       region: region-code
       version: "X.XX"
     
     kubernetesNetworkConfig:
       ipFamily: IPv6
     
     addons:
       - name: vpc-cni
         version: latest
       - name: coredns
         version: latest
       - name: kube-proxy
         version: latest
     
     iam:
       withOIDC: true
     
     managedNodeGroups:
       - name: my-nodegroup
         instanceType: t3.medium
     EOF
     ```
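   The naming rules above can be checked before running `eksctl`; the patterns below are one interpretation of those rules (a sketch, not an official validator):

   ```shell
   # Validate names against the rules stated above (a sketch, not authoritative).
   # Cluster: alphanumeric characters and hyphens, starts with an alphanumeric
   # character, at most 100 characters.
   valid_cluster_name() {
     printf '%s' "$1" | grep -Eq '^[A-Za-z0-9][A-Za-z0-9-]{0,99}$'
   }
   # Node group: starts with a letter or digit, may also contain hyphens and
   # underscores, at most 63 characters.
   valid_nodegroup_name() {
     printf '%s' "$1" | grep -Eq '^[A-Za-z0-9][A-Za-z0-9_-]{0,62}$'
   }

   valid_cluster_name "my-cluster" && echo "cluster name ok"
   valid_nodegroup_name "my-nodegroup" && echo "node group name ok"
   ```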

1. Create your cluster.

   ```
   eksctl create cluster -f ipv6-cluster.yaml
   ```

   Cluster creation takes several minutes. Don’t proceed until you see the last line of output, which looks similar to the following output.

   ```
   [...]
   [✓]  EKS cluster "my-cluster" in "region-code" region is ready
   ```

1. Confirm that default Pods are assigned `IPv6` addresses.

   ```
   kubectl get pods -n kube-system -o wide
   ```

   An example output is as follows.

   ```
   NAME                       READY   STATUS    RESTARTS   AGE     IP                                       NODE                                            NOMINATED NODE   READINESS GATES
   aws-node-rslts             1/1     Running   1          5m36s   2600:1f13:b66:8200:11a5:ade0:c590:6ac8   ip-192-168-34-75.region-code.compute.internal   <none>           <none>
   aws-node-t74jh             1/1     Running   0          5m32s   2600:1f13:b66:8203:4516:2080:8ced:1ca9   ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   coredns-85d5b4454c-cw7w2   1/1     Running   0          56m     2600:1f13:b66:8203:34e5::                ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   coredns-85d5b4454c-tx6n8   1/1     Running   0          56m     2600:1f13:b66:8203:34e5::1               ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   kube-proxy-btpbk           1/1     Running   0          5m36s   2600:1f13:b66:8200:11a5:ade0:c590:6ac8   ip-192-168-34-75.region-code.compute.internal   <none>           <none>
   kube-proxy-jjk2g           1/1     Running   0          5m33s   2600:1f13:b66:8203:4516:2080:8ced:1ca9   ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   ```
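   A quick way to check output like this mechanically is to test the `IP` column for a colon, which appears only in `IPv6` addresses (a sketch over sample rows; in practice, pipe the real `kubectl get pods -n kube-system -o wide` output into the function):

   ```shell
   # Count rows whose IP column (field 6 of `kubectl get pods -o wide` output)
   # looks IPv6, that is, contains a colon. Skips the header row.
   count_ipv6_pods() {
     awk 'NR > 1 && $6 ~ /:/ { n++ } END { print n+0 }'
   }

   # Sample rows shaped like the output above (abbreviated).
   sample="NAME READY STATUS RESTARTS AGE IP NODE NOMINATED-NODE READINESS-GATES
   aws-node-rslts 1/1 Running 1 5m36s 2600:1f13:b66:8200:11a5:ade0:c590:6ac8 node1 <none> <none>
   kube-proxy-btpbk 1/1 Running 0 5m36s 2600:1f13:b66:8203:4516:2080:8ced:1ca9 node2 <none> <none>"

   printf '%s\n' "$sample" | count_ipv6_pods   # prints: 2
   ```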

1. Confirm that default services are assigned `IPv6` addresses.

   ```
   kubectl get services -n kube-system -o wide
   ```

   An example output is as follows.

   ```
   NAME       TYPE        CLUSTER-IP          EXTERNAL-IP   PORT(S)         AGE   SELECTOR
   kube-dns   ClusterIP   fd30:3087:b6c2::a   <none>        53/UDP,53/TCP   57m   k8s-app=kube-dns
   ```

1. (Optional) [Deploy a sample application](sample-deployment.md), or deploy the [AWS Load Balancer Controller](aws-load-balancer-controller.md) and a sample application to load balance traffic to `IPv6` Pods. For HTTP applications, see [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md); for network traffic, see [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md).

1. After you’ve finished with the cluster and nodes that you created for this tutorial, you should clean up the resources that you created with the following command.

   ```
   eksctl delete cluster my-cluster
   ```

## Deploy an IPv6 cluster with AWS CLI
<a name="deploy_an_ipv6_cluster_with_shared_aws_cli"></a>

**Important**  
You must complete all steps in this procedure as the same user. To check the current user, run the following command:  

  ```
  aws sts get-caller-identity
  ```
You must complete all steps in this procedure in the same shell. Several steps use variables set in previous steps. Steps that use variables won’t function properly if the variable values are set in a different shell. If you use the [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) to complete the following procedure, remember that if you don’t interact with it using your keyboard or pointer for approximately 20–30 minutes, your shell session ends. Running processes do not count as interactions.
The instructions are written for the Bash shell and might need adjusting for other shells.

Replace all example values in the steps of this procedure with your own values.

1. Run the following commands to set some variables used in later steps. Replace *region-code* with the AWS Region that you want to deploy your resources in. The value can be any AWS Region that is supported by Amazon EKS. For a list of AWS Regions, see [Amazon EKS endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/eks.html) in the AWS General Reference guide. Replace *my-cluster* with a name for your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in. Replace *my-nodegroup* with a name for your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters. Replace *111122223333* with your account ID.

   ```
   export region_code=region-code
   export cluster_name=my-cluster
   export nodegroup_name=my-nodegroup
   export account_id=111122223333
   ```

1. Create an Amazon VPC with public and private subnets that meets Amazon EKS and `IPv6` requirements.

   1. Run the following command to set a variable for your AWS CloudFormation stack name. You can replace *my-eks-ipv6-vpc* with any name you choose.

      ```
      export vpc_stack_name=my-eks-ipv6-vpc
      ```

   1. Create an `IPv6` VPC using an AWS CloudFormation template.

      ```
      aws cloudformation create-stack --region $region_code --stack-name $vpc_stack_name \
        --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-ipv6-vpc-public-private-subnets.yaml
      ```

      The stack takes a few minutes to create. Run the following command. Don’t continue to the next step until the output of the command is `CREATE_COMPLETE`.

      ```
      aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name --query Stacks[].StackStatus --output text
      ```

   1. Retrieve the IDs of the public subnets that were created.

      ```
      aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name \
          --query='Stacks[].Outputs[?OutputKey==`SubnetsPublic`].OutputValue' --output text
      ```

      An example output is as follows.

      ```
      subnet-0a1a56c486EXAMPLE,subnet-099e6ca77aEXAMPLE
      ```

   1. Enable the auto-assign `IPv6` address option for the public subnets that were created.

      ```
      aws ec2 modify-subnet-attribute --region $region_code --subnet-id subnet-0a1a56c486EXAMPLE --assign-ipv6-address-on-creation
      aws ec2 modify-subnet-attribute --region $region_code --subnet-id subnet-099e6ca77aEXAMPLE --assign-ipv6-address-on-creation
      ```

   1. Retrieve the names of the subnets and security groups created by the template from the deployed AWS CloudFormation stack and store them in variables for use in a later step.

      ```
      security_groups=$(aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name \
          --query='Stacks[].Outputs[?OutputKey==`SecurityGroups`].OutputValue' --output text)
      
      public_subnets=$(aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name \
          --query='Stacks[].Outputs[?OutputKey==`SubnetsPublic`].OutputValue' --output text)
      
      private_subnets=$(aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name \
          --query='Stacks[].Outputs[?OutputKey==`SubnetsPrivate`].OutputValue' --output text)
      
      subnets=${public_subnets},${private_subnets}
      ```

1. Create a cluster IAM role and attach the required Amazon EKS IAM managed policy to it. Kubernetes clusters managed by Amazon EKS make calls to other AWS services on your behalf to manage the resources that you use with the service.

   1. Run the following command to create the `eks-cluster-role-trust-policy.json` file.

      ```
      cat >eks-cluster-role-trust-policy.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      EOF
      ```

   1. Run the following command to set a variable for your role name. You can replace *myAmazonEKSClusterRole* with any name you choose.

      ```
      export cluster_role_name=myAmazonEKSClusterRole
      ```

   1. Create the role.

      ```
      aws iam create-role --role-name $cluster_role_name --assume-role-policy-document file://"eks-cluster-role-trust-policy.json"
      ```

   1. Retrieve the ARN of the IAM role and store it in a variable for a later step.

      ```
      CLUSTER_IAM_ROLE=$(aws iam get-role --role-name $cluster_role_name --query="Role.Arn" --output text)
      ```

   1. Attach the required Amazon EKS managed IAM policy to the role.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy --role-name $cluster_role_name
      ```

1. Create your cluster.

   ```
   aws eks create-cluster --region $region_code --name $cluster_name --kubernetes-version 1.XX \
      --role-arn $CLUSTER_IAM_ROLE --resources-vpc-config subnetIds=$subnets,securityGroupIds=$security_groups \
      --kubernetes-network-config ipFamily=ipv6
   ```

   1. NOTE: You might receive an error that one of the Availability Zones in your request doesn’t have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account. For more information, see [Insufficient capacity](troubleshooting.md#ice).

      The cluster takes several minutes to create. Run the following command. Don’t continue to the next step until the output from the command is `ACTIVE`.

      ```
      aws eks describe-cluster --region $region_code --name $cluster_name --query cluster.status
      ```

1. Create or update a `kubeconfig` file for your cluster so that you can communicate with your cluster.

   ```
   aws eks update-kubeconfig --region $region_code --name $cluster_name
   ```

   By default, the `config` file is created in `~/.kube` or the new cluster’s configuration is added to an existing `config` file in `~/.kube`.

1. Create a node IAM role.

   1. Run the following command to create the `vpc-cni-ipv6-policy.json` file.

      ```
      cat >vpc-cni-ipv6-policy.json <<EOF
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "ec2:AssignIpv6Addresses",
                      "ec2:DescribeInstances",
                      "ec2:DescribeTags",
                      "ec2:DescribeNetworkInterfaces",
                      "ec2:DescribeInstanceTypes"
                  ],
                  "Resource": "*"
              },
              {
                  "Effect": "Allow",
                  "Action": [
                      "ec2:CreateTags"
                  ],
                  "Resource": [
                      "arn:aws:ec2:*:*:network-interface/*"
                  ]
              }
          ]
      }
      EOF
      ```

   1. Create the IAM policy.

      ```
      aws iam create-policy --policy-name AmazonEKS_CNI_IPv6_Policy --policy-document file://vpc-cni-ipv6-policy.json
      ```

   1. Run the following command to create the `node-role-trust-relationship.json` file.

      ```
      cat >node-role-trust-relationship.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      EOF
      ```

   1. Run the following command to set a variable for your role name. You can replace *AmazonEKSNodeRole* with any name you choose.

      ```
      export node_role_name=AmazonEKSNodeRole
      ```

   1. Create the IAM role.

      ```
      aws iam create-role --role-name $node_role_name --assume-role-policy-document file://"node-role-trust-relationship.json"
      ```

   1. Attach the IAM policy to the IAM role.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::$account_id:policy/AmazonEKS_CNI_IPv6_Policy \
          --role-name $node_role_name
      ```
**Important**  
For simplicity in this tutorial, the policy is attached to this IAM role. In a production cluster, however, we recommend attaching the policy to a separate IAM role. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

   1. Attach two required IAM managed policies to the IAM role.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
        --role-name $node_role_name
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
        --role-name $node_role_name
      ```

   1. Retrieve the ARN of the IAM role and store it in a variable for a later step.

      ```
      node_iam_role=$(aws iam get-role --role-name $node_role_name --query="Role.Arn" --output text)
      ```

1. Create a managed node group.

   1. View the IDs of the subnets that you created in a previous step.

      ```
      echo $subnets
      ```

      An example output is as follows.

      ```
      subnet-0a1a56c486EXAMPLE,subnet-099e6ca77aEXAMPLE,subnet-0377963d69EXAMPLE,subnet-0c05f819d5EXAMPLE
      ```
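      The `create-nodegroup` command in the next step expects the subnet IDs separated by spaces, while `$subnets` is comma-separated. A simple transformation (a sketch using the example IDs above; substitute your own `$subnets` value) avoids copying them by hand:

      ```shell
      # Turn the comma-separated subnet value into the space-separated
      # list that `aws eks create-nodegroup --subnets` expects.
      example_subnets="subnet-0a1a56c486EXAMPLE,subnet-099e6ca77aEXAMPLE,subnet-0377963d69EXAMPLE,subnet-0c05f819d5EXAMPLE"
      subnet_list=$(printf '%s' "$example_subnets" | tr ',' ' ')
      echo "$subnet_list"
      ```

      You could then pass `--subnets $subnet_list` (unquoted, so each ID expands to its own argument) instead of typing each ID.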

   1. Create the node group. Replace *0a1a56c486EXAMPLE*, *099e6ca77aEXAMPLE*, *0377963d69EXAMPLE*, and *0c05f819d5EXAMPLE* with the values returned in the output of the previous step. Be sure to remove the commas between subnet IDs from the previous output in the following command. You can replace *t3.medium* with any [AWS Nitro System instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances).

      ```
      aws eks create-nodegroup --region $region_code --cluster-name $cluster_name --nodegroup-name $nodegroup_name \
          --subnets subnet-0a1a56c486EXAMPLE subnet-099e6ca77aEXAMPLE subnet-0377963d69EXAMPLE subnet-0c05f819d5EXAMPLE \
          --instance-types t3.medium --node-role $node_iam_role
      ```

      The node group takes a few minutes to create. Run the following command. Don’t proceed to the next step until the output returned is `ACTIVE`.

      ```
      aws eks describe-nodegroup --region $region_code --cluster-name $cluster_name --nodegroup-name $nodegroup_name \
          --query nodegroup.status --output text
      ```

1. Confirm that the default Pods are assigned `IPv6` addresses in the `IP` column.

   ```
   kubectl get pods -n kube-system -o wide
   ```

   An example output is as follows.

   ```
   NAME                       READY   STATUS    RESTARTS   AGE     IP                                       NODE                                            NOMINATED NODE   READINESS GATES
   aws-node-rslts             1/1     Running   1          5m36s   2600:1f13:b66:8200:11a5:ade0:c590:6ac8   ip-192-168-34-75.region-code.compute.internal   <none>           <none>
   aws-node-t74jh             1/1     Running   0          5m32s   2600:1f13:b66:8203:4516:2080:8ced:1ca9   ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   coredns-85d5b4454c-cw7w2   1/1     Running   0          56m     2600:1f13:b66:8203:34e5::                ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   coredns-85d5b4454c-tx6n8   1/1     Running   0          56m     2600:1f13:b66:8203:34e5::1               ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   kube-proxy-btpbk           1/1     Running   0          5m36s   2600:1f13:b66:8200:11a5:ade0:c590:6ac8   ip-192-168-34-75.region-code.compute.internal   <none>           <none>
   kube-proxy-jjk2g           1/1     Running   0          5m33s   2600:1f13:b66:8203:4516:2080:8ced:1ca9   ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   ```

1. Confirm that the default services are assigned `IPv6` addresses in the `IP` column.

   ```
   kubectl get services -n kube-system -o wide
   ```

   An example output is as follows.

   ```
   NAME       TYPE        CLUSTER-IP          EXTERNAL-IP   PORT(S)         AGE   SELECTOR
   kube-dns   ClusterIP   fd30:3087:b6c2::a   <none>        53/UDP,53/TCP   57m   k8s-app=kube-dns
   ```

1. (Optional) [Deploy a sample application](sample-deployment.md), or deploy the [AWS Load Balancer Controller](aws-load-balancer-controller.md) and a sample application to load balance traffic to `IPv6` Pods. For HTTP applications, see [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md); for network traffic, see [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md).

1. After you’ve finished with the cluster and nodes that you created for this tutorial, you should clean up the resources that you created with the following commands. Make sure that you’re not using any of the resources outside of this tutorial before deleting them.

   1. If you’re completing this step in a different shell than you completed the previous steps in, set the values of all the variables used in previous steps, replacing the example values with the values you specified when you completed the previous steps. If you’re completing this step in the same shell that you completed the previous steps in, skip to the next step.

      ```
      export region_code=region-code
      export vpc_stack_name=my-eks-ipv6-vpc
      export cluster_name=my-cluster
      export nodegroup_name=my-nodegroup
      export account_id=111122223333
      export node_role_name=AmazonEKSNodeRole
      export cluster_role_name=myAmazonEKSClusterRole
      ```

   1. Delete your node group.

      ```
      aws eks delete-nodegroup --region $region_code --cluster-name $cluster_name --nodegroup-name $nodegroup_name
      ```

      Deletion takes a few minutes. Run the following command. Don’t proceed to the next step if any output is returned.

      ```
      aws eks list-nodegroups --region $region_code --cluster-name $cluster_name --query nodegroups --output text
      ```

   1. Delete the cluster.

      ```
      aws eks delete-cluster --region $region_code --name $cluster_name
      ```

      The cluster takes a few minutes to delete. Before continuing make sure that the cluster is deleted with the following command.

      ```
      aws eks describe-cluster --region $region_code --name $cluster_name
      ```

      Don’t proceed to the next step until your output is similar to the following output.

      ```
      An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: my-cluster.
      ```

   1. Delete the IAM resources that you created. Replace *AmazonEKS_CNI_IPv6_Policy* with the name you chose, if you chose a different name than the one used in previous steps.

      ```
      aws iam detach-role-policy --role-name $cluster_role_name --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
      aws iam detach-role-policy --role-name $node_role_name --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
      aws iam detach-role-policy --role-name $node_role_name --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      aws iam detach-role-policy --role-name $node_role_name --policy-arn arn:aws:iam::$account_id:policy/AmazonEKS_CNI_IPv6_Policy
      aws iam delete-policy --policy-arn arn:aws:iam::$account_id:policy/AmazonEKS_CNI_IPv6_Policy
      aws iam delete-role --role-name $cluster_role_name
      aws iam delete-role --role-name $node_role_name
      ```

   1. Delete the AWS CloudFormation stack that created the VPC.

      ```
      aws cloudformation delete-stack --region $region_code --stack-name $vpc_stack_name
      ```

# Enable outbound internet access for Pods
<a name="external-snat"></a>

 **Applies to**: Linux `IPv4` Fargate nodes, Linux nodes with Amazon EC2 instances

If you deployed your cluster using the `IPv6` family, then the information in this topic isn’t applicable to your cluster, because `IPv6` addresses are not network translated. For more information about using `IPv6` with your cluster, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).

By default, each Pod in your cluster is assigned a [private](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-private-addresses) `IPv4` address from a classless inter-domain routing (CIDR) block that is associated with the VPC that the Pod is deployed in. Pods in the same VPC communicate with each other using these private IP addresses as end points. When a Pod communicates to any `IPv4` address that isn’t within a CIDR block that’s associated to your VPC, the Amazon VPC CNI plugin (for both [Linux](https://github.com/aws/amazon-vpc-cni-k8s#amazon-vpc-cni-k8s) or [Windows](https://github.com/aws/amazon-vpc-cni-plugins/tree/master/plugins/vpc-bridge)) translates the Pod’s `IPv4` address to the primary private `IPv4` address of the primary [elastic network interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#eni-basics) of the node that the Pod is running on, by default. For the exception to this behavior, see [Host networking](#snat-exception).

**Note**  
For Windows nodes, there are additional details to consider. By default, the [VPC CNI plugin for Windows](https://github.com/aws/amazon-vpc-cni-plugins/tree/master/plugins/vpc-bridge) is defined with a networking configuration in which the traffic to a destination within the same VPC is excluded for SNAT. This means that internal VPC communication has SNAT disabled and the IP address allocated to a Pod is routable inside the VPC. But traffic to a destination outside of the VPC has the source Pod IP SNAT’ed to the instance ENI’s primary IP address. This default configuration for Windows ensures that the pod can access networks outside of your VPC in the same way as the host instance.

Due to this behavior:
+ Your Pods can communicate with internet resources only if the node that they’re running on has a [public](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-public-addresses) or [elastic](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-eips.html) IP address assigned to it and is in a [public subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-basics). A public subnet’s associated [route table](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) has a route to an internet gateway. We recommend deploying nodes to private subnets, whenever possible.
+ For versions of the plugin earlier than `1.8.0`, resources that are in networks or VPCs that are connected to your cluster VPC using [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), a [transit VPC](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/transit-vpc-option.html), or [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) can’t initiate communication to your Pods behind secondary elastic network interfaces. Your Pods can initiate communication to those resources and receive responses from them, though.

If either of the following statements are true in your environment, then change the default configuration with the command that follows.
+ You have resources in networks or VPCs that are connected to your cluster VPC using [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), a [transit VPC](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/transit-vpc-option.html), or [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) that need to initiate communication with your Pods using an `IPv4` address and your plugin version is earlier than `1.8.0`.
+ Your Pods are in a [private subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-basics) and need to communicate outbound to the internet. The subnet has a route to a [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html).

```
kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true
```

**Note**  
The `AWS_VPC_K8S_CNI_EXTERNALSNAT` and `AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS` CNI configuration variables aren’t applicable to Windows nodes. Disabling SNAT isn’t supported for Windows. As for excluding a list of `IPv4` CIDRs from SNAT, you can define this by specifying the `ExcludedSnatCIDRs` parameter in the Windows bootstrap script. For more information on using this parameter, see [Bootstrap script configuration parameters](eks-optimized-windows-ami.md#bootstrap-script-configuration-parameters).

## Host networking
<a name="snat-exception"></a>

If a Pod’s spec contains `hostNetwork=true` (default is `false`), then its IP address isn’t translated to a different address. This is the case for the `kube-proxy` and Amazon VPC CNI plugin for Kubernetes Pods that run on your cluster, by default. For these Pods, the IP address is the same as the node’s primary IP address, so the Pod’s IP address isn’t translated. For more information about a Pod’s `hostNetwork` setting, see [PodSpec v1 core](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podspec-v1-core) in the Kubernetes API reference.
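As a minimal illustration, host networking is enabled in a Pod spec as follows (a hypothetical manifest for illustration only; the name and image are not from this guide):

```
apiVersion: v1
kind: Pod
metadata:
  name: host-network-example   # hypothetical name
spec:
  hostNetwork: true   # Pod shares the node's IP address, so SNAT doesn't apply
  containers:
    - name: app
      image: public.ecr.aws/amazonlinux/amazonlinux:2023   # illustrative image
      command: ["sleep", "infinity"]
```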

# Limit Pod traffic with Kubernetes network policies
<a name="cni-network-policy"></a>

## Overview
<a name="_overview"></a>

By default, there are no restrictions in Kubernetes for IP addresses, ports, or connections between any Pods in your cluster or between your Pods and resources in any other network. You can use Kubernetes *network policy* to restrict network traffic to and from your Pods. For more information, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) in the Kubernetes documentation.

## Standard network policy
<a name="_standard_network_policy"></a>

You can use the standard `NetworkPolicy` to segment pod-to-pod traffic in the cluster. These network policies operate at layers 3 and 4 of the OSI network model, allowing you to control traffic flow at the IP address or port level within your Amazon EKS cluster. Standard network policies are scoped to the namespace level.

### Use cases
<a name="_use_cases"></a>
+ Segment network traffic between workloads to ensure that only related applications can talk to each other.
+ Isolate tenants at the namespace level using policies to enforce network separation.

### Example
<a name="_example"></a>

In the policy below, egress traffic from the *webapp* pods in the *sun* namespace is restricted.

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-egress-policy
  namespace: sun
spec:
  podSelector:
    matchLabels:
      role: webapp
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: moon
      podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
  - to:
    - namespaceSelector:
        matchLabels:
          name: stars
      podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
```

The policy applies to pods with the label `role: webapp` in the `sun` namespace.
+ Allowed traffic: Pods with the label `role: frontend` in the `moon` namespace on TCP port `8080` 
+ Allowed traffic: Pods with the label `role: frontend` in the `stars` namespace on TCP port `8080` 
+ Blocked traffic: All other outbound traffic from `webapp` pods is implicitly denied

## Admin (or cluster) network policy
<a name="_admin_or_cluster_network_policy"></a>

![Illustration of the evaluation order for network policies in EKS](http://docs.aws.amazon.com/eks/latest/userguide/images/evaluation-order.png)


You can use the `ClusterNetworkPolicy` to enforce a network security standard that applies to the whole cluster. Instead of repetitively defining and maintaining a distinct policy for each namespace, you can use a single policy to centrally manage network access controls for different workloads in the cluster, irrespective of their namespace.

### Use cases
<a name="_use_cases_2"></a>
+ Centrally manage network access controls for all (or a subset of) workloads in your EKS cluster.
+ Define a default network security posture across the cluster.
+ Extend organizational security standards to the scope of the cluster in a more operationally efficient way.

### Example
<a name="_example_2"></a>

The policy below explicitly blocks ingress traffic from all other namespaces to prevent network access to a sensitive workload namespace.

```
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: protect-sensitive-workload
spec:
  tier: Admin
  priority: 10
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: earth
  ingress:
    - action: Deny
      from:
      - namespaces:
          matchLabels: {} # Match all namespaces.
      name: select-all-deny-all
```

## Important notes
<a name="_important_notes"></a>

Network policies in the Amazon VPC CNI plugin for Kubernetes are supported in the configurations listed below.
+ Version `1.21.0` or later of the Amazon VPC CNI plugin, for both standard and admin network policies.
+ Cluster configured for `IPv4` or `IPv6` addresses.
+ You can use network policies with [security groups for Pods](security-groups-for-pods.md). With network policies, you can control all in-cluster communication. With security groups for Pods, you can control access to AWS services from applications within a Pod.
+ You can use network policies with *custom networking* and *prefix delegation*.

## Considerations
<a name="cni-network-policy-considerations"></a>

 **Architecture** 
+ When applying network policies to your cluster with the Amazon VPC CNI plugin for Kubernetes, you can apply the policies to Amazon EC2 Linux nodes only. You can’t apply the policies to Fargate or Windows nodes.
+ Network policies apply to either `IPv4` or `IPv6` addresses, but not both. In an `IPv4` cluster, the VPC CNI assigns `IPv4` addresses to Pods and applies `IPv4` policies. In an `IPv6` cluster, the VPC CNI assigns `IPv6` addresses to Pods and applies `IPv6` policies. Any `IPv4` network policy rules applied to an `IPv6` cluster are ignored, and any `IPv6` network policy rules applied to an `IPv4` cluster are ignored.

 **Network Policies** 
+ Network Policies are only applied to Pods that are part of a Deployment. Standalone Pods that don’t have a `metadata.ownerReferences` set can’t have network policies applied to them.
+ You can apply multiple network policies to the same Pod. When two or more policies that select the same Pod are configured, all policies are applied to the Pod.
+ The maximum number of combinations of ports and protocols for a single IP address range (CIDR) is 24 across all of your network policies. Selectors such as `namespaceSelector` resolve to one or more CIDRs. If multiple selectors resolve to a single CIDR or you specify the same direct CIDR multiple times in the same or different network policies, these all count toward this limit.
+ For any of your Kubernetes services, the service port must be the same as the container port. If you’re using named ports, use the same name in the service spec too.
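
The named-port note above can be illustrated with the following sketch, in which the names, labels, and image are placeholders. The same port name, `web`, appears in both the container spec and the Service spec, and the service `port` matches the `containerPort`:

```
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  containers:
    - name: webapp
      image: nginx
      ports:
        - name: web
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
    - name: web       # Same name as the container port.
      port: 8080      # Same as containerPort.
      targetPort: web
```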

 **Admin Network Policies** 

1.  **Admin tier policies (evaluated first)**: All Admin tier ClusterNetworkPolicies are evaluated before any other policies. Within the Admin tier, policies are processed in priority order (lowest priority number first). The action type determines what happens next.
   +  **Deny action (highest precedence)**: When an Admin policy with a Deny action matches traffic, that traffic is immediately blocked regardless of any other policies. No further ClusterNetworkPolicy or NetworkPolicy rules are processed. This ensures that organization-wide security controls cannot be overridden by namespace-level policies.
   +  **Allow action**: After Deny rules are evaluated, Admin policies with Allow actions are processed in priority order (lowest priority number first). When an Allow action matches, the traffic is accepted and no further policy evaluation occurs. These policies can grant access across multiple namespaces based on label selectors, providing centralized control over which workloads can access specific resources.
   +  **Pass action**: Pass actions in Admin tier policies delegate decision-making to lower tiers. When traffic matches a Pass rule, evaluation skips all remaining Admin tier rules for that traffic and proceeds directly to the NetworkPolicy tier. This allows administrators to explicitly delegate control for certain traffic patterns to application teams. For example, you might use Pass rules to delegate intra-namespace traffic management to namespace administrators while maintaining strict controls over external access.

1.  **Network policy tier**: If no Admin tier policy matches with Deny or Allow, or if a Pass action was matched, namespace-scoped NetworkPolicy resources are evaluated next. These policies provide fine-grained control within individual namespaces and are managed by application teams. Namespace-scoped policies can only be more restrictive than Admin policies. They cannot override an Admin policy’s Deny decision, but they can further restrict traffic that was allowed or passed by Admin policies.

1.  **Baseline tier Admin policies**: If no Admin or namespace-scoped policies match the traffic, Baseline tier ClusterNetworkPolicies are evaluated. These provide default security postures that can be overridden by namespace-scoped policies, allowing administrators to set organization-wide defaults while giving teams flexibility to customize as needed. Baseline policies are evaluated in priority order (lowest priority number first).

1.  **Default deny (if no policies match)**: This deny-by-default behavior ensures that only explicitly permitted connections are allowed, maintaining a strong security posture.
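
As an illustration of the Pass action described above, the following sketch delegates decisions about traffic from one namespace to namespace-scoped policies. The namespace names and priority value are placeholders, and the fields follow the schema of the earlier `ClusterNetworkPolicy` example:

```
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: delegate-monitoring-traffic
spec:
  tier: Admin
  priority: 5
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: earth
  ingress:
    - action: Pass
      from:
      - namespaces:
          matchLabels:
            kubernetes.io/metadata.name: monitoring
      name: pass-monitoring-to-namespace-policies
```

Because the rule matches with `Pass`, evaluation skips the remaining Admin tier rules for this traffic and proceeds to the NetworkPolicy tier, where the `earth` namespace’s own policies decide.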

 **Migration** 
+ If your cluster is currently using a third-party solution to manage Kubernetes network policies, you can use those same policies with the Amazon VPC CNI plugin for Kubernetes. However, you must remove your existing solution so that it isn’t managing the same policies.

**Warning**  
We recommend that after you remove a network policy solution, you replace all of the nodes that had that solution applied to them. If a pod of the previous solution exits unexpectedly, its traffic rules might be left behind on the node.

 **Installation** 
+ The network policy feature creates and requires a Custom Resource Definition (CRD) called `policyendpoints.networking.k8s.aws`. The resulting `PolicyEndpoint` custom resources are managed by Amazon EKS. You shouldn’t modify or delete these resources.
+ If you run pods that use the instance role IAM credentials or connect to the EC2 IMDS, be careful to check for network policies that would block access to the EC2 IMDS. You may need to add a network policy to allow access to EC2 IMDS. For more information, see [Instance metadata and user data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) in the Amazon EC2 User Guide.

  Pods that use *IAM roles for service accounts* or *EKS Pod Identity* don’t access EC2 IMDS.
+ The Amazon VPC CNI plugin for Kubernetes doesn’t apply network policies to additional network interfaces for each pod, only the primary interface for each pod (`eth0`). This affects the following architectures:
  +  `IPv6` pods with the `ENABLE_V4_EGRESS` variable set to `true`. This variable enables the `IPv4` egress feature to connect the IPv6 pods to `IPv4` endpoints such as those outside the cluster. The `IPv4` egress feature works by creating an additional network interface with a local loopback IPv4 address.
  + When using chained network plugins such as Multus. Because these plugins add network interfaces to each pod, network policies aren’t applied to those additional interfaces.
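
For the IMDS note above, a minimal egress rule to the instance metadata address can be sketched as follows. The namespace and Pod labels are placeholders, and pods usually need additional egress rules, such as for DNS:

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-imds-egress
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: needs-imds
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 169.254.169.254/32 # EC2 IMDS IPv4 address.
```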

# Restrict Pod network traffic with Kubernetes network policies
<a name="cni-network-policy-configure"></a>

You can use a Kubernetes network policy to restrict network traffic to and from your Pods. For more information, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) in the Kubernetes documentation.

You must configure the following in order to use this feature:

1. Set up policy enforcement at Pod startup. You do this in the `aws-node` container of the VPC CNI `DaemonSet`.

1. Enable the network policy parameter for the add-on.

1. Configure your cluster to use Kubernetes network policies.

Before you begin, review the considerations. For more information, see [Considerations](cni-network-policy.md#cni-network-policy-considerations).

## Prerequisites
<a name="cni-network-policy-prereqs"></a>

The following are prerequisites for the feature:

### Minimum cluster version
<a name="cni-network-policy-minimum"></a>

An existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md). The cluster must be running one of the Kubernetes versions and platform versions listed in the following table. Note that any Kubernetes and platform versions later than those listed are also supported. You can check your current Kubernetes version by replacing *my-cluster* in the following command with the name of your cluster and then running the modified command:

```
aws eks describe-cluster --name my-cluster --query cluster.version --output text
```


| Kubernetes version | Platform version | 
| --- | --- | 
|   `1.27.4`   |   `eks.5`   | 
|   `1.26.7`   |   `eks.6`   | 

### Minimum VPC CNI version
<a name="cni-network-policy-minimum-vpc"></a>

To create both standard Kubernetes network policies and admin network policies, you need version `1.21` or later of the VPC CNI plugin. You can see which version you currently have with the following command.

```
kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
```

If your version is earlier than `1.21`, see [Update the Amazon VPC CNI (Amazon EKS add-on)](vpc-add-on-update.md) to upgrade to version `1.21` or later.

### Minimum Linux kernel version
<a name="cni-network-policy-minimum-linux"></a>

Your nodes must have Linux kernel version `5.10` or later. You can check your kernel version with `uname -r`. If you’re using the latest versions of the Amazon EKS optimized Amazon Linux, Amazon EKS optimized accelerated Amazon Linux, and Bottlerocket AMIs, they already have the required kernel version.

The Amazon EKS optimized accelerated Amazon Linux AMI version `v20231116` or later has kernel version `5.10`.

## Step 1: Set up policy enforcement at Pod startup
<a name="cni-network-policy-configure-policy"></a>

The Amazon VPC CNI plugin for Kubernetes configures network policies for pods in parallel with the pod provisioning. Until all of the policies are configured for the new pod, containers in the new pod will start with a *default allow policy*. This is called *standard mode*. A default allow policy means that all ingress and egress traffic is allowed to and from the new pods. For example, the pods will not have any firewall rules enforced (all traffic is allowed) until the new pod is updated with the active policies.

With the `NETWORK_POLICY_ENFORCING_MODE` variable set to `strict`, pods that use the VPC CNI start with a *default deny policy*, then policies are configured. This is called *strict mode*. In strict mode, you must have a network policy for every endpoint that your pods need to access in your cluster. Note that this requirement applies to the CoreDNS pods. The default deny policy isn’t configured for pods with Host networking.

You can change the default network policy by setting the environment variable `NETWORK_POLICY_ENFORCING_MODE` to `strict` in the `aws-node` container of the VPC CNI `DaemonSet`.

```
env:
  - name: NETWORK_POLICY_ENFORCING_MODE
    value: "strict"
```
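
Because strict mode requires a policy for every flow, including DNS lookups, a policy similar to the following sketch is typically needed. The namespace is a placeholder; the `kube-dns` labels match the CoreDNS defaults, but verify them in your cluster:

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: my-namespace
spec:
  podSelector: {} # All Pods in the namespace.
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```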

## Step 2: Enable the network policy parameter for the add-on
<a name="enable-network-policy-parameter"></a>

The network policy feature uses port `8162` on the node for metrics by default. Also, the feature uses port `8163` for health probes. If you run another application on the nodes or inside pods that needs to use these ports, the app fails to run. From VPC CNI version `v1.14.1` or later, you can change these ports.

Use the following procedure to enable the network policy parameter for the add-on.

### AWS Management Console
<a name="cni-network-policy-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** list.

   1. Expand the **Optional configuration settings**.

   1. Enter the JSON key `"enableNetworkPolicy":` and value `"true"` in **Configuration values**. The resulting text must be a valid JSON object. If this key and value are the only data in the text box, surround the key and value with curly braces `{ }`.

      The following example enables the network policy feature and sets the metrics and health probes to the default port numbers:

      ```
      {
          "enableNetworkPolicy": "true",
          "nodeAgent": {
              "healthProbeBindAddr": "8163",
              "metricsBindAddr": "8162"
          }
      }
      ```

### Helm
<a name="cni-network-helm"></a>

If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to change the ports.

1. Run the following command to change the ports. Set the port numbers in the values for the keys `nodeAgent.metricsBindAddr` and `nodeAgent.healthProbeBindAddr`.

   ```
   helm upgrade --set nodeAgent.metricsBindAddr=8162 --set nodeAgent.healthProbeBindAddr=8163 aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

### kubectl
<a name="cni-network-policy-kubectl"></a>

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. Replace the port numbers in the following command arguments in the `args:` of the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest.

   ```
       - args:
               - --metrics-bind-addr=:8162
               - --health-probe-bind-addr=:8163
   ```

## Step 3: Configure your cluster to use Kubernetes network policies
<a name="cni-network-policy-setup"></a>

You can set this for an Amazon EKS add-on or self-managed add-on.

### Amazon EKS add-on
<a name="cni-network-policy-setup-procedure-add-on"></a>

Using the AWS CLI, you can configure the cluster to use Kubernetes network policies by running the following command. Replace `my-cluster` with the name of your cluster and the IAM role ARN with the role that you are using.

```
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.14.0-eksbuild.3 \
    --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKSVPCCNIRole \
    --resolve-conflicts PRESERVE --configuration-values '{"enableNetworkPolicy": "true"}'
```

To configure this using the AWS Management Console, follow these steps:

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** list.

   1. Expand the **Optional configuration settings**.

   1. Enter the JSON key `"enableNetworkPolicy":` and value `"true"` in **Configuration values**. The resulting text must be a valid JSON object. If this key and value are the only data in the text box, surround the key and value with curly braces `{ }`. The following example shows network policy is enabled:

      ```
      { "enableNetworkPolicy": "true" }
      ```

      The following screenshot shows an example of this scenario.  
![\[AWS Management Console showing the VPC CNI add-on with network policy in the optional configuration.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/console-cni-config-network-policy.png)

### Self-managed add-on
<a name="cni-network-policy-setup-procedure-self-managed-add-on"></a>

If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to enable network policy.

1. Run the following command to enable network policy.

   ```
   helm upgrade --set enableNetworkPolicy=true aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

1. Open the `amazon-vpc-cni` `ConfigMap` in your editor.

   ```
   kubectl edit configmap -n kube-system amazon-vpc-cni -o yaml
   ```

1. Add the following line to the `data` in the `ConfigMap`.

   ```
   enable-network-policy-controller: "true"
   ```

   Once you’ve added the line, your `ConfigMap` should look like the following example.

   ```
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: amazon-vpc-cni
     namespace: kube-system
   data:
     enable-network-policy-controller: "true"
   ```

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

   1. Replace the `false` with `true` in the command argument `--enable-network-policy=false` in the `args:` in the `aws-network-policy-agent` container in the VPC CNI `aws-node` daemonset manifest.

      ```
           - args:
              - --enable-network-policy=true
      ```

## Step 4: Next steps
<a name="cni-network-policy-setup-procedure-confirm"></a>

After you complete the configuration, confirm that the `aws-node` pods are running on your cluster.

```
kubectl get pods -n kube-system | grep 'aws-node\|amazon'
```

An example output is as follows.

```
aws-node-gmqp7                                          2/2     Running   1 (24h ago)   24h
aws-node-prnsh                                          2/2     Running   1 (24h ago)   24h
```

There are two containers in the `aws-node` pods in versions `1.14` and later. In previous versions, and if network policy is disabled, there is only a single container in the `aws-node` pods.

You can now deploy Kubernetes network policies to your cluster.

To implement Kubernetes network policies, you can create Kubernetes `NetworkPolicy` or `ClusterNetworkPolicy` objects and deploy them to your cluster. `NetworkPolicy` objects are scoped to a namespace, while `ClusterNetworkPolicy` objects can be scoped to the whole cluster or multiple namespaces. You implement policies to allow or deny traffic between Pods based on label selectors, namespaces, and IP address ranges. For more information about creating `NetworkPolicy` objects, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource) in the Kubernetes documentation.
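
For example, a common starting point is a namespace-scoped policy that denies all ingress traffic to the Pods in a namespace (the namespace name is a placeholder):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace
spec:
  podSelector: {} # Select all Pods in the namespace.
  policyTypes:
    - Ingress # No ingress rules are defined, so all ingress traffic is denied.
```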

Enforcement of Kubernetes `NetworkPolicy` objects is implemented using the Extended Berkeley Packet Filter (eBPF). Relative to `iptables` based implementations, it offers lower latency and performance characteristics, including reduced CPU utilization and avoiding sequential lookups. Additionally, eBPF probes provide access to context rich data that helps debug complex kernel level issues and improve observability. Amazon EKS supports an eBPF-based exporter that leverages the probes to log policy results on each node and export the data to external log collectors to aid in debugging. For more information, see the [eBPF documentation](https://ebpf.io/what-is-ebpf/#what-is-ebpf).

# Disable Kubernetes network policies for Amazon EKS Pod network traffic
<a name="network-policy-disable"></a>

Disable Kubernetes network policies to stop restricting Amazon EKS Pod network traffic.

1. List all Kubernetes network policies.

   ```
   kubectl get netpol -A
   ```

1. Delete each Kubernetes network policy. You must delete all network policies before disabling network policies.

   ```
   kubectl delete netpol <policy-name>
   ```

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. Replace the `true` with `false` in the command argument `--enable-network-policy=true` in the `args:` in the `aws-network-policy-agent` container in the VPC CNI `aws-node` daemonset manifest.

   ```
        - args:
           - --enable-network-policy=false
   ```

# Troubleshooting Kubernetes network policies for Amazon EKS
<a name="network-policies-troubleshooting"></a>

This is the troubleshooting guide for the network policy feature of the Amazon VPC CNI.

This guide covers:
+ Installation information, including the CRD and RBAC permissions: [New `policyendpoints` CRD and permissions](#network-policies-troubleshooting-permissions) 
+ Logs to examine when diagnosing network policy problems: [Network policy logs](#network-policies-troubleshooting-flowlogs) 
+ Running the eBPF SDK collection of tools to troubleshoot: [Included eBPF SDK](#network-policies-ebpf-sdk) 
+ [Known issues and solutions](#network-policies-troubleshooting-known-issues) 

**Note**  
Note that network policies are only applied to pods that are created by Kubernetes *Deployments*. For more limitations of the network policies in the VPC CNI, see [Considerations](cni-network-policy.md#cni-network-policy-considerations).

You can troubleshoot and investigate network connections that use network policies by reading the [Network policy logs](#network-policies-troubleshooting-flowlogs) and by running tools from the [eBPF SDK](#network-policies-ebpf-sdk).

## New `policyendpoints` CRD and permissions
<a name="network-policies-troubleshooting-permissions"></a>
+ CRD: `policyendpoints.networking.k8s.aws` 
+ Kubernetes API: `apiservice` called `v1.networking.k8s.io` 
+ Kubernetes resource: `Kind: NetworkPolicy` 
+ RBAC: `ClusterRole` called `aws-node` (VPC CNI), `ClusterRole` called `eks:network-policy-controller` (network policy controller in EKS cluster control plane)

For network policy, the VPC CNI creates a new `CustomResourceDefinition` (CRD) called `policyendpoints.networking.k8s.aws`. The VPC CNI must have permissions to create the CRD and create CustomResources (CR) of this and the other CRD installed by the VPC CNI (`eniconfigs.crd.k8s.amazonaws.com`). Both of the CRDs are available in the [`crds.yaml` file](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/charts/aws-vpc-cni/crds/customresourcedefinition.yaml) on GitHub. Specifically, the VPC CNI must have "get", "list", and "watch" verb permissions for `policyendpoints`.

The Kubernetes *Network Policy* is part of the `apiservice` called `v1.networking.k8s.io`, and this is `apiversion: networking.k8s.io/v1` in your policy YAML files. The VPC CNI `DaemonSet` must have permissions to use this part of the Kubernetes API.

The VPC CNI permissions are in a `ClusterRole` called `aws-node`. Note that `ClusterRole` objects aren’t grouped in namespaces. The following shows the `aws-node` `ClusterRole` of a cluster:

```
kubectl get clusterrole aws-node -o yaml
```

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: aws-vpc-cni
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: aws-node
    app.kubernetes.io/version: v1.19.4
    helm.sh/chart: aws-vpc-cni-1.19.4
    k8s-app: aws-node
  name: aws-node
rules:
- apiGroups:
  - crd.k8s.amazonaws.com
  resources:
  - eniconfigs
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - list
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints/status
  verbs:
  - get
- apiGroups:
  - vpcresources.k8s.aws
  resources:
  - cninodes
  verbs:
  - get
  - list
  - watch
  - patch
```

Also, a new controller runs in the control plane of each EKS cluster. The controller uses the permissions of the `ClusterRole` called `eks:network-policy-controller`. The following shows the `eks:network-policy-controller` `ClusterRole` of a cluster:

```
kubectl get clusterrole eks:network-policy-controller -o yaml
```

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: amazon-network-policy-controller-k8s
  name: eks:network-policy-controller
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints/finalizers
  verbs:
  - update
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - patch
  - update
  - watch
```

## Network policy logs
<a name="network-policies-troubleshooting-flowlogs"></a>

Each decision by the VPC CNI to allow or deny a connection according to network policies is logged in *flow logs*. The network policy logs on each node include the flow logs for every Pod that has a network policy. Network policy logs are stored at `/var/log/aws-routed-eni/network-policy-agent.log`. The following example is from a `network-policy-agent.log` file:

```
{"level":"info","timestamp":"2023-05-30T16:05:32.573Z","logger":"ebpf-client","msg":"Flow Info: ","Src
IP":"192.168.87.155","Src Port":38971,"Dest IP":"64.6.160","Dest
Port":53,"Proto":"UDP","Verdict":"ACCEPT"}
```

Network policy logs are disabled by default. To enable the network policy logs, follow these steps:

**Note**  
Network policy logs require an additional 1 vCPU for the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest.

### Amazon EKS add-on
<a name="cni-network-policy-flowlogs-addon"></a>

 **AWS Management Console**  

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** dropdown list.

   1. Expand the **Optional configuration settings**.

   1. Enter the top-level JSON key `"nodeAgent"` with an object value that contains the key `"enablePolicyEventLogs"` set to `"true"` in **Configuration values**. The resulting text must be a valid JSON object. The following example shows network policy and the network policy logs are enabled:

      ```
      {
          "enableNetworkPolicy": "true",
          "nodeAgent": {
              "enablePolicyEventLogs": "true"
          }
      }
      ```

The following screenshot shows an example of this scenario.

![\[AWS Management Console showing the VPC CNI add-on with network policy logs in the optional configuration.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/console-cni-config-network-policy-logs.png)


 **AWS CLI**  

1. Run the following AWS CLI command. Replace `my-cluster` with the name of your cluster and replace the IAM role ARN with the role that you are using.

   ```
   aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.14.0-eksbuild.3 \
       --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKSVPCCNIRole \
       --resolve-conflicts PRESERVE --configuration-values '{"nodeAgent": {"enablePolicyEventLogs": "true"}}'
   ```

### Self-managed add-on
<a name="cni-network-policy-flowlogs-selfmanaged"></a>

**Helm**  
If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to write the network policy logs.  

1. Run the following command to enable the network policy logs.

   ```
   helm upgrade --set nodeAgent.enablePolicyEventLogs=true aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

**kubectl**  
If you have installed the Amazon VPC CNI plugin for Kubernetes through `kubectl`, you can update the configuration to write the network policy logs.  

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. Replace the `false` with `true` in the command argument `--enable-policy-event-logs=false` in the `args:` in the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest.

   ```
        - args:
           - --enable-policy-event-logs=true
   ```

### Send network policy logs to Amazon CloudWatch Logs
<a name="network-policies-cloudwatchlogs"></a>

You can monitor the network policy logs using services such as Amazon CloudWatch Logs. You can use the following methods to send the network policy logs to CloudWatch Logs.

For EKS clusters, the policy logs are located under `/aws/eks/cluster-name/cluster/`. For self-managed Kubernetes clusters, the logs are placed under `/aws/k8s-cluster/cluster/`.

#### Send network policy logs with Amazon VPC CNI plugin for Kubernetes
<a name="network-policies-cwl-agent"></a>

If you enable network policy, a second container is added to the `aws-node` pods for a *node agent*. This node agent can send the network policy logs to CloudWatch Logs.

**Note**  
Only the network policy logs are sent by the node agent. Other logs made by the VPC CNI aren’t included.

##### Prerequisites
<a name="cni-network-policy-cwl-agent-prereqs"></a>
+ Add the following permissions as a stanza or separate policy to the IAM role that you are using for the VPC CNI.

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "VisualEditor0",
              "Effect": "Allow",
              "Action": [
                  "logs:DescribeLogGroups",
                  "logs:CreateLogGroup",
                  "logs:CreateLogStream",
                  "logs:PutLogEvents"
              ],
              "Resource": "*"
          }
      ]
  }
  ```

##### Amazon EKS add-on
<a name="cni-network-policy-cwl-agent-addon"></a>

 **AWS Management Console**  

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** dropdown list.

   1. Expand the **Optional configuration settings**.

   1. Enter the top-level JSON key `"nodeAgent"` with an object value that contains the key `"enableCloudWatchLogs"` set to `"true"` in **Configuration values**. The resulting text must be a valid JSON object. The following example shows network policy and the network policy logs are enabled, and the logs are sent to CloudWatch Logs:

      ```
      {
          "enableNetworkPolicy": "true",
          "nodeAgent": {
              "enablePolicyEventLogs": "true",
              "enableCloudWatchLogs": "true"
          }
      }
      ```

The following screenshot shows an example of this scenario.

![\[AWS Management Console showing the VPC CNI add-on with network policy and CloudWatch Logs in the optional configuration.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/console-cni-config-network-policy-logs-cwl.png)


 **AWS CLI**  

1. Run the following AWS CLI command. Replace `my-cluster` with the name of your cluster and replace the IAM role ARN with the role that you are using.

   ```
   aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.14.0-eksbuild.3 \
       --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKSVPCCNIRole \
       --resolve-conflicts PRESERVE --configuration-values '{"nodeAgent": {"enablePolicyEventLogs": "true", "enableCloudWatchLogs": "true"}}'
   ```

##### Self-managed add-on
<a name="cni-network-policy-cwl-agent-selfmanaged"></a>

 **Helm**   
If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to send network policy logs to CloudWatch Logs.  

1. Run the following command to enable network policy logs and send them to CloudWatch Logs.

   ```
   helm upgrade --set nodeAgent.enablePolicyEventLogs=true --set nodeAgent.enableCloudWatchLogs=true aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

 **kubectl**   

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. In the `args:` section of the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest, change `false` to `true` in the two command arguments `--enable-policy-event-logs=false` and `--enable-cloudwatch-logs=false`.

   ```
        - args:
           - --enable-policy-event-logs=true
           - --enable-cloudwatch-logs=true
   ```

#### Send network policy logs with a Fluent Bit `DaemonSet`
<a name="network-policies-cwl-fluentbit"></a>

If you are using Fluent Bit in a `DaemonSet` to send logs from your nodes, you can add configuration to include the network policy logs from network policies. You can use the following example configuration:

```
    [INPUT]
        Name              tail
        Tag               eksnp.*
        Path              /var/log/aws-routed-eni/network-policy-agent*.log
        Parser            json
        DB                /var/log/aws-routed-eni/flb_npagent.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
```

## Included eBPF SDK
<a name="network-policies-ebpf-sdk"></a>

The Amazon VPC CNI plugin for Kubernetes installs the eBPF SDK collection of tools on the nodes. You can use the eBPF SDK tools to identify issues with network policies. For example, the following command lists the eBPF programs that are running on the node.

```
sudo /opt/cni/bin/aws-eks-na-cli ebpf progs
```

To run this command, you can use any method to connect to the node.
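For example, if the node is managed by AWS Systems Manager, you can open a shell on it with Session Manager. The instance ID below is a placeholder for your node's ID:

```
# Start an interactive session on the node (replace the placeholder instance ID).
aws ssm start-session --target i-0123456789abcdef0
```

Once connected, run the `aws-eks-na-cli` command above with `sudo`.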

## Known issues and solutions
<a name="network-policies-troubleshooting-known-issues"></a>

The following sections describe known issues with the Amazon VPC CNI network policy feature and their solutions.

### Network policy logs generated despite enable-policy-event-logs set to false
<a name="network-policies-troubleshooting-policy-event-logs"></a>

 **Issue**: EKS VPC CNI is generating network policy logs even when the `enable-policy-event-logs` setting is set to `false`.

 **Solution**: The `enable-policy-event-logs` setting disables only the policy "decision" logs; it doesn’t disable all Network Policy agent logging. This behavior is documented in the [aws-network-policy-agent README](https://github.com/aws/aws-network-policy-agent/) on GitHub. To reduce logging further, adjust the agent’s other logging configuration settings.
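To see what the agent still writes after you disable the policy event logs, you can inspect the agent log file on the node. The path below matches the one used in the Fluent Bit example later in this topic, though the exact file name can vary:

```
# View the most recent Network Policy agent log entries on the node.
sudo tail -n 50 /var/log/aws-routed-eni/network-policy-agent.log
```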

### Network policy map cleanup issues
<a name="network-policies-troubleshooting-map-cleanup"></a>

 **Issue**: `policyendpoint` objects still exist and aren’t cleaned up after pods are deleted.

 **Solution**: This issue was caused by a problem with the VPC CNI add-on version 1.19.3-eksbuild.1. Update to a newer version of the VPC CNI add-on to resolve this issue.

### Network policies aren’t applied
<a name="network-policies-troubleshooting-policyendpoint"></a>

 **Issue**: Network policy feature is enabled in the Amazon VPC CNI plugin, but network policies are not being applied correctly.

If you create a network policy (`kind: NetworkPolicy`) and it doesn’t affect the pod, check that a `policyendpoint` object was created in the same namespace as the pod. If there aren’t `policyendpoint` objects in the namespace, the network policy controller (part of the EKS cluster) was unable to create network policy rules for the network policy agent (part of the VPC CNI) to apply.

 **Solution**: Fix the permissions of the VPC CNI (`ClusterRole`: `aws-node`) and the network policy controller (`ClusterRole`: `eks:network-policy-controller`), and allow these actions in any policy enforcement tool such as Kyverno. Ensure that Kyverno policies are not blocking the creation of `policyendpoint` objects. For the necessary permissions, see [New `policyendpoints` CRD and permissions](#network-policies-troubleshooting-permissions).
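To check whether the controller created the objects, you can list them in the pod's namespace. Replace `my-namespace` with the namespace of your pod:

```
# List the policyendpoint objects that the network policy controller created.
kubectl get policyendpoints -n my-namespace
```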

### Pods don’t return to default deny state after policy deletion in strict mode
<a name="network-policies-troubleshooting-strict-mode-fallback"></a>

 **Issue**: When network policies are enabled in strict mode, pods start with a default deny policy. After policies are applied, traffic is allowed to the specified endpoints. However, when policies are deleted, the pod doesn’t return to the default deny state and instead goes to a default allow state.

 **Solution**: This issue was fixed in the VPC CNI release 1.19.3, which included the network policy agent 1.2.0 release. After the fix, with strict mode enabled, once policies are removed, the pod will fall back to the default deny state as expected.

### Security Groups for Pods startup latency
<a name="network-policies-troubleshooting-sgfp-latency"></a>

 **Issue**: When using the Security Groups for Pods feature in EKS, there is increased pod startup latency.

 **Solution**: The latency is due to rate limiting in the resource controller from API throttling on the `CreateNetworkInterface` API, which the VPC resource controller uses to create branch ENIs for the pods. Check your account’s API limits for this operation and consider requesting a limit increase if needed.

### FailedScheduling due to insufficient vpc.amazonaws.com/pod-eni
<a name="network-policies-troubleshooting-insufficient-pod-eni"></a>

 **Issue**: Pods fail to schedule with the error: `FailedScheduling 2m53s (x28 over 137m) default-scheduler 0/5 nodes are available: 5 Insufficient vpc.amazonaws.com/pod-eni. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.` 

 **Solution**: As with the previous issue, assigning security groups to pods increases pod scheduling latency, and the time to attach each branch ENI can exceed the CNI threshold, causing pods to fail to start. This is expected behavior when using Security Groups for Pods. Consider the scheduling implications when designing your workload architecture.

### IPAM connectivity issues and segmentation faults
<a name="network-policies-troubleshooting-systemd-udev"></a>

 **Issue**: Multiple errors occur including IPAM connectivity issues, throttling requests, and segmentation faults:
+  `Checking for IPAM connectivity ...` 
+  `Throttling request took 1.047064274s` 
+  `Retrying waiting for IPAM-D` 
+  `panic: runtime error: invalid memory address or nil pointer dereference` 

 **Solution**: This issue occurs if you install `systemd-udev` on AL2023, as the file is re-written with a breaking policy. This can happen when updating to a different `releasever` that has an updated package or manually updating the package itself. Avoid installing or updating `systemd-udev` on AL2023 nodes.

### Failed to find device by name error
<a name="network-policies-troubleshooting-device-not-found"></a>

 **Issue**: Error message: `{"level":"error","ts":"2025-02-05T20:27:18.669Z","caller":"ebpf/bpf_client.go:578","msg":"failed to find device by name eni9ea69618bf0: %!w(netlink.LinkNotFoundError={0xc000115310})"}` 

 **Solution**: This issue has been identified and fixed in the latest versions of the Amazon VPC CNI network policy agent (v1.2.0). Update to the latest version of the VPC CNI to resolve this issue.

### CVE vulnerabilities in Multus CNI image
<a name="network-policies-troubleshooting-cve-multus"></a>

 **Issue**: Enhanced EKS ImageScan CVE Report identifies vulnerabilities in the Multus CNI image version v4.1.4-eksbuild.2_thick.

 **Solution**: Update to the newer versions of the Multus CNI image and the Network Policy Controller image, which address the vulnerabilities found in the previous version.

### Flow Info DENY verdicts in logs
<a name="network-policies-troubleshooting-flow-info-deny"></a>

 **Issue**: Network policy logs show DENY verdicts: `{"level":"info","ts":"2024-11-25T13:34:24.808Z","logger":"ebpf-client","caller":"events/events.go:193","msg":"Flow Info: ","Src IP":"","Src Port":9096,"Dest IP":"","Dest Port":56830,"Proto":"TCP","Verdict":"DENY"}` 

 **Solution**: This issue has been resolved in the new version of the Network Policy Controller. Update to the latest EKS platform version to resolve logging issues.

### Pod-to-pod communication issues after migrating from Calico
<a name="network-policies-troubleshooting-calico-migration"></a>

 **Issue**: After upgrading an EKS cluster to version 1.30 and switching from Calico to Amazon VPC CNI for network policy, pod-to-pod communication fails when network policies are applied. Communication is restored when network policies are deleted.

 **Solution**: The network policy agent in the VPC CNI doesn’t support as many individual port entries as Calico does. The maximum number of unique port and protocol combinations in each `ingress:` or `egress:` selector in a network policy is 24. Use port ranges in your network policies to reduce the number of unique ports and stay under this limit.
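For example, instead of listing dozens of individual ports, a single `port`/`endPort` pair in a standard Kubernetes `NetworkPolicy` covers a whole range while counting as one entry. The namespace, name, and labels in this sketch are illustrative:

```
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: my-namespace
  name: allow-frontend-port-range
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8000
          endPort: 8080
```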

### Network policy agent doesn’t support standalone pods
<a name="network-policies-troubleshooting-standalone-pods"></a>

 **Issue**: Network policies applied to standalone pods may have inconsistent behavior.

 **Solution**: The Network Policy agent currently supports only pods that are deployed as part of a Deployment or ReplicaSet. If network policies are applied to standalone pods, the behavior might be inconsistent. This is documented in the [Considerations](cni-network-policy.md#cni-network-policy-considerations) earlier in this topic and in [aws-network-policy-agent issue #327](https://github.com/aws/aws-network-policy-agent/issues/327) on GitHub. Deploy pods as part of a Deployment or ReplicaSet for consistent network policy behavior.

# Stars demo of network policy for Amazon EKS
<a name="network-policy-stars-demo"></a>

This demo creates a front-end, back-end, and client service on your Amazon EKS cluster. The demo also creates a management graphical user interface that shows the available ingress and egress paths between each service. We recommend that you complete the demo on a cluster that you don’t run production workloads on.

Before you create any network policies, all services can communicate bidirectionally. After you apply the network policies, you can see that the client can only communicate with the front-end service, and the back-end only accepts traffic from the front-end.

1. Apply the front-end, back-end, client, and management user interface services:

   ```
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/namespace.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/management-ui.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/backend.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/frontend.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/client.yaml
   ```

1. View all Pods on the cluster.

   ```
   kubectl get pods -A
   ```

   An example output is as follows.

   In your output, you should see pods in the namespaces shown in the following example. The pod *NAMES* and the number of pods in the `READY` column will differ from those shown. Don’t continue until you see pods with similar names and they all have `Running` in the `STATUS` column.

   ```
   NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
   [...]
   client            client-xlffc                               1/1     Running   0          5m19s
   [...]
   management-ui     management-ui-qrb2g                        1/1     Running   0          5m24s
   stars             backend-sz87q                              1/1     Running   0          5m23s
   stars             frontend-cscnf                             1/1     Running   0          5m21s
   [...]
   ```

1. To connect to the management user interface, retrieve the `EXTERNAL-IP` of the service running on your cluster:

   ```
   kubectl get service/management-ui -n management-ui
   ```

1. Open a browser to the location from the previous step. You should see the management user interface. The **C** node is the client service, the **F** node is the front-end service, and the **B** node is the back-end service. Each node has full communication access to all other nodes, as indicated by the bold, colored lines.  
![\[Open network policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-default.png)

1. Apply the following network policy in both the `stars` and `client` namespaces to isolate the services from each other:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     name: default-deny
   spec:
     podSelector:
       matchLabels: {}
   ```

   You can use the following commands to apply the policy to both namespaces:

   ```
   kubectl apply -n stars -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/default-deny.yaml
   kubectl apply -n client -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/default-deny.yaml
   ```

1. Refresh your browser. You see that the management user interface can no longer reach any of the nodes, so they don’t show up in the user interface.

1. Apply the following different network policies to allow the management user interface to access the services. Apply this policy to allow the UI:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: stars
     name: allow-ui
   spec:
     podSelector:
       matchLabels: {}
     ingress:
       - from:
           - namespaceSelector:
               matchLabels:
                 role: management-ui
   ```

   Apply this policy to allow the client:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: client
     name: allow-ui
   spec:
     podSelector:
       matchLabels: {}
     ingress:
       - from:
           - namespaceSelector:
               matchLabels:
                 role: management-ui
   ```

   You can use the following commands to apply both policies:

   ```
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/allow-ui.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/allow-ui-client.yaml
   ```

1. Refresh your browser. You see that the management user interface can reach the nodes again, but the nodes cannot communicate with each other.  
![\[UI access network policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-no-traffic.png)

1. Apply the following network policy to allow traffic from the front-end service to the back-end service:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: stars
     name: backend-policy
   spec:
     podSelector:
       matchLabels:
         role: backend
     ingress:
       - from:
           - podSelector:
               matchLabels:
                 role: frontend
         ports:
           - protocol: TCP
             port: 6379
   ```

1. Refresh your browser. You see that the front-end can communicate with the back-end.  
![\[Front-end to back-end policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-front-end-back-end.png)

1. Apply the following network policy to allow traffic from the client to the front-end service:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: stars
     name: frontend-policy
   spec:
     podSelector:
       matchLabels:
         role: frontend
     ingress:
       - from:
           - namespaceSelector:
               matchLabels:
                 role: client
         ports:
           - protocol: TCP
             port: 80
   ```

1. Refresh your browser. You see that the client can communicate to the front-end service. The front-end service can still communicate to the back-end service.  
![\[Final network policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-final.png)

1. (Optional) When you are done with the demo, you can delete its resources.

   ```
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/client.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/frontend.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/backend.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/management-ui.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/namespace.yaml
   ```

   Even after you delete the resources, network policy endpoints can remain on the nodes and might interfere with networking in your cluster in unexpected ways. The only sure way to remove these rules is to reboot the nodes or to terminate all of the nodes and replace them. To terminate all nodes, either set the Auto Scaling group desired count to `0` and then back up to the desired number, or terminate the nodes directly.
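   A sketch of the scale-down and scale-up approach with the AWS CLI follows; the Auto Scaling group name and desired count are placeholders for your own values:

   ```
   # Scale the node group's Auto Scaling group down to zero so all nodes terminate.
   aws autoscaling set-desired-capacity \
       --auto-scaling-group-name my-node-group-asg \
       --desired-capacity 0

   # After the nodes terminate, scale back up to the original count.
   aws autoscaling set-desired-capacity \
       --auto-scaling-group-name my-node-group-asg \
       --desired-capacity 2
   ```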

# Deploy Pods in alternate subnets with custom networking
<a name="cni-custom-network"></a>

 **Applies to**: Linux `IPv4` Fargate nodes, Linux nodes with Amazon EC2 instances

![\[Diagram of node with multiple network interfaces\]](http://docs.aws.amazon.com/eks/latest/userguide/images/cn-image.png)


By default, when the Amazon VPC CNI plugin for Kubernetes creates secondary [elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) (network interfaces) for your Amazon EC2 node, it creates them in the same subnet as the node’s primary network interface. It also associates the same security groups to the secondary network interface that are associated to the primary network interface. For one or more of the following reasons, you might want the plugin to create secondary network interfaces in a different subnet or want to associate different security groups to the secondary network interfaces, or both:
+ There’s a limited number of `IPv4` addresses that are available in the subnet that the primary network interface is in. This might limit the number of Pods that you can create in the subnet. By using a different subnet for secondary network interfaces, you can increase the number of available `IPv4` addresses available for Pods.
+ For security reasons, your Pods might need to use a different subnet or security groups than the node’s primary network interface.
+ The nodes are configured in public subnets, and you want to place the Pods in private subnets. The route table associated to a public subnet includes a route to an internet gateway. The route table associated to a private subnet doesn’t include a route to an internet gateway.
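With custom networking, you tell the plugin which alternate subnet and security groups to use through `ENIConfig` custom resources, typically one per Availability Zone. The tutorial later in this chapter creates these; as a preview, a minimal sketch with placeholder IDs looks like this:

```
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-west-2a          # typically named after the Availability Zone
spec:
  subnet: subnet-0123456789abcdef0     # placeholder: your alternate subnet ID
  securityGroups:
    - sg-0123456789abcdef0             # placeholder: your security group ID
```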

**Tip**  
You can also add a new or existing subnet directly to your Amazon EKS cluster, without using custom networking. For more information, see [Add an existing VPC Subnet to an Amazon EKS cluster from the management console](eks-networking.md#add-existing-subnet).

## Considerations
<a name="cni-custom-network-considerations"></a>

The following are considerations for using the feature.
+ With custom networking enabled, no IP addresses assigned to the primary network interface are assigned to Pods. Only IP addresses from secondary network interfaces are assigned to Pods.
+ If your cluster uses the `IPv6` family, you can’t use custom networking.
+ If you plan to use custom networking only to help alleviate `IPv4` address exhaustion, you can create a cluster using the `IPv6` family instead. For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).
+ Even though Pods deployed to subnets specified for secondary network interfaces can use different subnet and security groups than the node’s primary network interface, the subnets and security groups must be in the same VPC as the node.
+ For Fargate, subnets are controlled through the Fargate profile. For more information, see [Define which Pods use AWS Fargate when launched](fargate-profile.md).

# Customize the secondary network interface in Amazon EKS nodes
<a name="cni-custom-network-tutorial"></a>

Complete the following before you start the tutorial:
+ Review the [Considerations](#cni-custom-network-considerations)
+ Familiarity with how the Amazon VPC CNI plugin for Kubernetes creates secondary network interfaces and assigns IP addresses to Pods. For more information, see [ENI Allocation](https://github.com/aws/amazon-vpc-cni-k8s#eni-allocation) on GitHub.
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ We recommend that you complete the steps in this topic in a Bash shell. If you aren’t using a Bash shell, some script commands such as line continuation characters and the way variables are set and used require adjustment for your shell. Additionally, the quoting and escaping rules for your shell might be different. For more information, see [Using quotation marks with strings in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-quoting-strings.html) in the AWS Command Line Interface User Guide.

For this tutorial, we recommend using the example values, except where it’s noted to replace them. You can replace any example value when completing the steps for a production cluster. We recommend completing all steps in the same terminal. This is because variables are set and used throughout the steps and won’t exist in different terminals.

The commands in this topic are formatted using the conventions listed in [Using the AWS CLI examples](https://docs.aws.amazon.com/cli/latest/userguide/welcome-examples.html). If you’re running commands from the command line against resources that are in a different AWS Region than the default AWS Region defined in the AWS CLI [profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-profiles) that you’re using, then you need to add `--region us-west-2` to the commands, replacing `us-west-2` with your AWS region.

When you want to deploy custom networking to your production cluster, skip to [Step 2: Configure your VPC](#custom-networking-configure-vpc).

## Step 1: Create a test VPC and cluster
<a name="custom-networking-create-cluster"></a>

The following procedures help you create a test VPC and cluster and configure custom networking for that cluster. We don’t recommend using the test cluster for production workloads because several unrelated features that you might use on your production cluster aren’t covered in this topic. For more information, see [Create an Amazon EKS cluster](create-cluster.md).

1. Run the following command to define the `account_id` variable.

   ```
   account_id=$(aws sts get-caller-identity --query Account --output text)
   ```

1. Create a VPC.

   1. If you are deploying to a test system, create a VPC using an Amazon EKS AWS CloudFormation template.

      ```
      aws cloudformation create-stack --stack-name my-eks-custom-networking-vpc \
        --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml \
        --parameters ParameterKey=VpcBlock,ParameterValue=192.168.0.0/24 \
        ParameterKey=PrivateSubnet01Block,ParameterValue=192.168.0.64/27 \
        ParameterKey=PrivateSubnet02Block,ParameterValue=192.168.0.96/27 \
        ParameterKey=PublicSubnet01Block,ParameterValue=192.168.0.0/27 \
        ParameterKey=PublicSubnet02Block,ParameterValue=192.168.0.32/27
      ```

   1. The AWS CloudFormation stack takes a few minutes to create. To check on the stack’s deployment status, run the following command.

      ```
      aws cloudformation describe-stacks --stack-name my-eks-custom-networking-vpc --query Stacks\[\].StackStatus  --output text
      ```

      Don’t continue to the next step until the output of the command is `CREATE_COMPLETE`.

   1. Define variables with the values of the private subnet IDs created by the template.

      ```
      subnet_id_1=$(aws cloudformation describe-stack-resources --stack-name my-eks-custom-networking-vpc \
          --query "StackResources[?LogicalResourceId=='PrivateSubnet01'].PhysicalResourceId" --output text)
      subnet_id_2=$(aws cloudformation describe-stack-resources --stack-name my-eks-custom-networking-vpc \
          --query "StackResources[?LogicalResourceId=='PrivateSubnet02'].PhysicalResourceId" --output text)
      ```

   1. Define variables with the Availability Zones of the subnets retrieved in the previous step.

      ```
      az_1=$(aws ec2 describe-subnets --subnet-ids $subnet_id_1 --query 'Subnets[*].AvailabilityZone' --output text)
      az_2=$(aws ec2 describe-subnets --subnet-ids $subnet_id_2 --query 'Subnets[*].AvailabilityZone' --output text)
      ```

1. Create a cluster IAM role.

   1. Run the following command to create an IAM trust policy file named `eks-cluster-role-trust-policy.json`.

      ```
      cat >eks-cluster-role-trust-policy.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      EOF
      ```

   1. Create the Amazon EKS cluster IAM role. If necessary, preface `eks-cluster-role-trust-policy.json` with the path on your computer that you wrote the file to in the previous step. The command associates the trust policy that you created in the previous step to the role. To create an IAM role, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that is creating the role must be assigned the `iam:CreateRole` action (permission).

      ```
      aws iam create-role --role-name myCustomNetworkingAmazonEKSClusterRole --assume-role-policy-document file://"eks-cluster-role-trust-policy.json"
      ```

   1. Attach the Amazon EKS managed policy named [AmazonEKSClusterPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEKSClusterPolicy.html#AmazonEKSClusterPolicy-json) to the role. To attach an IAM policy to an [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal), the principal that is attaching the policy must be assigned one of the following IAM actions (permissions): `iam:AttachUserPolicy` or `iam:AttachRolePolicy`.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy --role-name myCustomNetworkingAmazonEKSClusterRole
      ```

1. Create an Amazon EKS cluster and configure your device to communicate with it.

   1. Create a cluster.

      ```
      aws eks create-cluster --name my-custom-networking-cluster \
         --role-arn arn:aws:iam::$account_id:role/myCustomNetworkingAmazonEKSClusterRole \
         --resources-vpc-config subnetIds="$subnet_id_1","$subnet_id_2"
      ```
**Note**  
You might receive an error that one of the Availability Zones in your request doesn’t have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account. For more information, see [Insufficient capacity](troubleshooting.md#ice).

   1. The cluster takes several minutes to create. To check on the cluster’s deployment status, run the following command.

      ```
      aws eks describe-cluster --name my-custom-networking-cluster --query cluster.status
      ```

      Don’t continue to the next step until the output of the command is `"ACTIVE"`.

   1. Configure `kubectl` to communicate with your cluster.

      ```
      aws eks update-kubeconfig --name my-custom-networking-cluster
      ```

## Step 2: Configure your VPC
<a name="custom-networking-configure-vpc"></a>

This tutorial requires the VPC created in [Step 1: Create a test VPC and cluster](#custom-networking-create-cluster). For a production cluster, adjust the steps accordingly for your VPC by replacing all of the example values with your own.

1. Confirm that your currently-installed Amazon VPC CNI plugin for Kubernetes is the latest version. To determine the latest version for the Amazon EKS add-on type and update your version to it, see [Update an Amazon EKS add-on](updating-an-add-on.md). To determine the latest version for the self-managed add-on type and update your version to it, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).

1. Retrieve the ID of your cluster VPC and store it in a variable for use in later steps.

   ```
   vpc_id=$(aws eks describe-cluster --name my-custom-networking-cluster --query "cluster.resourcesVpcConfig.vpcId" --output text)
   ```

1. Associate an additional Classless Inter-Domain Routing (CIDR) block with your cluster’s VPC. The CIDR block can’t overlap with any existing associated CIDR blocks.

   1. View the current CIDR blocks associated to your VPC.

      ```
      aws ec2 describe-vpcs --vpc-ids $vpc_id \
          --query 'Vpcs[*].CidrBlockAssociationSet[*].{CIDRBlock: CidrBlock, State: CidrBlockState.State}' --out table
      ```

      An example output is as follows.

      ```
      ----------------------------------
      |          DescribeVpcs          |
      +-----------------+--------------+
      |    CIDRBlock    |    State     |
      +-----------------+--------------+
      |  192.168.0.0/24 |  associated  |
      +-----------------+--------------+
      ```

   1. Associate an additional CIDR block to your VPC. Replace the CIDR block value in the following command. For more information, see [Associate additional IPv4 CIDR blocks with your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/modify-vpcs.html#add-ipv4-cidr) in the Amazon VPC User Guide.

      ```
      aws ec2 associate-vpc-cidr-block --vpc-id $vpc_id --cidr-block 192.168.1.0/24
      ```

   1. Confirm that the new block is associated.

      ```
      aws ec2 describe-vpcs --vpc-ids $vpc_id --query 'Vpcs[*].CidrBlockAssociationSet[*].{CIDRBlock: CidrBlock, State: CidrBlockState.State}' --out table
      ```

      An example output is as follows.

      ```
      ----------------------------------
      |          DescribeVpcs          |
      +-----------------+--------------+
      |    CIDRBlock    |    State     |
      +-----------------+--------------+
      |  192.168.0.0/24 |  associated  |
      |  192.168.1.0/24 |  associated  |
      +-----------------+--------------+
      ```

   Don’t proceed to the next step until your new CIDR block’s `State` is `associated`.
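
   Because the association fails if blocks overlap, you can sanity-check a candidate CIDR ahead of time. A minimal sketch using Python's `ipaddress` module, with this tutorial's example blocks assumed:

   ```python
   import ipaddress

   existing = [ipaddress.ip_network("192.168.0.0/24")]   # current VPC CIDR(s)
   candidate = ipaddress.ip_network("192.168.1.0/24")    # block to associate

   # overlaps() is True when two networks share any address
   conflicts = [net for net in existing if candidate.overlaps(net)]
   print("safe to associate" if not conflicts else f"overlaps: {conflicts}")
   ```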

1. Create as many subnets as you want to use in each Availability Zone that your existing subnets are in. Specify a CIDR block that’s within the CIDR block that you associated with your VPC in a previous step.

   1. Create new subnets. Replace the CIDR block values in the following command. The subnets must be created in a different VPC CIDR block than your existing subnets are in, but in the same Availability Zones as your existing subnets. In this example, one subnet is created in the new CIDR block in each Availability Zone that the current private subnets exist in. The IDs of the subnets created are stored in variables for use in later steps. The `Name` values match the values assigned to the subnets created using the Amazon EKS VPC template in a previous step. Names aren’t required. You can use different names.

      ```
      new_subnet_id_1=$(aws ec2 create-subnet --vpc-id $vpc_id --availability-zone $az_1 --cidr-block 192.168.1.0/27 \
          --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=my-eks-custom-networking-vpc-PrivateSubnet01},{Key=kubernetes.io/role/internal-elb,Value=1}]' \
          --query Subnet.SubnetId --output text)
      new_subnet_id_2=$(aws ec2 create-subnet --vpc-id $vpc_id --availability-zone $az_2 --cidr-block 192.168.1.32/27 \
          --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=my-eks-custom-networking-vpc-PrivateSubnet02},{Key=kubernetes.io/role/internal-elb,Value=1}]' \
          --query Subnet.SubnetId --output text)
      ```
**Important**  
By default, your new subnets are implicitly associated with your VPC’s [main route table](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html#RouteTables). This route table allows communication between all the resources that are deployed in the VPC. However, it doesn’t allow communication with resources that have IP addresses that are outside the CIDR blocks that are associated with your VPC. You can associate your own route table to your subnets to change this behavior. For more information, see [Subnet route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html#subnet-route-tables) in the Amazon VPC User Guide.

   1. View the current subnets in your VPC.

      ```
      aws ec2 describe-subnets --filters "Name=vpc-id,Values=$vpc_id" \
          --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \
          --output table
      ```

      An example output is as follows.

      ```
      ----------------------------------------------------------------------
      |                           DescribeSubnets                          |
      +------------------+--------------------+----------------------------+
      | AvailabilityZone |     CidrBlock      |         SubnetId           |
      +------------------+--------------------+----------------------------+
      |  us-west-2d      |  192.168.0.0/27    |     subnet-example1        |
      |  us-west-2a      |  192.168.0.32/27   |     subnet-example2        |
      |  us-west-2a      |  192.168.0.64/27   |     subnet-example3        |
      |  us-west-2d      |  192.168.0.96/27   |     subnet-example4        |
      |  us-west-2a      |  192.168.1.0/27    |     subnet-example5        |
      |  us-west-2d      |  192.168.1.32/27   |     subnet-example6        |
      +------------------+--------------------+----------------------------+
      ```

      You can see that the subnets in the `192.168.1.0` CIDR block that you created are in the same Availability Zones as the subnets in the `192.168.0.0` CIDR block.
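
      The `/27` values used in the `create-subnet` commands can be derived rather than hand-picked. A sketch with Python's `ipaddress` module, using this tutorial's new `/24` block:

      ```python
      import ipaddress

      # Carve the newly associated /24 into /27 subnets (32 addresses each)
      new_block = ipaddress.ip_network("192.168.1.0/24")
      subnets = list(new_block.subnets(new_prefix=27))
      print(len(subnets))            # number of /27 subnets that fit
      print(subnets[0], subnets[1])  # the two blocks used in this tutorial
      ```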

## Step 3: Configure Kubernetes resources
<a name="custom-networking-configure-kubernetes"></a>

1. Set the `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG` environment variable to `true` in the `aws-node` DaemonSet.

   ```
   kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
   ```

1. Retrieve the ID of your [cluster security group](sec-group-reqs.md) and store it in a variable for use in the next step. Amazon EKS automatically creates this security group when you create your cluster.

   ```
   cluster_security_group_id=$(aws eks describe-cluster --name my-custom-networking-cluster --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text)
   ```

1.  Create an `ENIConfig` custom resource for each subnet that you want to deploy Pods in.

   1. Create a unique file for each network interface configuration.

      The following commands create separate `ENIConfig` files for the two subnets that were created in a previous step. The value for `name` must be unique. The name is the same as the Availability Zone that the subnet is in. The cluster security group is assigned to the `ENIConfig`.

      ```
      cat >$az_1.yaml <<EOF
      apiVersion: crd.k8s.amazonaws.com/v1alpha1
      kind: ENIConfig
      metadata:
        name: $az_1
      spec:
        securityGroups:
          - $cluster_security_group_id
        subnet: $new_subnet_id_1
      EOF
      ```

      ```
      cat >$az_2.yaml <<EOF
      apiVersion: crd.k8s.amazonaws.com/v1alpha1
      kind: ENIConfig
      metadata:
        name: $az_2
      spec:
        securityGroups:
          - $cluster_security_group_id
        subnet: $new_subnet_id_2
      EOF
      ```
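
      When a cluster spans many subnets, rendering these manifests in a loop avoids copy-paste drift. A hedged sketch; the zone-to-subnet mapping and security group ID below are placeholders for illustration, not values from your account:

      ```python
      def eni_config(name, subnet_id, security_group):
          # Render one ENIConfig manifest as a YAML string
          return (
              "apiVersion: crd.k8s.amazonaws.com/v1alpha1\n"
              "kind: ENIConfig\n"
              "metadata:\n"
              f"  name: {name}\n"
              "spec:\n"
              "  securityGroups:\n"
              f"    - {security_group}\n"
              f"  subnet: {subnet_id}\n"
          )

      # Placeholder values for illustration only
      subnets = {"us-west-2a": "subnet-example5", "us-west-2d": "subnet-example6"}
      for zone, subnet_id in subnets.items():
          print(eni_config(zone, subnet_id, "sg-EXAMPLE"))
      ```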

      For a production cluster, you can make the following changes to the previous commands:
      + Replace `$cluster_security_group_id` with the ID of an existing [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html) that you want to use for each `ENIConfig`.
      + We recommend naming your `ENIConfigs` the same as the Availability Zone that you’ll use the `ENIConfig` for, whenever possible. You might need to use different names for your `ENIConfigs` than the names of the Availability Zones for a variety of reasons. For example, if you have more than two subnets in the same Availability Zone and want to use them both with custom networking, then you need multiple `ENIConfigs` for the same Availability Zone. Since each `ENIConfig` requires a unique name, you can’t name more than one of your `ENIConfigs` using the Availability Zone name.

        If your `ENIConfig` names aren’t all the same as Availability Zone names, then replace `$az_1` and `$az_2` with your own names in the previous commands and [annotate your nodes with the ENIConfig](#custom-networking-annotate-eniconfig) later in this tutorial.
**Note**  
If you don’t specify a valid security group for use with a production cluster and you’re using:
      + version `1.8.0` or later of the Amazon VPC CNI plugin for Kubernetes, then the security groups associated with the node’s primary elastic network interface are used.
      + a version of the Amazon VPC CNI plugin for Kubernetes that’s earlier than `1.8.0`, then the default security group for the VPC is assigned to secondary network interfaces.
**Important**  
 `AWS_VPC_K8S_CNI_EXTERNALSNAT=false` is a default setting in the configuration for the Amazon VPC CNI plugin for Kubernetes. If you’re using the default setting, then traffic that is destined for IP addresses that aren’t within one of the CIDR blocks associated with your VPC use the security groups and subnets of your node’s primary network interface. The subnets and security groups defined in your `ENIConfigs` that are used to create secondary network interfaces aren’t used for this traffic. For more information about this setting, see [Enable outbound internet access for Pods](external-snat.md).
If you also use security groups for Pods, the security group that’s specified in a `SecurityGroupPolicy` is used instead of the security group that’s specified in the `ENIConfigs`. For more information, see [Assign security groups to individual Pods](security-groups-for-pods.md).

   1. Apply each custom resource file that you created to your cluster with the following commands.

      ```
      kubectl apply -f $az_1.yaml
      kubectl apply -f $az_2.yaml
      ```

1. Confirm that your `ENIConfigs` were created.

   ```
   kubectl get ENIConfigs
   ```

   An example output is as follows.

   ```
   NAME         AGE
   us-west-2a   117s
   us-west-2d   105s
   ```

1. If you’re enabling custom networking on a production cluster and named your `ENIConfigs` something other than the Availability Zone that you’re using them for, then skip to the [next step](#custom-networking-deploy-nodes) to deploy Amazon EC2 nodes.

   Enable Kubernetes to automatically apply the `ENIConfig` for an Availability Zone to any new Amazon EC2 nodes created in your cluster.

   1. For the test cluster in this tutorial, skip to the [next step](#custom-networking-automatically-apply-eniconfig).

      For a production cluster, check to see if an annotation with the key `k8s.amazonaws.com/eniConfig` for the [ENI_CONFIG_ANNOTATION_DEF](https://github.com/aws/amazon-vpc-cni-k8s#eni_config_annotation_def) environment variable exists in the container spec for the `aws-node` DaemonSet.

      ```
      kubectl describe daemonset aws-node -n kube-system | grep ENI_CONFIG_ANNOTATION_DEF
      ```

      If output is returned, the annotation exists. If no output is returned, then the variable is not set. For a production cluster, you can use either this setting or the setting in the following step. If you use this setting, it overrides the setting in the following step. In this tutorial, the setting in the next step is used.

   1.  Update your `aws-node` DaemonSet to automatically apply the `ENIConfig` for an Availability Zone to any new Amazon EC2 nodes created in your cluster.

      ```
      kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
      ```

## Step 4: Deploy Amazon EC2 nodes
<a name="custom-networking-deploy-nodes"></a>

1. Create a node IAM role.

   1. Run the following command to create an IAM trust policy file named `node-role-trust-relationship.json`.

      ```
      cat >node-role-trust-relationship.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      EOF
      ```

   1. Create an IAM role and store its returned Amazon Resource Name (ARN) in a variable for use in a later step.

      ```
      node_role_arn=$(aws iam create-role --role-name myCustomNetworkingNodeRole --assume-role-policy-document file://"node-role-trust-relationship.json" \
          --query Role.Arn --output text)
      ```

   1. Attach three required IAM managed policies to the IAM role.

      ```
      aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
        --role-name myCustomNetworkingNodeRole
      aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
        --role-name myCustomNetworkingNodeRole
      aws iam attach-role-policy \
          --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
          --role-name myCustomNetworkingNodeRole
      ```
**Important**  
For simplicity in this tutorial, the [AmazonEKS_CNI_Policy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEKS_CNI_Policy.html) policy is attached to the node IAM role. In a production cluster, however, we recommend attaching the policy to a separate IAM role that is used only with the Amazon VPC CNI plugin for Kubernetes. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

1. Create one of the following types of node groups. To determine the instance type that you want to deploy, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md). For this tutorial, complete the **Managed**, **Without a launch template or with a launch template without an AMI ID specified** option. If you’re going to use the node group for production workloads, then we recommend that you familiarize yourself with all of the [managed node group](create-managed-node-group.md) and [self-managed node group](worker.md) options before deploying the node group.
   +  **Managed** – Deploy your node group using one of the following options:
     +  **Without a launch template or with a launch template without an AMI ID specified** – Run the following command. For this tutorial, use the example values. For a production node group, replace all example values with your own. The node group name can’t be longer than 63 characters. It must start with letter or digit, but can also include hyphens and underscores for the remaining characters.

       ```
       aws eks create-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup \
           --subnets $subnet_id_1 $subnet_id_2 --instance-types t3.medium --node-role $node_role_arn
       ```
     +  **With a launch template with a specified AMI ID** 

       1. Determine the maximum number of Pods for your nodes based on your instance type. For more information, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence). Note the value for use in the next step.

       1. In your launch template, specify an Amazon EKS optimized AMI ID, or a custom AMI built off the Amazon EKS optimized AMI, then [deploy the node group using a launch template](launch-templates.md) and provide the following user data in the launch template. This user data passes arguments into the `NodeConfig` specification. For more information about NodeConfig, see the [NodeConfig API reference](https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/#nodeconfig). You can replace `20` with either the value from the previous step (recommended) or your own value.

           ```
           MIME-Version: 1.0
           Content-Type: multipart/mixed; boundary="BOUNDARY"

           --BOUNDARY
           Content-Type: application/node.eks.aws

           ---
           apiVersion: node.eks.aws/v1alpha1
           kind: NodeConfig
           spec:
             cluster:
               name: my-cluster
               ...
             kubelet:
               config:
                 maxPods: 20

           --BOUNDARY--
           ```

           If you’ve created a custom AMI that is not built off the Amazon EKS optimized AMI, then you need to create this configuration yourself.
   +  **Self-managed** 

     1. Determine the maximum number of Pods for your nodes based on your instance type. For more information, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence). Note the value for use in the next step.

     1. Deploy the node group using the instructions in [Create self-managed Amazon Linux nodes](launch-workers.md).
**Note**  
If you want nodes in a production cluster to support a significantly higher number of Pods, you can enable prefix delegation. For example, with prefix delegation enabled, an `m5.large` instance type supports up to `110` Pods. For instructions on how to enable this capability, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md). You can use this capability with custom networking.

1. Node group creation takes several minutes. You can check the status of the creation of a managed node group with the following command.

   ```
   aws eks describe-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup --query nodegroup.status --output text
   ```

   Don’t continue to the next step until the output returned is `ACTIVE`.

1.  For the tutorial, you can skip this step.

   For a production cluster, if you didn’t name your `ENIConfigs` the same as the Availability Zone that you’re using them for, then you must annotate your nodes with the `ENIConfig` name that should be used with the node. This step isn’t necessary if you only have one subnet in each Availability Zone and you named your `ENIConfigs` with the same names as your Availability Zones. This is because the Amazon VPC CNI plugin for Kubernetes automatically associates the correct `ENIConfig` with the node for you when you enabled it to do so in a [previous step](#custom-networking-automatically-apply-eniconfig).

   1. Get the list of nodes in your cluster.

      ```
      kubectl get nodes
      ```

      An example output is as follows.

      ```
      NAME                                          STATUS   ROLES    AGE     VERSION
      ip-192-168-0-126.us-west-2.compute.internal   Ready    <none>   8m49s   v1.22.9-eks-810597c
      ip-192-168-0-92.us-west-2.compute.internal    Ready    <none>   8m34s   v1.22.9-eks-810597c
      ```

   1. Determine which Availability Zone each node is in. Run the following command for each node that was returned in the previous step, replacing the IP addresses based on the previous output.

      ```
      aws ec2 describe-instances --filters Name=network-interface.private-dns-name,Values=ip-192-168-0-126.us-west-2.compute.internal \
      --query 'Reservations[].Instances[].{AvailabilityZone: Placement.AvailabilityZone, SubnetId: SubnetId}'
      ```

      An example output is as follows.

      ```
      [
          {
              "AvailabilityZone": "us-west-2d",
              "SubnetId": "subnet-Example5"
          }
      ]
      ```

   1. Annotate each node with the `ENIConfig` that you created for the subnet ID and Availability Zone. You can only annotate a node with one `ENIConfig`, though multiple nodes can be annotated with the same `ENIConfig`. Replace the example values with your own.

      ```
      kubectl annotate node ip-192-168-0-126.us-west-2.compute.internal k8s.amazonaws.com/eniConfig=EniConfigName1
      kubectl annotate node ip-192-168-0-92.us-west-2.compute.internal k8s.amazonaws.com/eniConfig=EniConfigName2
      ```

1.  If you had nodes in a production cluster with running Pods before you switched to using the custom networking feature, complete the following tasks:

   1. Make sure that you have available nodes that are using the custom networking feature.

   1. Cordon and drain the nodes to gracefully shut down the Pods. For more information, see [Safely Drain a Node](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) in the Kubernetes documentation.

   1. Terminate the nodes. If the nodes are in an existing managed node group, you can delete the node group. Run the following command.

      ```
      aws eks delete-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup
      ```

   Only new nodes that are registered with the `k8s.amazonaws.com/eniConfig` label use the custom networking feature.

1. Confirm that Pods are assigned an IP address from a CIDR block that’s associated to one of the subnets that you created in a previous step.

   ```
   kubectl get pods -A -o wide
   ```

   An example output is as follows.

   ```
   NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE     IP              NODE                                          NOMINATED NODE   READINESS GATES
   kube-system   aws-node-2rkn4             1/1     Running   0          7m19s   192.168.0.92    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
   kube-system   aws-node-k96wp             1/1     Running   0          7m15s   192.168.0.126   ip-192-168-0-126.us-west-2.compute.internal   <none>           <none>
   kube-system   coredns-657694c6f4-smcgr   1/1     Running   0          56m     192.168.1.23    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
   kube-system   coredns-657694c6f4-stwv9   1/1     Running   0          56m     192.168.1.28    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
   kube-system   kube-proxy-jgshq           1/1     Running   0          7m19s   192.168.0.92    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
   kube-system   kube-proxy-wx9vk           1/1     Running   0          7m15s   192.168.0.126   ip-192-168-0-126.us-west-2.compute.internal   <none>           <none>
   ```

   You can see that the coredns Pods are assigned IP addresses from the `192.168.1.0` CIDR block that you added to your VPC. Without custom networking, they would have been assigned addresses from the `192.168.0.0` CIDR block, because it was the only CIDR block originally associated with the VPC.

   If a Pod’s `spec` contains `hostNetwork=true`, it’s assigned the primary IP address of the node. It isn’t assigned an address from the subnets that you added. By default, this value is set to `false`. This value is set to `true` for the `kube-proxy` and Amazon VPC CNI plugin for Kubernetes (`aws-node`) Pods that run on your cluster. This is why the `kube-proxy` and the plugin’s `aws-node` Pods aren’t assigned 192.168.1.x addresses in the previous output. For more information about a Pod’s `hostNetwork` setting, see [PodSpec v1 core](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podspec-v1-core) in the Kubernetes API reference.
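
   The membership check described above is easy to script. A sketch using Python's `ipaddress` module, with Pod IPs taken from the example output:

   ```python
   import ipaddress

   custom_block = ipaddress.ip_network("192.168.1.0/24")  # the added VPC CIDR
   pod_ips = {
       "coredns-657694c6f4-smcgr": "192.168.1.23",  # regular Pod
       "aws-node-2rkn4": "192.168.0.92",            # hostNetwork Pod keeps the node IP
   }

   for name, ip in pod_ips.items():
       where = "custom CIDR" if ipaddress.ip_address(ip) in custom_block else "node primary CIDR"
       print(f"{name}: {where}")
   ```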

## Step 5: Delete tutorial resources
<a name="custom-network-delete-resources"></a>

After you complete the tutorial, we recommend that you delete the resources that you created. You can then adjust the steps to enable custom networking for a production cluster.

1. If the node group that you created was just for testing, then delete it.

   ```
   aws eks delete-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup
   ```

1. Even after the AWS CLI output says that the node group is deleted, the delete process might not actually be complete. The delete process takes a few minutes. Confirm that it’s complete by running the following command.

   ```
   aws eks describe-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup --query nodegroup.status --output text
   ```

   Don’t continue until the returned output is similar to the following output.

   ```
   An error occurred (ResourceNotFoundException) when calling the DescribeNodegroup operation: No node group found for name: my-nodegroup.
   ```

1. If the node group that you created was just for testing, then delete the node IAM role.

   1. Detach the policies from the role.

      ```
      aws iam detach-role-policy --role-name myCustomNetworkingNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
      aws iam detach-role-policy --role-name myCustomNetworkingNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      aws iam detach-role-policy --role-name myCustomNetworkingNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
      ```

   1. Delete the role.

      ```
      aws iam delete-role --role-name myCustomNetworkingNodeRole
      ```

1. Delete the cluster.

   ```
   aws eks delete-cluster --name my-custom-networking-cluster
   ```

   Confirm the cluster is deleted with the following command.

   ```
   aws eks describe-cluster --name my-custom-networking-cluster --query cluster.status --output text
   ```

   When output similar to the following is returned, the cluster is successfully deleted.

   ```
   An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: my-custom-networking-cluster.
   ```

1. Delete the cluster IAM role.

   1. Detach the policies from the role.

      ```
      aws iam detach-role-policy --role-name myCustomNetworkingAmazonEKSClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
      ```

   1. Delete the role.

      ```
      aws iam delete-role --role-name myCustomNetworkingAmazonEKSClusterRole
      ```

1. Delete the subnets that you created in a previous step.

   ```
   aws ec2 delete-subnet --subnet-id $new_subnet_id_1
   aws ec2 delete-subnet --subnet-id $new_subnet_id_2
   ```

1. Delete the VPC that you created.

   ```
   aws cloudformation delete-stack --stack-name my-eks-custom-networking-vpc
   ```

# Assign more IP addresses to Amazon EKS nodes with prefixes
<a name="cni-increase-ip-addresses"></a>

 **Applies to**: Linux and Windows nodes with Amazon EC2 instances

 **Applies to**: Public and private subnets

Each Amazon EC2 instance supports a maximum number of elastic network interfaces and a maximum number of IP addresses that can be assigned to each network interface. Each node requires one IP address for each network interface. All other available IP addresses can be assigned to `Pods`. Each `Pod` requires its own IP address. As a result, you might have nodes that have available compute and memory resources, but can’t accommodate additional `Pods` because the node has run out of IP addresses to assign to `Pods`.

You can increase the number of IP addresses that nodes can assign to `Pods` by assigning IP prefixes, rather than assigning individual secondary IP addresses to your nodes. Each prefix includes several IP addresses. If you don’t configure your cluster for IP prefix assignment, your cluster must make more Amazon EC2 application programming interface (API) calls to configure network interfaces and IP addresses necessary for Pod connectivity. As clusters grow to larger sizes, the frequency of these API calls can lead to longer Pod and instance launch times. This results in scaling delays to meet the demand of large and spiky workloads, and adds cost and management overhead because you need to provision additional clusters and VPCs to meet scaling requirements. For more information, see [Kubernetes Scalability thresholds](https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md) on GitHub.
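To make the arithmetic concrete, the commonly published max-Pods formula can be sketched as follows. The per-instance values for `m5.large` (3 network interfaces, 10 IPv4 addresses each) and the 110/250 Pod caps are assumptions drawn from the Amazon EKS recommendations, not values this guide computes for you:

```python
def max_pods(enis, ips_per_eni, prefix_delegation=False, vcpus=2):
    # Each network interface reserves one address for itself; with prefix
    # delegation each remaining slot holds a /28 prefix (16 addresses).
    per_slot = 16 if prefix_delegation else 1
    pods = enis * (ips_per_eni - 1) * per_slot + 2  # +2 for host-network Pods
    if prefix_delegation:
        # Recommended caps: 110 Pods (< 30 vCPUs) or 250 Pods (>= 30 vCPUs)
        pods = min(pods, 110 if vcpus < 30 else 250)
    return pods

print(max_pods(3, 10))                          # m5.large, secondary IPs
print(max_pods(3, 10, prefix_delegation=True))  # m5.large, prefix delegation
```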

## Compatibility with Amazon VPC CNI plugin for Kubernetes features
<a name="cni-increase-ip-addresses-compatability"></a>

You can use IP prefixes with the following features:
+ IPv4 Source Network Address Translation - For more information, see [Enable outbound internet access for Pods](external-snat.md).
+ IPv6 addresses to clusters, Pods, and services - For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).
+ Restricting traffic using Kubernetes network policies - For more information, see [Limit Pod traffic with Kubernetes network policies](cni-network-policy.md).

The following list provides information about the Amazon VPC CNI plugin settings that apply. For more information about each setting, see [amazon-vpc-cni-k8s](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/README.md) on GitHub.
+  `WARM_IP_TARGET` 
+  `MINIMUM_IP_TARGET` 
+  `WARM_PREFIX_TARGET` 

## Considerations
<a name="cni-increase-ip-addresses-considerations"></a>

Consider the following when you use this feature:
+ Each Amazon EC2 instance type supports a maximum number of Pods. If your managed node group consists of multiple instance types, the smallest number of maximum Pods for an instance in the cluster is applied to all nodes in the cluster.
+ By default, the maximum number of `Pods` that you can run on a node is 110, but you can change that number. If you change the number and have an existing managed node group, the next AMI or launch template update of your node group results in new nodes coming up with the changed value.
+ When transitioning from assigning IP addresses to assigning IP prefixes, we recommend that you create new node groups to increase the number of available IP addresses, rather than doing a rolling replacement of existing nodes. Running Pods on a node that has both IP addresses and prefixes assigned can lead to inconsistency in the advertised IP address capacity, impacting the future workloads on the node. For the recommended way of performing the transition, see [Prefix Delegation mode for Linux](https://docs.aws.amazon.com/eks/latest/best-practices/prefix-mode-linux.html) in the *Amazon EKS Best Practices Guide*.
+ The security group scope is at the node-level - For more information, see [Security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html).
+ IP prefixes assigned to a network interface support high Pod density per node and have the best launch time.
+ IP prefixes and IP addresses are associated with standard Amazon EC2 elastic network interfaces. Pods that require specific security groups are assigned the primary IP address of a branch network interface. You can mix Pods that get IP addresses (or IP addresses from IP prefixes) with Pods that get branch network interfaces on the same node.
+ For clusters with Linux nodes only.
  + After you configure the add-on to assign prefixes to network interfaces, you can’t downgrade your Amazon VPC CNI plugin for Kubernetes add-on to a version lower than `1.9.0` (or `1.10.1`) without removing all nodes in all node groups in your cluster.
  + If you’re also using security groups for Pods, with `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard` and `AWS_VPC_K8S_CNI_EXTERNALSNAT`=`false`, when your Pods communicate with endpoints outside of your VPC, the node’s security groups are used, rather than any security groups you’ve assigned to your Pods.

    If you’re also using [security groups for Pods](security-groups-for-pods.md), with `POD_SECURITY_GROUP_ENFORCING_MODE`=`strict`, when your `Pods` communicate with endpoints outside of your VPC, the `Pod’s` security groups are used.

# Increase the available IP addresses for your Amazon EKS node
<a name="cni-increase-ip-addresses-procedure"></a>

You can increase the number of IP addresses that nodes can assign to Pods by assigning IP prefixes, rather than assigning individual secondary IP addresses to your nodes.

## Prerequisites
<a name="_prerequisites"></a>
+ You need an existing cluster. To deploy one, see [Create an Amazon EKS cluster](create-cluster.md).
+ The subnets that your Amazon EKS nodes are in must have sufficient contiguous `/28` (for `IPv4` clusters) or `/80` (for `IPv6` clusters) Classless Inter-Domain Routing (CIDR) blocks. You can only have Linux nodes in an `IPv6` cluster. Using IP prefixes can fail if IP addresses are scattered throughout the subnet CIDR. We recommend the following:
  + Use a subnet CIDR reservation so that even if any IP addresses within the reserved range are still in use, the IP addresses aren’t reassigned to other resources when they’re released. This ensures that prefixes are available for allocation without fragmentation.
  + Use new subnets that are specifically used for running the workloads that IP prefixes are assigned to. Both Windows and Linux workloads can run in the same subnet when assigning IP prefixes.
+ To assign IP prefixes to your nodes, your nodes must be AWS Nitro-based. Instances that aren’t Nitro-based continue to allocate individual secondary IP addresses, but have a significantly lower number of IP addresses to assign to Pods than Nitro-based instances do.
+  **For clusters with Linux nodes only** – If your cluster is configured for the `IPv4` family, you must have version `1.9.0` or later of the Amazon VPC CNI plugin for Kubernetes add-on installed. You can check your current version with the following command.

  ```
  kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
  ```

  If your cluster is configured for the `IPv6` family, you must have version `1.10.1` or later of the add-on installed. If your plugin version is earlier than the required versions, you must update it. For more information, see the updating sections of [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).
+  **For clusters with Windows nodes only** 
  + You must have Windows support enabled for your cluster. For more information, see [Deploy Windows nodes on EKS clusters](windows-support.md).
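If you want to set aside contiguous space before enabling prefix assignment, the following is a hedged sketch: it checks locally that a candidate `IPv4` range starts on a `/28` boundary (prefixes are `/28`, so the range must begin at a multiple of 16), and shows, commented out with hypothetical IDs, the `aws ec2 create-subnet-cidr-reservation` call that would reserve it.

```shell
# Sketch: check that a candidate IPv4 range to reserve is aligned to /28
# boundaries. The CIDR and subnet ID below are hypothetical placeholders.
cidr="192.168.64.0/26"
ip=${cidr%/*}
len=${cidr#*/}
IFS=. read -r o1 o2 o3 o4 <<EOF
$ip
EOF
num=$(( (o1 << 24) + (o2 << 16) + (o3 << 8) + o4 ))
if [ $(( num % 16 )) -eq 0 ] && [ "$len" -le 28 ]; then
  echo "ok: $cidr starts on a /28 boundary"
fi

# Reserve the range so that, as addresses in it are released, they're held
# for prefix assignment rather than handed out individually:
# aws ec2 create-subnet-cidr-reservation \
#   --subnet-id subnet-0123456789abcdef0 \
#   --reservation-type prefix \
#   --cidr "$cidr"
```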

## Assign IP address prefixes to nodes
<a name="cni-increase-ip-procedure"></a>

Configure your cluster to assign IP address prefixes to nodes. Complete the procedure that matches your node’s operating system.

### Linux
<a name="_linux"></a>

1. Enable the parameter to assign prefixes to network interfaces for the Amazon VPC CNI DaemonSet. When you deploy a cluster, version `1.10.1` or later of the Amazon VPC CNI plugin for Kubernetes add-on is deployed with it. If you created the cluster with the `IPv6` family, this setting was set to `true` by default. If you created the cluster with the `IPv4` family, this setting was set to `false` by default.

   ```
   kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
   ```
**Important**  
Even if your subnet has available IP addresses, if the subnet does not have any contiguous `/28` blocks available, you will see the following error in the Amazon VPC CNI plugin for Kubernetes logs.  

   ```
   InsufficientCidrBlocks: The specified subnet does not have enough free cidr blocks to satisfy the request
   ```
This can happen due to fragmentation of existing secondary IP addresses spread out across a subnet. To resolve this error, either create a new subnet and launch Pods there, or use an Amazon EC2 subnet CIDR reservation to reserve space within a subnet for use with prefix assignment. For more information, see [Subnet CIDR reservations](https://docs.aws.amazon.com/vpc/latest/userguide/subnet-cidr-reservation.html) in the Amazon VPC User Guide.

1. If you plan to deploy a managed node group without a launch template, or with a launch template that doesn’t specify an AMI ID, and you’re using a version of the Amazon VPC CNI plugin for Kubernetes at or later than the versions listed in the prerequisites, then skip to the next step. Managed node groups automatically calculate the maximum number of Pods for you.

   If you’re deploying a self-managed node group or a managed node group with a launch template that you have specified an AMI ID in, then you must set the maximum number of Pods for your nodes. For more information about how to determine the appropriate value, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).
**Important**  
Managed node groups enforce a maximum value for `maxPods`. For instances with fewer than 30 vCPUs, the maximum is 110, and for all other instances the maximum is 250. This maximum is applied whether prefix delegation is enabled or not.

1. If you’re using a cluster configured for `IPv6`, skip to the next step.

   Specify the parameters in one of the following options. To determine which option is right for you and what value to provide for it, see [`WARM_PREFIX_TARGET`, `WARM_IP_TARGET`, and `MINIMUM_IP_TARGET`](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/prefix-and-ip-target.md) on GitHub.

   You can replace the example values with a value greater than zero.
   +  `WARM_PREFIX_TARGET` 

     ```
     kubectl set env ds aws-node -n kube-system WARM_PREFIX_TARGET=1
     ```
   +  `WARM_IP_TARGET` or `MINIMUM_IP_TARGET` – If either value is set, it overrides any value set for `WARM_PREFIX_TARGET`.

     ```
     kubectl set env ds aws-node -n kube-system WARM_IP_TARGET=5
     ```

     ```
     kubectl set env ds aws-node -n kube-system MINIMUM_IP_TARGET=2
     ```

1. Create one of the following types of node groups with at least one Amazon EC2 Nitro Amazon Linux 2023 instance type. For a list of Nitro instance types, see [Instances built on the Nitro System](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) in the Amazon EC2 User Guide. This capability is not supported on Windows. For the options that include *110*, replace it with either the value from step 3 (recommended), or your own value.
   +  **Self-managed** – Deploy the node group using the instructions in [Create self-managed Amazon Linux nodes](launch-workers.md). Before creating the CloudFormation stack, open the template file and adjust the `UserData` in the `NodeLaunchTemplate` to be like the following.

     ```
     ...
                 apiVersion: node.eks.aws/v1alpha1
                 kind: NodeConfig
                 spec:
                   cluster:
                     name: ${ClusterName}
                     apiServerEndpoint: ${ApiServerEndpoint}
                     certificateAuthority: ${CertificateAuthorityData}
                     cidr: ${ServiceCidr}
                   kubelet:
                     config:
                       maxPods: 110
     ...
     ```

     If you’re using `eksctl` to create the node group, you can use the following command.

     ```
     eksctl create nodegroup --cluster my-cluster --managed=false --max-pods-per-node 110
     ```
   +  **Managed** – Deploy your node group using one of the following options:
     +  **Without a launch template or with a launch template without an AMI ID specified** – Complete the procedure in [Create a managed node group for your cluster](create-managed-node-group.md). Managed node groups automatically calculates the Amazon EKS recommended `max-pods` value for you.
     +  **With a launch template with a specified AMI ID** – In your launch template, specify an Amazon EKS optimized AMI ID, or a custom AMI built off the Amazon EKS optimized AMI, then [deploy the node group using a launch template](launch-templates.md) and provide the following user data in the launch template. This user data passes a `NodeConfig` object to be read by the `nodeadm` tool on the node. For more information about `nodeadm`, see [the nodeadm documentation](https://awslabs.github.io/amazon-eks-ami/nodeadm).

       ```
       MIME-Version: 1.0
       Content-Type: multipart/mixed; boundary="//"
       
       --//
       Content-Type: application/node.eks.aws
       
       ---
       apiVersion: node.eks.aws/v1alpha1
       kind: NodeConfig
       spec:
        cluster:
           apiServerEndpoint: https://api-server-endpoint.region-code.eks.amazonaws.com
           certificateAuthority: LS0t...
           cidr: 10.100.0.0/16
           name: my-cluster
         kubelet:
           config:
             maxPods: 110
       --//--
       ```

       If you’re using `eksctl` to create the node group, you can use the following command.

       ```
       eksctl create nodegroup --cluster my-cluster --max-pods-per-node 110
       ```

        If you’ve created a custom AMI that is not built off the Amazon EKS optimized AMI, then you need to create the configuration yourself.
**Note**  
If you also want to assign IP addresses to Pods from a different subnet than the instance’s, then you need to enable the capability in this step. For more information, see [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md).
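The `max-pods` values used in the steps above follow a published formula. As a hedged sketch only (the instance figures in the comments are examples; use [How maxPods is determined](choosing-instance-type.md#max-pods-precedence) for authoritative values), the calculation mirrors the logic of the public max-pods-calculator script:

```shell
# Hedged sketch of how a recommended max-pods value is commonly derived.
max_pods() {
  enis=$1; ips_per_eni=$2; vcpus=$3; prefix_delegation=$4
  cap=250
  if [ "$vcpus" -lt 30 ]; then cap=110; fi   # ceiling enforced for managed node groups
  if [ "$prefix_delegation" = "true" ]; then
    # with prefix delegation, each secondary IP slot holds a /28 = 16 addresses
    n=$(( enis * (ips_per_eni - 1) * 16 + 2 ))
  else
    n=$(( enis * (ips_per_eni - 1) + 2 ))
  fi
  if [ "$n" -gt "$cap" ]; then n=$cap; fi
  echo "$n"
}

# Example figures for an m5.large (3 ENIs, 10 IPv4 addresses per ENI, 2 vCPUs):
max_pods 3 10 2 false   # prints 29 without prefix delegation
max_pods 3 10 2 true    # 434 raw, capped to 110
```

This also shows why prefix delegation matters: the per-ENI address budget is multiplied by 16, so the cap, not the ENI count, becomes the binding limit.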

### Windows
<a name="_windows"></a>

1. Enable assignment of IP prefixes.

   1. Open the `amazon-vpc-cni` `ConfigMap` for editing.

      ```
      kubectl edit configmap -n kube-system amazon-vpc-cni -o yaml
      ```

   1. Add the following line to the `data` section.

      ```
        enable-windows-prefix-delegation: "true"
      ```

   1. Save the file and close the editor.

   1. Confirm that the line was added to the `ConfigMap`.

      ```
      kubectl get configmap -n kube-system amazon-vpc-cni -o "jsonpath={.data.enable-windows-prefix-delegation}"
      ```

      If the returned output isn’t `true`, then there might have been an error. Try completing the step again.
**Important**  
Even if your subnet has available IP addresses, if the subnet does not have any contiguous `/28` blocks available, you will see the following error in the Amazon VPC CNI plugin for Kubernetes logs.  

      ```
      InsufficientCidrBlocks: The specified subnet does not have enough free cidr blocks to satisfy the request
      ```
This can happen due to fragmentation of existing secondary IP addresses spread out across a subnet. To resolve this error, either create a new subnet and launch Pods there, or use an Amazon EC2 subnet CIDR reservation to reserve space within a subnet for use with prefix assignment. For more information, see [Subnet CIDR reservations](https://docs.aws.amazon.com/vpc/latest/userguide/subnet-cidr-reservation.html) in the Amazon VPC User Guide.

1. (Optional) Specify additional configuration for controlling the pre-scaling and dynamic scaling behavior for your cluster. For more information, see [Configuration options with Prefix Delegation mode on Windows](https://github.com/aws/amazon-vpc-resource-controller-k8s/blob/master/docs/windows/prefix_delegation_config_options.md) on GitHub.

   1. Open the `amazon-vpc-cni` `ConfigMap` for editing.

      ```
      kubectl edit configmap -n kube-system amazon-vpc-cni -o yaml
      ```

   1. Replace the example values with a value greater than zero and add the entries that you require to the `data` section of the `ConfigMap`. If you set a value for either `warm-ip-target` or `minimum-ip-target`, the value overrides any value set for `warm-prefix-target`.

      ```
        warm-prefix-target: "1"
        warm-ip-target: "5"
        minimum-ip-target: "2"
      ```

   1. Save the file and close the editor.

1. Create Windows node groups with at least one Amazon EC2 Nitro instance type. For a list of Nitro instance types, see [Instances built on the Nitro System](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instance-types.html#ec2-nitro-instances) in the Amazon EC2 User Guide. By default, the maximum number of Pods that you can deploy to a node is 110. If you want to increase or decrease that number, specify the following in the user data for the bootstrap configuration. Replace *max-pods-quantity* with your max pods value.

   ```
   -KubeletExtraArgs '--max-pods=max-pods-quantity'
   ```

   If you’re deploying managed node groups, this configuration needs to be added in the launch template. For more information, see [Customize managed nodes with launch templates](launch-templates.md). For more information about the configuration parameters for Windows bootstrap script, see [Bootstrap script configuration parameters](eks-optimized-windows-ami.md#bootstrap-script-configuration-parameters).
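If you automate cluster configuration, the interactive `kubectl edit` steps above can be replaced with a non-interactive `kubectl patch`. This is a sketch; the merge-patch keys mirror the `ConfigMap` entries shown earlier, and the cluster command is shown commented out.

```shell
# Sketch: apply the Windows prefix-delegation settings without an editor.
# Build a strategic-merge patch for the amazon-vpc-cni ConfigMap.
patch='{"data":{"enable-windows-prefix-delegation":"true","warm-prefix-target":"1"}}'

# Against a live cluster you would run:
# kubectl patch configmap amazon-vpc-cni -n kube-system --type merge -p "$patch"

# Local sanity check that the patch carries the required key:
echo "$patch" | grep -q '"enable-windows-prefix-delegation":"true"' && echo "patch ok"
```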

## Determine max Pods and available IP addresses
<a name="cni-increase-ip-verify"></a>

1. Once your nodes are deployed, view the nodes in your cluster.

   ```
   kubectl get nodes
   ```

   An example output is as follows.

   ```
   NAME                                             STATUS     ROLES    AGE   VERSION
   ip-192-168-22-103.region-code.compute.internal   Ready      <none>   19m   v1.XX.X-eks-6b7464
   ip-192-168-97-94.region-code.compute.internal    Ready      <none>   19m   v1.XX.X-eks-6b7464
   ```

1. Describe one of the nodes to determine the value of `max-pods` for the node and the number of available IP addresses. Replace *192.168.30.193* with the `IPv4` address in the name of one of your nodes returned in the previous output.

   ```
   kubectl describe node ip-192-168-30-193.region-code.compute.internal | grep 'pods\|PrivateIPv4Address'
   ```

   An example output is as follows.

   ```
   pods:                                  110
   vpc.amazonaws.com/PrivateIPv4Address:  144
   ```

   In the previous output, `110` is the maximum number of Pods that Kubernetes will deploy to the node, even though *144* IP addresses are available.

# Assign security groups to individual Pods
<a name="security-groups-for-pods"></a>

 **Applies to**: Linux nodes with Amazon EC2 instances

 **Applies to**: Private subnets

Security groups for Pods integrate Amazon EC2 security groups with Kubernetes Pods. You can use Amazon EC2 security groups to define rules that allow inbound and outbound network traffic to and from Pods that you deploy to nodes running on many Amazon EC2 instance types and Fargate. For a detailed explanation of this capability, see the [Introducing security groups for Pods](https://aws.amazon.com/blogs/containers/introducing-security-groups-for-pods) blog post.

## Compatibility with Amazon VPC CNI plugin for Kubernetes features
<a name="security-groups-for-pods-compatability"></a>

You can use security groups for Pods with the following features:
+ IPv4 Source Network Address Translation - For more information, see [Enable outbound internet access for Pods](external-snat.md).
+ IPv6 addresses to clusters, Pods, and services - For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).
+ Restricting traffic using Kubernetes network policies - For more information, see [Limit Pod traffic with Kubernetes network policies](cni-network-policy.md).

## Considerations
<a name="sg-pods-considerations"></a>

Before deploying security groups for Pods, consider the following limitations and conditions:
+ Security groups for Pods can’t be used with Windows nodes or EKS Auto Mode.
+ Security groups for Pods can be used with clusters configured for the `IPv6` family that contain Amazon EC2 nodes by using version 1.16.0 or later of the Amazon VPC CNI plugin. You can use security groups for Pods with clusters configured for the `IPv6` family that contain only Fargate nodes by using version 1.7.7 or later of the Amazon VPC CNI plugin. For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).
+ Security groups for Pods are supported by most [Nitro-based](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) Amazon EC2 instance families, though not by all generations of a family. For example, the `m5`, `c5`, `r5`, `m6g`, `c6g`, and `r6g` instance family and generations are supported. No instance types in the `t` family are supported. For a complete list of supported instance types, see the [limits.go](https://github.com/aws/amazon-vpc-resource-controller-k8s/blob/v1.5.0/pkg/aws/vpc/limits.go) file on GitHub. Your nodes must be one of the listed instance types that have `IsTrunkingCompatible: true` in that file.
+ If you’re using custom networking and security groups for Pods together, the security group specified by security groups for Pods is used instead of the security group specified in the `ENIConfig`.
+ If you’re using version `1.10.2` or earlier of the Amazon VPC CNI plugin and you include the `terminationGracePeriodSeconds` setting in your Pod spec, the value for the setting can’t be zero.
+ If you’re using version `1.10` or earlier of the Amazon VPC CNI plugin, or version `1.11` with `POD_SECURITY_GROUP_ENFORCING_MODE`=`strict`, which is the default setting, then Kubernetes services of type `NodePort` and `LoadBalancer` using instance targets with an `externalTrafficPolicy` set to `Local` aren’t supported with Pods that you assign security groups to. For more information about using a load balancer with instance targets, see [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md).
+ If you’re using version `1.10` or earlier of the Amazon VPC CNI plugin or version `1.11` with `POD_SECURITY_GROUP_ENFORCING_MODE`=`strict`, which is the default setting, source NAT is disabled for outbound traffic from Pods with assigned security groups so that outbound security group rules are applied. To access the internet, Pods with assigned security groups must be launched on nodes that are deployed in a private subnet configured with a NAT gateway or instance. Pods with assigned security groups deployed to public subnets are not able to access the internet.

  If you’re using version `1.11` or later of the plugin with `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard`, then Pod traffic destined for outside of the VPC is translated to the IP address of the instance’s primary network interface. For this traffic, the rules in the security groups for the primary network interface are used, rather than the rules in the Pod’s security groups.
+ To use Calico network policy with Pods that have associated security groups, you must use version `1.11.0` or later of the Amazon VPC CNI plugin and set `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard`. Otherwise, traffic to and from Pods with associated security groups isn’t subject to Calico network policy enforcement and is limited to Amazon EC2 security group enforcement only. To update your Amazon VPC CNI version, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).
+ Pods running on Amazon EC2 nodes that use security groups in clusters that use [NodeLocal DNSCache](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/) are only supported with version `1.11.0` or later of the Amazon VPC CNI plugin and with `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard`. To update your Amazon VPC CNI plugin version, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).
+ Security groups for Pods might lead to higher Pod startup latency for Pods with high churn. This is due to rate limiting in the resource controller.
+ The EC2 security group scope is at the Pod level. For more information, see [Security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html).

  If you set `POD_SECURITY_GROUP_ENFORCING_MODE=standard` and `AWS_VPC_K8S_CNI_EXTERNALSNAT=false`, traffic destined for endpoints outside the VPC use the node’s security groups, not the Pod’s security groups.
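Several of the considerations above hinge on the `POD_SECURITY_GROUP_ENFORCING_MODE` and `AWS_VPC_K8S_CNI_EXTERNALSNAT` settings. The following sketch shows one way to check what a cluster is currently using; the commented `kubectl` query is illustrative and the sample values are hypothetical.

```shell
# Against a live cluster you would read the aws-node DaemonSet environment:
# kubectl set env daemonset aws-node -n kube-system --list \
#   | grep -E 'POD_SECURITY_GROUP_ENFORCING_MODE|AWS_VPC_K8S_CNI_EXTERNALSNAT'
# Interpreting a sample result locally (strict is the default when unset):
env_dump='POD_SECURITY_GROUP_ENFORCING_MODE=standard
AWS_VPC_K8S_CNI_EXTERNALSNAT=false'
mode=$(printf '%s\n' "$env_dump" | sed -n 's/^POD_SECURITY_GROUP_ENFORCING_MODE=//p')
echo "enforcing mode: ${mode:-strict (default)}"
```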

# Configure the Amazon VPC CNI plugin for Kubernetes for security groups for Amazon EKS Pods
<a name="security-groups-pods-deployment"></a>

If you use Pods with Amazon EC2 instances, you need to configure the Amazon VPC CNI plugin for Kubernetes for security groups.

If you use Fargate Pods only, and don’t have any Amazon EC2 nodes in your cluster, see [Use a security group policy for an Amazon EKS Pod](sg-pods-example-deployment.md).

1. Check your current Amazon VPC CNI plugin for Kubernetes version with the following command:

   ```
   kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
   ```

   An example output is as follows.

   ```
   v1.7.6
   ```

   If your Amazon VPC CNI plugin for Kubernetes version is earlier than `1.7.7`, then update the plugin to version `1.7.7` or later. For more information, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).

1. Add the [AmazonEKSVPCResourceController](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonEKSVPCResourceController) managed IAM policy to the [cluster role](cluster-iam-role.md#create-service-role) that is associated with your Amazon EKS cluster. The policy allows the role to manage network interfaces, their private IP addresses, and their attachment to and detachment from instances.

   1. Retrieve the name of your cluster IAM role and store it in a variable. Replace *my-cluster* with the name of your cluster.

      ```
      cluster_role=$(aws eks describe-cluster --name my-cluster --query cluster.roleArn --output text | cut -d / -f 2)
      ```

   1. Attach the policy to the role.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController --role-name $cluster_role
      ```

1. Enable the Amazon VPC CNI add-on to manage network interfaces for Pods by setting the `ENABLE_POD_ENI` variable to `true` in the `aws-node` DaemonSet. After this setting is set to `true`, the add-on creates a `CNINode` custom resource for each node in the cluster. The VPC resource controller creates and attaches one special network interface called a *trunk network interface* with the description `aws-k8s-trunk-eni`.

   ```
   kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true
   ```
**Note**  
The trunk network interface is included in the maximum number of network interfaces supported by the instance type. For a list of the maximum number of network interfaces supported by each instance type, see [IP addresses per network interface per instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) in the *Amazon EC2 User Guide*. If your node already has the maximum number of standard network interfaces attached to it, the VPC resource controller reserves space for the trunk network interface. You must scale down your running Pods enough for the controller to detach and delete a standard network interface, create the trunk network interface, and attach it to the instance.

1. You can see which of your nodes have a `CNINode` custom resource with the following command. If `No resources found` is returned, then wait several seconds and try again. The previous step requires restarting the Amazon VPC CNI plugin for Kubernetes Pods, which takes several seconds.

   ```
   kubectl get cninode -A
   ```

   An example output is as follows.

   ```
   NAME                                           FEATURES
   ip-192-168-64-141.us-west-2.compute.internal   [{"name":"SecurityGroupsForPods"}]
   ip-192-168-7-203.us-west-2.compute.internal    [{"name":"SecurityGroupsForPods"}]
   ```

   If you are using VPC CNI versions older than `1.15`, node labels were used instead of the `CNINode` custom resource. You can see which of your nodes have the node label `vpc.amazonaws.com/has-trunk-attached` set to `true` with the following command. If `No resources found` is returned, then wait several seconds and try again. The previous step requires restarting the Amazon VPC CNI plugin for Kubernetes Pods, which takes several seconds.

   ```
    kubectl get nodes -o wide -l vpc.amazonaws.com/has-trunk-attached=true
   ```

   Once the trunk network interface is created, Pods are assigned secondary IP addresses from the trunk or standard network interfaces. The trunk interface is automatically deleted if the node is deleted.

   When you deploy a security group for a Pod in a later step, the VPC resource controller creates a special network interface called a *branch network interface* with a description of `aws-k8s-branch-eni` and associates the security groups to it. Branch network interfaces are created in addition to the standard and trunk network interfaces attached to the node.

   If you are using liveness or readiness probes, then you also need to disable *TCP early demux*, so that the `kubelet` can connect to Pods on branch network interfaces using TCP. To disable *TCP early demux*, run the following command:

   ```
   kubectl patch daemonset aws-node -n kube-system \
     -p '{"spec": {"template": {"spec": {"initContainers": [{"env":[{"name":"DISABLE_TCP_EARLY_DEMUX","value":"true"}],"name":"aws-vpc-cni-init"}]}}}}'
   ```
**Note**  
If you’re using `1.11.0` or later of the Amazon VPC CNI plugin for Kubernetes add-on and set `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard`, as described in the next step, then you don’t need to run the previous command.

1. If your cluster uses `NodeLocal DNSCache`, or you want to use Calico network policy with your Pods that have their own security groups, or you have Kubernetes services of type `NodePort` and `LoadBalancer` using instance targets with an `externalTrafficPolicy` set to `Local` for Pods that you want to assign security groups to, then you must be using version `1.11.0` or later of the Amazon VPC CNI plugin for Kubernetes add-on, and you must enable the following setting:

   ```
   kubectl set env daemonset aws-node -n kube-system POD_SECURITY_GROUP_ENFORCING_MODE=standard
   ```

**Important**  
Pod security group rules aren’t applied to traffic between Pods, or between Pods and services such as `kubelet` or `nodeLocalDNS`, that are on the same node. Pods using different security groups on the same node can’t communicate because they are configured in different subnets, and routing is disabled between these subnets.  
Outbound traffic from Pods to addresses outside of the VPC is network address translated to the IP address of the instance’s primary network interface (unless you’ve also set `AWS_VPC_K8S_CNI_EXTERNALSNAT`=`true`). For this traffic, the rules in the security groups for the primary network interface are used, rather than the rules in the Pod’s security groups.  
For this setting to apply to existing Pods, you must restart the Pods or the nodes that the Pods are running on.

1. To see how to use a security group policy for your Pod, see [Use a security group policy for an Amazon EKS Pod](sg-pods-example-deployment.md).

# Use a security group policy for an Amazon EKS Pod
<a name="sg-pods-example-deployment"></a>

To use security groups for Pods, you must have an existing security group. The following steps show you how to use the security group policy for a Pod. Unless otherwise noted, complete all steps from the same terminal because variables are used in the following steps that don’t persist across terminals.

If you run Pods on Amazon EC2 instances, you must configure the plugin before you use this procedure. For more information, see [Configure the Amazon VPC CNI plugin for Kubernetes for security groups for Amazon EKS Pods](security-groups-pods-deployment.md).

1. Create a Kubernetes namespace to deploy resources to. You can replace *my-namespace* with the name of a namespace that you want to use.

   ```
   kubectl create namespace my-namespace
   ```

1.  Deploy an Amazon EKS `SecurityGroupPolicy` to your cluster.

   1. Copy the following contents to your device. You can replace *podSelector* with `serviceAccountSelector` if you’d rather select Pods based on service account labels. You must specify one selector or the other. An empty `podSelector` (example: `podSelector: {}`) selects all Pods in the namespace. You can change *my-role* to the name of your role. An empty `serviceAccountSelector` selects all service accounts in the namespace. You can replace *my-security-group-policy* with a name for your `SecurityGroupPolicy` and *my-namespace* with the namespace that you want to create the `SecurityGroupPolicy` in.

      You must replace *my_pod_security_group_id* with the ID of an existing security group. If you don’t have an existing security group, then you must create one. For more information, see [Amazon EC2 security groups for Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html) in the [Amazon EC2 User Guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/). You can specify 1–5 security group IDs. If you specify more than one ID, then the combination of all the rules in all the security groups is effective for the selected Pods.

      ```
      cat >my-security-group-policy.yaml <<EOF
      apiVersion: vpcresources.k8s.aws/v1beta1
      kind: SecurityGroupPolicy
      metadata:
        name: my-security-group-policy
        namespace: my-namespace
      spec:
        podSelector:
          matchLabels:
            role: my-role
        securityGroups:
          groupIds:
            - my_pod_security_group_id
      EOF
      ```
**Important**  
The security group or groups that you specify for your Pods must meet the following criteria:  
+ They must exist. If they don’t exist, then, when you deploy a Pod that matches the selector, your Pod remains stuck in the creation process. If you describe the Pod, you’ll see an error message similar to the following one: `An error occurred (InvalidSecurityGroupID.NotFound) when calling the CreateNetworkInterface operation: The securityGroup ID 'sg-05b1d815d1EXAMPLE' does not exist`.
+ They must allow inbound communication from the security group applied to your nodes (for `kubelet`) over any ports that you’ve configured probes for.
+ They must allow outbound communication over `TCP` and `UDP` port 53 to a security group assigned to the Pods (or nodes that the Pods run on) running CoreDNS. The security group for your CoreDNS Pods must allow inbound `TCP` and `UDP` port 53 traffic from the security group that you specify.
+ They must have the necessary inbound and outbound rules to communicate with other Pods that they need to communicate with.
+ If you’re using the security group with Fargate, they must have rules that allow the Pods to communicate with the Kubernetes control plane. The easiest way to do this is to specify the cluster security group as one of the security groups.

Security group policies only apply to newly scheduled Pods. They do not affect running Pods.

   1. Deploy the policy.

      ```
      kubectl apply -f my-security-group-policy.yaml
      ```

1. Deploy a sample application with a label that matches the *my-role* value for *podSelector* that you specified in a previous step.

   1. Copy the following contents to your device. Replace the example values with your own and then run the modified command. If you replace *my-role*, make sure that it’s the same as the value you specified for the selector in a previous step.

      ```
      cat >sample-application.yaml <<EOF
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-deployment
        namespace: my-namespace
        labels:
          app: my-app
      spec:
        replicas: 4
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
              role: my-role
          spec:
            terminationGracePeriodSeconds: 120
            containers:
            - name: nginx
              image: public.ecr.aws/nginx/nginx:1.23
              ports:
              - containerPort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: my-app
        namespace: my-namespace
        labels:
          app: my-app
      spec:
        selector:
          app: my-app
        ports:
          - protocol: TCP
            port: 80
            targetPort: 80
      EOF
      ```

   1. Deploy the application with the following command. When you deploy the application, the Amazon VPC CNI plugin for Kubernetes matches the `role` label and the security groups that you specified in the previous step are applied to the Pod.

      ```
      kubectl apply -f sample-application.yaml
      ```

1. View the Pods deployed with the sample application. For the remainder of this topic, this terminal is referred to as `TerminalA`.

   ```
   kubectl get pods -n my-namespace -o wide
   ```

   An example output is as follows.

   ```
   NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE                                            NOMINATED NODE   READINESS GATES
   my-deployment-5df6f7687b-4fbjm   1/1     Running   0          7m51s   192.168.53.48    ip-192-168-33-28.region-code.compute.internal   <none>           <none>
   my-deployment-5df6f7687b-j9fl4   1/1     Running   0          7m51s   192.168.70.145   ip-192-168-92-33.region-code.compute.internal   <none>           <none>
   my-deployment-5df6f7687b-rjxcz   1/1     Running   0          7m51s   192.168.73.207   ip-192-168-92-33.region-code.compute.internal   <none>           <none>
   my-deployment-5df6f7687b-zmb42   1/1     Running   0          7m51s   192.168.63.27    ip-192-168-33-28.region-code.compute.internal   <none>           <none>
   ```
**Note**  
Try these tips if any Pods are stuck.  
If any Pods are stuck in the `Waiting` state, then run `kubectl describe pod my-deployment-xxxxxxxxxx-xxxxx -n my-namespace`. If you see `Insufficient permissions: Unable to create Elastic Network Interface.`, confirm that you added the IAM policy to the IAM cluster role in a previous step.
If any Pods are stuck in the `Pending` state, confirm that your node instance type is listed in [limits.go](https://github.com/aws/amazon-vpc-resource-controller-k8s/blob/master/pkg/aws/vpc/limits.go) and that the maximum number of branch network interfaces supported by the instance type, multiplied by the number of nodes in your node group, hasn’t already been reached. For example, an `m5.large` instance supports nine branch network interfaces. If your node group has five nodes, then a maximum of 45 branch network interfaces can be created for the node group. The 46th Pod that you attempt to deploy will sit in `Pending` state until another Pod that has associated security groups is deleted.

   If you run `kubectl describe pod my-deployment-xxxxxxxxxx-xxxxx -n my-namespace` and see a message similar to the following message, then it can be safely ignored. This message might appear when the Amazon VPC CNI plugin for Kubernetes tries to set up host networking and fails while the network interface is being created. The plugin logs this event until the network interface is created.

   ```
   Failed to create Pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e24268322e55c8185721f52df6493684f6c2c3bf4fd59c9c121fd4cdc894579f" network for Pod "my-deployment-5df6f7687b-4fbjm": networkPlugin
   cni failed to set up Pod "my-deployment-5df6f7687b-4fbjm-c89wx_my-namespace" network: add cmd: failed to assign an IP address to container
   ```

   You can’t exceed the maximum number of Pods that can be run on the instance type. For a list of the maximum number of Pods that you can run on each instance type, see [eni-max-pods.txt](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/misc/eni-max-pods.txt) on GitHub. When you delete a Pod that has associated security groups, or delete the node that the Pod is running on, the VPC resource controller deletes the branch network interface. If you delete a cluster that has Pods using security groups for Pods, then the controller doesn’t delete the branch network interfaces, so you’ll need to delete them yourself. For information about how to delete network interfaces, see [Delete a network interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#delete_eni) in the Amazon EC2 User Guide.
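   If you need to clean up leftover branch network interfaces after deleting a cluster, a sketch like the following can help. It assumes the branch interfaces carry the description `aws-k8s-branch-eni` that the VPC resource controller applies; verify the description and status of each interface in your account before deleting anything.

   ```shell
   # List available (unattached) branch network interfaces.
   # Assumes the description "aws-k8s-branch-eni"; verify before deleting.
   aws ec2 describe-network-interfaces \
     --filters "Name=description,Values=aws-k8s-branch-eni" "Name=status,Values=available" \
     --query "NetworkInterfaces[].NetworkInterfaceId" \
     --output text

   # Delete one leftover interface after confirming that it's no longer needed.
   # The interface ID below is a placeholder.
   aws ec2 delete-network-interface --network-interface-id eni-0123456789abcdef0
   ```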

1. In a separate terminal, shell into one of the Pods. For the remainder of this topic, this terminal is referred to as `TerminalB`. Replace *5df6f7687b-4fbjm* with the ID of one of the Pods returned in your output from the previous step.

   ```
   kubectl exec -it -n my-namespace my-deployment-5df6f7687b-4fbjm -- /bin/bash
   ```

1. From the shell in `TerminalB`, confirm that the sample application works.

   ```
   curl my-app
   ```

   An example output is as follows.

   ```
   <!DOCTYPE html>
   <html>
   <head>
   <title>Welcome to nginx!</title>
   [...]
   ```

   You received the output because all Pods running the application are associated with the security group that you created. That group contains a rule that allows all traffic between Pods that the security group is associated with. DNS traffic is allowed outbound from that security group to the cluster security group, which is associated with your nodes. The nodes run the CoreDNS Pods that your Pods used for the name lookup.

1. From `TerminalA`, remove the security group rules that allow DNS communication to the cluster security group from your security group. If you didn’t add the DNS rules to the cluster security group in a previous step, then replace `$my_cluster_security_group_id` with the ID of the security group that you created the rules in.

   ```
   aws ec2 revoke-security-group-ingress --group-id $my_cluster_security_group_id --security-group-rule-ids $my_tcp_rule_id
   aws ec2 revoke-security-group-ingress --group-id $my_cluster_security_group_id --security-group-rule-ids $my_udp_rule_id
   ```

1. From `TerminalB`, attempt to access the application again.

   ```
   curl my-app
   ```

   An example output is as follows.

   ```
   curl: (6) Could not resolve host: my-app
   ```

   The attempt fails because the Pod is no longer able to access the CoreDNS Pods, which have the cluster security group associated with them. The cluster security group no longer has the security group rules that allow DNS communication from the security group associated with your Pod.

   If you attempt to access the application using the IP address returned for one of the Pods in a previous step, you still receive a response because all ports are allowed between Pods that have the security group associated with them, and a name lookup isn’t required.

1. Once you’ve finished experimenting, you can remove the sample security group policy, application, and security group that you created. Run the following commands from `TerminalA`.

   ```
   kubectl delete namespace my-namespace
   aws ec2 revoke-security-group-ingress --group-id $my_pod_security_group_id --security-group-rule-ids $my_inbound_self_rule_id
   wait
   sleep 45s
   aws ec2 delete-security-group --group-id $my_pod_security_group_id
   ```

# Attach multiple network interfaces to Pods
<a name="pod-multiple-network-interfaces"></a>

By default, the Amazon VPC CNI plugin assigns one IP address to each pod. This IP address is attached to an *elastic network interface* that handles all incoming and outgoing traffic for the pod. To increase the bandwidth and packet per second rate performance, you can use the *Multi-NIC feature* of the VPC CNI to configure a multi-homed pod. A multi-homed pod is a single Kubernetes pod that uses multiple network interfaces (and multiple IP addresses). By running a multi-homed pod, you can spread its application traffic across multiple network interfaces by using concurrent connections. This is especially useful for Artificial Intelligence (AI), Machine Learning (ML), and High Performance Computing (HPC) use cases.

The following diagram shows a multi-homed pod running on a worker node with multiple network interface cards (NICs) in use.

![\[A multi-homed pod with two network interfaces attached one network interface with ENA and one network interface with ENA and EFA\]](http://docs.aws.amazon.com/eks/latest/userguide/images/multi-homed-pod.png)


## Background
<a name="pod-multi-nic-background"></a>

On Amazon EC2, an *elastic network interface* is a logical networking component in a VPC that represents a virtual network card. For many EC2 instance types, the network interfaces share a single network interface card (NIC) in hardware. This single NIC has a maximum bandwidth and packet per second rate.

If the multi-NIC feature is enabled, the VPC CNI doesn’t assign IP addresses in bulk, which it does by default. Instead, the VPC CNI assigns one IP address to a network interface on each network card on demand when a new pod starts. This behavior reduces the rate of IP address exhaustion, which multi-homed pods would otherwise increase. Because the VPC CNI assigns IP addresses on demand, pods might take longer to start on instances with the multi-NIC feature enabled.

## Considerations
<a name="pod-multi-nic-considerations"></a>
+ Ensure that your Kubernetes cluster is running VPC CNI version `1.20.0` or later. The multi-NIC feature is only available in VPC CNI version `1.20.0` and later.
+ Enable the `ENABLE_MULTI_NIC` environment variable in the VPC CNI plugin. You can run the following command to set the variable, which triggers a rollout of the DaemonSet.
  +  `kubectl set env daemonset aws-node -n kube-system ENABLE_MULTI_NIC=true` 
+ Ensure that you create worker nodes that have multiple network interface cards (NICs). For a list of EC2 instances that have multiple network interface cards, see [Network cards](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#network-cards) in the **Amazon EC2 User Guide**.
+ If the multi-NIC feature is enabled, the VPC CNI doesn’t assign IP addresses in bulk, which it does by default. Because the VPC CNI assigns IP addresses on demand, pods might take longer to start on instances with the multi-NIC feature enabled. For more information, see the previous section [Background](#pod-multi-nic-background).
+ With the multi-NIC feature enabled, pods don’t have multiple network interfaces by default. You must configure each workload to use multi-NIC. Add the `k8s.amazonaws.com/nicConfig: multi-nic-attachment` annotation to workloads that should have multiple network interfaces.
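Before scheduling workloads, you can confirm that the environment variable is set on the `aws-node` DaemonSet. This is only a convenience check; the exact output depends on your cluster configuration.

```shell
# Print the ENABLE_MULTI_NIC value from the aws-node DaemonSet pod template.
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="ENABLE_MULTI_NIC")].value}'
```

If the variable was set with the `kubectl set env` command above, this prints `true`.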

### `IPv6` Considerations
<a name="pod-multi-nic-considerations-ipv6"></a>
+  **Custom IAM policy** - For `IPv6` clusters, create and use the following custom IAM policy for the VPC CNI. This policy is specific to multi-NIC. For more general information about using the VPC CNI with `IPv6` clusters, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "AmazonEKSCNIPolicyIPv6MultiNIC",
              "Effect": "Allow",
              "Action": [
                  "ec2:CreateNetworkInterface",
                  "ec2:DescribeInstances",
                  "ec2:AssignIpv6Addresses",
                  "ec2:DetachNetworkInterface",
                  "ec2:DescribeNetworkInterfaces",
                  "ec2:DescribeTags",
                  "ec2:ModifyNetworkInterfaceAttribute",
                  "ec2:DeleteNetworkInterface",
                  "ec2:DescribeInstanceTypes",
                  "ec2:UnassignIpv6Addresses",
                  "ec2:AttachNetworkInterface",
                  "ec2:DescribeSubnets"
              ],
              "Resource": "*"
          },
          {
              "Sid": "AmazonEKSCNIPolicyENITagIPv6MultiNIC",
              "Effect": "Allow",
              "Action": "ec2:CreateTags",
              "Resource": "arn:aws:ec2:*:*:network-interface/*"
          }
      ]
  }
  ```
+  `IPv6` **transition mechanism not available** - If you use the multi-NIC feature, the VPC CNI doesn’t assign an `IPv4` address to pods on an `IPv6` cluster. Without multi-NIC, the VPC CNI assigns a host-local `IPv4` address to each pod so that a pod can communicate with external `IPv4` resources in another Amazon VPC or the internet.
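The policy document above can be created and attached with the AWS CLI. The following sketch assumes that you saved the document as `vpc-cni-ipv6-multi-nic-policy.json` and that your VPC CNI uses an IAM role named `AmazonEKSVPCCNIRole`; the file name, policy name, role name, and account ID are all placeholders for your own values.

```shell
# Create the IAM policy from the document above (file name is an example).
aws iam create-policy \
  --policy-name AmazonEKS_CNI_IPv6_MultiNIC_Policy \
  --policy-document file://vpc-cni-ipv6-multi-nic-policy.json

# Attach it to the IAM role used by the VPC CNI (role name and account ID are placeholders).
aws iam attach-role-policy \
  --role-name AmazonEKSVPCCNIRole \
  --policy-arn arn:aws:iam::111122223333:policy/AmazonEKS_CNI_IPv6_MultiNIC_Policy
```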

## Usage
<a name="pod-multi-NIC-usage"></a>

After the multi-NIC feature is enabled in the VPC CNI and the `aws-node` pods have restarted, you can configure each workload to be multi-homed. The following example shows a YAML configuration with the required annotation:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-deployment
  namespace: ecommerce
  labels:
    app: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      annotations:
        k8s.amazonaws.com/nicConfig: multi-nic-attachment
      labels:
        app: orders
    spec:
...
```
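After the deployment rolls out, you can spot-check that the annotation reached the pods. The namespace and label below match the example configuration above; replace them with your own values.

```shell
# Show the deployed pods' details; the nicConfig annotation should appear.
kubectl describe pods -n ecommerce -l app=orders | grep nicConfig
```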

## Frequently Asked Questions
<a name="pod-muti-nic-faqs"></a>

### **1. What is a network interface card (NIC)?**
<a name="pod-muti-nic-faqs-nic"></a>

A network interface card (NIC), also simply called a network card, is a physical device that enables network connectivity for the underlying cloud compute hardware. In modern EC2 servers, this refers to the Nitro network card. An Elastic Network Interface (ENI) is a virtual representation of this underlying network card.

Some EC2 instance types have multiple NICs for greater bandwidth and packet rate performance. For such instances, you can assign secondary ENIs to the additional network cards. For example, ENI 1 can function as the interface for the NIC attached to network card index 0, whereas ENI 2 can function as the interface for the NIC attached to a separate network card index.

### **2. What is a multi-homed pod?**
<a name="pod-muti-nic-faqs-pod"></a>

A multi-homed pod is a single Kubernetes pod with multiple network interfaces (and by implication multiple IP addresses). Each pod network interface is associated with an [Elastic Network Interface (ENI)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html), and these ENIs are logical representations of separate NICs on the underlying worker node. With multiple network interfaces, a multi-homed pod has additional data transfer capacity, which also raises its data transfer rate.

**Important**  
The VPC CNI can only configure multi-homed pods on instance types that have multiple NICs.

### **3. Why should I use this feature?**
<a name="pod-muti-nic-faqs-why"></a>

If you need to scale network performance in your Kubernetes-based workloads, you can use the multi-NIC feature to run multi-homed pods that interface with all the underlying NICs that have an ENA device attached to them. Leveraging additional network cards raises the bandwidth capacity and packet rate performance of your applications by distributing application traffic across multiple concurrent connections. This is especially useful for Artificial Intelligence (AI), Machine Learning (ML), and High Performance Computing (HPC) use cases.

### **4. How do I use this feature?**
<a name="pod-muti-nic-faqs-how-to-enable"></a>

1. First, you must ensure that your Kubernetes cluster is using VPC CNI version 1.20 or later. For the steps to update the VPC CNI as an EKS add-on, see [Update the Amazon VPC CNI (Amazon EKS add-on)](vpc-add-on-update.md).

1. Then, you have to enable multi-NIC support in the VPC CNI by using the `ENABLE_MULTI_NIC` environment variable.

1. Then, you must ensure that you create and join nodes that have multiple network cards. For a list of EC2 instance types that have multiple network cards, see [Network cards](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#network-cards) in the *Amazon EC2 User Guide*.

1. Finally, you configure each workload to use either multiple network interfaces (multi-homed pods) or use a single network interface.

### **5. How do I configure my workloads to use multiple NICs on a supported worker node?**
<a name="pod-muti-nic-faqs-how-to-workloads"></a>

To use multi-homed pods, you need to add the following annotation: `k8s.amazonaws.com/nicConfig: multi-nic-attachment`. This attaches an ENI from every NIC on the underlying instance to the pod (a one-to-many mapping between the pod and the NICs).

If this annotation is missing, the VPC CNI assumes that your pod only requires one network interface and assigns it an IP address from an ENI on any available NIC.

### **6. What network interface adapters are supported with this feature?**
<a name="pod-muti-nic-faqs-adapters"></a>

You can use any network interface adapter if you have at least one ENA attached to the underlying network card for IP traffic. For more information about ENA, see [Elastic Network Adapter (ENA)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html) in the *Amazon EC2 User Guide*.

Supported network device configurations:
+  **ENA** interfaces provide all of the traditional IP networking and routing features that are required to support IP networking for a VPC. For more information, see [Enable enhanced networking with ENA on your EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html).
+  **EFA** **(EFA** **with ENA)** interfaces provide both the ENA device for IP networking and the EFA device for low-latency, high-throughput communication.

**Important**  
If a network card only has an **EFA-only** adapter attached to it, the VPC CNI will skip it when provisioning network connectivity for a multi-homed pod. However, if you combine an **EFA-only** adapter with an **ENA** adapter on a network card, then the VPC CNI will manage ENIs on this device as well. To use EFA-only interfaces with EKS clusters, see [Run machine learning training on Amazon EKS with Elastic Fabric Adapter](node-efa.md).

### **7. Can I see if a node in my cluster has ENA support?**
<a name="pod-muti-nic-faqs-node-ena"></a>

Yes, you can use the AWS CLI or EC2 API to retrieve network information about an EC2 instance in your cluster. This provides details on whether or not the instance has ENA support. In the following example, replace `<your-instance-id>` with the EC2 instance ID of a node.

 AWS CLI example:

```
aws ec2 describe-instances --instance-ids <your-instance-id> --query "Reservations[].Instances[].EnaSupport"
```

Example output:

```
[ true ]
```

### **8. Can I see the different IP addresses associated with a pod?**
<a name="pod-muti-nic-faqs-list-ips"></a>

No, not easily. However, you can use `nsenter` from the node to run common network tools such as `ip route show` and see the additional IP addresses and interfaces.
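One way to do this, sketched under the assumption that your nodes use containerd with `crictl` installed and that the pod runs a container named `nginx` (the container name is a placeholder), is to enter the pod's network namespace from the node:

```shell
# On the worker node: find a container ID for the pod, look up its PID,
# then list the interfaces and routes inside the pod's network namespace.
CONTAINER_ID=$(sudo crictl ps --name nginx -q | head -n 1)
PID=$(sudo crictl inspect --output go-template --template '{{.info.pid}}' "$CONTAINER_ID")
sudo nsenter -t "$PID" -n ip addr show
sudo nsenter -t "$PID" -n ip route show
```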

### **9. Can I control the number of network interfaces for my pods?**
<a name="pod-muti-nic-faqs-number-of-enis"></a>

No. When your workload is configured to use multiple NICs on a supported instance, a single pod automatically has an IP address from every network card on the instance. In contrast, single-homed pods have one network interface attached to one NIC on the instance.

**Important**  
Network cards that *only* have an **EFA-only** device attached to them are skipped by the VPC CNI.

### **10. Can I configure my pods to use a specific NIC?**
<a name="pod-muti-nic-faqs-specify-nic"></a>

No, this isn’t supported. If a pod has the relevant annotation, then the VPC CNI automatically configures it to use every NIC with an ENA adapter on the worker node.

### **11. Does this feature work with the other VPC CNI networking features?**
<a name="pod-muti-nic-faqs-modes"></a>

Yes, the multi-NIC feature in the VPC CNI works with both *custom networking* and *enhanced subnet discovery*. However, the multi-homed pods don’t use the custom subnets or security groups. Instead, the VPC CNI assigns IP addresses and network interfaces to the multi-homed pods with the same subnet and security group configuration as the node. For more information about custom networking, see [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md).

The multi-NIC feature in the VPC CNI doesn’t work with and can’t be combined with *security groups for pods*.

### **12. Can I use network policies with this feature?**
<a name="pod-muti-nic-faqs-netpol"></a>

Yes, you can use Kubernetes network policies with multi-NIC. Kubernetes network policies restrict network traffic to and from your pods. For more information about applying network policies with the VPC CNI, see [Limit Pod traffic with Kubernetes network policies](cni-network-policy.md).
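As an illustration only, the following writes a minimal policy that restricts ingress to the multi-homed `orders` pods from the earlier example. The `frontend` label and the port are hypothetical values to replace with your own.

```shell
cat >orders-network-policy.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orders-allow-frontend
  namespace: ecommerce
spec:
  podSelector:
    matchLabels:
      app: orders
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 80
EOF
```

Apply it with `kubectl apply -f orders-network-policy.yaml`.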

### **13. Is multi-NIC support enabled in EKS Auto Mode?**
<a name="pod-muti-nic-faqs-auto-mode"></a>

Multi-NIC isn’t supported for EKS Auto Mode clusters.

# Alternate CNI plugins for Amazon EKS clusters
<a name="alternate-cni-plugins"></a>

The [Amazon VPC CNI plugin for Kubernetes](https://github.com/aws/amazon-vpc-cni-plugins) is the only CNI plugin supported by Amazon EKS with Amazon EC2 nodes. Amazon EKS supports the core capabilities of Cilium and Calico for Amazon EKS Hybrid Nodes. Amazon EKS runs upstream Kubernetes, so you can install alternate compatible CNI plugins to Amazon EC2 nodes in your cluster. If you have Fargate nodes in your cluster, the Amazon VPC CNI plugin for Kubernetes is already on your Fargate nodes. It’s the only CNI plugin you can use with Fargate nodes. An attempt to install an alternate CNI plugin on Fargate nodes fails.

If you plan to use an alternate CNI plugin on Amazon EC2 nodes, we recommend that you obtain commercial support for the plugin or have the in-house expertise to troubleshoot and contribute fixes to the CNI plugin project.

Amazon EKS maintains relationships with a network of partners that offer support for alternate compatible CNI plugins. For details about the versions, qualifications, and testing performed, see the following partner documentation.


| Partner | Product | Documentation | 
| --- | --- | --- | 
|  Tigera  |   [Calico](https://www.tigera.io/partners/aws/)   |   [Installation instructions](https://docs.projectcalico.org/getting-started/kubernetes/managed-public-cloud/eks)   | 
|  Isovalent  |   [Cilium](https://cilium.io)   |   [Installation instructions](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/)   | 
|  Juniper  |   [Cloud-Native Contrail Networking (CN2)](https://www.juniper.net/us/en/products/sdn-and-orchestration/contrail/cloud-native-contrail-networking.html)   |   [Installation instructions](https://www.juniper.net/documentation/us/en/software/cn-cloud-native23.2/cn-cloud-native-eks-install-and-lcm/index.html)   | 
|  VMware  |   [Antrea](https://antrea.io/)   |   [Installation instructions](https://antrea.io/docs/main/docs/eks-installation)   | 

Amazon EKS aims to give you a wide selection of options to cover all use cases.

## Alternate compatible network policy plugins
<a name="alternate-network-policy-plugins"></a>

 [Calico](https://www.tigera.io/project-calico) is a widely adopted solution for container networking and security. Using Calico on EKS provides fully compliant network policy enforcement for your EKS clusters. Additionally, you can opt to use Calico’s networking, which conserves IP addresses from your underlying VPC. [Calico Cloud](https://www.tigera.io/tigera-products/calico-cloud/) enhances the features of Calico Open Source, providing advanced security and observability capabilities.

Traffic to and from Pods with associated security groups is not subject to Calico network policy enforcement and is limited to Amazon VPC security group enforcement only.

If you use Calico network policy enforcement, we recommend that you set the environment variable `ANNOTATE_POD_IP` to `true` to avoid a known issue with Kubernetes. To use this feature, you must add `patch` permission for pods to the `aws-node` ClusterRole. Note that adding patch permissions to the `aws-node` DaemonSet increases the security scope for the plugin. For more information, see [ANNOTATE_POD_IP](https://github.com/aws/amazon-vpc-cni-k8s/?tab=readme-ov-file#annotate_pod_ip-v193) in the VPC CNI repo on GitHub.

## Considerations for Amazon EKS Auto Mode
<a name="_considerations_for_amazon_eks_auto_mode"></a>

Amazon EKS Auto Mode does not support alternate CNI plugins or network policy plugins. For more information, see [Automate cluster infrastructure with EKS Auto Mode](automode.md).

# Attach multiple network interfaces to Pods with Multus
<a name="pod-multus"></a>

Multus CNI is a container network interface (CNI) plugin for Amazon EKS that enables attaching multiple network interfaces to a Pod. For more information, see the [Multus-CNI](https://github.com/k8snetworkplumbingwg/multus-cni) documentation on GitHub.

In Amazon EKS, each Pod has one network interface assigned by the Amazon VPC CNI plugin. With Multus, you can create a multi-homed Pod that has multiple interfaces. This is accomplished by Multus acting as a "meta-plugin": a CNI plugin that can call multiple other CNI plugins. AWS support for Multus comes configured with the Amazon VPC CNI plugin as the default delegate plugin.
+ Amazon EKS won’t be building and publishing single root I/O virtualization (SR-IOV) and Data Plane Development Kit (DPDK) CNI plugins. However, you can achieve packet acceleration by connecting directly to Amazon EC2 Elastic Network Adapters (ENA) through Multus managed host-device and `ipvlan` plugins.
+ Amazon EKS is supporting Multus, which provides a generic process that enables simple chaining of additional CNI plugins. Multus and the process of chaining is supported, but AWS won’t provide support for all compatible CNI plugins that can be chained, or issues that may arise in those CNI plugins that are unrelated to the chaining configuration.
+ Amazon EKS is providing support and life cycle management for the Multus plugin, but isn’t responsible for any IP address or additional management associated with the additional network interfaces. The IP address and management of the default network interface utilizing the Amazon VPC CNI plugin remains unchanged.
+ Only the Amazon VPC CNI plugin is officially supported as the default delegate plugin. You need to modify the published Multus installation manifest to reconfigure the default delegate plugin to an alternate CNI if you choose not to use the Amazon VPC CNI plugin for primary networking.
+ Multus is only supported when using the Amazon VPC CNI as the primary CNI. We do not support the Amazon VPC CNI when used for higher order interfaces, secondary or otherwise.
+ To prevent the Amazon VPC CNI plugin from trying to manage additional network interfaces assigned to Pods, add the following tag to the network interface:  
 **key**   
: `node.k8s.amazonaws.com/no_manage`   
 **value**   
: `true` 
+ Multus is compatible with network policies, but the policy has to be enriched to include ports and IP addresses that may be part of additional network interfaces attached to Pods.
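For example, the tag can be added with the AWS CLI; the network interface ID here is a placeholder for the ID of your additional interface.

```shell
# Tag a secondary network interface so the Amazon VPC CNI plugin ignores it.
aws ec2 create-tags \
  --resources eni-0123456789abcdef0 \
  --tags Key=node.k8s.amazonaws.com/no_manage,Value=true
```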

For an implementation walk through, see the [Multus Setup Guide](https://github.com/aws-samples/eks-install-guide-for-multus/blob/main/README.md) on GitHub.

# Route internet traffic with AWS Load Balancer Controller
<a name="aws-load-balancer-controller"></a>

**Tip**  
 [Register](https://aws-experience.com/emea/smb/events/series/get-hands-on-with-amazon-eks?trk=4a9b4147-2490-4c63-bc9f-f8a84b122c8c&sc_channel=el) for upcoming Amazon EKS workshops.

The AWS Load Balancer Controller manages AWS Elastic Load Balancers for a Kubernetes cluster. You can use the controller to expose your cluster apps to the internet. The controller provisions AWS load balancers that point to cluster Service or Ingress resources. In other words, the controller creates a single IP address or DNS name that points to multiple pods in your cluster.

![\[Architecture diagram. Illustration of traffic coming from internet users, to Amazon Load Balancer. Amazon Load Balancer distributes traffic to pods in the cluster.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/lbc-overview.png)


The controller watches for Kubernetes Ingress or Service resources. In response, it creates the appropriate AWS Elastic Load Balancing resources. You can configure the specific behavior of the load balancers by applying annotations to the Kubernetes resources. For example, you can attach AWS security groups to load balancers using annotations.

The controller provisions the following resources:

 **Kubernetes `Ingress` **   
The LBC creates an [AWS Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) when you create a Kubernetes `Ingress`. [Review the annotations you can apply to an Ingress resource.](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/) 

 **Kubernetes service of the `LoadBalancer` type**   
The LBC creates an [AWS Network Load Balancer (NLB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) when you create a Kubernetes service of type `LoadBalancer`. [Review the annotations you can apply to a Service resource.](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)   
In the past, the Kubernetes network load balancer was used for *instance* targets, but the LBC was used for *IP* targets. With the AWS Load Balancer Controller version `2.3.0` or later, you can create NLBs using either target type. For more information about NLB target types, see [Target type](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#target-type) in the User Guide for Network Load Balancers.

The controller is an [open-source project](https://github.com/kubernetes-sigs/aws-load-balancer-controller) managed on GitHub.

Before deploying the controller, we recommend that you review the prerequisites and considerations in [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md) and [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md). In those topics, you will deploy a sample app that includes an AWS load balancer.

 **Kubernetes `Gateway` API**   
With the AWS Load Balancer Controller version `2.14.0` or later, the LBC creates an [AWS Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) when you create a Kubernetes `Gateway`. The Kubernetes Gateway API standardizes more configuration than Ingress, which needed custom annotations for many common options. [Review the configuration that you can apply to a Gateway resource.](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/gateway/gateway/) For more information about the `Gateway` API, see [Gateway API](https://kubernetes.io/docs/concepts/services-networking/gateway/) in the Kubernetes documentation.

## Install the controller
<a name="lbc-overview"></a>

You can use one of the following procedures to install the AWS Load Balancer Controller:
+ If you are new to Amazon EKS, we recommend that you use Helm for the installation because it simplifies the AWS Load Balancer Controller installation. For more information, see [Install AWS Load Balancer Controller with Helm](lbc-helm.md).
+ For advanced configurations, such as clusters with restricted network access to public container registries, use Kubernetes Manifests. For more information, see [Install AWS Load Balancer Controller with manifests](lbc-manifest.md).

## Migrate from deprecated controller versions
<a name="lbc-deprecated"></a>
+ If you have deprecated versions of the AWS Load Balancer Controller installed, see [Migrate apps from deprecated ALB Ingress Controller](lbc-remove.md).
+ Deprecated versions cannot be upgraded. They must be removed and a current version of the AWS Load Balancer Controller installed.
+ Deprecated versions include:
  +  AWS ALB Ingress Controller for Kubernetes ("Ingress Controller"), a predecessor to the AWS Load Balancer Controller.
  + Any `0.1.x` version of the AWS Load Balancer Controller

## Legacy cloud provider
<a name="lbc-legacy"></a>

Kubernetes includes a legacy cloud provider for AWS. The legacy cloud provider is capable of provisioning AWS load balancers, similar to the AWS Load Balancer Controller. The legacy cloud provider creates Classic Load Balancers. If you do not install the AWS Load Balancer Controller, Kubernetes will default to using the legacy cloud provider. You should install the AWS Load Balancer Controller and avoid using the legacy cloud provider.

**Important**  
In versions 2.5 and newer, the AWS Load Balancer Controller becomes the default controller for Kubernetes *service* resources with the `type: LoadBalancer` and makes an AWS Network Load Balancer (NLB) for each service. It does this by making a mutating webhook for services, which sets the `spec.loadBalancerClass` field to `service.k8s.aws/nlb` for new services of `type: LoadBalancer`. You can turn off this feature and revert to using the [legacy Cloud Provider](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/#legacy-cloud-provider) as the default controller, by setting the helm chart value `enableServiceMutatorWebhook` to `false`. The cluster won’t provision new Classic Load Balancers for your services unless you turn off this feature. Existing Classic Load Balancers will continue to work.
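As an illustration, a new `Service` of `type: LoadBalancer` that the webhook has mutated looks like the following sketch. The name, selector, and ports are placeholders; `spec.loadBalancerClass` is the field the webhook sets.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service       # placeholder
spec:
  type: LoadBalancer
  # Set by the LBC's mutating webhook so that the LBC, not the legacy
  # cloud provider, provisions the load balancer (an NLB).
  loadBalancerClass: service.k8s.aws/nlb
  selector:
    app: example-app          # placeholder
  ports:
    - port: 80
      targetPort: 8080
```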

# Install AWS Load Balancer Controller with Helm
<a name="lbc-helm"></a>

**Tip**  
 [Register](https://aws-experience.com/emea/smb/events/series/get-hands-on-with-amazon-eks?trk=4a9b4147-2490-4c63-bc9f-f8a84b122c8c&sc_channel=el) for upcoming Amazon EKS workshops.

**Tip**  
With Amazon EKS Auto Mode, you don’t need to install or upgrade networking add-ons. Auto Mode includes pod networking and load balancing capabilities.  
For more information, see [Automate cluster infrastructure with EKS Auto Mode](automode.md).

This topic describes how to install the AWS Load Balancer Controller using Helm, a package manager for Kubernetes, and `eksctl`. The controller is installed with default options. For more information about the controller, including details on configuring it with annotations, see the [AWS Load Balancer Controller Documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) on GitHub.

In the following steps, replace the example values with your own values.

## Prerequisites
<a name="lbc-prereqs"></a>

Before starting this tutorial, you must complete the following steps:
+ Create an Amazon EKS cluster. To create one, see [Get started with Amazon EKS](getting-started.md).
+ Install [Helm](https://helm.sh/docs/helm/helm_install/) on your local machine.
+ Make sure that your Amazon VPC CNI plugin for Kubernetes, `kube-proxy`, and CoreDNS add-ons are at the minimum versions listed in [Service account tokens](service-accounts.md#boundserviceaccounttoken-validated-add-on-versions).
+ Learn about AWS Elastic Load Balancing concepts. For more information, see the [Elastic Load Balancing User Guide](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/).
+ Learn about Kubernetes [service](https://kubernetes.io/docs/concepts/services-networking/service/) and [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resources.

### Considerations
<a name="lbc-considerations"></a>

Before proceeding with the configuration steps on this page, consider the following:
+ The IAM policy and role (`AmazonEKSLoadBalancerControllerRole`) can be reused across multiple EKS clusters in the same AWS account.
+ If you’re installing the controller on the same cluster where the role (`AmazonEKSLoadBalancerControllerRole`) was originally created, go to [Step 2: Install Load Balancer Controller](#lbc-helm-install) after verifying the role exists.
+ If you’re using IAM Roles for Service Accounts (IRSA), IRSA must be set up for each cluster, and the OpenID Connect (OIDC) provider ARN in the role’s trust policy is specific to each EKS cluster. Additionally, if you’re installing the controller on a new cluster with an existing `AmazonEKSLoadBalancerControllerRole`, update the role’s trust policy to include the new cluster’s OIDC provider and create a new service account with the appropriate role annotation. To determine whether you already have an OIDC provider, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

## Step 1: Create IAM Role using `eksctl`
<a name="lbc-helm-iam"></a>

The following steps refer to the AWS Load Balancer Controller **v2.14.1** release version. For more information about all releases, see the [AWS Load Balancer Controller Release Page](https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/) on GitHub.

1. Download an IAM policy for the AWS Load Balancer Controller that allows it to make calls to AWS APIs on your behalf.

   ```
   curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.14.1/docs/install/iam_policy.json
   ```
   + If your cluster is in a non-standard AWS partition, such as an AWS GovCloud (US) or China Region, [review the policies on GitHub](https://github.com/kubernetes-sigs/aws-load-balancer-controller/tree/main/docs/install) and download the appropriate policy for your partition.

1. Create an IAM policy using the policy downloaded in the previous step.

   ```
   aws iam create-policy \
       --policy-name AWSLoadBalancerControllerIAMPolicy \
       --policy-document file://iam_policy.json
   ```
**Note**  
If you view the policy in the AWS Management Console, the console shows warnings for the **ELB** service, but not for the **ELB v2** service. This happens because some of the actions in the policy exist for **ELB v2**, but not for **ELB**. You can ignore the warnings for **ELB**.

1. Create the IAM role and Kubernetes service account with `eksctl`. Replace `<cluster-name>`, `<AWS_ACCOUNT_ID>`, and `<aws-region-code>` with your own values.

   ```
   eksctl create iamserviceaccount \
       --cluster=<cluster-name> \
       --namespace=kube-system \
       --name=aws-load-balancer-controller \
       --attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
       --override-existing-serviceaccounts \
       --region <aws-region-code> \
       --approve
   ```
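The `--attach-policy-arn` value in the command above is composed from your account ID and the policy name created in the previous step. A quick local sketch (the account ID is a placeholder, and `policy-arn.txt` is just a scratch file for this illustration):

```shell
# Compose the policy ARN that --attach-policy-arn expects.
AWS_ACCOUNT_ID=111122223333   # placeholder account ID
policy_arn="arn:aws:iam::${AWS_ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy"
echo "$policy_arn" > policy-arn.txt
cat policy-arn.txt
```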

## Step 2: Install AWS Load Balancer Controller
<a name="lbc-helm-install"></a>

1. Add the `eks-charts` Helm chart repository. AWS maintains [this repository](https://github.com/aws/eks-charts) on GitHub.

   ```
   helm repo add eks https://aws.github.io/eks-charts
   ```

1. Update your local repo to make sure that you have the most recent charts.

   ```
   helm repo update eks
   ```

1. Install the AWS Load Balancer Controller.

   If you’re deploying the controller to Amazon EC2 nodes that have [restricted access to the Amazon EC2 instance metadata service (IMDS)](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node), or if you’re deploying to Fargate or Amazon EKS Hybrid Nodes, then add the following flags to the `helm` command that follows:
   +  `--set region=region-code ` 
   +  `--set vpcId=vpc-xxxxxxxx ` 

     Replace *my-cluster* with the name of your cluster. In the following command, `aws-load-balancer-controller` is the Kubernetes service account that you created in a previous step.

     For more information about configuring the helm chart, see [values.yaml](https://github.com/aws/eks-charts/blob/master/stable/aws-load-balancer-controller/values.yaml) on GitHub.

     ```
     helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
       -n kube-system \
       --set clusterName=my-cluster \
       --set serviceAccount.create=false \
       --set serviceAccount.name=aws-load-balancer-controller \
       --version 1.14.0
     ```

**Important**  
The deployed chart doesn’t receive security updates automatically. You need to manually upgrade to a newer chart when it becomes available. When upgrading, change *install* to `upgrade` in the previous command.

The `helm install` command automatically installs the custom resource definitions (CRDs) for the controller. The `helm upgrade` command does not. If you use `helm upgrade`, you must manually install the CRDs. Run the following commands to install the CRDs:

```
wget https://raw.githubusercontent.com/aws/eks-charts/master/stable/aws-load-balancer-controller/crds/crds.yaml
kubectl apply -f crds.yaml
```

## Step 3: Verify that the controller is installed
<a name="lbc-helm-verify"></a>

1. Verify that the controller is installed.

   ```
   kubectl get deployment -n kube-system aws-load-balancer-controller
   ```

   An example output is as follows.

   ```
   NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
   aws-load-balancer-controller   2/2     2            2           84s
   ```

   You receive the previous output if you deployed using Helm. If you deployed using the Kubernetes manifest, you only have one replica.

1. Before using the controller to provision AWS resources, your cluster must meet specific requirements. For more information, see [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md) and [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md).

# Install AWS Load Balancer Controller with manifests
<a name="lbc-manifest"></a>

**Tip**  
With Amazon EKS Auto Mode, you don’t need to install or upgrade networking add-ons. Auto Mode includes pod networking and load balancing capabilities.  
For more information, see [Automate cluster infrastructure with EKS Auto Mode](automode.md).

This topic describes how to install the controller by downloading and applying Kubernetes manifests. You can view the full [documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/) for the controller on GitHub.

In the following steps, replace the example values with your own values.

## Prerequisites
<a name="lbc-manifest-prereqs"></a>

Before starting this tutorial, you must complete the following steps:
+ Create an Amazon EKS cluster. To create one, see [Get started with Amazon EKS](getting-started.md).
+ Install [Helm](https://helm.sh/docs/helm/helm_install/) on your local machine.
+ Make sure that your Amazon VPC CNI plugin for Kubernetes, `kube-proxy`, and CoreDNS add-ons are at the minimum versions listed in [Service account tokens](service-accounts.md#boundserviceaccounttoken-validated-add-on-versions).
+ Learn about AWS Elastic Load Balancing concepts. For more information, see the [Elastic Load Balancing User Guide](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/).
+ Learn about Kubernetes [service](https://kubernetes.io/docs/concepts/services-networking/service/) and [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resources.

### Considerations
<a name="lbc-manifest-considerations"></a>

Before proceeding with the configuration steps on this page, consider the following:
+ The IAM policy and role (`AmazonEKSLoadBalancerControllerRole`) can be reused across multiple EKS clusters in the same AWS account.
+ If you’re installing the controller on the same cluster where the role (`AmazonEKSLoadBalancerControllerRole`) was originally created, go to [Step 2: Install cert-manager](#lbc-cert) after verifying the role exists.
+ If you’re using IAM Roles for Service Accounts (IRSA), IRSA must be set up for each cluster, and the OpenID Connect (OIDC) provider ARN in the role’s trust policy is specific to each EKS cluster. Additionally, if you’re installing the controller on a new cluster with an existing `AmazonEKSLoadBalancerControllerRole`, update the role’s trust policy to include the new cluster’s OIDC provider and create a new service account with the appropriate role annotation. To determine whether you already have an OIDC provider, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

## Step 1: Configure IAM
<a name="lbc-iam"></a>

The following steps refer to the AWS Load Balancer Controller **v2.14.1** release version. For more information about all releases, see the [AWS Load Balancer Controller Release Page](https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/) on GitHub.

1. Download an IAM policy for the AWS Load Balancer Controller that allows it to make calls to AWS APIs on your behalf.  
**Example**  

------
#### [  AWS  ]

   ```
   curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.14.1/docs/install/iam_policy.json
   ```

------
#### [  AWS GovCloud (US) ]

   ```
   curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.14.1/docs/install/iam_policy_us-gov.json
   ```

   ```
   mv iam_policy_us-gov.json iam_policy.json
   ```

------

1. Create an IAM policy using the policy downloaded in the previous step.

   ```
   aws iam create-policy \
       --policy-name AWSLoadBalancerControllerIAMPolicy \
       --policy-document file://iam_policy.json
   ```
**Note**  
If you view the policy in the AWS Management Console, the console shows warnings for the **ELB** service, but not for the **ELB v2** service. This happens because some of the actions in the policy exist for **ELB v2**, but not for **ELB**. You can ignore the warnings for **ELB**.

**Example**  

1. Create the IAM role and Kubernetes service account with a single `eksctl` command. Replace *my-cluster* with the name of your cluster, *111122223333* with your account ID, and then run the command. If you use this command, the role and service account are created for you, and you can skip to [Step 2: Install `cert-manager`](#lbc-cert). The remaining steps in this section show how to create the same role and service account manually with the AWS CLI.

   ```
   eksctl create iamserviceaccount \
     --cluster=my-cluster \
     --namespace=kube-system \
     --name=aws-load-balancer-controller \
     --role-name AmazonEKSLoadBalancerControllerRole \
     --attach-policy-arn=arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
     --approve
   ```

1. Retrieve your cluster’s OIDC provider ID and store it in a variable.

   ```
   oidc_id=$(aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
   ```

1. Determine whether an IAM OIDC provider with your cluster’s ID is already in your account. You need OIDC configured for both the cluster and IAM.

   ```
   aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4
   ```

   If output is returned, then you already have an IAM OIDC provider for your cluster. If no output is returned, then you must create an IAM OIDC provider for your cluster. For more information, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

1. Copy the following contents to your device. Replace *111122223333* with your account ID. Replace *region-code* with the AWS Region that your cluster is in. Replace *EXAMPLED539D4633E53DE1B71EXAMPLE* with the output returned in the previous step.

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
               },
               "Action": "sts:AssumeRoleWithWebIdentity",
               "Condition": {
                   "StringEquals": {
                       "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com",
                       "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
                   }
               }
           }
       ]
   }
   ```

1. Create the IAM role.

   ```
   aws iam create-role \
     --role-name AmazonEKSLoadBalancerControllerRole \
     --assume-role-policy-document file://"load-balancer-role-trust-policy.json"
   ```

1. Attach the required Amazon EKS managed IAM policy to the IAM role. Replace *111122223333* with your account ID.

   ```
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \
     --role-name AmazonEKSLoadBalancerControllerRole
   ```

1. Copy the following contents to your device. Replace *111122223333* with your account ID. After replacing the text, run the modified command to create the `aws-load-balancer-controller-service-account.yaml` file.

   ```
   cat >aws-load-balancer-controller-service-account.yaml <<EOF
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     labels:
       app.kubernetes.io/component: controller
       app.kubernetes.io/name: aws-load-balancer-controller
     name: aws-load-balancer-controller
     namespace: kube-system
     annotations:
       eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKSLoadBalancerControllerRole
   EOF
   ```

1. Create the Kubernetes service account on your cluster. The Kubernetes service account named `aws-load-balancer-controller` is annotated with the IAM role that you created named *AmazonEKSLoadBalancerControllerRole*.

   ```
   kubectl apply -f aws-load-balancer-controller-service-account.yaml
   ```
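The OIDC-ID parsing and trust-policy file from the preceding steps can be exercised locally with placeholder values. This sketch makes no AWS calls; the issuer URL, Region, account ID, and provider ID are all placeholders.

```shell
# Placeholder issuer URL; with a real cluster this value comes from
# `aws eks describe-cluster --query "cluster.identity.oidc.issuer"`.
issuer="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
account_id=111122223333   # placeholder account ID

# The OIDC provider ID is the fifth '/'-separated field of the issuer URL.
oidc_id=$(echo "$issuer" | cut -d '/' -f 5)
oidc_provider="oidc.eks.us-east-1.amazonaws.com/id/${oidc_id}"

# Generate the trust policy instead of hand-editing the JSON.
cat > load-balancer-role-trust-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::${account_id}:oidc-provider/${oidc_provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "${oidc_provider}:aud": "sts.amazonaws.com",
                    "${oidc_provider}:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
                }
            }
        }
    ]
}
EOF

echo "Extracted OIDC provider ID: $oidc_id"
```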

## Step 2: Install `cert-manager`
<a name="lbc-cert"></a>

Install `cert-manager` using one of the following methods to inject certificate configuration into the webhooks. For more information, see [Getting Started](https://cert-manager.io/docs/installation/#getting-started) in the *cert-manager Documentation*.

We recommend using the `quay.io` container registry to install `cert-manager`. If your nodes don’t have access to the `quay.io` container registry, install `cert-manager` using Amazon ECR instead, as described in the later steps.

**Example**  

1. If your nodes have access to the `quay.io` container registry, install `cert-manager` to inject certificate configuration into the webhooks.

   ```
   kubectl apply \
       --validate=false \
       -f https://github.com/jetstack/cert-manager/releases/download/v1.13.5/cert-manager.yaml
   ```

1. If your nodes don’t have access to the `quay.io` container registry, complete the remaining steps in this section to install `cert-manager` from a registry that your nodes can access, such as Amazon ECR.

1. Download the manifest.

   ```
   curl -Lo cert-manager.yaml https://github.com/jetstack/cert-manager/releases/download/v1.13.5/cert-manager.yaml
   ```

1. Pull the following images and push them to a repository that your nodes have access to. For more information on how to pull, tag, and push the images to your own repository, see [Copy a container image from one repository to another repository](copy-image-to-repository.md).

   ```
   quay.io/jetstack/cert-manager-cainjector:v1.13.5
   quay.io/jetstack/cert-manager-controller:v1.13.5
   quay.io/jetstack/cert-manager-webhook:v1.13.5
   ```

1. Replace `quay.io` in the manifest for the three images with your own registry name. The following command assumes that your private repository’s name is the same as the source repository. Replace *111122223333.dkr.ecr.region-code.amazonaws.com* with your private registry.

   ```
   sed -i.bak -e 's|quay.io|111122223333.dkr.ecr.region-code.amazonaws.com|' ./cert-manager.yaml
   ```

1. Apply the manifest.

   ```
   kubectl apply \
       --validate=false \
       -f ./cert-manager.yaml
   ```
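The registry rewrite above can be checked locally against a sample manifest line before you run it on the full `cert-manager.yaml`. The ECR registry name below is a placeholder, and `sample-manifest.yaml` is a scratch file for this illustration.

```shell
# Create a one-line sample and apply the same rewrite used above.
echo "image: quay.io/jetstack/cert-manager-webhook:v1.13.5" > sample-manifest.yaml
sed -i.bak -e 's|quay.io|111122223333.dkr.ecr.us-east-1.amazonaws.com|' sample-manifest.yaml
cat sample-manifest.yaml
```

The `-i.bak` flag edits the file in place while keeping the original as `sample-manifest.yaml.bak`, so you can diff the two files to confirm the rewrite.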

## Step 3: Install AWS Load Balancer Controller
<a name="lbc-install"></a>

1. Download the controller specification. For more information about the controller, see the [documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) on GitHub.

   ```
   curl -Lo v2_14_1_full.yaml https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.14.1/v2_14_1_full.yaml
   ```

1. Make the following edits to the file.

   1. If you downloaded the `v2_14_1_full.yaml` file, run the following command to remove the `ServiceAccount` section in the manifest. If you don’t remove this section, the required annotation that you made to the service account in a previous step is overwritten. Removing this section also preserves the service account that you created in a previous step if you delete the controller.

      ```
      sed -i.bak -e '764,772d' ./v2_14_1_full.yaml
      ```

      If you downloaded a different file version, then open the file in an editor and remove the following lines.

      ```
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        labels:
          app.kubernetes.io/component: controller
          app.kubernetes.io/name: aws-load-balancer-controller
        name: aws-load-balancer-controller
        namespace: kube-system
      ---
      ```

   1. Replace `your-cluster-name` in the `Deployment` `spec` section of the file with the name of your cluster by replacing *my-cluster* with the name of your cluster.

      ```
      sed -i.bak -e 's|your-cluster-name|my-cluster|' ./v2_14_1_full.yaml
      ```

   1. If your nodes don’t have access to the Amazon EKS Amazon ECR image repositories, then you need to pull the following image and push it to a repository that your nodes have access to. For more information on how to pull, tag, and push an image to your own repository, see [Copy a container image from one repository to another repository](copy-image-to-repository.md).

      ```
      public.ecr.aws/eks/aws-load-balancer-controller:v2.14.1
      ```

      Add your registry’s name to the manifest. The following command assumes that your private repository’s name is the same as the source repository. Replace *111122223333.dkr.ecr.region-code.amazonaws.com* with your private registry. If you named your private repository differently, change the `eks/aws-load-balancer-controller` text after your private registry name to your repository name.

      ```
      sed -i.bak -e 's|public.ecr.aws/eks/aws-load-balancer-controller|111122223333.dkr.ecr.region-code.amazonaws.com/eks/aws-load-balancer-controller|' ./v2_14_1_full.yaml
      ```

   1. (Required only for Fargate or Restricted IMDS)

      If you’re deploying the controller to Amazon EC2 nodes that have [restricted access to the Amazon EC2 instance metadata service (IMDS)](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node), or if you’re deploying to Fargate or Amazon EKS Hybrid Nodes, then add the following parameters under `- args:`.

      ```
      [...]
      spec:
            containers:
              - args:
                  - --cluster-name=your-cluster-name
                  - --ingress-class=alb
                  - --aws-vpc-id=vpc-xxxxxxxx
                  - --aws-region=region-code
      
      
      [...]
      ```

1. Apply the file.

   ```
   kubectl apply -f v2_14_1_full.yaml
   ```

1. Download the `IngressClass` and `IngressClassParams` manifest.

   ```
   curl -Lo v2.14.1_ingclass.yaml https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.14.1/v2_14_1_ingclass.yaml
   ```

1. Apply the manifest to your cluster.

   ```
   kubectl apply -f v2.14.1_ingclass.yaml
   ```

## Step 4: Verify that the controller is installed
<a name="lbc-verify"></a>

1. Verify that the controller is installed.

   ```
   kubectl get deployment -n kube-system aws-load-balancer-controller
   ```

   An example output is as follows.

   ```
   NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
   aws-load-balancer-controller   2/2     2            2           84s
   ```

   You receive the previous output if you deployed using Helm. If you deployed using the Kubernetes manifest, you only have one replica.

1. Before using the controller to provision AWS resources, your cluster must meet specific requirements. For more information, see [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md) and [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md).

# Migrate apps from deprecated ALB Ingress Controller
<a name="lbc-remove"></a>

This topic describes how to migrate from deprecated controller versions. More specifically, it describes how to remove deprecated versions of the AWS Load Balancer Controller.
+ Deprecated versions cannot be upgraded. You must remove them first, and then install a current version.
+ Deprecated versions include:
  +  AWS ALB Ingress Controller for Kubernetes ("Ingress Controller"), a predecessor to the AWS Load Balancer Controller.
  + Any `0.1.x` version of the AWS Load Balancer Controller

## Remove the deprecated controller version
<a name="lbc-remove-desc"></a>

**Note**  
You may have installed the deprecated version using Helm or manually with Kubernetes manifests. Complete the procedure using the tool that you originally installed it with.

1. If you installed the `incubator/aws-alb-ingress-controller` Helm chart, uninstall it.

   ```
   helm delete aws-alb-ingress-controller -n kube-system
   ```

1. If you have version `0.1.x` of the `eks-charts/aws-load-balancer-controller` chart installed, uninstall it. The upgrade from `0.1.x` to version `1.0.0` doesn’t work due to incompatibility with the webhook API version.

   ```
   helm delete aws-load-balancer-controller -n kube-system
   ```

1. Check to see if the controller is currently installed.

   ```
   kubectl get deployment -n kube-system alb-ingress-controller
   ```

   This is the output if the controller isn’t installed.

   ```
   Error from server (NotFound): deployments.apps "alb-ingress-controller" not found
   ```

   This is the output if the controller is installed.

   ```
   NAME                   READY UP-TO-DATE AVAILABLE AGE
   alb-ingress-controller 1/1   1          1         122d
   ```

1. Enter the following commands to remove the controller.

   ```
   kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml
   kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml
   ```

## Migrate to AWS Load Balancer Controller
<a name="lbc-migrate"></a>

To migrate from the ALB Ingress Controller for Kubernetes to the AWS Load Balancer Controller, you need to:

1. Remove the ALB Ingress Controller (see above).

1.  [Install the AWS Load Balancer Controller.](aws-load-balancer-controller.md#lbc-overview) 

1. Add an additional policy to the IAM Role used by the AWS Load Balancer Controller. This policy permits the LBC to manage resources created by the ALB Ingress Controller for Kubernetes.

1. Download the IAM policy. You can also [view the policy](https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy_v1_to_v2_additional.json).

   ```
   curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.14.1/docs/install/iam_policy_v1_to_v2_additional.json
   ```

1. If your cluster is in the AWS GovCloud (US-East) or AWS GovCloud (US-West) AWS Regions, then replace `arn:aws:` with `arn:aws-us-gov:`.

   ```
   sed -i.bak -e 's|arn:aws:|arn:aws-us-gov:|' iam_policy_v1_to_v2_additional.json
   ```

1. Create the IAM policy and note the ARN that is returned.

   ```
   aws iam create-policy \
     --policy-name AWSLoadBalancerControllerAdditionalIAMPolicy \
     --policy-document file://iam_policy_v1_to_v2_additional.json
   ```

1. Attach the IAM policy to the IAM role used by the AWS Load Balancer Controller. Replace *your-role-name* with the name of the role, such as `AmazonEKSLoadBalancerControllerRole`.

   If you created the role using `eksctl`, then to find the role name that was created, open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation) and select the **eksctl-*my-cluster*-addon-iamserviceaccount-kube-system-aws-load-balancer-controller** stack. Select the **Resources** tab. The role name is in the **Physical ID** column.

   ```
   aws iam attach-role-policy \
     --role-name your-role-name \
     --policy-arn arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerAdditionalIAMPolicy
   ```

# Manage CoreDNS for DNS in Amazon EKS clusters
<a name="managing-coredns"></a>

**Tip**  
With Amazon EKS Auto Mode, you don’t need to install or upgrade networking add-ons. Auto Mode includes pod networking and load balancing capabilities.  
For more information, see [Automate cluster infrastructure with EKS Auto Mode](automode.md).

CoreDNS is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS. When you launch an Amazon EKS cluster with at least one node, two replicas of the CoreDNS image are deployed by default, regardless of the number of nodes deployed in your cluster. The CoreDNS Pods provide name resolution for all Pods in the cluster. The CoreDNS Pods can be deployed to Fargate nodes if your cluster includes a [Fargate Profile](fargate-profile.md) with a namespace that matches the namespace for the CoreDNS `deployment`. For more information about CoreDNS, see [Using CoreDNS for Service Discovery](https://kubernetes.io/docs/tasks/administer-cluster/coredns/) in the Kubernetes documentation.

## CoreDNS versions
<a name="coredns-versions"></a>

The following table lists the latest version of the Amazon EKS add-on type for each Kubernetes version.


| Kubernetes version | CoreDNS version | 
| --- | --- | 
|  1.35  |  v1.13.2-eksbuild.4  | 
|  1.34  |  v1.13.2-eksbuild.4  | 
|  1.33  |  v1.13.2-eksbuild.4  | 
|  1.32  |  v1.11.4-eksbuild.33  | 
|  1.31  |  v1.11.4-eksbuild.33  | 
|  1.30  |  v1.11.4-eksbuild.33  | 

**Important**  
If you’re self-managing this add-on, the versions in the table might not be the same as the available self-managed versions. For more information about updating the self-managed type of this add-on, see [Update the CoreDNS Amazon EKS self-managed add-on](coredns-add-on-self-managed-update.md).

## Important CoreDNS upgrade considerations
<a name="coredns-upgrade"></a>
+ CoreDNS updates use a `PodDisruptionBudget` to help maintain DNS service availability during the update process. To improve the stability and availability of the CoreDNS Deployment, versions `v1.9.3-eksbuild.6` and later and `v1.10.1-eksbuild.3` and later are deployed with a `PodDisruptionBudget`. If you’ve deployed an existing `PodDisruptionBudget`, your upgrade to these versions might fail. If the upgrade fails, completing one of the following tasks should resolve the issue:
  + When doing the upgrade of the Amazon EKS add-on, choose to override the existing settings as your conflict resolution option. If you’ve made other custom settings to the Deployment, make sure to back up your settings before upgrading so that you can reapply your other custom settings after the upgrade.
  + Remove your existing `PodDisruptionBudget` and try the upgrade again.
+ In EKS add-on versions `v1.9.3-eksbuild.3` and later and `v1.10.1-eksbuild.6` and later, the CoreDNS Deployment sets the `readinessProbe` to use the `/ready` endpoint. This endpoint is enabled in the `Corefile` configuration file for CoreDNS.

  If you use a custom `Corefile`, you must add the `ready` plugin to the config, so that the `/ready` endpoint is active in CoreDNS for the probe to use.
+ In EKS add-on versions `v1.9.3-eksbuild.7` and later and `v1.10.1-eksbuild.4` and later, you can change the `PodDisruptionBudget`. You can edit the add-on and change these settings in the **Optional configuration settings** using the fields in the following example. This example shows the default `PodDisruptionBudget`.

  ```
  {
      "podDisruptionBudget": {
          "enabled": true,
          "maxUnavailable": 1
      }
  }
  ```

  You can set `maxUnavailable` or `minAvailable`, but you can’t set both in a single `PodDisruptionBudget`. For more information about `PodDisruptionBudgets`, see [Specifying a PodDisruptionBudget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget) in the *Kubernetes documentation*.

  Note that if you set `enabled` to `false`, the `PodDisruptionBudget` isn’t removed. After you set this field to `false`, you must delete the `PodDisruptionBudget` object. Similarly, if you edit the add-on to use an older version of the add-on (downgrade the add-on) after upgrading to a version with a `PodDisruptionBudget`, the `PodDisruptionBudget` isn’t removed. To delete the `PodDisruptionBudget`, you can run the following command:

  ```
  kubectl delete poddisruptionbudget coredns -n kube-system
  ```
+ In EKS add-on versions `v1.10.1-eksbuild.5` and later, the default toleration is changed from `node-role.kubernetes.io/master:NoSchedule` to `node-role.kubernetes.io/control-plane:NoSchedule` to comply with KEP 2067. For more information about KEP 2067, see [KEP-2067: Rename the kubeadm "master" label and taint](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint#renaming-the-node-rolekubernetesiomaster-node-taint) in the *Kubernetes Enhancement Proposals (KEPs)* on GitHub.

  In EKS add-on versions `v1.8.7-eksbuild.8` and later and `v1.9.3-eksbuild.9` and later, both tolerations are set to be compatible with every Kubernetes version.
+ In EKS add-on versions `v1.9.3-eksbuild.11` and `v1.10.1-eksbuild.7` and later, the CoreDNS Deployment sets a default value for `topologySpreadConstraints`. The default value ensures that the CoreDNS Pods are spread across the Availability Zones if there are nodes in multiple Availability Zones available. You can set a custom value that will be used instead of the default value. The default value follows:

  ```
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          k8s-app: kube-dns
  ```
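Because a single `PodDisruptionBudget` can carry `maxUnavailable` or `minAvailable` but never both, it can help to sanity-check your **Optional configuration settings** JSON locally before applying it. The following is an illustrative sketch of such a check (the `pdb_config_ok` helper and the grep-based approach are not part of any EKS tooling; they're an assumption for this example):

```shell
# Illustrative local sanity check (not part of the EKS API): a PodDisruptionBudget
# configuration may set "maxUnavailable" or "minAvailable", but not both.
pdb_config_ok() {
  if printf '%s' "$1" | grep -q '"maxUnavailable"' && \
     printf '%s' "$1" | grep -q '"minAvailable"'; then
    echo "invalid: set only one of maxUnavailable or minAvailable"
  else
    echo "ok"
  fi
}

pdb_config_ok '{"podDisruptionBudget":{"enabled":true,"maxUnavailable":1}}'
```

If the check prints `ok`, the configuration at least doesn't violate the mutual-exclusivity rule; the Kubernetes API server performs the authoritative validation when the `PodDisruptionBudget` is applied.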

### CoreDNS `v1.11` upgrade considerations
<a name="coredns-upgrade-1"></a>
+ In EKS add-on versions `v1.11.1-eksbuild.4` and later, the container image is based on a [minimal base image](https://gallery.ecr.aws/eks-distro-build-tooling/eks-distro-minimal-base) maintained by Amazon EKS Distro, which contains minimal packages and doesn’t have shells. For more information, see [Amazon EKS Distro](https://distro.eks.amazonaws.com/). The usage and troubleshooting of the CoreDNS image remains the same.

# Create the CoreDNS Amazon EKS add-on
<a name="coredns-add-on-create"></a>

Create the CoreDNS Amazon EKS add-on. You must have a cluster before you create the add-on. For more information, see [Create an Amazon EKS cluster](create-cluster.md).

1. See which version of the add-on is installed on your cluster.

   ```
   kubectl describe deployment coredns --namespace kube-system | grep coredns: | cut -d : -f 3
   ```

   An example output is as follows.

   ```
   v1.10.1-eksbuild.13
   ```

1. See which type of the add-on is installed on your cluster. Depending on the tool that you created your cluster with, you might not currently have the Amazon EKS add-on type installed on your cluster. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name coredns --query addon.addonVersion --output text
   ```

   If a version number is returned, you have the Amazon EKS type of the add-on installed on your cluster and don’t need to complete the remaining steps in this procedure. If an error is returned, you don’t have the Amazon EKS type of the add-on installed on your cluster. Complete the remaining steps of this procedure to install it.

1. Save the configuration of your currently installed add-on.

   ```
   kubectl get deployment coredns -n kube-system -o yaml > aws-k8s-coredns-old.yaml
   ```

1. Create the add-on using the AWS CLI. If you want to use the AWS Management Console or `eksctl` to create the add-on, see [Create an Amazon EKS add-on](creating-an-add-on.md) and specify `coredns` for the add-on name. Copy the command that follows to your device. Make the following modifications to the command, as needed, and then run the modified command.
   + Replace *my-cluster* with the name of your cluster.
   + Replace *v1.11.3-eksbuild.1* with the latest version listed in the [latest version table](managing-coredns.md#coredns-versions) for your cluster version.

     ```
     aws eks create-addon --cluster-name my-cluster --addon-name coredns --addon-version v1.11.3-eksbuild.1
     ```

     If you’ve applied custom settings to your current add-on that conflict with the default settings of the Amazon EKS add-on, creation might fail. If creation fails, you receive an error that can help you resolve the issue. Alternatively, you can add `--resolve-conflicts OVERWRITE` to the previous command. This allows the add-on to overwrite any existing custom settings. Once you’ve created the add-on, you can update it with your custom settings.

1. Confirm that the latest version of the add-on for your cluster’s Kubernetes version was added to your cluster. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name coredns --query addon.addonVersion --output text
   ```

   It might take several seconds for add-on creation to complete.

   An example output is as follows.

   ```
   v1.11.3-eksbuild.1
   ```

1. If you made custom settings to your original add-on, before you created the Amazon EKS add-on, use the configuration that you saved in a previous step to update the Amazon EKS add-on with your custom settings. For instructions to update the add-on, see [Update the CoreDNS Amazon EKS add-on](coredns-add-on-update.md).
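When scripting the steps above, you may want to compare the installed add-on version against the target version before deciding whether to act. A minimal sketch follows, using `sort -V` for version-aware ordering (the `needs_upgrade` helper and the example version strings are assumptions for illustration, not output from a real cluster):

```shell
# Illustrative helper: returns success when the installed version sorts
# strictly before the target version under version-aware ordering.
needs_upgrade() {
  installed="$1"; target="$2"
  [ "$installed" != "$target" ] && \
    [ "$(printf '%s\n%s\n' "$installed" "$target" | sort -V | head -n 1)" = "$installed" ]
}

if needs_upgrade "v1.10.1-eksbuild.13" "v1.11.3-eksbuild.1"; then
  echo "upgrade needed"
else
  echo "up to date"
fi
```

In practice, the installed version would come from the `kubectl describe deployment` or `aws eks describe-addon` commands shown earlier in this procedure.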

# Update the CoreDNS Amazon EKS add-on
<a name="coredns-add-on-update"></a>

Update the Amazon EKS type of the add-on. If you haven’t added the Amazon EKS add-on to your cluster, either [add it](coredns-add-on-create.md) or see [Update the CoreDNS Amazon EKS self-managed add-on](coredns-add-on-self-managed-update.md).

Before you begin, review the upgrade considerations. For more information, see [Important CoreDNS upgrade considerations](managing-coredns.md#coredns-upgrade).

1. See which version of the add-on is installed on your cluster. Replace *my-cluster* with your cluster name.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name coredns --query "addon.addonVersion" --output text
   ```

   An example output is as follows.

   ```
   v1.10.1-eksbuild.13
   ```

   If the version returned is the same as the version for your cluster’s Kubernetes version in the [latest version table](managing-coredns.md#coredns-versions), then you already have the latest version installed on your cluster and don’t need to complete the rest of this procedure. If you receive an error, instead of a version number in your output, then you don’t have the Amazon EKS type of the add-on installed on your cluster. You need to [create the add-on](coredns-add-on-create.md) before you can update it with this procedure.

1. Save the configuration of your currently installed add-on.

   ```
   kubectl get deployment coredns -n kube-system -o yaml > aws-k8s-coredns-old.yaml
   ```

1. Update your add-on using the AWS CLI. If you want to use the AWS Management Console or `eksctl` to update the add-on, see [Update an Amazon EKS add-on](updating-an-add-on.md). Copy the command that follows to your device. Make the following modifications to the command, as needed, and then run the modified command.
   + Replace *my-cluster* with the name of your cluster.
   + Replace *v1.11.3-eksbuild.1* with the latest version listed in the [latest version table](managing-coredns.md#coredns-versions) for your cluster version.
   + The `--resolve-conflicts PRESERVE` option preserves existing configuration values for the add-on. If you’ve set custom values for add-on settings, and you don’t use this option, Amazon EKS overwrites your values with its default values. If you use this option, then we recommend testing any field and value changes on a non-production cluster before updating the add-on on your production cluster. If you change this value to `OVERWRITE`, all settings are changed to the Amazon EKS default values, and any custom values that you’ve set might be overwritten. If you change this value to `none`, Amazon EKS doesn’t change the value of any settings, but the update might fail. If the update fails, you receive an error message to help you resolve the conflict.
   + If you’re not updating a configuration setting, remove `--configuration-values '{"replicaCount":3}'` from the command. If you’re updating a configuration setting, replace *"replicaCount":3* with the setting that you want to set. In this example, the number of replicas of CoreDNS is set to `3`. The value that you specify must be valid for the configuration schema. If you don’t know the configuration schema, run `aws eks describe-addon-configuration --addon-name coredns --addon-version v1.11.3-eksbuild.1`, replacing *v1.11.3-eksbuild.1* with the version number of the add-on that you want to see the configuration for. The schema is returned in the output. If you have any existing custom configuration, want to remove it all, and set the values for all settings back to Amazon EKS defaults, remove *"replicaCount":3* from the command, so that you have empty `{}`. For more information about CoreDNS settings, see [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/) in the Kubernetes documentation.

     ```
     aws eks update-addon --cluster-name my-cluster --addon-name coredns --addon-version v1.11.3-eksbuild.1 \
         --resolve-conflicts PRESERVE --configuration-values '{"replicaCount":3}'
     ```

     It might take several seconds for the update to complete.

1. Confirm that the add-on version was updated. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name coredns
   ```

   It might take several seconds for the update to complete.

   An example output is as follows.

   ```
   {
       "addon": {
           "addonName": "coredns",
           "clusterName": "my-cluster",
           "status": "ACTIVE",
           "addonVersion": "v1.11.3-eksbuild.1",
           "health": {
               "issues": []
           },
           "addonArn": "arn:aws:eks:region:111122223333:addon/my-cluster/coredns/d2c34f06-1111-2222-1eb0-24f64ce37fa4",
           "createdAt": "2023-03-01T16:41:32.442000+00:00",
           "modifiedAt": "2023-03-01T18:16:54.332000+00:00",
           "tags": {},
           "configurationValues": "{\"replicaCount\":3}"
       }
   }
   ```
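If you only need the status field from the output above, you can ask the CLI for it directly with `--query addon.status --output text`, or extract it from a captured payload. The following sketch parses a sample payload with `sed`; the payload values here are placeholders, not real cluster output:

```shell
# Illustrative: extract the add-on status from a captured describe-addon payload.
# In practice you could simply run:
#   aws eks describe-addon --cluster-name my-cluster --addon-name coredns \
#       --query addon.status --output text
response='{"addon":{"addonName":"coredns","clusterName":"my-cluster","status":"ACTIVE","addonVersion":"v1.11.3-eksbuild.1"}}'
status=$(printf '%s' "$response" | sed -n 's/.*"status":"\([A-Z_]*\)".*/\1/p')
echo "$status"
```

A status of `ACTIVE` with an empty `issues` list indicates the update completed successfully.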

# Update the CoreDNS Amazon EKS self-managed add-on
<a name="coredns-add-on-self-managed-update"></a>

**Important**  
We recommend adding the Amazon EKS type of the add-on to your cluster instead of using the self-managed type of the add-on. If you’re not familiar with the difference between the types, see [Amazon EKS add-ons](eks-add-ons.md). For more information about adding an Amazon EKS add-on to your cluster, see [Create an Amazon EKS add-on](creating-an-add-on.md). If you’re unable to use the Amazon EKS add-on, we encourage you to submit an issue about why you can’t to the [Containers roadmap GitHub repository](https://github.com/aws/containers-roadmap/issues).

Before you begin, review the upgrade considerations. For more information, see [Important CoreDNS upgrade considerations](managing-coredns.md#coredns-upgrade).

1. Confirm that you have the self-managed type of the add-on installed on your cluster. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name coredns --query addon.addonVersion --output text
   ```

   If an error message is returned, you have the self-managed type of the add-on installed on your cluster. Complete the remaining steps in this procedure. If a version number is returned, you have the Amazon EKS type of the add-on installed on your cluster. To update the Amazon EKS type of the add-on, use the procedure in [Update the CoreDNS Amazon EKS add-on](coredns-add-on-update.md), rather than using this procedure. If you’re not familiar with the differences between the add-on types, see [Amazon EKS add-ons](eks-add-ons.md).

1. See which version of the container image is currently installed on your cluster.

   ```
   kubectl describe deployment coredns -n kube-system | grep Image | cut -d ":" -f 3
   ```

   An example output is as follows.

   ```
   v1.8.7-eksbuild.2
   ```

1. If your current CoreDNS version is `v1.5.0` or later, but earlier than the version listed in the [CoreDNS versions](managing-coredns.md#coredns-versions) table, then skip this step. If your current version is earlier than `1.5.0`, then you need to modify the `ConfigMap` for CoreDNS to use the forward plugin, rather than the proxy plugin.

   1. Open the `ConfigMap` with the following command.

      ```
      kubectl edit configmap coredns -n kube-system
      ```

   1. Replace `proxy` in the following line with `forward`. Save the file and exit the editor.

      ```
      proxy . /etc/resolv.conf
      ```

1. If you originally deployed your cluster on Kubernetes `1.17` or earlier, then you may need to remove a discontinued line from your CoreDNS manifest.
**Important**  
You must complete this step before updating to CoreDNS version `1.7.0`, but it’s recommended that you complete this step even if you’re updating to an earlier version.

   1. Check to see if your CoreDNS manifest has the line.

      ```
      kubectl get configmap coredns -n kube-system -o jsonpath='{$.data.Corefile}' | grep upstream
      ```

      If no output is returned, your manifest doesn’t have the line and you can skip to the next step to update CoreDNS. If output is returned, then you need to remove the line.

   1. Edit the `ConfigMap` with the following command, removing the line in the file that has the word `upstream` in it. Do not change anything else in the file. Once the line is removed, save the changes.

      ```
      kubectl edit configmap coredns -n kube-system -o yaml
      ```

1. Retrieve your current CoreDNS image version:

   ```
   kubectl describe deployment coredns -n kube-system | grep Image
   ```

   An example output is as follows.

   ```
   602401143452.dkr.ecr.region-code.amazonaws.com/eks/coredns:v1.8.7-eksbuild.2
   ```

1. If you’re updating to CoreDNS `1.8.3` or later, then you need to add the `endpointslices` permission to the `system:coredns` Kubernetes `clusterrole`.

   ```
   kubectl edit clusterrole system:coredns
   ```

   Add the following lines under the existing permissions lines in the `rules` section of the file.

   ```
   [...]
   - apiGroups:
     - discovery.k8s.io
     resources:
     - endpointslices
     verbs:
     - list
     - watch
   [...]
   ```

1. Update the CoreDNS add-on by replacing *602401143452* and *region-code* with the values from the output returned in a previous step. Replace *v1.11.3-eksbuild.1* with the CoreDNS version listed in the [latest versions table](managing-coredns.md#coredns-versions) for your Kubernetes version.

   ```
   kubectl set image deployment.apps/coredns -n kube-system coredns=602401143452.dkr.ecr.region-code.amazonaws.com/eks/coredns:v1.11.3-eksbuild.1
   ```

   An example output is as follows.

   ```
   deployment.apps/coredns image updated
   ```

1. Check the container image version again to confirm that it was updated to the version that you specified in the previous step.

   ```
   kubectl describe deployment coredns -n kube-system | grep Image | cut -d ":" -f 3
   ```

   An example output is as follows.

   ```
   v1.11.3-eksbuild.1
   ```
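When scripting this update, it can help to compose the full image URI from its parts before passing it to `kubectl set image`. The sketch below does that; the region `us-west-2` is an example value only, and you should use the registry account ID and region from your own `kubectl describe deployment` output:

```shell
# Illustrative: compose the CoreDNS image URI from its parts. The account ID,
# region, and version below are example values taken from this procedure's
# sample output; substitute the values for your own cluster.
account="602401143452"
region="us-west-2"
version="v1.11.3-eksbuild.1"
image="${account}.dkr.ecr.${region}.amazonaws.com/eks/coredns:${version}"
echo "$image"
```

The resulting string is what you would pass as `coredns=...` in the `kubectl set image` command shown earlier.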

# Scale CoreDNS Pods for high DNS traffic
<a name="coredns-autoscaling"></a>

When you launch an Amazon EKS cluster with at least one node, a Deployment of two replicas of the CoreDNS image is deployed by default, regardless of the number of nodes in your cluster. The CoreDNS Pods provide name resolution for all Pods in the cluster. Applications use name resolution to connect to Pods and Services in the cluster, as well as to services outside the cluster. As the number of name resolution requests (queries) from Pods increases, the CoreDNS Pods can become overwhelmed, slow down, and reject requests that they can’t handle.

To handle the increased load on the CoreDNS pods, consider an autoscaling system for CoreDNS. Amazon EKS can manage the autoscaling of the CoreDNS Deployment in the EKS Add-on version of CoreDNS. This CoreDNS autoscaler continuously monitors the cluster state, including the number of nodes and CPU cores. Based on that information, the controller will dynamically adapt the number of replicas of the CoreDNS deployment in an EKS cluster. This feature works for CoreDNS `v1.9` and later. For more information about which versions are compatible with CoreDNS Autoscaling, see the following section.

The system automatically manages CoreDNS replicas using a dynamic formula based on both the number of nodes and CPU cores in the cluster, calculated as the maximum of (numberOfNodes divided by 16) and (numberOfCPUCores divided by 256). It evaluates demand over 10-minute peak periods, scaling up immediately when needed to handle increased DNS query load, while scaling down gradually by reducing replicas by 33% every 3 minutes to maintain system stability and avoid disruption.
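The formula above can be sketched in a few lines of shell. Note that whether EKS rounds the divisions up, down, or to a nearest value isn't stated here, so this sketch's use of ceiling division is an assumption for illustration, as is the `coredns_replicas` helper name:

```shell
# Sketch of the documented formula: replicas = max(nodes/16, cores/256).
# Ceiling division is an assumption; the actual rounding behavior may differ.
coredns_replicas() {
  nodes=$1; cores=$2
  by_nodes=$(( (nodes + 15) / 16 ))    # ceil(nodes / 16)
  by_cores=$(( (cores + 255) / 256 ))  # ceil(cores / 256)
  if [ "$by_nodes" -ge "$by_cores" ]; then echo "$by_nodes"; else echo "$by_cores"; fi
}

coredns_replicas 100 400
```

For example, a cluster with 100 nodes and 400 CPU cores is sized by the node count (100/16 rounds up to 7), while a smaller cluster of large instances may be sized by the core count instead.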

We recommend using this feature in conjunction with other [EKS Cluster Autoscaling best practices](https://aws.github.io/aws-eks-best-practices/cluster-autoscaling/) to improve overall application availability and cluster scalability.

## Prerequisites
<a name="coredns-autoscaling-prereqs"></a>

For Amazon EKS to scale your CoreDNS deployment, there are three prerequisites:
+ You must be using the *EKS Add-on* version of CoreDNS.
+ Your cluster must be running at least the minimum cluster versions and platform versions.
+ Your cluster must be running at least the minimum version of the EKS Add-on of CoreDNS.

### Minimum cluster version
<a name="coredns-autoscaling-cluster-version"></a>

Autoscaling of CoreDNS is done by a new component in the cluster control plane, managed by Amazon EKS. Because of this, you must upgrade your cluster to an EKS release that supports the minimum platform version that has the new component.

Your cluster must be running one of the Kubernetes versions and platform versions listed in the following table, or a later version. Any Kubernetes and platform versions later than those listed are also supported. To deploy a new cluster, see [Get started with Amazon EKS](getting-started.md). You can check your current Kubernetes version by replacing *my-cluster* in the following command with the name of your cluster and then running the modified command:

```
aws eks describe-cluster --name my-cluster --query cluster.version --output text
```


| Kubernetes version | Platform version | 
| --- | --- | 
|  Not Listed  |  All Platform Versions  | 
|   `1.29.3`   |   `eks.7`   | 
|   `1.28.8`   |   `eks.13`   | 
|   `1.27.12`   |   `eks.17`   | 
|   `1.26.15`   |   `eks.18`   | 

**Note**  
Every platform version of later Kubernetes versions is also supported, for example Kubernetes version `1.30` from `eks.1` onward.
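The table above can be read as a simple lookup: given a Kubernetes version, it yields the minimum platform version that includes the autoscaling component. The following sketch encodes that lookup (the `min_platform_version` helper is an illustration of reading the table, not an AWS tool):

```shell
# Illustrative lookup of the minimum platform version table above.
min_platform_version() {
  case "$1" in
    1.29) echo "eks.7" ;;
    1.28) echo "eks.13" ;;
    1.27) echo "eks.17" ;;
    1.26) echo "eks.18" ;;
    *)    echo "all" ;;  # not listed: all platform versions are supported
  esac
}

min_platform_version 1.28
```

Compare the result against the platform version returned by `aws eks describe-cluster --name my-cluster --query cluster.platformVersion --output text`.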

### Minimum EKS Add-on version
<a name="coredns-autoscaling-coredns-version"></a>


| Kubernetes version | Minimum EKS Add-on version | 
| --- | --- | 
|   `1.29`   |   `v1.11.1-eksbuild.9`   | 
|   `1.28`   |   `v1.10.1-eksbuild.11`   | 

#### Configuring CoreDNS autoscaling in the AWS Management Console
<a name="coredns-autoscaling-console"></a>

1. Ensure that your cluster is at or above the minimum cluster version.

   Amazon EKS upgrades clusters between platform versions of the same Kubernetes version automatically, and you can’t start this process yourself. Instead, you can upgrade your cluster to the next Kubernetes version, and the cluster is upgraded to that Kubernetes version and the latest platform version.

   New Kubernetes versions sometimes introduce significant changes. Therefore, we recommend that you test the behavior of your applications by using a separate cluster of the new Kubernetes version before you update your production clusters.

   To upgrade a cluster to a new Kubernetes version, follow the procedure in [Update existing cluster to new Kubernetes version](update-cluster.md).

1. Ensure that you have the EKS Add-on for CoreDNS, not the self-managed CoreDNS Deployment.

   Depending on the tool that you created your cluster with, you might not currently have the Amazon EKS add-on type installed on your cluster. To see which type of the add-on is installed on your cluster, you can run the following command. Replace `my-cluster` with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name coredns --query addon.addonVersion --output text
   ```

   If a version number is returned, you have the Amazon EKS type of the add-on installed on your cluster and you can continue with the next step. If an error is returned, you don’t have the Amazon EKS type of the add-on installed on your cluster. Complete the remaining steps of the procedure [Create the CoreDNS Amazon EKS add-on](coredns-add-on-create.md) to replace the self-managed version with the Amazon EKS add-on.

1. Ensure that your EKS Add-on for CoreDNS is at a version the same or higher than the minimum EKS Add-on version.

   See which version of the add-on is installed on your cluster. You can check in the AWS Management Console or run the following command:

   ```
   kubectl describe deployment coredns --namespace kube-system | grep coredns: | cut -d : -f 3
   ```

   An example output is as follows.

   ```
   v1.10.1-eksbuild.13
   ```

   Compare this version with the minimum EKS Add-on version in the previous section. If needed, upgrade the EKS Add-on to a higher version by following the procedure [Update the CoreDNS Amazon EKS add-on](coredns-add-on-update.md).

1. Add the autoscaling configuration to the **Optional configuration settings** of the EKS Add-on.

   1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

   1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the add-on for.

   1. Choose the **Add-ons** tab.

   1. Select the box in the top right of the CoreDNS add-on box and then choose **Edit**.

   1. On the **Configure CoreDNS** page:

      1. Select the **Version** that you’d like to use. We recommend that you keep the same version as the previous step, and update the version and configuration in separate actions.

      1. Expand the **Optional configuration settings**.

      1. Enter the JSON key `"autoScaling":` and a value of a nested JSON object with a key `"enabled":` and value `true` in **Configuration values**. The resulting text must be a valid JSON object. If this key and value are the only data in the text box, surround the key and value with curly braces `{ }`. The following example shows autoscaling is enabled:

         ```
         {
           "autoScaling": {
             "enabled": true
           }
         }
         ```

      1. (Optional) You can provide minimum and maximum values that autoscaling can scale the number of CoreDNS pods to.

         The following example shows autoscaling is enabled and all of the optional keys have values. We recommend that the minimum number of CoreDNS pods is always greater than 2 to provide resilience for the DNS service in the cluster.

         ```
         {
           "autoScaling": {
             "enabled": true,
             "minReplicas": 2,
             "maxReplicas": 10
           }
         }
         ```

   1. To apply the new configuration by replacing the CoreDNS pods, choose **Save changes**.

      Amazon EKS applies changes to the EKS Add-ons by using a *rollout* of the Kubernetes Deployment for CoreDNS. You can track the status of the rollout in the **Update history** of the add-on in the AWS Management Console and with `kubectl rollout status deployment/coredns --namespace kube-system`.

       `kubectl rollout` has the following commands:

      ```
      kubectl rollout
      
      history  -- View rollout history
      pause    -- Mark the provided resource as paused
      restart  -- Restart a resource
      resume   -- Resume a paused resource
      status   -- Show the status of the rollout
      undo     -- Undo a previous rollout
      ```

      If the rollout takes too long, Amazon EKS will undo the rollout, and a message with the type of **Addon Update** and a status of **Failed** will be added to the **Update history** of the add-on. To investigate any issues, start from the history of the rollout, and run `kubectl logs` on a CoreDNS pod to see the logs of CoreDNS.

1. If the new entry in the **Update history** has a status of **Successful**, then the rollout has completed and the add-on is using the new configuration in all of the CoreDNS pods. As you change the number of nodes and CPU cores of nodes in the cluster, Amazon EKS scales the number of replicas of the CoreDNS deployment.

#### Configuring CoreDNS autoscaling in the AWS Command Line Interface
<a name="coredns-autoscaling-cli"></a>

1. Ensure that your cluster is at or above the minimum cluster version.

   Amazon EKS upgrades clusters between platform versions of the same Kubernetes version automatically, and you can’t start this process yourself. Instead, you can upgrade your cluster to the next Kubernetes version, and the cluster is upgraded to that Kubernetes version and the latest platform version.

   New Kubernetes versions sometimes introduce significant changes. Therefore, we recommend that you test the behavior of your applications by using a separate cluster of the new Kubernetes version before you update your production clusters.

   To upgrade a cluster to a new Kubernetes version, follow the procedure in [Update existing cluster to new Kubernetes version](update-cluster.md).

1. Ensure that you have the EKS Add-on for CoreDNS, not the self-managed CoreDNS Deployment.

   Depending on the tool that you created your cluster with, you might not currently have the Amazon EKS add-on type installed on your cluster. To see which type of the add-on is installed on your cluster, you can run the following command. Replace `my-cluster` with the name of your cluster.

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name coredns --query addon.addonVersion --output text
   ```

   If a version number is returned, you have the Amazon EKS type of the add-on installed on your cluster. If an error is returned, you don’t have the Amazon EKS type of the add-on installed on your cluster. Complete the remaining steps of the procedure [Create the CoreDNS Amazon EKS add-on](coredns-add-on-create.md) to replace the self-managed version with the Amazon EKS add-on.

1. Ensure that your EKS Add-on for CoreDNS is at a version the same or higher than the minimum EKS Add-on version.

   See which version of the add-on is installed on your cluster. You can check in the AWS Management Console or run the following command:

   ```
   kubectl describe deployment coredns --namespace kube-system | grep coredns: | cut -d : -f 3
   ```

   An example output is as follows.

   ```
   v1.10.1-eksbuild.13
   ```

   Compare this version with the minimum EKS Add-on version in the previous section. If needed, upgrade the EKS Add-on to a higher version by following the procedure [Update the CoreDNS Amazon EKS add-on](coredns-add-on-update.md).

1. Add the autoscaling configuration to the **Optional configuration settings** of the EKS Add-on.

   Run the following AWS CLI command. Replace `my-cluster` with the name of your cluster.

   ```
   aws eks update-addon --cluster-name my-cluster --addon-name coredns \
       --resolve-conflicts PRESERVE --configuration-values '{"autoScaling":{"enabled":true}}'
   ```

   Amazon EKS applies changes to the EKS Add-ons by using a *rollout* of the Kubernetes Deployment for CoreDNS. You can track the status of the rollout in the **Update history** of the add-on in the AWS Management Console and with `kubectl rollout status deployment/coredns --namespace kube-system`.

    `kubectl rollout` has the following commands:

   ```
   kubectl rollout
   
   history  -- View rollout history
   pause    -- Mark the provided resource as paused
   restart  -- Restart a resource
   resume   -- Resume a paused resource
   status   -- Show the status of the rollout
   undo     -- Undo a previous rollout
   ```

   If the rollout takes too long, Amazon EKS will undo the rollout, and a message with the type of **Addon Update** and a status of **Failed** will be added to the **Update history** of the add-on. To investigate any issues, start from the history of the rollout, and run `kubectl logs` on a CoreDNS pod to see the logs of CoreDNS.

1. (Optional) You can provide minimum and maximum values that autoscaling can scale the number of CoreDNS pods to.

   The following example shows autoscaling is enabled and all of the optional keys have values. We recommend that the minimum number of CoreDNS pods is always greater than 2 to provide resilience for the DNS service in the cluster.

   ```
   aws eks update-addon --cluster-name my-cluster --addon-name coredns \
       --resolve-conflicts PRESERVE --configuration-values '{"autoScaling":{"enabled":true,"minReplicas":2,"maxReplicas":10}}'
   ```

1. Check the status of the update to the add-on by running the following command:

   ```
   aws eks describe-addon --cluster-name my-cluster --addon-name coredns
   ```

   If you see this line: `"status": "ACTIVE"`, then the rollout has completed and the add-on is using the new configuration in all of the CoreDNS pods. As you change the number of nodes and CPU cores of nodes in the cluster, Amazon EKS scales the number of replicas of the CoreDNS deployment.
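Before passing `--configuration-values` to `aws eks update-addon`, you can build and bounds-check the JSON string locally. This is a minimal sketch; the local check is an illustration (EKS performs its own validation against the add-on's configuration schema), and the values `2` and `10` are examples:

```shell
# Illustrative: build the autoscaling configuration string and sanity-check
# the replica bounds locally before passing it to "aws eks update-addon".
min=2; max=10
if [ "$min" -le "$max" ]; then
  printf '{"autoScaling":{"enabled":true,"minReplicas":%d,"maxReplicas":%d}}\n' "$min" "$max"
else
  echo "error: minReplicas must not exceed maxReplicas" >&2
fi
```

The printed string is exactly what the earlier `--configuration-values` argument expects.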

# Monitor Kubernetes DNS resolution with CoreDNS metrics
<a name="coredns-metrics"></a>

CoreDNS as an EKS add-on exposes the metrics from CoreDNS on port `9153` in the Prometheus format in the `kube-dns` service. You can use Prometheus, the Amazon CloudWatch agent, or any other compatible system to scrape (collect) these metrics.

For an example *scrape configuration* that is compatible with both Prometheus and the CloudWatch agent, see [CloudWatch agent configuration for Prometheus](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights-Prometheus-Setup-configure.html) in the *Amazon CloudWatch User Guide*.
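The metrics are plain Prometheus text exposition format, so once scraped they can be inspected with ordinary text tools. The sketch below extracts a sample value from two illustrative lines; the metric family names are real CoreDNS metrics, but the label sets and values here are made up for the example:

```shell
# Illustrative Prometheus-format lines, as CoreDNS might expose on port 9153.
# The metric names are real CoreDNS metric families; the values are made up.
metrics='coredns_dns_requests_total{server="dns://:53",zone="."} 1500
coredns_dns_responses_total{server="dns://:53",rcode="NOERROR"} 1400'

printf '%s\n' "$metrics" | awk '$1 ~ /^coredns_dns_requests_total/ {print $2}'
```

In a real cluster you would scrape the `kube-dns` service endpoints rather than parse text by hand; this only illustrates the shape of the exposed data.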

# Manage `kube-proxy` in Amazon EKS clusters
<a name="managing-kube-proxy"></a>

**Tip**  
With Amazon EKS Auto Mode, you don’t need to install or upgrade networking add-ons. Auto Mode includes pod networking and load balancing capabilities.  
For more information, see [Automate cluster infrastructure with EKS Auto Mode](automode.md).

We recommend adding the Amazon EKS type of the add-on to your cluster instead of using the self-managed type of the add-on. If you’re not familiar with the difference between the types, see [Amazon EKS add-ons](eks-add-ons.md). For more information about adding an Amazon EKS add-on to your cluster, see [Create an Amazon EKS add-on](creating-an-add-on.md). If you’re unable to use the Amazon EKS add-on, we encourage you to submit an issue about why you can’t to the [Containers roadmap GitHub repository](https://github.com/aws/containers-roadmap/issues).

The `kube-proxy` add-on is deployed on each Amazon EC2 node in your Amazon EKS cluster. It maintains network rules on your nodes and enables network communication to your Pods. The add-on isn’t deployed to Fargate nodes in your cluster. For more information, see [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) in the Kubernetes documentation.


## `kube-proxy` versions
<a name="kube-proxy-versions"></a>

The following table lists the latest version of the Amazon EKS add-on type for each Kubernetes version.


| Kubernetes version |  `kube-proxy` version | 
| --- | --- | 
|  1.35  |  v1.35.3-eksbuild.2  | 
|  1.34  |  v1.34.6-eksbuild.2  | 
|  1.33  |  v1.33.10-eksbuild.2  | 
|  1.32  |  v1.32.13-eksbuild.5  | 
|  1.31  |  v1.31.14-eksbuild.9  | 
|  1.30  |  v1.30.14-eksbuild.28  | 

**Note**  
An earlier version of the documentation was incorrect. `kube-proxy` versions `v1.28.5`, `v1.27.9`, and `v1.26.12` aren’t available.  
If you’re self-managing this add-on, the versions in the table might not be the same as the available self-managed versions.

## `kube-proxy` container image
<a name="managing-kube-proxy-images"></a>

The `kube-proxy` container image is based on a [minimal base image](https://gallery.ecr.aws/eks-distro-build-tooling/eks-distro-minimal-base-iptables) maintained by Amazon EKS Distro, which contains minimal packages and doesn’t have shells. For more information, see [Amazon EKS Distro](https://distro.eks.amazonaws.com/).

The following table lists the latest available self-managed `kube-proxy` container image version for each Amazon EKS cluster version.


| Kubernetes version |  `kube-proxy` container image version | 
| --- | --- | 
|  1.35  |  v1.35.3-eksbuild.2  | 
|  1.34  |  v1.34.6-eksbuild.2  | 
|  1.33  |  v1.33.10-minimal-eksbuild.2  | 
|  1.32  |  v1.32.13-minimal-eksbuild.5  | 
|  1.31  |  v1.31.14-minimal-eksbuild.9  | 
|  1.30  |  v1.30.14-minimal-eksbuild.28  | 

When you [update an Amazon EKS add-on type](updating-an-add-on.md), you specify a valid Amazon EKS add-on version, which might not be a version listed in this table. This is because [Amazon EKS add-on](workloads-add-ons-available-eks.md#add-ons-kube-proxy) versions don’t always match container image versions specified when updating the self-managed type of this add-on. When you update the self-managed type of this add-on, you specify a valid container image version listed in this table.
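To list the Amazon EKS add-on versions that are published for a given Kubernetes version (as opposed to the container image versions in the table), you can query the API. The following is a sketch that assumes a configured AWS CLI; the helper name and the version `1.30` are illustrative:

```
# List published kube-proxy add-on versions for a Kubernetes minor version.
list_kube_proxy_addon_versions() {
  # Skip quietly if the AWS CLI isn't on PATH (for example, when testing locally).
  command -v aws >/dev/null 2>&1 || { echo "aws CLI not found" >&2; return 0; }
  aws eks describe-addon-versions --addon-name kube-proxy \
    --kubernetes-version "$1" \
    --query 'addons[].addonVersions[].addonVersion' --output text
}

list_kube_proxy_addon_versions 1.30
```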

# Update the Kubernetes `kube-proxy` self-managed add-on
<a name="kube-proxy-add-on-self-managed-update"></a>

**Important**  
We recommend adding the Amazon EKS type of the add-on to your cluster instead of using the self-managed type of the add-on. If you’re not familiar with the difference between the types, see [Amazon EKS add-ons](eks-add-ons.md). For more information about adding an Amazon EKS add-on to your cluster, see [Create an Amazon EKS add-on](creating-an-add-on.md). If you’re unable to use the Amazon EKS add-on, we encourage you to submit an issue about why you can’t to the [Containers roadmap GitHub repository](https://github.com/aws/containers-roadmap/issues).

## Prerequisites
<a name="managing-kube-proxy-prereqs"></a>
+ An existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md).

## Considerations
<a name="managing-kube-proxy-considerations"></a>
+  `kube-proxy` on an Amazon EKS cluster has the same [compatibility and skew policy as Kubernetes](https://kubernetes.io/releases/version-skew-policy/#kube-proxy). To check compatibility, see [Verifying Amazon EKS add-on version compatibility with a cluster](addon-compat.md).

  1. Confirm that you have the self-managed type of the add-on installed on your cluster. Replace *my-cluster* with the name of your cluster.

     ```
     aws eks describe-addon --cluster-name my-cluster --addon-name kube-proxy --query addon.addonVersion --output text
     ```

     If an error message is returned, you have the self-managed type of the add-on installed on your cluster. The remaining steps in this topic are for updating the self-managed type of the add-on. If a version number is returned, you have the Amazon EKS type of the add-on installed on your cluster. To update it, use the procedure in [Updating an Amazon EKS add-on](updating-an-add-on.md), rather than using the procedure in this topic. If you’re not familiar with the differences between the add-on types, see [Amazon EKS add-ons](eks-add-ons.md).

  1. See which version of the container image is currently installed on your cluster.

     ```
     kubectl describe daemonset kube-proxy -n kube-system | grep Image
     ```

     An example output is as follows.

     ```
     Image:    602401143452.dkr.ecr.region-code.amazonaws.com/eks/kube-proxy:v1.29.1-eksbuild.2
     ```

     In the example output, *v1.29.1-eksbuild.2* is the version installed on the cluster.
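     If you want just the tag in a script, you can take everything after the last `:` in the `Image:` line; a small sketch using the example line above:

     ```
     # Extract the image tag from a kubectl "Image:" line.
     line='Image:    602401143452.dkr.ecr.region-code.amazonaws.com/eks/kube-proxy:v1.29.1-eksbuild.2'
     version="${line##*:}"    # parameter expansion: drop everything through the last ":"
     echo "$version"          # v1.29.1-eksbuild.2
     ```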

  1. Update the `kube-proxy` add-on by replacing *602401143452* and *region-code* with the values from your output in the previous step. Replace *v1.30.6-eksbuild.3* with the `kube-proxy` version listed in the [Latest available self-managed kube-proxy container image version for each Amazon EKS cluster version](managing-kube-proxy.md#managing-kube-proxy-images) table.
**Important**  
The manifests for the *default* and *minimal* image types are different and aren't compatible with each other. You must use the same image type as the previous image so that the entrypoint and arguments match.

     ```
     kubectl set image daemonset.apps/kube-proxy -n kube-system kube-proxy=602401143452.dkr.ecr.region-code.amazonaws.com/eks/kube-proxy:v1.30.6-eksbuild.3
     ```

     An example output is as follows.

     ```
     daemonset.apps/kube-proxy image updated
     ```

  1. Confirm that the new version is now installed on your cluster.

     ```
     kubectl describe daemonset kube-proxy -n kube-system | grep Image | cut -d ":" -f 3
     ```

     An example output is as follows.

     ```
     v1.30.6-eksbuild.3
     ```

  1. If you’re using `x86` and `Arm` nodes in the same cluster and your cluster was deployed before August 17, 2020, edit your `kube-proxy` manifest to include a node selector for multiple hardware architectures with the following command. This is a one-time operation. After you’ve added the selector to your manifest, you don’t need to add it again each time you update the add-on. If your cluster was deployed on or after August 17, 2020, then `kube-proxy` is already multi-architecture capable.

     ```
     kubectl edit -n kube-system daemonset/kube-proxy
     ```

     Add the following node selector to the file in the editor and then save the file. For an example of where to include this text in the editor, see the [CNI manifest](https://github.com/aws/amazon-vpc-cni-k8s/blob/release-1.11/config/master/aws-k8s-cni.yaml#L265-#L269) file on GitHub. This enables Kubernetes to pull the correct image for the node’s hardware architecture.

     ```
     - key: "kubernetes.io/arch"
       operator: In
       values:
       - amd64
       - arm64
     ```

  1. If your cluster was originally created with Kubernetes version `1.14` or later, you can skip this step because `kube-proxy` already includes this affinity rule. If you originally created your Amazon EKS cluster with Kubernetes version `1.13` or earlier and intend to use Fargate nodes, edit your `kube-proxy` manifest to include a `NodeAffinity` rule that prevents `kube-proxy` Pods from scheduling on Fargate nodes. This is a one-time edit. After you’ve added the rule to your manifest, you don’t need to add it again each time you update the add-on. Edit your `kube-proxy` DaemonSet.

     ```
     kubectl edit -n kube-system daemonset/kube-proxy
     ```

     Add the following `Affinity Rule` to the DaemonSet `spec` section of the file in the editor and then save the file. For an example of where to include this text in the editor, see the [CNI manifest](https://github.com/aws/amazon-vpc-cni-k8s/blob/release-1.11/config/master/aws-k8s-cni.yaml#L270-#L273) file on GitHub.

     ```
     - key: eks.amazonaws.com/compute-type
       operator: NotIn
       values:
       - fargate
     ```