


# Learn how access control works in Amazon EKS
<a name="cluster-auth"></a>

Learn how to manage access to your Amazon EKS cluster. Using Amazon EKS requires knowledge of how both Kubernetes and AWS Identity and Access Management (AWS IAM) handle access control.

 **This section includes:** 

**[Grant IAM users and roles access to Kubernetes APIs](grant-k8s-access.md)** — Learn how to enable applications or users to authenticate to the Kubernetes API. You can use access entries, the aws-auth ConfigMap, or an external OIDC provider.

**[View Kubernetes resources in the AWS Management Console](view-kubernetes-resources.md)** — Learn how to configure the AWS Management Console to communicate with your Amazon EKS cluster. Use the console to view Kubernetes resources in the cluster, such as namespaces, nodes, and Pods.

**[Grant AWS services write access to Kubernetes APIs](mutate-kubernetes-resources.md)** — Learn about the permissions required to modify Kubernetes resources.

**[Connect kubectl to an EKS cluster by creating a kubeconfig file](create-kubeconfig.md)** — Learn how to configure kubectl to communicate with your Amazon EKS cluster. Use the AWS CLI to create a kubeconfig file.

**[Grant Kubernetes workloads access to AWS using Kubernetes Service Accounts](service-accounts.md)** — Learn how to associate a Kubernetes service account with AWS IAM Roles. You can use Pod Identity or IAM Roles for Service Accounts (IRSA).

## Common Tasks
<a name="_common_tasks"></a>
+ Grant developers access to the Kubernetes API, and view Kubernetes resources in the AWS Management Console.
  + Solution: [Use access entries](access-entries.md) to associate Kubernetes RBAC permissions with AWS IAM Users or Roles.
+ Configure kubectl to talk to an Amazon EKS cluster using AWS Credentials.
  + Solution: Use the AWS CLI to [create a kubeconfig file](create-kubeconfig.md).
+ Use an external identity provider, such as Ping Identity, to authenticate users to the Kubernetes API.
  + Solution: [Link an external OIDC provider](authenticate-oidc-identity-provider.md).
+ Grant workloads on your Kubernetes cluster the ability to call AWS APIs.
  + Solution: [Use Pod Identity](pod-identities.md) to associate an AWS IAM Role to a Kubernetes Service Account.
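
For example, the kubeconfig task above comes down to a single AWS CLI command (the cluster name and Region below are placeholders):

```
aws eks update-kubeconfig --name my-cluster --region us-west-2
```

This writes (or updates) an entry in `~/.kube/config` so that `kubectl` obtains authentication tokens through the AWS CLI using your current AWS credentials.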

## Background
<a name="_background"></a>
+  [Learn how Kubernetes Service Accounts work.](https://kubernetes.io/docs/concepts/security/service-accounts/) 
+  [Review the Kubernetes Role Based Access Control (RBAC) Model](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) 
+ For more information about managing access to AWS resources, see the [AWS IAM User Guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/intro-structure.html). Alternatively, take a free [introductory training on using AWS IAM](https://explore.skillbuilder.aws/learn/course/external/view/elearning/120/introduction-to-aws-identity-and-access-management-iam).

## Considerations for EKS Auto Mode
<a name="_considerations_for_eks_auto_mode"></a>

EKS Auto Mode integrates with EKS Pod Identity and EKS access entries.
+ EKS Auto Mode uses access entries to grant the EKS control plane Kubernetes permissions. For example, the access policies enable EKS Auto Mode to read information about network endpoints and services.
  + You cannot disable access entries on an EKS Auto Mode cluster.
  + You can optionally enable the `aws-auth` `ConfigMap`.
  + The access entries for EKS Auto Mode are automatically configured. You can view these access entries, but you cannot modify them.
  + If you use a NodeClass to create a custom Node IAM Role, you need to create an access entry for the role using the `AmazonEKSAutoNodePolicy` access policy.
+ If you want to grant workloads permissions for AWS services, use EKS Pod Identity.
  + You do not need to install the Pod Identity agent on EKS Auto Mode clusters.
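
As a sketch of the custom Node IAM Role case above, assuming a role named `MyAutoNodeRole` in account `111122223333` (both placeholders), the access entry might be created as follows. The `EC2` access entry type shown here is an assumption; verify the type required for your cluster configuration.

```
# Create an access entry for the custom node role (type EC2 assumed for Auto Mode nodes)
aws eks create-access-entry --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/MyAutoNodeRole \
    --type EC2

# Attach the Auto Mode node access policy to that entry
aws eks associate-access-policy --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/MyAutoNodeRole \
    --access-scope type=cluster \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy
```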

# Grant IAM users and roles access to Kubernetes APIs
<a name="grant-k8s-access"></a>

Your cluster has a Kubernetes API endpoint, which `kubectl` uses. You can authenticate to this API using two types of identities:
+  **An AWS Identity and Access Management (IAM) *principal* (role or user)** – This type requires authentication to IAM. Users can sign in to AWS as an [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) user or with a [federated identity](https://aws.amazon.com/identity/federation/) by using credentials provided through an identity source. Users can only sign in with a federated identity if your administrator previously set up identity federation using IAM roles. When users access AWS by using federation, they’re indirectly [assuming a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/when-to-use-iam.html#security-iam-authentication-iamrole). When users use this type of identity, you:
  + Can assign them Kubernetes permissions so that they can work with Kubernetes objects on your cluster. For more information about how to assign permissions to your IAM principals so that they’re able to access Kubernetes objects on your cluster, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md).
  + Can assign them IAM permissions so that they can work with your Amazon EKS cluster and its resources using the Amazon EKS API, AWS CLI, AWS CloudFormation, AWS Management Console, or `eksctl`. For more information, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions) in the Service Authorization Reference.
  + Nodes join your cluster by assuming an IAM role. The ability to access your cluster using IAM principals is provided by the [AWS IAM Authenticator for Kubernetes](https://github.com/kubernetes-sigs/aws-iam-authenticator#readme), which runs on the Amazon EKS control plane.
+  **A user in your own OpenID Connect (OIDC) provider** – This type requires authentication to your [OIDC](https://openid.net/connect/) provider. For more information about setting up your own OIDC provider with your Amazon EKS cluster, see [Grant users access to Kubernetes with an external OIDC provider](authenticate-oidc-identity-provider.md). When users use this type of identity, you:
  + Can assign them Kubernetes permissions so that they can work with Kubernetes objects on your cluster.
  + Can’t assign them IAM permissions so that they can work with your Amazon EKS cluster and its resources using the Amazon EKS API, AWS CLI, AWS CloudFormation, AWS Management Console, or `eksctl`.

You can use both types of identities with your cluster. The IAM authentication method cannot be disabled. The OIDC authentication method is optional.
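
To see which IAM principal your tools will authenticate as, you can ask AWS STS:

```
aws sts get-caller-identity
```

The `Arn` field in the output is the IAM principal that the cluster sees when you run `kubectl` with IAM authentication.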

## Associate IAM Identities with Kubernetes Permissions
<a name="authentication-modes"></a>

The [AWS IAM Authenticator for Kubernetes](https://github.com/kubernetes-sigs/aws-iam-authenticator#readme) is installed on your cluster’s control plane. It allows [AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) (IAM) principals (roles and users) that you specify to access Kubernetes resources on your cluster. You can allow IAM principals to access Kubernetes objects on your cluster using one of the following methods:
+  **Creating access entries** – If your cluster is at or later than the platform version listed in the [Prerequisites](access-entries.md) section for your cluster’s Kubernetes version, we recommend that you use this option.

  Use *access entries* to manage the Kubernetes permissions of IAM principals from outside the cluster. You can add and manage access to the cluster by using the EKS API, AWS Command Line Interface, AWS SDKs, AWS CloudFormation, and AWS Management Console. This means you can manage users with the same tools that you created the cluster with.

  To get started, follow [Change authentication mode to use access entries](setting-up-access-entries.md), then [Migrating existing aws-auth ConfigMap entries to access entries](migrating-access-entries.md).
+  **Adding entries to the `aws-auth` `ConfigMap`** – If your cluster’s platform version is earlier than the version listed in the [Prerequisites](access-entries.md) section, then you must use this option. If your cluster’s platform version is at or later than the platform version listed in the [Prerequisites](access-entries.md) section for your cluster’s Kubernetes version, and you’ve added entries to the `ConfigMap`, then we recommend that you migrate those entries to access entries. However, you can’t migrate entries that Amazon EKS added to the `ConfigMap`, such as entries for IAM roles used with managed node groups or Fargate profiles. For more information, see [Grant IAM users and roles access to Kubernetes APIs](#grant-k8s-access).
  + If you have to use the `aws-auth` `ConfigMap` option, you can add entries to the `ConfigMap` using the `eksctl create iamidentitymapping` command. For more information, see [Manage IAM users and roles](https://eksctl.io/usage/iam-identity-mappings/) in the `eksctl` documentation.
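
For example, a minimal sketch of adding a `ConfigMap` entry with `eksctl` (the cluster name, role ARN, username, and group below are placeholders):

```
eksctl create iamidentitymapping --cluster my-cluster --region us-west-2 \
    --arn arn:aws:iam::111122223333:role/my-role \
    --username my-role-user \
    --group system:masters
```

This maps the IAM role to the Kubernetes username `my-role-user` and the `system:masters` group; you would typically choose a less privileged group in practice.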

## Set Cluster Authentication Mode
<a name="set-cam"></a>

Each cluster has an *authentication mode*. The authentication mode determines which methods you can use to allow IAM principals to access Kubernetes objects on your cluster. There are three authentication modes.

**Important**  
Once the access entry method is enabled, it cannot be disabled.  
If the `ConfigMap` method is not enabled during cluster creation, it cannot be enabled later. All clusters created before the introduction of access entries have the `ConfigMap` method enabled.  
If you are using hybrid nodes with your cluster, you must use the `API` or `API_AND_CONFIG_MAP` cluster authentication modes.

 **The `aws-auth` `ConfigMap` inside the cluster**   
This is the original authentication mode for Amazon EKS clusters. The IAM principal that created the cluster is the initial user that can access the cluster by using `kubectl`. The initial user must add other users to the list in the `aws-auth` `ConfigMap` and assign permissions that affect the other users within the cluster. These other users can’t manage or remove the initial user, as there isn’t an entry in the `ConfigMap` to manage.

 **Both the `ConfigMap` and access entries**   
With this authentication mode, you can use both methods to add IAM principals to the cluster. Note that each method stores separate entries; for example, if you add an access entry from the AWS CLI, the `aws-auth` `ConfigMap` is not updated.

 **Access entries only**   
With this authentication mode, you can use the EKS API, AWS Command Line Interface, AWS SDKs, AWS CloudFormation, and AWS Management Console to manage access to the cluster for IAM principals.  
Each access entry has a *type*. You can combine an *access scope*, which limits the principal to a specific namespace, with an *access policy*, which grants preconfigured, reusable permissions. Alternatively, you can use the `STANDARD` type with Kubernetes RBAC groups to assign custom permissions.


| Authentication mode | Methods | 
| --- | --- | 
|   `ConfigMap` only (`CONFIG_MAP`)  |   `aws-auth` `ConfigMap`   | 
|  EKS API and `ConfigMap` (`API_AND_CONFIG_MAP`)  |  access entries in the EKS API, AWS Command Line Interface, AWS SDKs, AWS CloudFormation, and AWS Management Console and `aws-auth` `ConfigMap`   | 
|  EKS API only (`API`)  |  access entries in the EKS API, AWS Command Line Interface, AWS SDKs, AWS CloudFormation, and AWS Management Console   | 

**Note**  
Amazon EKS Auto Mode requires access entries.
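
You can change a cluster’s authentication mode with the AWS CLI. For example, to move an existing cluster from `CONFIG_MAP` to `API_AND_CONFIG_MAP` (cluster name is a placeholder):

```
aws eks update-cluster-config --name my-cluster \
    --access-config authenticationMode=API_AND_CONFIG_MAP
```

Remember that this change is one-way: once the access entry method is enabled, it cannot be disabled.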

# Grant IAM users access to Kubernetes with EKS access entries
<a name="access-entries"></a>

This section shows you how to manage IAM principal access to Kubernetes clusters in Amazon Elastic Kubernetes Service (EKS) using access entries and policies. You’ll find details on changing authentication modes; migrating from legacy `aws-auth` `ConfigMap` entries; creating, updating, and deleting access entries; associating policies with entries; reviewing predefined policy permissions; and key prerequisites and considerations for secure access management.

## Overview
<a name="_overview"></a>

EKS access entries are the best way to grant users access to the Kubernetes API. For example, you can use access entries to grant developers access to use `kubectl`. Fundamentally, an EKS access entry associates a set of Kubernetes permissions with an IAM identity, such as an IAM role. For example, a developer may assume an IAM role and use it to authenticate to an EKS cluster.

## Features
<a name="_features"></a>
+  **Centralized Authentication and Authorization**: Controls access to Kubernetes clusters directly via Amazon EKS APIs, eliminating the need to switch between AWS and Kubernetes APIs for user permissions.
+  **Granular Permissions Management**: Uses access entries and policies to define fine-grained permissions for AWS IAM principals, including modifying or revoking cluster-admin access from the creator.
+  **IaC Tool Integration**: Supports infrastructure as code tools like AWS CloudFormation, Terraform, and AWS CDK to define access configurations during cluster creation.
+  **Misconfiguration Recovery**: Allows restoring cluster access through the Amazon EKS API without direct Kubernetes API access.
+  **Reduced Overhead and Enhanced Security**: Centralizes operations to lower overhead while leveraging AWS IAM features like CloudTrail audit logging and multi-factor authentication.

## How to attach permissions
<a name="_how_to_attach_permissions"></a>

You can attach Kubernetes permissions to access entries in two ways:
+ Use an access policy. Access policies are pre-defined Kubernetes permissions templates maintained by AWS. For more information, see [Review access policy permissions](access-policy-permissions.md).
+ Reference a Kubernetes group. If you associate an IAM Identity with a Kubernetes group, you can create Kubernetes resources that grant the group permissions. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
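
The two approaches above look like this in the AWS CLI (cluster name, role ARNs, and group name are placeholders):

```
# Option 1: attach an AWS-managed access policy, scoped to the whole cluster
aws eks create-access-entry --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-role
aws eks associate-access-policy --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-role \
    --access-scope type=cluster \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy

# Option 2: reference a Kubernetes group, then grant that group permissions
# with your own RBAC Role/ClusterRole and RoleBinding objects
aws eks create-access-entry --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-other-role \
    --kubernetes-groups my-viewers
```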

## Considerations
<a name="_considerations"></a>

When enabling EKS access entries on existing clusters, keep the following in mind:
+  **Legacy Cluster Behavior**: For clusters created before the introduction of access entries (those with initial platform versions earlier than specified in [Platform version requirements](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html)), EKS automatically creates an access entry reflecting pre-existing permissions. This entry includes the IAM identity that originally created the cluster and the administrative permissions granted to that identity during cluster creation.
+  **Handling Legacy `aws-auth` ConfigMap**: If your cluster relies on the legacy `aws-auth` ConfigMap for access management, only the access entry for the original cluster creator is automatically created upon enabling access entries. Additional roles or permissions added to the ConfigMap (e.g., custom IAM roles for developers or services) are not automatically migrated. To address this, manually create corresponding access entries.

## Get started
<a name="_get_started"></a>

1. Determine the IAM Identity and Access policy you want to use.
   +  [Review access policy permissions](access-policy-permissions.md) 

1. Enable EKS Access Entries on your cluster. Confirm you have a supported platform version.
   +  [Change authentication mode to use access entries](setting-up-access-entries.md) 

1. Create an access entry that associates an IAM Identity with Kubernetes permission.
   +  [Create access entries](creating-access-entries.md) 

1. Authenticate to the cluster using the IAM identity.
   +  [Set up AWS CLI](install-awscli.md) 
   +  [Set up `kubectl` and `eksctl`](install-kubectl.md) 

# Associate access policies with access entries
<a name="access-policies"></a>

You can assign one or more access policies to *access entries* of *type* `STANDARD`. Amazon EKS automatically grants the other types of access entries the permissions required to function properly in your cluster. Amazon EKS access policies include Kubernetes permissions, not IAM permissions. Before associating an access policy to an access entry, make sure that you’re familiar with the Kubernetes permissions included in each access policy. For more information, see [Review access policy permissions](access-policy-permissions.md). If none of the access policies meet your requirements, then don’t associate an access policy to an access entry. Instead, specify one or more *group names* for the access entry and create and manage Kubernetes role-based access control objects. For more information, see [Create access entries](creating-access-entries.md).
Before you begin, you need the following:
+ An existing access entry. To create one, see [Create access entries](creating-access-entries.md).
+ An AWS Identity and Access Management role or user with the following permissions: `ListAccessEntries`, `DescribeAccessEntry`, `UpdateAccessEntry`, `ListAccessPolicies`, `AssociateAccessPolicy`, and `DisassociateAccessPolicy`. For more information, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions) in the *Service Authorization Reference*.

Before associating access policies with access entries, consider the following requirements:
+ You can associate multiple access policies to each access entry, but you can only associate each policy to an access entry once. If you associate multiple access policies, the access entry’s IAM principal has all permissions included in all associated access policies.
+ You can scope an access policy to all resources on a cluster or by specifying the name of one or more Kubernetes namespaces. You can use wildcard characters for a namespace name. For example, if you want to scope an access policy to all namespaces that start with `dev-`, you can specify `dev-*` as a namespace name. Make sure that the namespaces exist on your cluster and that your spelling matches the actual namespace name on the cluster. Amazon EKS doesn’t confirm the spelling or existence of the namespaces on your cluster.
+ You can change the *access scope* for an access policy after you associate it to an access entry. If you’ve scoped the access policy to Kubernetes namespaces, you can add and remove namespaces for the association, as necessary.
+ If you associate an access policy to an access entry that also has *group names* specified, then the IAM principal has all the permissions in all associated access policies. It also has all the permissions in any Kubernetes `Role` or `ClusterRole` object that is specified in any Kubernetes `Role` and `RoleBinding` objects that specify the group names.
+ If you run the `kubectl auth can-i --list` command, you won’t see any Kubernetes permissions assigned by access policies associated with an access entry for the IAM principal you’re using when you run the command. The command only shows Kubernetes permissions if you’ve granted them in Kubernetes `Role` or `ClusterRole` objects that you’ve bound to the group names or username that you specified for an access entry.
+ If you impersonate a Kubernetes user or group when interacting with Kubernetes objects on your cluster, such as using the `kubectl` command with `--as username` or `--as-group group-name`, you force the use of Kubernetes RBAC authorization. As a result, the IAM principal has no permissions assigned by any access policies associated with the access entry. The only Kubernetes permissions that the impersonated user or group has are those you’ve granted in Kubernetes `Role` or `ClusterRole` objects bound to the group names or user name. For your IAM principal to have the permissions in associated access policies, don’t impersonate a Kubernetes user or group. The IAM principal also retains any permissions that you’ve granted in Kubernetes `Role` or `ClusterRole` objects bound to the group names or user name that you specified for the access entry. For more information, see [User impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) in the Kubernetes documentation.

You can associate an access policy to an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-associate-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that has an access entry that you want to associate an access policy to.

1. Choose the **Access** tab.

1. If the type of the access entry is **Standard**, you can associate or disassociate Amazon EKS **access policies**. If the type of your access entry is anything other than **Standard**, then this option isn’t available.

1. Choose **Associate access policy**.

1. For **Policy name**, select the policy with the permissions you want the IAM principal to have. To view the permissions included in each policy, see [Review access policy permissions](access-policy-permissions.md).

1. For **Access scope**, choose an access scope. If you choose **Cluster**, the permissions in the access policy are granted to the IAM principal for resources in all Kubernetes namespaces. If you choose **Kubernetes namespace**, you can then choose **Add new namespace**. In the **Namespace** field that appears, you can enter the name of a Kubernetes namespace on your cluster. If you want the IAM principal to have the permissions across multiple namespaces, then you can enter multiple namespaces.

1. Choose **Add access policy**.

## AWS CLI
<a name="access-associate-cli"></a>

1. Ensure that version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) is installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.

1. View the available access policies.

   ```
   aws eks list-access-policies --output table
   ```

   An example output is as follows.

   ```
   ---------------------------------------------------------------------------------------------------------
   |                                          ListAccessPolicies                                           |
   +-------------------------------------------------------------------------------------------------------+
   ||                                           accessPolicies                                            ||
   |+---------------------------------------------------------------------+-------------------------------+|
   ||                                 arn                                 |             name              ||
   |+---------------------------------------------------------------------+-------------------------------+|
    ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy         |  AmazonEKSAdminPolicy         ||
    ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy  |  AmazonEKSClusterAdminPolicy  ||
    ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy          |  AmazonEKSEditPolicy          ||
    ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy          |  AmazonEKSViewPolicy          ||
   |+---------------------------------------------------------------------+-------------------------------+|
   ```

   To view the permissions included in each policy, see [Review access policy permissions](access-policy-permissions.md).

1. View your existing access entries. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks list-access-entries --cluster-name my-cluster
   ```

   An example output is as follows.

   ```
   {
       "accessEntries": [
           "arn:aws:iam::111122223333:role/my-role",
           "arn:aws:iam::111122223333:user/my-user"
       ]
   }
   ```

1. Associate an access policy to an access entry. The following example associates the `AmazonEKSViewPolicy` access policy to an access entry. Whenever the *my-role* IAM role attempts to access Kubernetes objects on the cluster, Amazon EKS will authorize the role to use the permissions in the policy to access Kubernetes objects in the *my-namespace1* and *my-namespace2* Kubernetes namespaces only. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *my-role* with the name of the IAM role that you want Amazon EKS to authorize access to Kubernetes cluster objects for.

   ```
   aws eks associate-access-policy --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role \
       --access-scope type=namespace,namespaces=my-namespace1,my-namespace2 --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
   ```

   If you want the IAM principal to have the permissions cluster-wide, replace `type=namespace,namespaces=my-namespace1,my-namespace2` with `type=cluster`. If you want to associate multiple access policies to the access entry, run the command multiple times, each with a unique access policy. Each associated access policy has its own scope.
**Note**  
If you later want to change the scope of an associated access policy, run the previous command again with the new scope. For example, if you wanted to remove *my-namespace2*, you’d run the command again using `type=namespace,namespaces=my-namespace1` only. If you wanted to change the scope from `namespace` to `cluster`, you’d run the command again using `type=cluster`, removing `type=namespace,namespaces=my-namespace1,my-namespace2`.

1. Determine which access policies are associated to an access entry.

   ```
   aws eks list-associated-access-policies --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role
   ```

   An example output is as follows.

   ```
   {
       "clusterName": "my-cluster",
        "principalArn": "arn:aws:iam::111122223333:role/my-role",
       "associatedAccessPolicies": [
           {
               "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy",
               "accessScope": {
                   "type": "cluster",
                   "namespaces": []
               },
               "associatedAt": "2023-04-17T15:25:21.675000-04:00",
               "modifiedAt": "2023-04-17T15:25:21.675000-04:00"
           },
           {
               "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy",
               "accessScope": {
                   "type": "namespace",
                   "namespaces": [
                       "my-namespace1",
                       "my-namespace2"
                   ]
               },
               "associatedAt": "2023-04-17T15:02:06.511000-04:00",
               "modifiedAt": "2023-04-17T15:02:06.511000-04:00"
           }
       ]
   }
   ```

   In the previous example, the IAM principal for this access entry has view permissions across all namespaces on the cluster, and administrator permissions to two Kubernetes namespaces.

1. Disassociate an access policy from an access entry. In this example, the `AmazonEKSAdminPolicy` policy is disassociated from an access entry. The IAM principal retains the permissions in the `AmazonEKSViewPolicy` access policy for objects in the *my-namespace1* and *my-namespace2* namespaces however, because that access policy is not disassociated from the access entry.

   ```
   aws eks disassociate-access-policy --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role \
       --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy
   ```

To list available access policies, see [Review access policy permissions](access-policy-permissions.md).

# Migrating existing `aws-auth ConfigMap` entries to access entries
<a name="migrating-access-entries"></a>

If you’ve added entries to the `aws-auth` `ConfigMap` on your cluster, we recommend that you create access entries for the existing entries in your `aws-auth` `ConfigMap`. After creating the access entries, you can remove the entries from your `ConfigMap`. You can’t associate [access policies](access-policies.md) with entries in the `aws-auth` `ConfigMap`. If you want to associate access policies with your IAM principals, create access entries.

**Important**  
When a cluster is in `API_AND_CONFIG_MAP` authentication mode and there’s a mapping for the same IAM role in both the `aws-auth` `ConfigMap` and in access entries, the role will use the access entry’s mapping for authentication. Access entries take precedence over `ConfigMap` entries for the same IAM principal.
Before removing existing `aws-auth` `ConfigMap` entries that Amazon EKS created for a [managed node group](managed-node-groups.md) or a [Fargate profile](fargate-profile.md) on your cluster, double-check that the correct access entries for those specific resources exist in your Amazon EKS cluster. If you remove entries that Amazon EKS created in the `ConfigMap` without having the equivalent access entries, your cluster won’t function properly.

## Prerequisites
<a name="migrating_access_entries_prereq"></a>
+ Familiarity with access entries and access policies. For more information, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md) and [Associate access policies with access entries](access-policies.md).
+ An existing cluster with a platform version that is at or later than the versions listed in the Prerequisites of the [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md) topic.
+ Version `0.215.0` or later of the `eksctl` command line tool installed on your device or AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.
+ Kubernetes permissions to modify the `aws-auth` `ConfigMap` in the `kube-system` namespace.
+ An AWS Identity and Access Management role or user with the following permissions: `CreateAccessEntry` and `ListAccessEntries`. For more information, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions) in the Service Authorization Reference.

## `eksctl`
<a name="migrating_access_entries_eksctl"></a>

1. View the existing entries in your `aws-auth ConfigMap`. Replace *my-cluster* with the name of your cluster.

   ```
   eksctl get iamidentitymapping --cluster my-cluster
   ```

   An example output is as follows.

   ```
   ARN                                                                                             USERNAME                                GROUPS                                                  ACCOUNT
   arn:aws:iam::111122223333:role/EKS-my-cluster-Admins                                            Admins                                  system:masters
   arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers                              my-namespace-Viewers                    Viewers
   arn:aws:iam::111122223333:role/EKS-my-cluster-self-managed-ng-1                                 system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   arn:aws:iam::111122223333:user/my-user                                                          my-user
   arn:aws:iam::111122223333:role/EKS-my-cluster-fargateprofile1                                   system:node:{{SessionName}}             system:bootstrappers,system:nodes,system:node-proxier
   arn:aws:iam::111122223333:role/EKS-my-cluster-managed-ng                                        system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   ```

1.  [Create access entries](creating-access-entries.md) for the `ConfigMap` entries that you created that were returned in the previous output. When creating the access entries, specify the same values for `ARN`, `USERNAME`, `GROUPS`, and `ACCOUNT` that were returned in your output. In the example output, you would create access entries for all entries except the last two, because Amazon EKS created those entries for a Fargate profile and a managed node group.

1. Delete the entries from the `ConfigMap` for any access entries that you created. If you don’t delete the entry from the `ConfigMap`, the settings for the access entry for the IAM principal ARN override the `ConfigMap` entry. Replace *111122223333* with your AWS account ID and *EKS-my-cluster-my-namespace-Viewers* with the name of the role in the entry in your `ConfigMap`. If the entry you’re removing is for an IAM user, rather than an IAM role, replace `role` with `user` and *EKS-my-cluster-my-namespace-Viewers* with the user name.

   ```
   eksctl delete iamidentitymapping --arn arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers --cluster my-cluster
   ```
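As an illustration of step 2, the following AWS CLI commands sketch how you might recreate the *my-namespace-Viewers* mapping from the example output as an access entry before deleting it from the `ConfigMap`. The cluster name, account ID, and role name are placeholders taken from the example output above; replace them with your own values.

```
# Create an access entry that reproduces the ConfigMap mapping
# (same ARN, username, and Kubernetes groups as the example output).
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers \
  --username my-namespace-Viewers \
  --kubernetes-groups Viewers

# Confirm the access entry exists before removing the ConfigMap entry.
aws eks list-access-entries --cluster-name my-cluster
```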

# Review access policy permissions
<a name="access-policy-permissions"></a>

Access policies include `rules` that contain Kubernetes `verbs` (permissions) and `resources`. Access policies don’t include IAM permissions or resources. Similar to Kubernetes `Role` and `ClusterRole` objects, access policies only include `allow` `rules`. You can’t modify the contents of an access policy. You can’t create your own access policies. If the permissions in the access policies don’t meet your needs, then create Kubernetes RBAC objects and specify *group names* for your access entries. For more information, see [Create access entries](creating-access-entries.md). The permissions contained in access policies are similar to the permissions in the Kubernetes user-facing cluster roles. For more information, see [User-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) in the Kubernetes documentation.

## List all policies
<a name="access-policies-cli-command"></a>

Use any one of the access policies listed on this page, or retrieve a list of all available access policies using the AWS CLI:

```
aws eks list-access-policies
```

An example output is as follows (additional policies omitted for brevity).

```
{
    "accessPolicies": [
        {
            "name": "AmazonAIOpsAssistantPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonAIOpsAssistantPolicy"
        },
        {
            "name": "AmazonARCRegionSwitchScalingPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonARCRegionSwitchScalingPolicy"
        },
        {
            "name": "AmazonEKSAdminPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
        },
        {
            "name": "AmazonEKSAdminViewPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy"
        },
        {
            "name": "AmazonEKSAutoNodePolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy"
        }
        // Additional policies omitted
    ]
}
```
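If you only need the policy names rather than the full JSON, you can filter the response with the AWS CLI’s built-in `--query` option, which accepts a JMESPath expression:

```
# List only the names of the available access policies.
aws eks list-access-policies \
  --query 'accessPolicies[].name' \
  --output text
```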

## AmazonEKSAdminPolicy
<a name="access-policy-permissions-amazoneksadminpolicy"></a>

This access policy includes permissions that grant an IAM principal most permissions to Kubernetes resources. When associated with an access entry, its access scope is typically one or more Kubernetes namespaces. If you want an IAM principal to have administrator access to all resources on your cluster, associate the [AmazonEKSClusterAdminPolicy](#access-policy-permissions-amazoneksclusteradminpolicy) access policy with your access entry instead.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `apps`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `replicasets`, `replicasets/scale`, `statefulsets`, `statefulsets/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `apps`   |   `controllerrevisions`, `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `replicasets`, `replicasets/scale`, `replicasets/status`, `statefulsets`, `statefulsets/scale`, `statefulsets/status`   |   `get`, `list`, `watch`   | 
|   `authorization.k8s.io`   |   `localsubjectaccessreviews`   |   `create`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`, `horizontalpodautoscalers/status`   |   `get`, `list`, `watch`   | 
|   `batch`   |   `cronjobs`, `jobs`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `batch`   |   `cronjobs`, `cronjobs/status`, `jobs`, `jobs/status`   |   `get`, `list`, `watch`   | 
|   `discovery.k8s.io`   |   `endpointslices`   |   `get`, `list`, `watch`   | 
|   `extensions`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `ingresses`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicationcontrollers/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `extensions`   |   `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `ingresses`, `ingresses/status`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicasets/status`, `replicationcontrollers/scale`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `ingresses/status`, `networkpolicies`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `networkpolicies`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `policy`   |   `poddisruptionbudgets`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `policy`   |   `poddisruptionbudgets`, `poddisruptionbudgets/status`   |   `get`, `list`, `watch`   | 
|   `rbac.authorization.k8s.io`   |   `rolebindings`, `roles`   |   `create`, `delete`, `deletecollection`, `get`, `list`, `patch`, `update`, `watch`   | 
|  |   `configmaps`, `endpoints`, `persistentvolumeclaims`, `persistentvolumeclaims/status`, `pods`, `replicationcontrollers`, `replicationcontrollers/scale`, `serviceaccounts`, `services`, `services/status`   |   `get`, `list`, `watch`   | 
|  |   `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`, `secrets`, `services/proxy`   |   `get`, `list`, `watch`   | 
|  |   `configmaps`, `events`, `persistentvolumeclaims`, `replicationcontrollers`, `replicationcontrollers/scale`, `secrets`, `serviceaccounts`, `services`, `services/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `pods`, `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `serviceaccounts`   |   `impersonate`   | 
|  |   `bindings`, `events`, `limitranges`, `namespaces/status`, `pods/log`, `pods/status`, `replicationcontrollers/status`, `resourcequotas`, `resourcequotas/status`   |   `get`, `list`, `watch`   | 
|  |   `namespaces`   |   `get`, `list`, `watch`   | 
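For example, to associate this policy with an existing access entry and scope it to a single namespace, you might run the following AWS CLI command. The cluster name, principal ARN, and namespace shown are placeholders.

```
# Associate AmazonEKSAdminPolicy with an access entry,
# scoped to the my-namespace namespace only.
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-Admins \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy \
  --access-scope type=namespace,namespaces=my-namespace
```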

## AmazonEKSClusterAdminPolicy
<a name="access-policy-permissions-amazoneksclusteradminpolicy"></a>

This access policy includes permissions that grant an IAM principal administrator access to a cluster. When associated with an access entry, its access scope is typically the cluster, rather than a Kubernetes namespace. If you want an IAM principal to have a more limited administrative scope, consider associating the [AmazonEKSAdminPolicy](#access-policy-permissions-amazoneksadminpolicy) access policy with your access entry instead.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy` 


| Kubernetes API groups | Kubernetes nonResourceURLs | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | --- | 
|   `*`   |  |   `*`   |   `*`   | 
|  |   `*`   |  |   `*`   | 
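Because this policy grants cluster-wide administrator access, it is typically associated with an access scope of `type=cluster`. The following sketch, with placeholder cluster name and ARN, associates the policy and then lists the principal’s associated policies to verify the result:

```
# Grant cluster-wide admin access to an IAM role.
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-Admins \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster

# Verify which access policies are associated with the principal.
aws eks list-associated-access-policies \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-Admins
```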

## AmazonEKSAdminViewPolicy
<a name="access-policy-permissions-amazoneksadminviewpolicy"></a>

This access policy includes permissions that grant an IAM principal access to list and view all resources in a cluster. Note that this includes [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/).

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `*`   |   `*`   |   `get`, `list`, `watch`   | 

## AmazonEKSEditPolicy
<a name="access-policy-permissions-amazonekseditpolicy"></a>

This access policy includes permissions that allow an IAM principal to edit most Kubernetes resources.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `apps`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `replicasets`, `replicasets/scale`, `statefulsets`, `statefulsets/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `apps`   |   `controllerrevisions`, `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `replicasets`, `replicasets/scale`, `replicasets/status`, `statefulsets`, `statefulsets/scale`, `statefulsets/status`   |   `get`, `list`, `watch`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`, `horizontalpodautoscalers/status`   |   `get`, `list`, `watch`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `batch`   |   `cronjobs`, `jobs`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `batch`   |   `cronjobs`, `cronjobs/status`, `jobs`, `jobs/status`   |   `get`, `list`, `watch`   | 
|   `discovery.k8s.io`   |   `endpointslices`   |   `get`, `list`, `watch`   | 
|   `extensions`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `ingresses`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicationcontrollers/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `extensions`   |   `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `ingresses`, `ingresses/status`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicasets/status`, `replicationcontrollers/scale`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `networkpolicies`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `networking.k8s.io`   |   `ingresses`, `ingresses/status`, `networkpolicies`   |   `get`, `list`, `watch`   | 
|   `policy`   |   `poddisruptionbudgets`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `policy`   |   `poddisruptionbudgets`, `poddisruptionbudgets/status`   |   `get`, `list`, `watch`   | 
|  |   `namespaces`   |   `get`, `list`, `watch`   | 
|  |   `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`, `secrets`, `services/proxy`   |   `get`, `list`, `watch`   | 
|  |   `serviceaccounts`   |   `impersonate`   | 
|  |   `pods`, `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `configmaps`, `events`, `persistentvolumeclaims`, `replicationcontrollers`, `replicationcontrollers/scale`, `secrets`, `serviceaccounts`, `services`, `services/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `configmaps`, `endpoints`, `persistentvolumeclaims`, `persistentvolumeclaims/status`, `pods`, `replicationcontrollers`, `replicationcontrollers/scale`, `serviceaccounts`, `services`, `services/status`   |   `get`, `list`, `watch`   | 
|  |   `bindings`, `events`, `limitranges`, `namespaces/status`, `pods/log`, `pods/status`, `replicationcontrollers/status`, `resourcequotas`, `resourcequotas/status`   |   `get`, `list`, `watch`   | 

## AmazonEKSViewPolicy
<a name="access-policy-permissions-amazoneksviewpolicy"></a>

This access policy includes permissions that allow an IAM principal to view most Kubernetes resources.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `apps`   |   `controllerrevisions`, `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `replicasets`, `replicasets/scale`, `replicasets/status`, `statefulsets`, `statefulsets/scale`, `statefulsets/status`   |   `get`, `list`, `watch`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`, `horizontalpodautoscalers/status`   |   `get`, `list`, `watch`   | 
|   `batch`   |   `cronjobs`, `cronjobs/status`, `jobs`, `jobs/status`   |   `get`, `list`, `watch`   | 
|   `discovery.k8s.io`   |   `endpointslices`   |   `get`, `list`, `watch`   | 
|   `extensions`   |   `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `ingresses`, `ingresses/status`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicasets/status`, `replicationcontrollers/scale`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `ingresses/status`, `networkpolicies`   |   `get`, `list`, `watch`   | 
|   `policy`   |   `poddisruptionbudgets`, `poddisruptionbudgets/status`   |   `get`, `list`, `watch`   | 
|  |   `configmaps`, `endpoints`, `persistentvolumeclaims`, `persistentvolumeclaims/status`, `pods`, `replicationcontrollers`, `replicationcontrollers/scale`, `serviceaccounts`, `services`, `services/status`   |   `get`, `list`, `watch`   | 
|  |   `bindings`, `events`, `limitranges`, `namespaces/status`, `pods/log`, `pods/status`, `replicationcontrollers/status`, `resourcequotas`, `resourcequotas/status`   |   `get`, `list`, `watch`   | 
|  |   `namespaces`   |   `get`, `list`, `watch`   | 

## AmazonEKSSecretReaderPolicy
<a name="_amazonekssecretreaderpolicy"></a>

This access policy includes permissions that allow an IAM principal to read [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/).

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSSecretReaderPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|  |   `secrets`   |   `get`, `list`, `watch`   | 
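A common pattern is to combine this policy with `AmazonEKSViewPolicy`, since the view policy doesn’t cover Secrets. The following sketch, with placeholder cluster, principal ARN, and namespace, associates both policies with the same access entry:

```
# Read-only access including Secrets: associate both policies
# with the same access entry.
for policy in AmazonEKSViewPolicy AmazonEKSSecretReaderPolicy; do
  aws eks associate-access-policy \
    --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:user/my-user \
    --policy-arn "arn:aws:eks::aws:cluster-access-policy/${policy}" \
    --access-scope type=namespace,namespaces=my-namespace
done
```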

## AmazonEKSAutoNodePolicy
<a name="_amazoneksautonodepolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy` 

This policy includes permissions that allow the following Amazon EKS components to complete these tasks:
+  `kube-proxy` – Monitor network endpoints and services, and manage related events. This enables cluster-wide network proxy functionality.
+  `ipamd` – Manage AWS VPC networking resources and container network interfaces (CNI). This allows the IP address management daemon to handle pod networking.
+  `coredns` – Access service discovery resources like endpoints and services. This enables DNS resolution within the cluster.
+  `ebs-csi-driver` – Work with storage-related resources for Amazon EBS volumes. This allows dynamic provisioning and attachment of persistent volumes.
+  `neuron` – Monitor nodes and pods for AWS Neuron devices. This enables management of AWS Inferentia and Trainium accelerators.
+  `node-monitoring-agent` – Access node diagnostics and events. This enables cluster health monitoring and diagnostics collection.

Each component uses a dedicated service account and is restricted to only the permissions required for its specific function.

If you manually specify a node IAM role in a NodeClass, you need to create an access entry that associates the new node IAM role with this access policy.
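The following AWS CLI sketch shows how such an access entry might be created for a custom node IAM role, assuming the `EC2` access entry type used by Auto Mode nodes. The cluster name, account ID, and role name *MyCustomNodeRole* are placeholders.

```
# Register the custom node IAM role as an EC2-type access entry.
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/MyCustomNodeRole \
  --type EC2

# Associate the AmazonEKSAutoNodePolicy access policy, cluster-scoped.
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/MyCustomNodeRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy \
  --access-scope type=cluster
```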

## AmazonEKSBlockStoragePolicy
<a name="_amazoneksblockstoragepolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSBlockStoragePolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election and coordination resources for storage operations:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS storage components to coordinate their activities across the cluster through a leader election mechanism.

The policy is scoped to specific lease resources used by the EKS storage components to prevent conflicting access to other coordination resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the block storage capability to function properly.

## AmazonEKSLoadBalancingPolicy
<a name="_amazoneksloadbalancingpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSLoadBalancingPolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election resources for load balancing:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS load balancing components to coordinate activities across multiple replicas by electing a leader.

The policy is scoped specifically to load balancing lease resources to ensure proper coordination while preventing access to other lease resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the networking capability to function properly.

## AmazonEKSNetworkingPolicy
<a name="_amazoneksnetworkingpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSNetworkingPolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election resources for networking:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS networking components to coordinate IP address allocation activities by electing a leader.

The policy is scoped specifically to networking lease resources to ensure proper coordination while preventing access to other lease resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the networking capability to function properly.

## AmazonEKSComputePolicy
<a name="_amazonekscomputepolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSComputePolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election resources for compute operations:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS compute components to coordinate node scaling activities by electing a leader.

The policy is scoped specifically to compute management lease resources while allowing basic read access (`get`, `watch`) to all lease resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the compute capability to function properly.

## AmazonEKSBlockStorageClusterPolicy
<a name="_amazoneksblockstorageclusterpolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSBlockStorageClusterPolicy` 

This policy grants permissions necessary for the block storage capability of Amazon EKS Auto Mode. It enables efficient management of block storage resources within Amazon EKS clusters. The policy includes the following permissions:

CSI Driver Management:
+ Create, read, update, and delete CSI drivers, specifically for block storage.

Volume Management:
+ List, watch, create, update, patch, and delete persistent volumes.
+ List, watch, and update persistent volume claims.
+ Patch persistent volume claim statuses.

Node and Pod Interaction:
+ Read node and pod information.
+ Manage events related to storage operations.

Storage Classes and Attributes:
+ Read storage classes and CSI nodes.
+ Read volume attribute classes.

Volume Attachments:
+ List, watch, and modify volume attachments and their statuses.

Snapshot Operations:
+ Manage volume snapshots, snapshot contents, and snapshot classes.
+ Handle operations for volume group snapshots and related resources.

This policy is designed to support comprehensive block storage management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including provisioning, attaching, resizing, and snapshotting of block storage volumes.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the block storage capability to function properly.

## AmazonEKSComputeClusterPolicy
<a name="_amazonekscomputeclusterpolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSComputeClusterPolicy` 

This policy grants permissions necessary for the compute management capability of Amazon EKS Auto Mode. It enables efficient orchestration and scaling of compute resources within Amazon EKS clusters. The policy includes the following permissions:

Node Management:
+ Create, read, update, delete, and manage status of NodePools and NodeClaims.
+ Manage NodeClasses, including creation, modification, and deletion.

Scheduling and Resource Management:
+ Read access to pods, nodes, persistent volumes, persistent volume claims, replication controllers, and namespaces.
+ Read access to storage classes, CSI nodes, and volume attachments.
+ List and watch deployments, daemon sets, replica sets, and stateful sets.
+ Read pod disruption budgets.

Event Handling:
+ Create, read, and manage cluster events.

Node Deprovisioning and Pod Eviction:
+ Update, patch, and delete nodes.
+ Create pod evictions and delete pods when necessary.

Custom Resource Definition (CRD) Management:
+ Create new CRDs.
+ Manage specific CRDs related to node management (NodeClasses, NodePools, NodeClaims, and NodeDiagnostics).

This policy is designed to support comprehensive compute management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including node provisioning, scheduling, scaling, and resource optimization.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the compute management capability to function properly.

## AmazonEKSLoadBalancingClusterPolicy
<a name="_amazoneksloadbalancingclusterpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSLoadBalancingClusterPolicy` 

This policy grants permissions necessary for the load balancing capability of Amazon EKS Auto Mode. It enables efficient management and configuration of load balancing resources within Amazon EKS clusters. The policy includes the following permissions:

Event and Resource Management:
+ Create and patch events.
+ Read access to pods, nodes, endpoints, and namespaces.
+ Update pod statuses.

Service and Ingress Management:
+ Full management of services and their statuses.
+ Comprehensive control over ingresses and their statuses.
+ Read access to endpoint slices and ingress classes.

Target Group Bindings:
+ Create and modify target group bindings and their statuses.
+ Read access to ingress class parameters.

Custom Resource Definition (CRD) Management:
+ Create and read all CRDs.
+ Specific management of the `targetgroupbindings.eks.amazonaws.com` and `ingressclassparams.eks.amazonaws.com` CRDs.

Webhook Configuration:
+ Create and read mutating and validating webhook configurations.
+ Manage the eks-load-balancing-webhook configuration.

This policy is designed to support comprehensive load balancing management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including service exposure, ingress routing, and integration with AWS load balancing services.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the load balancing capability to function properly.

## AmazonEKSNetworkingClusterPolicy
<a name="_amazoneksnetworkingclusterpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSNetworkingClusterPolicy` 

This policy grants permissions necessary for the networking capability of Amazon EKS Auto Mode. It enables efficient management and configuration of networking resources within Amazon EKS clusters. The policy includes the following permissions:

Node and Pod Management:
+ Read access to NodeClasses and their statuses.
+ Read access to NodeClaims and their statuses.
+ Read access to pods.

CNI Node Management:
+ Permissions for CNINodes and their statuses, including create, read, update, delete, and patch.

Custom Resource Definition (CRD) Management:
+ Create and read all CRDs.
+ Specific management (update, patch, delete) of the `cninodes.eks.amazonaws.com` CRD.

Event Management:
+ Create and patch events.

This policy is designed to support comprehensive networking management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including node networking configuration, CNI (Container Network Interface) management, and related custom resource handling.

The policy allows the networking components to interact with node-related resources, manage CNI-specific node configurations, and handle custom resources critical for networking operations in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the networking capability to function properly.

## AmazonEKSHybridPolicy
<a name="access-policy-permissions-amazonekshybridpolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

This access policy includes permissions that grant Amazon EKS access to the nodes of a cluster. When associated with an access entry, its access scope is typically the cluster, rather than a Kubernetes namespace. This policy is used by Amazon EKS hybrid nodes.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSHybridPolicy` 


| Kubernetes API groups | Kubernetes nonResourceURLs | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | --- | 
|   `*`   |  |   `nodes`   |   `list`   | 

## AmazonEKSClusterInsightsPolicy
<a name="access-policy-permissions-AmazonEKSClusterInsightsPolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterInsightsPolicy` 

This policy grants read-only permissions for Amazon EKS Cluster Insights functionality. The policy includes the following permissions:

Node Access:
+ List and view cluster nodes.
+ Read node status information.

DaemonSet Access:
+ Read access to the kube-proxy configuration.

This policy is automatically managed by the EKS service for Cluster Insights. For more information, see [Prepare for Kubernetes version upgrades and troubleshoot misconfigurations with cluster insights](cluster-insights.md).

## AWSBackupFullAccessPolicyForBackup
<a name="_awsbackupfullaccesspolicyforbackup"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AWSBackupFullAccessPolicyForBackup` 

This policy grants the permissions necessary for AWS Backup to manage and create backups of the EKS Cluster. This policy includes the following permissions:


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `*`   |   `*`   |   `list`, `get`   | 

## AWSBackupFullAccessPolicyForRestore
<a name="_awsbackupfullaccesspolicyforrestore"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AWSBackupFullAccessPolicyForRestore` 

This policy grants the permissions necessary for AWS Backup to manage and restore backups of the EKS Cluster. This policy includes the following permissions:


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `*`   |   `*`   |   `list`, `get`, `create`   | 

## AmazonEKSACKPolicy
<a name="_amazoneksackpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSACKPolicy` 

This policy grants permissions necessary for the AWS Controllers for Kubernetes (ACK) capability to manage AWS resources from Kubernetes. The policy includes the following permissions:

ACK Custom Resource Management:
+ Full access to all ACK service custom resources across more than 50 AWS services, including S3, RDS, DynamoDB, Lambda, and EC2.
+ Create, read, update, and delete ACK custom resource definitions.

Namespace Access:
+ Read access to namespaces for resource organization.

Leader Election:
+ Create and read coordination leases for leader election.
+ Update and delete specific ACK service controller leases.

Event Management:
+ Create and patch events for ACK operations.

This policy is designed to support comprehensive AWS resource management through Kubernetes APIs. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the ACK capability is created.


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|  |   `namespaces`   |   `get`, `watch`, `list`   | 
|   `services.k8s.aws`, `acm.services.k8s.aws`, `acmpca.services.k8s.aws`, `apigateway.services.k8s.aws`, `apigatewayv2.services.k8s.aws`, `applicationautoscaling.services.k8s.aws`, `athena.services.k8s.aws`, `bedrock.services.k8s.aws`, `bedrockagent.services.k8s.aws`, `bedrockagentcorecontrol.services.k8s.aws`, `cloudfront.services.k8s.aws`, `cloudtrail.services.k8s.aws`, `cloudwatch.services.k8s.aws`, `cloudwatchlogs.services.k8s.aws`, `codeartifact.services.k8s.aws`, `cognitoidentityprovider.services.k8s.aws`, `documentdb.services.k8s.aws`, `dynamodb.services.k8s.aws`, `ec2.services.k8s.aws`, `ecr.services.k8s.aws`, `ecrpublic.services.k8s.aws`, `ecs.services.k8s.aws`, `efs.services.k8s.aws`, `eks.services.k8s.aws`, `elasticache.services.k8s.aws`, `elbv2.services.k8s.aws`, `emrcontainers.services.k8s.aws`, `eventbridge.services.k8s.aws`, `iam.services.k8s.aws`, `kafka.services.k8s.aws`, `keyspaces.services.k8s.aws`, `kinesis.services.k8s.aws`, `kms.services.k8s.aws`, `lambda.services.k8s.aws`, `memorydb.services.k8s.aws`, `mq.services.k8s.aws`, `networkfirewall.services.k8s.aws`, `opensearchservice.services.k8s.aws`, `organizations.services.k8s.aws`, `pipes.services.k8s.aws`, `prometheusservice.services.k8s.aws`, `ram.services.k8s.aws`, `rds.services.k8s.aws`, `recyclebin.services.k8s.aws`, `route53.services.k8s.aws`, `route53resolver.services.k8s.aws`, `s3.services.k8s.aws`, `s3control.services.k8s.aws`, `sagemaker.services.k8s.aws`, `secretsmanager.services.k8s.aws`, `ses.services.k8s.aws`, `sfn.services.k8s.aws`, `sns.services.k8s.aws`, `sqs.services.k8s.aws`, `ssm.services.k8s.aws`, `wafv2.services.k8s.aws`   |   `*`   |   `*`   | 
|   `coordination.k8s.io`   |   `leases`   |   `create`, `get`, `list`, `watch`   | 
|   `coordination.k8s.io`   |   `leases` (specific ACK service controller leases only)  |   `delete`, `update`, `patch`   | 
|  |   `events`   |   `create`, `patch`   | 
|   `apiextensions.k8s.io`   |   `customresourcedefinitions`   |   `*`   | 

## AmazonEKSArgoCDClusterPolicy
<a name="_amazoneksargocdclusterpolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSArgoCDClusterPolicy` 

This policy grants cluster-level permissions necessary for the Argo CD capability to discover resources and manage cluster-scoped objects. The policy includes the following permissions:

Namespace Management:
+ Create, read, update, and delete namespaces for application namespace management.

Custom Resource Definition Management:
+ Manage Argo CD-specific CRDs (Applications, AppProjects, ApplicationSets).

API Discovery:
+ Read access to Kubernetes API endpoints for resource discovery.

This policy is designed to support cluster-level Argo CD operations including namespace management and CRD installation. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the Argo CD capability is created.


| Kubernetes API groups | Kubernetes nonResourceURLs | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | --- | 
|  |  |   `namespaces`   |   `create`, `get`, `update`, `patch`, `delete`   | 
|   `apiextensions.k8s.io`   |  |   `customresourcedefinitions`   |   `create`   | 
|   `apiextensions.k8s.io`   |  |   `customresourcedefinitions` (Argo CD CRDs only)  |   `get`, `update`, `patch`, `delete`   | 
|  |   `/api`, `/api/*`, `/apis`, `/apis/*`   |  |   `get`   | 

## AmazonEKSArgoCDPolicy
<a name="_amazoneksargocdpolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSArgoCDPolicy` 

This policy grants namespace-level permissions necessary for the Argo CD capability to deploy and manage applications. The policy includes the following permissions:

Secret Management:
+ Full access to secrets for Git credentials and cluster secrets.

ConfigMap Access:
+ Read access to ConfigMaps so that the capability can warn you if you try to use unsupported Argo CD ConfigMaps.

Event Management:
+ Read and create events for application lifecycle tracking.

Argo CD Resource Management:
+ Full access to Applications, ApplicationSets, and AppProjects.
+ Manage finalizers and status for Argo CD resources.

This policy is designed to support namespace-level Argo CD operations including application deployment and management. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the Argo CD capability is created, scoped to the Argo CD namespace.


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|  |   `secrets`   |   `*`   | 
|  |   `configmaps`   |   `get`, `list`, `watch`   | 
|  |   `events`   |   `get`, `list`, `watch`, `patch`, `create`   | 
|   `argoproj.io`   |   `applications`, `applications/finalizers`, `applications/status`, `applicationsets`, `applicationsets/finalizers`, `applicationsets/status`, `appprojects`, `appprojects/finalizers`, `appprojects/status`   |   `*`   | 

## AmazonEKSKROPolicy
<a name="_amazonekskropolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSKROPolicy` 

This policy grants permissions necessary for the kro (Kube Resource Orchestrator) capability to create and manage custom Kubernetes APIs. The policy includes the following permissions:

kro Resource Management:
+ Full access to all kro resources including ResourceGraphDefinitions and custom resource instances.

Custom Resource Definition Management:
+ Create, read, update, and delete CRDs for custom APIs defined by ResourceGraphDefinitions.

Leader Election:
+ Create and read coordination leases for leader election.
+ Update and delete the kro controller lease.

Event Management:
+ Create and patch events for kro operations.

This policy is designed to support comprehensive resource composition and custom API management through kro. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the kro capability is created.


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `kro.run`   |   `*`   |   `*`   | 
|   `apiextensions.k8s.io`   |   `customresourcedefinitions`   |   `*`   | 
|   `coordination.k8s.io`   |   `leases`   |   `create`, `get`, `list`, `watch`   | 
|   `coordination.k8s.io`   |   `leases` (kro controller lease only)  |   `delete`, `update`, `patch`   | 
|  |   `events`   |   `create`, `patch`   | 

## Access policy updates
<a name="access-policy-updates"></a>

View details about updates to access policies since they were introduced. For automatic alerts about changes to this page, subscribe to the RSS feed in [Document history](doc-history.md).


| Change | Description | Date | 
| --- | --- | --- | 
|  Add policies for EKS Capabilities  |  Publish `AmazonEKSACKPolicy`, `AmazonEKSArgoCDClusterPolicy`, `AmazonEKSArgoCDPolicy`, and `AmazonEKSKROPolicy` for managing EKS Capabilities  |  November 22, 2025  | 
|  Add `AmazonEKSSecretReaderPolicy`   |  Add a new policy for read-only access to secrets  |  November 6, 2025  | 
|  Add policy for EKS Cluster Insights  |  Publish `AmazonEKSClusterInsightsPolicy`   |  December 2, 2024  | 
|  Add policies for Amazon EKS Hybrid  |  Publish `AmazonEKSHybridPolicy`   |  December 2, 2024  | 
|  Add policies for Amazon EKS Auto Mode  |  These access policies give the Cluster IAM Role and Node IAM Role permission to call Kubernetes APIs. AWS uses these to automate routine tasks for storage, compute, and networking resources.  |  December 2, 2024  | 
|  Add `AmazonEKSAdminViewPolicy`   |  Add a new policy for expanded view access, including resources like Secrets.  |  April 23, 2024  | 
|  Access policies introduced.  |  Amazon EKS introduced access policies.  |  May 29, 2023  | 

# Change authentication mode to use access entries
<a name="setting-up-access-entries"></a>

To begin using access entries, you must change the authentication mode of the cluster to either `API_AND_CONFIG_MAP` or `API` mode. This enables the access entry API.

## AWS Console
<a name="access-entries-setup-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to create an access entry in.

1. Choose the **Access** tab.

1. The **Authentication mode** shows the current authentication mode of the cluster. If the mode includes the EKS API, you can already add access entries and you can skip the remaining steps.

1. Choose **Manage access**.

1. For **Cluster authentication mode**, select a mode with the EKS API. Note that you can’t change the authentication mode back to a mode that removes the EKS API and access entries.

1. Choose **Save changes**. Amazon EKS begins to update the cluster, the status of the cluster changes to Updating, and the change is recorded in the **Update history** tab.

1. Wait for the status of the cluster to return to Active. When the cluster is Active, you can follow the steps in [Create access entries](creating-access-entries.md) to add access to the cluster for IAM principals.

## AWS CLI
<a name="access-setup-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the *AWS Command Line Interface User Guide*.

1. Run the following command. Replace *my-cluster* with the name of your cluster. If you want to disable the `ConfigMap` method permanently, replace `API_AND_CONFIG_MAP` with `API`.

   Amazon EKS begins to update the cluster, the status of the cluster changes to UPDATING, and you can view the change with the `aws eks list-updates` command.

   ```
   aws eks update-cluster-config --name my-cluster --access-config authenticationMode=API_AND_CONFIG_MAP
   ```

1. Wait for the status of the cluster to return to ACTIVE. When the cluster is ACTIVE, you can follow the steps in [Create access entries](creating-access-entries.md) to add access to the cluster for IAM principals.
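The wait in the last step can be scripted. The following is a minimal sketch, assuming the AWS CLI is configured for your cluster's account and Region; `get_status` and `wait_for_active` are illustrative helper names, not AWS CLI commands.

```shell
# Poll the cluster status until it returns to ACTIVE.
# get_status wraps the real AWS CLI call so it is easy to adapt.
get_status() {
  aws eks describe-cluster --name "$1" --query 'cluster.status' --output text
}

wait_for_active() {
  cluster="$1"
  tries="${2:-60}"   # up to ~10 minutes with a 10-second interval
  i=0
  while [ "$i" -lt "$tries" ]; do
    if [ "$(get_status "$cluster")" = "ACTIVE" ]; then
      return 0
    fi
    sleep 10
    i=$((i + 1))
  done
  echo "cluster $cluster did not become ACTIVE" >&2
  return 1
}

# Usage: wait_for_active my-cluster && echo "ready for access entries"
```

Because the AWS CLI call is isolated in `get_status`, the polling logic can be reused for other cluster updates as well.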

## Required platform version
<a name="_required_platform_version"></a>

To use *access entries*, the cluster must have a platform version that is the same or later than the version listed in the following table, or a Kubernetes version that is later than the versions listed in the table. If your Kubernetes version is not listed, all platform versions support access entries.


| Kubernetes version | Platform version | 
| --- | --- | 
|  Not Listed  |  All Supported  | 
|   `1.30`   |   `eks.2`   | 
|   `1.29`   |   `eks.1`   | 
|   `1.28`   |   `eks.6`   | 

For more information, see [platform-versions](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html).
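The table above can be checked mechanically. The following is a minimal sketch (function names are illustrative) that compares the two values you can retrieve with `aws eks describe-cluster --query 'cluster.[version,platformVersion]'`.

```shell
# Minimum platform version (the N in "eks.N") required for access entries,
# per the table above; 0 means all platform versions qualify.
min_platform_for() {
  case "$1" in
    1.30) echo 2 ;;
    1.29) echo 1 ;;
    1.28) echo 6 ;;
    *)    echo 0 ;;   # Kubernetes versions not listed: all platform versions
  esac
}

supports_access_entries() {
  k8s_version="$1"       # e.g. 1.29
  platform_version="$2"  # e.g. eks.4
  have="${platform_version#eks.}"
  need="$(min_platform_for "$k8s_version")"
  [ "$have" -ge "$need" ]
}

supports_access_entries 1.28 eks.6 && echo "access entries supported"
```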

# Create access entries
<a name="creating-access-entries"></a>

Before creating access entries, consider the following:
+ A properly set authentication mode. See [Change authentication mode to use access entries](setting-up-access-entries.md).
+ An *access entry* includes the Amazon Resource Name (ARN) of one, and only one, existing IAM principal. An IAM principal can’t be included in more than one access entry. Additional considerations for the ARN that you specify:
  + IAM best practices recommend accessing your cluster using IAM *roles* that have short-term credentials, rather than IAM *users* that have long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.
  + If the ARN is for an IAM role, it *can* include a path. ARNs in `aws-auth` `ConfigMap` entries *can’t* include a path. For example, your ARN can be `arn:aws:iam::<111122223333>:role/<development/apps/my-role>` or `arn:aws:iam::<111122223333>:role/<my-role>`.
  + If the type of the access entry is anything other than `STANDARD` (see next consideration about types), the ARN must be in the same AWS account that your cluster is in. If the type is `STANDARD`, the ARN can be in the same, or different, AWS account than the account that your cluster is in.
  + You can’t change the IAM principal after the access entry is created.
  + If you ever delete the IAM principal with this ARN, the access entry isn’t automatically deleted. We recommend that you delete the access entry with an ARN for an IAM principal that you delete. If you don’t delete the access entry and ever recreate the IAM principal, even if it has the same ARN, the access entry won’t work. This is because even though the ARN is the same for the recreated IAM principal, the `roleID` or `userID` (you can see this with the `aws sts get-caller-identity` AWS CLI command) is different for the recreated IAM principal than it was for the original IAM principal. Even though you don’t see the IAM principal’s `roleID` or `userID` for an access entry, Amazon EKS stores it with the access entry.
+ Each access entry has a *type*. The type of the access entry depends on the type of resource it is associated with, and does not define the permissions. If you don’t specify a type, Amazon EKS automatically sets the type to `STANDARD`. The available types are:
  +  `EC2_LINUX` - For an IAM role used with Linux or Bottlerocket self-managed nodes
  +  `EC2_WINDOWS` - For an IAM role used with Windows self-managed nodes
  +  `FARGATE_LINUX` - For an IAM role used with AWS Fargate (Fargate)
  +  `HYBRID_LINUX` - For an IAM role used with hybrid nodes
  +  `STANDARD` - Default type if none specified
  +  `EC2` - For EKS Auto Mode custom node classes. For more information, see [Create node class access entry](create-node-class.md#auto-node-access-entry).
  + You can’t change the type after the access entry is created.
+ You don’t need to create an access entry for an IAM role that’s used for a managed node group or a Fargate profile. Amazon EKS creates access entries for these roles automatically (if access entries are enabled), or updates the `aws-auth` `ConfigMap` (if access entries are unavailable).
+ If the type of the access entry is `STANDARD`, you can specify a *username* for the access entry. If you don’t specify a value for username, Amazon EKS sets one of the following values for you, depending on the type of the access entry and whether the IAM principal that you specified is an IAM role or IAM user. Unless you have a specific reason for specifying your own username, we recommend that you don’t specify one and let Amazon EKS auto-generate it for you. If you specify your own username:
  + It can’t start with `system:`, `eks:`, `aws:`, `amazon:`, or `iam:`.
  + If the username is for an IAM role, we recommend that you add `{{SessionName}}` or `{{SessionNameRaw}}` to the end of your username. If you add either `{{SessionName}}` or `{{SessionNameRaw}}` to your username, the username must include a colon *before* `{{SessionName}}`. When this role is assumed, the AWS STS session name that is specified when assuming the role is automatically passed to the cluster and appears in CloudTrail logs. For example, you can’t have a username of `john{{SessionName}}`. The username would have to be `:john{{SessionName}}` or `jo:hn{{SessionName}}`. The colon only has to be before `{{SessionName}}`. The username generated by Amazon EKS in the following table includes an ARN. Since an ARN includes colons, it meets this requirement. The colon isn’t required if you don’t include `{{SessionName}}` in your username. Note that in `{{SessionName}}`, the special character "@" is replaced with "-" in the session name. `{{SessionNameRaw}}` keeps all special characters in the session name.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/creating-access-entries.html)

    You can change the username after the access entry is created.
+ If an access entry’s type is `STANDARD`, and you want to use Kubernetes RBAC authorization, you can add one or more *group names* to the access entry. After you create an access entry you can add and remove group names. For the IAM principal to have access to Kubernetes objects on your cluster, you must create and manage Kubernetes role-based authorization (RBAC) objects. Create Kubernetes `RoleBinding` or `ClusterRoleBinding` objects on your cluster that specify the group name as a `subject` for `kind: Group`. Kubernetes authorizes the IAM principal access to any cluster objects that you’ve specified in a Kubernetes `Role` or `ClusterRole` object that you’ve also specified in your binding’s `roleRef`. If you specify group names, we recommend that you’re familiar with the Kubernetes role-based authorization (RBAC) objects. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
**Important**  
Amazon EKS doesn’t confirm that any Kubernetes RBAC objects that exist on your cluster include any of the group names that you specify. For example, you can create an access entry with a group name that doesn’t currently exist on your cluster; Amazon EKS accepts the entry without returning an error.

  Instead of, or in addition to, Kubernetes authorizing the IAM principal access to Kubernetes objects on your cluster, you can associate Amazon EKS *access policies* to an access entry. Amazon EKS authorizes IAM principals to access Kubernetes objects on your cluster with the permissions in the access policy. You can scope an access policy’s permissions to Kubernetes namespaces that you specify. Use of access policies doesn’t require you to manage Kubernetes RBAC objects. For more information, see [Associate access policies with access entries](access-policies.md).
+ If you create an access entry with type `EC2_LINUX` or `EC2_WINDOWS`, the IAM principal creating the access entry must have the `iam:PassRole` permission. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.
+ Similar to standard [IAM behavior](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency), access entry creation and updates are eventually consistent, and may take several seconds to be effective after the initial API call returns successfully. You must design your applications to account for these potential delays. We recommend that you don’t include access entry creates or updates in the critical, high-availability code paths of your application. Instead, make changes in a separate initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have been propagated before production workflows depend on them.
+ Access entries do not support [service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html). You can’t create an access entry where the principal ARN is a service-linked role. You can identify service-linked roles by their ARN, which is in the format `arn:aws:iam::*:role/aws-service-role/*`.
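The service-linked role restriction can be enforced before calling the API. The following is a minimal sketch that matches the documented ARN format; `is_service_linked_role` is an illustrative name, not an AWS CLI command.

```shell
# Access entries don't support service-linked roles; their ARNs contain
# the path segment role/aws-service-role/.
is_service_linked_role() {
  case "$1" in
    arn:aws:iam::*:role/aws-service-role/*) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage before create-access-entry:
#   if is_service_linked_role "$PRINCIPAL_ARN"; then
#     echo "refusing: $PRINCIPAL_ARN is a service-linked role" >&2
#   fi
```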

You can create an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-create-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to create an access entry in.

1. Choose the **Access** tab.

1. Choose **Create access entry**.

1. For **IAM principal**, select an existing IAM role or user. IAM best practices recommend accessing your cluster using IAM *roles* that have short-term credentials, rather than IAM *users* that have long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

1. For **Type**, if the access entry is for the node role used for self-managed Amazon EC2 nodes, select **EC2 Linux** or **EC2 Windows**. Otherwise, accept the default (**Standard**).

1. If the **Type** you chose is **Standard** and you want to specify a **Username**, enter the username.

1. If the **Type** you chose is **Standard** and you want to use Kubernetes RBAC authorization for the IAM principal, specify one or more names for **Groups**. If you don’t specify any group names and want to use Amazon EKS authorization, you can associate an access policy in a later step, or after the access entry is created.

1. (Optional) For **Tags**, assign labels to the access entry. For example, to make it easier to find all resources with the same tag.

1. Choose **Next**.

1. On the **Add access policy** page, if the type you chose was **Standard** and you want Amazon EKS to authorize the IAM principal to have permissions to the Kubernetes objects on your cluster, complete the following steps. Otherwise, choose **Next**.

   1. For **Policy name**, choose an access policy. You can’t view the permissions of the access policies, but they include similar permissions to those in the Kubernetes user-facing `ClusterRole` objects. For more information, see [User-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) in the Kubernetes documentation.

   1. Choose one of the following options:
      +  **Cluster** – Choose this option if you want Amazon EKS to authorize the IAM principal to have the permissions in the access policy for all Kubernetes objects on your cluster.
      +  **Kubernetes namespace** – Choose this option if you want Amazon EKS to authorize the IAM principal to have the permissions in the access policy for all Kubernetes objects in a specific Kubernetes namespace on your cluster. For **Namespace**, enter the name of the Kubernetes namespace on your cluster. If you want to add additional namespaces, choose **Add new namespace** and enter the namespace name.

   1. If you want to add additional policies, choose **Add policy**. You can scope each policy differently, but you can add each policy only once.

   1. Choose **Next**.

1. Review the configuration for your access entry. If anything looks incorrect, choose **Previous** to go back through the steps and correct the error. If the configuration is correct, choose **Create**.

## AWS CLI
<a name="access-create-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

1. Use any of the following examples to create an access entry:
   + Create an access entry for a self-managed Amazon EC2 Linux node group. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *EKS-my-cluster-self-managed-ng-1* with the name of your [node IAM role](create-node-role.md). If your node group is a Windows node group, replace `EC2_LINUX` with `EC2_WINDOWS`.

     ```
     aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-self-managed-ng-1 --type EC2_LINUX
     ```

     You can’t use the `--kubernetes-groups` option when you specify a type other than `STANDARD`. You can’t associate an access policy to this access entry, because its type is a value other than `STANDARD`.
   + Create an access entry that allows Kubernetes to authorize cluster access for an IAM role that isn’t used for an Amazon EC2 self-managed node group. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *my-role* with the name of your IAM role. Replace *Viewers* with the name of a group that you’ve specified in a Kubernetes `RoleBinding` or `ClusterRoleBinding` object on your cluster.

     ```
      aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role --type STANDARD --kubernetes-groups Viewers
     ```
   + Create an access entry that allows an IAM user to authenticate to your cluster. This example is provided because this is possible, though IAM best practices recommend accessing your cluster using IAM *roles* that have short-term credentials, rather than IAM *users* that have long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

     ```
     aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:user/my-user --type STANDARD --username my-user
     ```

     If you want this user to have more access to your cluster than the permissions in the Kubernetes API discovery roles, then you need to associate an access policy to the access entry, since the `--kubernetes-groups` option isn’t used. For more information, see [Associate access policies with access entries](access-policies.md) and [API discovery roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles) in the Kubernetes documentation.

# Update access entries
<a name="updating-access-entries"></a>

You can update an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-update-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to update an access entry in.

1. Choose the **Access** tab.

1. Choose the access entry that you want to update.

1. Choose **Edit**.

1. For **Username**, you can change the existing value.

1. For **Groups**, you can remove existing group names or add new group names. If the following group names exist, don’t remove them: **system:nodes** or **system:bootstrappers**. Removing these groups can cause your cluster to function improperly. If you don’t specify any group names and want to use Amazon EKS authorization, associate an [access policy](access-policies.md) in a later step.

1. For **Tags**, you can assign labels to the access entry. For example, to make it easier to find all resources with the same tag. You can also remove existing tags.

1. Choose **Save changes**.

1. If you want to associate an access policy to the entry, see [Associate access policies with access entries](access-policies.md).

## AWS CLI
<a name="access-update-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

1. To update an access entry, replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *EKS-my-cluster-my-namespace-Viewers* with the name of an IAM role.

   ```
   aws eks update-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers --kubernetes-groups Viewers
   ```

   You can’t use the `--kubernetes-groups` option if the type of the access entry is a value other than `STANDARD`. You also can’t associate an access policy to an access entry with a type other than `STANDARD`.

# Delete access entries
<a name="deleting-access-entries"></a>

If you discover that you deleted an access entry in error, you can always recreate it. If the access entry that you’re deleting is associated to any access policies, the associations are automatically deleted. You don’t have to disassociate access policies from an access entry before deleting the access entry.

You can delete an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-delete-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to delete an access entry from.

1. Choose the **Access** tab.

1. In the **Access entries** list, choose the access entry that you want to delete.

1. Choose **Delete**.

1. In the confirmation dialog box, choose **Delete**.

## AWS CLI
<a name="access-delete-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

1. To delete an access entry, replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *my-role* with the name of the IAM role that you no longer want to have access to your cluster.

   ```
   aws eks delete-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role
   ```

# Set a custom username for EKS access entries
<a name="set-custom-username"></a>

When creating access entries for Amazon EKS, you can either use the automatically generated username or specify a custom username. This page explains both options and guides you through setting a custom username.

## Overview
<a name="_overview"></a>

The username in an access entry is used to identify the IAM principal in Kubernetes logs and audit trails. By default, Amazon EKS generates a username based on the IAM identity’s ARN, but you can specify a custom username if needed.

## Default username generation
<a name="_default_username_generation"></a>

If you don’t specify a value for username, Amazon EKS automatically generates a username based on the IAM Identity:
+  **For IAM Users**:
  + EKS sets the Kubernetes username to the ARN of the IAM User
  + Example:

    ```
    arn:aws:iam::<111122223333>:user/<my-user>
    ```
+  **For IAM Roles**:
  + EKS sets the Kubernetes username to the STS ARN of the role when it’s assumed. Amazon EKS appends `{{SessionName}}` to the username. If the ARN of the role that you specified contains a path, Amazon EKS removes it in the generated username.
  + Example:

    ```
    arn:aws:sts::<111122223333>:assumed-role/<my-role>/{{SessionName}}
    ```

Unless you have a specific reason for specifying your own username, we recommend that you don’t specify one and let Amazon EKS auto-generate it for you.
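The generation rule for roles (drop the role path, switch to the STS assumed-role form, append `{{SessionName}}`) can be sketched as follows. This mirrors, rather than calls, what Amazon EKS does internally, and `generated_username_for_role` is an illustrative name.

```shell
# Derive the username EKS generates for an IAM role: the STS assumed-role
# form of the ARN, with any role path removed and {{SessionName}} appended.
generated_username_for_role() {
  role_arn="$1"                               # arn:aws:iam::<account>:role/<path/><name>
  account="${role_arn#arn:aws:iam::}"
  account="${account%%:*}"
  rest="${role_arn##*:role/}"                 # <path/><name>
  name="${rest##*/}"                          # drop the path, keep the role name
  echo "arn:aws:sts::${account}:assumed-role/${name}/{{SessionName}}"
}

generated_username_for_role "arn:aws:iam::111122223333:role/development/apps/my-role"
# prints arn:aws:sts::111122223333:assumed-role/my-role/{{SessionName}}
```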

## Setting a custom username
<a name="_setting_a_custom_username"></a>

When creating an access entry, you can specify a custom username using the `--username` parameter:

```
aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --type STANDARD --username <custom-username>
```

### Requirements for custom usernames
<a name="_requirements_for_custom_usernames"></a>

If you specify a custom username:
+ The username can’t start with `system:`, `eks:`, `aws:`, `amazon:`, or `iam:`.
+ If the username is for an IAM role, we recommend that you add `{{SessionName}}` or `{{SessionNameRaw}}` to the end of your username.
  + If you add either `{{SessionName}}` or `{{SessionNameRaw}}` to your username, the username must include a colon *before* `{{SessionName}}`.
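These requirements can be checked before calling `create-access-entry`. The following is a minimal sketch; `validate_username` is an illustrative name, not part of the AWS CLI.

```shell
# Check a proposed custom username against the documented rules:
# no reserved prefixes, and a colon somewhere before {{SessionName}}.
validate_username() {
  u="$1"
  case "$u" in
    system:*|eks:*|aws:*|amazon:*|iam:*)
      echo "invalid: reserved prefix" >&2; return 1 ;;
  esac
  case "$u" in
    *"{{SessionName}}"*|*"{{SessionNameRaw}}"*)
      before=${u%%"{{Session"*}       # text before the first {{Session...}}
      case "$before" in
        *:*) ;;                       # colon present, OK
        *) echo "invalid: colon required before {{SessionName}}" >&2; return 1 ;;
      esac ;;
  esac
  return 0
}

validate_username ":john{{SessionName}}" && echo valid
```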

# Create an access entry for an IAM role or user using an access policy and the AWS CLI
<a name="create-standard-access-entry-policy"></a>

Create Amazon EKS access entries that use AWS-managed EKS access policies to grant IAM identities standardized permissions for accessing and managing Kubernetes clusters.

## Overview
<a name="_overview"></a>

Access entries in Amazon EKS define how IAM identities (users and roles) can access and interact with your Kubernetes clusters. By creating access entries with EKS access policies, you can:
+ Grant specific IAM users or roles permission to access your EKS cluster
+ Control permissions using AWS-managed EKS access policies that provide standardized, predefined permission sets
+ Scope permissions to specific namespaces or cluster-wide
+ Simplify access management without modifying the `aws-auth` ConfigMap or creating Kubernetes RBAC resources
+ Use an AWS-integrated approach to Kubernetes access control that covers common use cases while maintaining security best practices

This approach is recommended for most use cases because it provides AWS-managed, standardized permissions without requiring manual Kubernetes RBAC configuration. EKS access policies eliminate the need to manually configure Kubernetes RBAC resources and offer predefined permission sets that cover common use cases.

## Prerequisites
<a name="_prerequisites"></a>
+ The *authentication mode* of your cluster must be configured to enable *access entries*. For more information, see [Change authentication mode to use access entries](setting-up-access-entries.md).
+ Install and configure the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

## Step 1: Define access entry
<a name="ap1-s1"></a>

1. Find the ARN of the IAM identity, such as a user or role, that you want to grant permissions to.
   + Each IAM identity can have only one EKS access entry.

1. Determine if you want the Amazon EKS access policy permissions to apply to only a specific Kubernetes namespace, or across the entire cluster.
   + If you want to limit the permissions to a specific namespace, make note of the namespace name.

1. Select the EKS access policy you want for the IAM identity. This policy gives in-cluster permissions. Note the ARN of the policy.
   + For a list of policies, see [available access policies](access-policy-permissions.md).

1. Determine if the auto-generated username is appropriate for the access entry, or if you need to manually specify a username.
   + By default, AWS auto-generates a username based on the IAM identity. You can set a custom username instead. The username appears in Kubernetes logs.
   + For more information, see [Set a custom username for EKS access entries](set-custom-username.md).
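
If you're granting access to the IAM identity you're currently signed in as, you can look up its ARN with the AWS CLI. This is an optional check; the `--query` filter below just extracts the ARN field from the response:

```
# Print the ARN of the IAM identity behind the current AWS CLI credentials.
aws sts get-caller-identity --query Arn --output text
```

If the output is an assumed-role ARN (`arn:aws:sts::...:assumed-role/...`), use the ARN of the underlying IAM role for the access entry instead.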

## Step 2: Create access entry
<a name="ap1-s2"></a>

After planning the access entry, use the AWS CLI to create it.

The following example covers most use cases. [View the CLI reference for all configuration options](https://docs.aws.amazon.com/cli/latest/reference/eks/create-access-entry.html).

You will attach the access policy in the next step.

```
aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --type STANDARD
```
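
After running the command, you can optionally confirm that the entry exists. The following check assumes your AWS CLI credentials can call the EKS API for this cluster:

```
# List all access entries on the cluster; the new principal ARN should appear.
aws eks list-access-entries --cluster-name <cluster-name>
```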

## Step 3: Associate access policy
<a name="_step_3_associate_access_policy"></a>

The command differs based on whether you want the policy to be limited to a specified Kubernetes namespace.

You need the ARN of the access policy. Review the [available access policies](access-policy-permissions.md).

### Associate policy without namespace scope
<a name="_create_policy_without_namespace_scope"></a>

```
aws eks associate-access-policy --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --policy-arn <access-policy-arn>
```

### Associate policy with namespace scope
<a name="_create_with_namespace_scope"></a>

```
aws eks associate-access-policy --cluster-name <cluster-name> --principal-arn <iam-identity-arn> \
    --access-scope type=namespace,namespaces=my-namespace1,my-namespace2 --policy-arn <access-policy-arn>
```
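
To verify the association and its scope, you can list the policies attached to the principal. This is an optional sketch using the same placeholder values as the commands above:

```
# Show the access policies and scopes associated with the IAM principal.
aws eks list-associated-access-policies \
    --cluster-name <cluster-name> --principal-arn <iam-identity-arn>
```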

## Next steps
<a name="_next_steps"></a>
+  [Create a kubeconfig so you can use kubectl with an IAM identity](create-kubeconfig.md) 

# Create an access entry using Kubernetes groups with the AWS CLI
<a name="create-k8s-group-access-entry"></a>

Create Amazon EKS access entries that use Kubernetes groups for authorization. This approach requires that you manually configure Kubernetes RBAC.

**Note**  
For most use cases, we recommend using EKS Access Policies instead of the Kubernetes groups approach described on this page. EKS Access Policies provide a simpler, more AWS-integrated way to manage access without requiring manual RBAC configuration. Use the Kubernetes groups approach only when you need more granular control than what EKS Access Policies offer.

## Overview
<a name="_overview"></a>

Access entries define how IAM identities (users and roles) access your Kubernetes clusters. The Kubernetes groups approach grants IAM users or roles permission to access your EKS cluster through standard Kubernetes RBAC groups. This method requires creating and managing Kubernetes RBAC resources (Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings) and is recommended when you need highly customized permission sets, complex authorization requirements, or want to maintain consistent access control patterns across hybrid Kubernetes environments.

This topic does not cover creating access entries for IAM identities used for Amazon EC2 instances to join EKS clusters.

## Prerequisites
<a name="_prerequisites"></a>
+ The *authentication mode* of your cluster must be configured to enable *access entries*. For more information, see [Change authentication mode to use access entries](setting-up-access-entries.md).
+ Install and configure the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.
+ Familiarity with Kubernetes RBAC is recommended. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.

## Step 1: Define access entry
<a name="k8s-group-s1"></a>

1. Find the ARN of the IAM identity, such as a user or role, that you want to grant permissions to.
   + Each IAM identity can have only one EKS access entry.

1. Determine which Kubernetes groups you want to associate with this IAM identity.
   + You will need to create or use existing Kubernetes `Role`/`ClusterRole` and `RoleBinding`/`ClusterRoleBinding` resources that reference these groups.

1. Determine if the auto-generated username is appropriate for the access entry, or if you need to manually specify a username.
   + By default, AWS auto-generates a username based on the IAM identity. You can set a custom username instead. The username appears in Kubernetes logs.
   + For more information, see [Set a custom username for EKS access entries](set-custom-username.md).

## Step 2: Create access entry with Kubernetes groups
<a name="k8s-group-s2"></a>

After planning the access entry, use the AWS CLI to create it with the appropriate Kubernetes groups.

```
aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --type STANDARD --kubernetes-groups <groups>
```

Replace:
+  `<cluster-name>` with your EKS cluster name
+  `<iam-identity-arn>` with the ARN of the IAM user or role
+  `<groups>` with a comma-separated list of Kubernetes groups (for example, `dev-team,qa-team`). Don't use group names that start with the reserved `system:` prefix.

 [View the CLI reference for all configuration options](https://docs.aws.amazon.com/cli/latest/reference/eks/create-access-entry.html).
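
You can optionally verify the entry and its group list with `describe-access-entry`, using the same placeholder values:

```
# Confirm the access entry exists and lists the expected kubernetesGroups.
aws eks describe-access-entry \
    --cluster-name <cluster-name> --principal-arn <iam-identity-arn>
```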

## Step 3: Configure Kubernetes RBAC
<a name="_step_3_configure_kubernetes_rbac"></a>

For the IAM principal to have access to Kubernetes objects on your cluster, you must create and manage Kubernetes role-based access control (RBAC) objects:

1. Create Kubernetes `Role` or `ClusterRole` objects that define the permissions.

1. Create Kubernetes `RoleBinding` or `ClusterRoleBinding` objects on your cluster that specify the group name as a `subject` for `kind: Group`.

For detailed information about configuring groups and permissions in Kubernetes, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
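
As a minimal sketch, both RBAC objects can be created imperatively with `kubectl`. The role name, namespace, and `dev-team` group below are hypothetical placeholders; the group name must match one of the groups you passed to `--kubernetes-groups`:

```
# Create a namespaced Role that can read Pods (hypothetical names).
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n my-namespace

# Bind the Role to the "dev-team" group referenced by the access entry.
kubectl create rolebinding pod-reader-binding \
    --role=pod-reader --group=dev-team -n my-namespace
```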

## Next steps
<a name="_next_steps"></a>
+  [Create a kubeconfig so you can use kubectl with an IAM identity](create-kubeconfig.md) 

# Grant IAM users access to Kubernetes with a ConfigMap
<a name="auth-configmap"></a>

**Important**  
The `aws-auth ConfigMap` is deprecated. For the recommended method to manage access to Kubernetes APIs, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md).

Access to your cluster using [IAM principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) is enabled by the [AWS IAM Authenticator for Kubernetes](https://github.com/kubernetes-sigs/aws-iam-authenticator#readme), which runs on the Amazon EKS control plane. The authenticator gets its configuration information from the `aws-auth` `ConfigMap`. For all `aws-auth` `ConfigMap` settings, see [Full Configuration Format](https://github.com/kubernetes-sigs/aws-iam-authenticator#full-configuration-format) on GitHub.

## Add IAM principals to your Amazon EKS cluster
<a name="aws-auth-users"></a>

When you create an Amazon EKS cluster, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that creates the cluster is automatically granted `system:masters` permissions in the cluster’s role-based access control (RBAC) configuration in the Amazon EKS control plane. This principal doesn’t appear in any visible configuration, so make sure to keep track of which principal originally created the cluster. To grant additional IAM principals the ability to interact with your cluster, edit the `aws-auth ConfigMap` within Kubernetes and create a Kubernetes `rolebinding` or `clusterrolebinding` with the name of a `group` that you specify in the `aws-auth ConfigMap`.

**Note**  
For more information about Kubernetes role-based access control (RBAC) configuration, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.

1. Determine which credentials `kubectl` is using to access your cluster. On your computer, you can see which credentials `kubectl` uses with the following command. Replace *~/.kube/config* with the path to your `kubeconfig` file if you don’t use the default path.

   ```
   cat ~/.kube/config
   ```

   An example output is as follows.

   ```
   [...]
   contexts:
   - context:
       cluster: my-cluster.region-code.eksctl.io
       user: admin@my-cluster.region-code.eksctl.io
     name: admin@my-cluster.region-code.eksctl.io
   current-context: admin@my-cluster.region-code.eksctl.io
   [...]
   ```

   In the previous example output, the credentials for a user named *admin* are configured for a cluster named *my-cluster*. If this is the user that created the cluster, then it already has access to your cluster. If it’s not the user that created the cluster, then you need to complete the remaining steps to enable cluster access for other IAM principals. [IAM best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) recommend that you grant permissions to roles instead of users. You can see which other principals currently have access to your cluster with the following command:

   ```
   kubectl describe -n kube-system configmap/aws-auth
   ```

   An example output is as follows.

   ```
   Name:         aws-auth
   Namespace:    kube-system
   Labels:       <none>
   Annotations:  <none>
   
   Data
   ====
   mapRoles:
   ----
   - groups:
     - system:bootstrappers
     - system:nodes
     rolearn: arn:aws:iam::111122223333:role/my-node-role
     username: system:node:{{EC2PrivateDNSName}}
   
   
   BinaryData
   ====
   
   Events:  <none>
   ```

   The previous example is a default `aws-auth` `ConfigMap`. Only the node instance role has access to the cluster.

1. Make sure that you have existing Kubernetes `roles` and `rolebindings` or `clusterroles` and `clusterrolebindings` that you can map IAM principals to. For more information about these resources, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.

   1. View your existing Kubernetes `roles` or `clusterroles`. `Roles` are scoped to a `namespace`, but `clusterroles` are scoped to the cluster.

      ```
      kubectl get roles -A
      ```

      ```
      kubectl get clusterroles
      ```

   1. View the details of any `role` or `clusterrole` returned in the previous output and confirm that it has the permissions (`rules`) that you want your IAM principals to have in your cluster.

      Replace *role-name* with a `role` name returned in the output from the previous command. Replace *kube-system* with the namespace of the `role`.

      ```
      kubectl describe role role-name -n kube-system
      ```

      Replace *cluster-role-name* with a `clusterrole` name returned in the output from the previous command.

      ```
      kubectl describe clusterrole cluster-role-name
      ```

   1. View your existing Kubernetes `rolebindings` or `clusterrolebindings`. `Rolebindings` are scoped to a `namespace`, but `clusterrolebindings` are scoped to the cluster.

      ```
      kubectl get rolebindings -A
      ```

      ```
      kubectl get clusterrolebindings
      ```

   1. View the details of any `rolebinding` or `clusterrolebinding` and confirm that it has a `role` or `clusterrole` from the previous step listed as a `roleRef` and a group name listed for `subjects`.

      Replace *role-binding-name* with a `rolebinding` name returned in the output from the previous command. Replace *kube-system* with the `namespace` of the `rolebinding`.

      ```
      kubectl describe rolebinding role-binding-name -n kube-system
      ```

      An example output is as follows.

      ```
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: eks-console-dashboard-restricted-access-role-binding
        namespace: default
      subjects:
      - kind: Group
        name: eks-console-dashboard-restricted-access-group
        apiGroup: rbac.authorization.k8s.io
      roleRef:
        kind: Role
        name: eks-console-dashboard-restricted-access-role
        apiGroup: rbac.authorization.k8s.io
      ```

      Replace *cluster-role-binding-name* with a `clusterrolebinding` name returned in the output from the previous command.

      ```
      kubectl describe clusterrolebinding cluster-role-binding-name
      ```

      An example output is as follows.

      ```
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: eks-console-dashboard-full-access-binding
      subjects:
      - kind: Group
        name: eks-console-dashboard-full-access-group
        apiGroup: rbac.authorization.k8s.io
      roleRef:
        kind: ClusterRole
        name: eks-console-dashboard-full-access-clusterrole
        apiGroup: rbac.authorization.k8s.io
      ```

1. Edit the `aws-auth` `ConfigMap`. You can use a tool such as `eksctl` to update the `ConfigMap` or you can update it manually by editing it.
**Important**  
We recommend using `eksctl`, or another tool, to edit the `ConfigMap`. For information about other tools you can use, see [Use tools to make changes to the aws-auth ConfigMap](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#use-tools-to-make-changes-to-the-aws-auth-configmap) in the Amazon EKS best practices guides. An improperly formatted `aws-auth` `ConfigMap` can cause you to lose access to your cluster.
   + View steps to [edit configmap with eksctl](#configmap-eksctl).
   + View steps to [edit configmap manually](#configmap-manual).

### Edit ConfigMap with eksctl
<a name="configmap-eksctl"></a>

1. You need version `0.215.0` or later of the `eksctl` command line tool installed on your device or AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

1. View the current mappings in the `ConfigMap`. Replace *my-cluster* with the name of your cluster. Replace *region-code* with the AWS Region that your cluster is in.

   ```
   eksctl get iamidentitymapping --cluster my-cluster --region=region-code
   ```

   An example output is as follows.

   ```
   ARN                                                                                             USERNAME                                GROUPS                          ACCOUNT
   arn:aws:iam::111122223333:role/eksctl-my-cluster-my-nodegroup-NodeInstanceRole-1XLS7754U3ZPA    system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   ```

1. Add a mapping for a role. Replace *my-role* with your role name. Replace *eks-console-dashboard-full-access-group* with the name of the group specified in your Kubernetes `RoleBinding` or `ClusterRoleBinding` object. Replace *111122223333* with your account ID. You can replace *admin* with any name you choose.

   ```
   eksctl create iamidentitymapping --cluster my-cluster --region=region-code \
       --arn arn:aws:iam::111122223333:role/my-role --username admin --group eks-console-dashboard-full-access-group \
       --no-duplicate-arns
   ```
**Important**  
The role ARN can’t include a path such as `role/my-team/developers/my-role`. The format of the ARN must be `arn:aws:iam::111122223333:role/my-role`. In this example, `my-team/developers/` needs to be removed.

   An example output is as follows.

   ```
   [...]
   2022-05-09 14:51:20 [ℹ]  adding identity "arn:aws:iam::111122223333:role/my-role" to auth ConfigMap
   ```

1. Add a mapping for a user. [IAM best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) recommend that you grant permissions to roles instead of users. Replace *my-user* with your IAM user name. Replace *eks-console-dashboard-restricted-access-group* with the name of the group specified in your Kubernetes `RoleBinding` or `ClusterRoleBinding` object. Replace *111122223333* with your account ID. You can replace the *my-user* value for `--username` with any name you choose.

   ```
   eksctl create iamidentitymapping --cluster my-cluster --region=region-code \
       --arn arn:aws:iam::111122223333:user/my-user --username my-user --group eks-console-dashboard-restricted-access-group \
       --no-duplicate-arns
   ```

   An example output is as follows.

   ```
   [...]
   2022-05-09 14:53:48 [ℹ]  adding identity "arn:aws:iam::111122223333:user/my-user" to auth ConfigMap
   ```

1. View the mappings in the `ConfigMap` again.

   ```
   eksctl get iamidentitymapping --cluster my-cluster --region=region-code
   ```

   An example output is as follows.

   ```
   ARN                                                                                             USERNAME                                GROUPS                                  ACCOUNT
   arn:aws:iam::111122223333:role/eksctl-my-cluster-my-nodegroup-NodeInstanceRole-1XLS7754U3ZPA    system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   arn:aws:iam::111122223333:role/my-role                                                          admin                                   eks-console-dashboard-full-access-group
   arn:aws:iam::111122223333:user/my-user                                                          my-user                                 eks-console-dashboard-restricted-access-group
   ```

### Edit ConfigMap manually
<a name="configmap-manual"></a>

1. Open the `ConfigMap` for editing.

   ```
   kubectl edit -n kube-system configmap/aws-auth
   ```
**Note**  
If you receive an error stating "`Error from server (NotFound): configmaps "aws-auth" not found`", then use the procedure in [Apply the aws-auth ConfigMap to your cluster](#aws-auth-configmap) to apply the stock `ConfigMap`.

1. Add your IAM principals to the `ConfigMap`. An IAM group isn’t an IAM principal, so it can’t be added to the `ConfigMap`.
   +  **To add an IAM role (for example, for [federated users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html)):** Add the role details to the `mapRoles` section of the `ConfigMap`, under `data`. Add this section if it does not already exist in the file. Each entry supports the following parameters:
     +  **rolearn**: The ARN of the IAM role to add. This value can’t include a path. For example, you can’t specify an ARN such as `arn:aws:iam::111122223333:role/my-team/developers/role-name`. The ARN needs to be `arn:aws:iam::111122223333:role/role-name` instead.
     +  **username**: The user name within Kubernetes to map to the IAM role.
     +  **groups**: The group or list of Kubernetes groups to map the role to. The group can be a default group, or a group specified in a `clusterrolebinding` or `rolebinding`. For more information, see [Default roles and role bindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings) in the Kubernetes documentation.
   +  **To add an IAM user:** [IAM best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) recommend that you grant permissions to roles instead of users. Add the user details to the `mapUsers` section of the `ConfigMap`, under `data`. Add this section if it does not already exist in the file. Each entry supports the following parameters:
     +  **userarn**: The ARN of the IAM user to add.
     +  **username**: The user name within Kubernetes to map to the IAM user.
     +  **groups**: The group, or list of Kubernetes groups to map the user to. The group can be a default group, or a group specified in a `clusterrolebinding` or `rolebinding`. For more information, see [Default roles and role bindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings) in the Kubernetes documentation.

1. For example, the following YAML block contains:
   + A `mapRoles` section that maps the IAM node instance to Kubernetes groups so that nodes can register themselves with the cluster and the `my-console-viewer-role` IAM role that is mapped to a Kubernetes group that can view all Kubernetes resources for all clusters. For a list of the IAM and Kubernetes group permissions required for the `my-console-viewer-role` IAM role, see [Required permissions](view-kubernetes-resources.md#view-kubernetes-resources-permissions).
   + A `mapUsers` section that maps the `admin` IAM user from the default AWS account to the `system:masters` Kubernetes group and the `my-user` user from a different AWS account that is mapped to a Kubernetes group that can view Kubernetes resources for a specific namespace. For a list of the IAM and Kubernetes group permissions required for the `my-user` IAM user, see [Required permissions](view-kubernetes-resources.md#view-kubernetes-resources-permissions).

     Add or remove lines as necessary and replace all example values with your own values.

     ```
     # Please edit the object below. Lines beginning with a '#' will be ignored,
     # and an empty file will abort the edit. If an error occurs while saving this file will be
     # reopened with the relevant failures.
     #
     apiVersion: v1
     data:
       mapRoles: |
         - groups:
           - system:bootstrappers
           - system:nodes
           rolearn: arn:aws:iam::111122223333:role/my-role
           username: system:node:{{EC2PrivateDNSName}}
         - groups:
           - eks-console-dashboard-full-access-group
           rolearn: arn:aws:iam::111122223333:role/my-console-viewer-role
           username: my-console-viewer-role
       mapUsers: |
         - groups:
           - system:masters
           userarn: arn:aws:iam::111122223333:user/admin
           username: admin
         - groups:
           - eks-console-dashboard-restricted-access-group
           userarn: arn:aws:iam::444455556666:user/my-user
           username: my-user
     ```

1. Save the file and exit your text editor.

## Apply the `aws-auth` `ConfigMap` to your cluster
<a name="aws-auth-configmap"></a>

The `aws-auth` `ConfigMap` is automatically created and applied to your cluster when you create a managed node group or when you create a node group using `eksctl`. It is initially created to allow nodes to join your cluster, but you also use this `ConfigMap` to add role-based access control (RBAC) access to IAM principals. If you’ve launched self-managed nodes and haven’t applied the `aws-auth` `ConfigMap` to your cluster, you can do so with the following procedure.

1. Check to see if you’ve already applied the `aws-auth` `ConfigMap`.

   ```
   kubectl describe configmap -n kube-system aws-auth
   ```

   If you receive an error stating " `Error from server (NotFound): configmaps "aws-auth" not found` ", then proceed with the following steps to apply the stock `ConfigMap`.

1. Download, edit, and apply the AWS authenticator configuration map.

   1. Download the configuration map.

      ```
      curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
      ```

   1. In the `aws-auth-cm.yaml` file, set the `rolearn` to the Amazon Resource Name (ARN) of the IAM role associated with your nodes. You can do this with a text editor, or by replacing *my-node-instance-role* and running the following command:

      ```
      sed -i.bak -e 's|<ARN of instance role (not instance profile)>|my-node-instance-role|' aws-auth-cm.yaml
      ```

      Don’t modify any other lines in this file.
**Important**  
The role ARN can’t include a path such as `role/my-team/developers/my-role`. The format of the ARN must be `arn:aws:iam::111122223333:role/my-role`. In this example, `my-team/developers/` needs to be removed.

      You can inspect the AWS CloudFormation stack outputs for your node groups and look for the following values:
      +  **InstanceRoleARN** – For node groups that were created with `eksctl` 
      +  **NodeInstanceRole** – For node groups that were created with Amazon EKS vended AWS CloudFormation templates in the AWS Management Console 

   1. Apply the configuration. This command may take a few minutes to finish.

      ```
      kubectl apply -f aws-auth-cm.yaml
      ```
**Note**  
If you receive any authorization or resource type errors, see [Unauthorized or access denied (`kubectl`)](troubleshooting.md#unauthorized) in the troubleshooting topic.

1. Watch the status of your nodes and wait for them to reach the `Ready` status.

   ```
   kubectl get nodes --watch
   ```

   Enter `Ctrl`+`C` to return to a shell prompt.

# Grant users access to Kubernetes with an external OIDC provider
<a name="authenticate-oidc-identity-provider"></a>

Amazon EKS supports using OpenID Connect (OIDC) identity providers as a method to authenticate users to your cluster. OIDC identity providers can be used with, or as an alternative to AWS Identity and Access Management (IAM). For more information about using IAM, see [Grant IAM users and roles access to Kubernetes APIs](grant-k8s-access.md). After configuring authentication to your cluster, you can create Kubernetes `roles` and `clusterroles` to assign permissions to the roles, and then bind the roles to the identities using Kubernetes `rolebindings` and `clusterrolebindings`. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
+ You can associate one OIDC identity provider to your cluster.
+ Kubernetes doesn’t provide an OIDC identity provider. You can use an existing public OIDC identity provider, or you can run your own identity provider. For a list of certified providers, see [OpenID Certification](https://openid.net/certification/) on the OpenID site.
+ The issuer URL of the OIDC identity provider must be publicly accessible, so that Amazon EKS can discover the signing keys. Amazon EKS doesn’t support OIDC identity providers with self-signed certificates.
+ You can’t disable IAM authentication to your cluster, because it’s still required for joining nodes to a cluster.
+ An Amazon EKS cluster must still be created by an AWS [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal), rather than an OIDC identity provider user. This is because the cluster creator interacts with the Amazon EKS APIs, rather than the Kubernetes APIs.
+ OIDC identity provider-authenticated users are listed in the cluster’s audit log if CloudWatch logs are turned on for the control plane. For more information, see [Enable or disable control plane logs](control-plane-logs.md#enabling-control-plane-log-export).
+ You can’t sign in to the AWS Management Console with an account from an OIDC provider. You can only [View Kubernetes resources in the AWS Management Console](view-kubernetes-resources.md) by signing into the AWS Management Console with an AWS Identity and Access Management account.

## Associate an OIDC identity provider
<a name="associate-oidc-identity-provider"></a>

Before you can associate an OIDC identity provider with your cluster, you need the following information from your provider:

 **Issuer URL**   
The URL of the OIDC identity provider that allows the API server to discover public signing keys for verifying tokens. The URL must begin with `https://` and should correspond to the `iss` claim in the provider’s OIDC ID tokens. In accordance with the OIDC standard, path components are allowed but query parameters are not. Typically the URL consists of only a host name, like `https://server.example.org` or `https://example.com`. This URL should point to the level below `.well-known/openid-configuration` and must be publicly accessible over the internet.
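
You can check that an issuer URL is publicly discoverable by fetching the discovery document it must serve. This sketch uses the example host name above; the `issuer` field in the returned JSON must exactly match the issuer URL you configure:

```
# Fetch the OIDC discovery document; the request must succeed over the public internet.
curl https://example.com/.well-known/openid-configuration
```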

 **Client ID (also known as *audience*)**   
The ID for the client application that makes authentication requests to the OIDC identity provider.

You can associate an identity provider using `eksctl` or the AWS Management Console.

### Associate an identity provider using eksctl
<a name="identity-associate-eksctl"></a>

1. Create a file named `associate-identity-provider.yaml` with the following contents. Replace the example values with your own. The values in the `identityProviders` section are obtained from your OIDC identity provider. Values are only required for the `name`, `type`, `issuerUrl`, and `clientId` settings under `identityProviders`.

   ```
   ---
   apiVersion: eksctl.io/v1alpha5
   kind: ClusterConfig
   
   metadata:
     name: my-cluster
     region: your-region-code
   
   identityProviders:
     - name: my-provider
       type: oidc
       issuerUrl: https://example.com
       clientId: kubernetes
       usernameClaim: email
       usernamePrefix: my-username-prefix
       groupsClaim: my-claim
       groupsPrefix: my-groups-prefix
       requiredClaims:
         string: string
       tags:
         env: dev
   ```
**Important**  
Don’t specify `system:`, or any portion of that string, for `groupsPrefix` or `usernamePrefix`.

1. Create the provider.

   ```
   eksctl associate identityprovider -f associate-identity-provider.yaml
   ```

1. To use `kubectl` to work with your cluster and OIDC identity provider, see [Using kubectl](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-kubectl) in the Kubernetes documentation.

### Associate an identity provider using the AWS Console
<a name="identity-associate-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Select your cluster, and then select the **Access** tab.

1. In the **OIDC Identity Providers** section, select **Associate Identity Provider**.

1. On the **Associate OIDC Identity Provider** page, enter or select the following options, and then select **Associate**.
   + For **Name**, enter a unique name for the provider.
   + For **Issuer URL**, enter the URL for your provider. This URL must be accessible over the internet.
   + For **Client ID**, enter the OIDC identity provider’s client ID (also known as **audience**).
   + For **Username claim**, enter the claim to use as the username.
   + For **Groups claim**, enter the claim to use as the user’s group.
   + (Optional) Select **Advanced options**, enter or select the following information.
     +  **Username prefix** – Enter a prefix to prepend to username claims. The prefix is prepended to username claims to prevent clashes with existing names. If you don’t provide a value and the username claim is a value other than `email`, the prefix defaults to the value for **Issuer URL**. You can use the value `-` to disable all prefixing. Don’t specify `system:` or any portion of that string.
     +  **Groups prefix** – Enter a prefix to prepend to groups claims. The prefix is prepended to group claims to prevent clashes with existing names (such as `system:` groups). For example, the value `oidc:` creates group names like `oidc:engineering` and `oidc:infra`. Don’t specify `system:` or any portion of that string.
     +  **Required claims** – Select **Add claim** and enter one or more key-value pairs that describe required claims in the client ID token. If set, each claim is verified to be present in the ID token with a matching value.

1. To use `kubectl` to work with your cluster and OIDC identity provider, see [Using kubectl](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-kubectl) in the Kubernetes documentation.

## Example IAM policy
<a name="oidc-identity-provider-iam-policy"></a>

If you want to prevent an OIDC identity provider from being associated with a cluster, create and associate the following IAM policy to the IAM accounts of your Amazon EKS administrators. For more information, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and [Adding IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console) in the *IAM User Guide* and [Actions](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelasticcontainerserviceforkubernetes.html) in the Service Authorization Reference.

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "denyOIDC",
            "Effect": "Deny",
            "Action": [
                "eks:AssociateIdentityProviderConfig"
            ],
            "Resource": "arn:aws:eks:us-west-2:111122223333:cluster/*"

        },
        {
            "Sid": "eksAdmin",
            "Effect": "Allow",
            "Action": [
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}
```

The following example policy allows OIDC identity provider association only if the `clientId` is `kubernetes` and the `issuerUrl` matches `https://cognito-idp.us-west-2.amazonaws.com/*`.

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowCognitoOnly",
            "Effect": "Deny",
            "Action": "eks:AssociateIdentityProviderConfig",
            "Resource": "arn:aws:eks:us-west-2:111122223333:cluster/my-instance",
            "Condition": {
                "StringNotLikeIfExists": {
                    "eks:issuerUrl": "https://cognito-idp.us-west-2.amazonaws.com/*"
                }
            }
        },
        {
            "Sid": "DenyOtherClients",
            "Effect": "Deny",
            "Action": "eks:AssociateIdentityProviderConfig",
            "Resource": "arn:aws:eks:us-west-2:111122223333:cluster/my-instance",
            "Condition": {
                "StringNotEquals": {
                    "eks:clientId": "kubernetes"
                }
            }
        },
        {
            "Sid": "AllowOthers",
            "Effect": "Allow",
            "Action": "eks:*",
            "Resource": "*"
        }
    ]
}
```

# Disassociate an OIDC identity provider from your cluster
<a name="disassociate-oidc-identity-provider"></a>

If you disassociate an OIDC identity provider from your cluster, users included in the provider can no longer access the cluster. However, you can still access the cluster with [IAM principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal).

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the **OIDC Identity Providers** section, select **Disassociate**, enter the identity provider name, and then select **Disassociate**.
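You can also disassociate the provider with the AWS CLI. A sketch, assuming the provider was associated under the placeholder name `my-provider`:

```
aws eks disassociate-identity-provider-config \
    --cluster-name my-cluster \
    --identity-provider-config type=oidc,name=my-provider
```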

# View Kubernetes resources in the AWS Management Console
<a name="view-kubernetes-resources"></a>

You can view the Kubernetes resources deployed to your cluster with the AWS Management Console. You can’t view Kubernetes resources with the AWS CLI or [eksctl](https://eksctl.io/). To view Kubernetes resources using a command-line tool, use [kubectl](install-kubectl.md).

**Note**  
To view the **Resources** tab and **Nodes** section on the **Compute** tab in the AWS Management Console, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that you’re using must have specific IAM and Kubernetes permissions. For more information, see [Required permissions](#view-kubernetes-resources-permissions).

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the **Clusters** list, select the cluster that contains the Kubernetes resources that you want to view.

1. Select the **Resources** tab.

1. Select a **Resource type** group that you want to view resources for, such as **Workloads**. You see a list of resource types in that group.

1. Select a resource type, such as **Deployments**, in the **Workloads** group. You see a description of the resource type, a link to the Kubernetes documentation for more information about the resource type, and a list of resources of that type that are deployed on your cluster. If the list is empty, then there are no resources of that type deployed to your cluster.

1. Select a resource to view more information about it. Try the following examples:
   + Select the **Workloads** group, select the **Deployments** resource type, and then select the **coredns** resource. When you select a resource, you see **Structured view** by default. For some resource types, a **Pods** section appears in **Structured view**. This section lists the Pods managed by the workload. You can select any listed Pod to view information about it. Not all resource types display information in **Structured view**. If you select **Raw view** in the top right corner of the page for the resource, you see the complete JSON response from the Kubernetes API for the resource.
   + Select the **Cluster** group and then select the **Nodes** resource type. You see a list of all nodes in your cluster. The nodes can be any [Amazon EKS node type](eks-compute.md). This is the same list that you see in the **Nodes** section when you select the **Compute** tab for your cluster. Select a node resource from the list. In **Structured view**, you also see a **Pods** section. This section shows you all Pods running on the node.

## Required permissions
<a name="view-kubernetes-resources-permissions"></a>

To view the **Resources** tab and **Nodes** section on the **Compute** tab in the AWS Management Console, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that you’re using must have specific minimum IAM and Kubernetes RBAC permissions, and both must be configured correctly. Complete the following steps to assign the required permissions to your IAM principals.

1. Make sure that `eks:AccessKubernetesApi` and the other IAM permissions needed to view Kubernetes resources are assigned to the IAM principal that you’re using. For more information about how to edit permissions for an IAM principal, see [Controlling access for principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_controlling.html#access_controlling-principals) in the *IAM User Guide*. For more information about how to edit permissions for a role, see [Modifying a role permissions policy (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-managingrole-editing-console.html#roles-modify_permissions-policy) in the *IAM User Guide*.

   The following example policy includes the necessary permissions for a principal to view Kubernetes resources for all clusters in your account. Replace *111122223333* with your AWS account ID.

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "eks:ListFargateProfiles",
                   "eks:DescribeNodegroup",
                   "eks:ListNodegroups",
                   "eks:ListUpdates",
                   "eks:AccessKubernetesApi",
                   "eks:ListAddons",
                   "eks:DescribeCluster",
                   "eks:DescribeAddonVersions",
                   "eks:ListClusters",
                   "eks:ListIdentityProviderConfigs",
                   "iam:ListRoles"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": "ssm:GetParameter",
               "Resource": "arn:aws:ssm:*:111122223333:parameter/*"
           }
       ]
   }
   ```

   To view nodes in [connected clusters](eks-connector.md), the [Amazon EKS connector IAM role](connector-iam-role.md) should be able to impersonate the principal in the cluster. This allows the [Amazon EKS Connector](eks-connector.md) to map the principal to a Kubernetes user.

1. Configure Kubernetes RBAC permissions using EKS access entries.

    **What are EKS Access Entries?** 

   EKS access entries are a streamlined way to grant IAM principals (users and roles) access to your Kubernetes cluster. Instead of manually managing Kubernetes RBAC resources and the `aws-auth` ConfigMap, access entries automatically handle the mapping between IAM and Kubernetes permissions using managed policies provided by AWS. For detailed information about access entries, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md). For information about available access policies and their permissions, see [Access policy permissions](https://docs.aws.amazon.com/eks/latest/userguide/access-policy-permissions.html).

   You can attach Kubernetes permissions to access entries in two ways:
   +  **Use an access policy:** Access policies are pre-defined Kubernetes permissions templates maintained by AWS. These provide standardized permission sets for common use cases.
   +  **Reference a Kubernetes group:** If you associate an IAM identity with a Kubernetes group, you can create Kubernetes resources that grant the group permissions. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.

     1. Create an access entry for your IAM principal using the AWS CLI. Replace *my-cluster* with the name of your cluster. Replace *111122223333* with your account ID.

        ```
        aws eks create-access-entry \
            --cluster-name my-cluster \
            --principal-arn arn:aws:iam::111122223333:role/my-console-viewer-role
        ```

        An example output is as follows.

        ```
        {
            "accessEntry": {
                "clusterName": "my-cluster",
                "principalArn": "arn:aws:iam::111122223333:role/my-console-viewer-role",
                "kubernetesGroups": [],
                "accessEntryArn": "arn:aws:eks:region-code:111122223333:access-entry/my-cluster/role/111122223333/my-console-viewer-role/abc12345-1234-1234-1234-123456789012",
                "createdAt": "2024-03-15T10:30:45.123000-07:00",
                "modifiedAt": "2024-03-15T10:30:45.123000-07:00",
                "tags": {},
                "username": "arn:aws:iam::111122223333:role/my-console-viewer-role",
                "type": "STANDARD"
            }
        }
        ```

     1. Associate a policy with the access entry. For viewing Kubernetes resources, use the `AmazonEKSViewPolicy`:

        ```
        aws eks associate-access-policy \
            --cluster-name my-cluster \
            --principal-arn arn:aws:iam::111122223333:role/my-console-viewer-role \
            --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
            --access-scope type=cluster
        ```

        An example output is as follows.

        ```
        {
            "clusterName": "my-cluster",
            "principalArn": "arn:aws:iam::111122223333:role/my-console-viewer-role",
            "associatedAt": "2024-03-15T10:31:15.456000-07:00"
        }
        ```

        For namespace-specific access, you can scope the policy to specific namespaces:

        ```
        aws eks associate-access-policy \
            --cluster-name my-cluster \
            --principal-arn arn:aws:iam::111122223333:role/my-console-viewer-role \
            --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
            --access-scope type=namespace,namespaces=default,kube-system
        ```

     1. Verify the access entry was created successfully:

        ```
        aws eks describe-access-entry \
            --cluster-name my-cluster \
            --principal-arn arn:aws:iam::111122223333:role/my-console-viewer-role
        ```

     1. List the associated policies to confirm the policy association:

        ```
        aws eks list-associated-access-policies \
            --cluster-name my-cluster \
            --principal-arn arn:aws:iam::111122223333:role/my-console-viewer-role
        ```

        An example output is as follows.

        ```
        {
            "associatedAccessPolicies": [
                {
                    "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy",
                    "accessScope": {
                        "type": "cluster"
                    },
                    "associatedAt": "2024-03-15T10:31:15.456000-07:00",
                    "modifiedAt": "2024-03-15T10:31:15.456000-07:00"
                }
            ]
        }
        ```

## CloudTrail visibility
<a name="cloudtrail-visibility"></a>

When viewing Kubernetes resources, you will see the following operation name in your CloudTrail logs:
+  `AccessKubernetesApi` - When reading or viewing resources

This CloudTrail event provides an audit trail of read access to your Kubernetes resources.

**Note**  
This operation name appears in CloudTrail logs for auditing purposes only. It is not an IAM action and cannot be used in IAM policy statements. To control read access to Kubernetes resources through IAM policies, use the `eks:AccessKubernetesApi` permission as shown in the [Required permissions](#view-kubernetes-resources-permissions) section.
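If you want to audit this read access from the command line, one option is a CloudTrail event lookup by event name. A sketch, assuming CloudTrail event history is available in the Region:

```
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=AccessKubernetesApi \
    --max-results 10
```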

# Grant AWS services write access to Kubernetes APIs
<a name="mutate-kubernetes-resources"></a>

## Required permissions
<a name="mutate-kubernetes-resources-permissions"></a>

To enable AWS services to perform write operations on Kubernetes resources in your Amazon EKS cluster, you must grant both the `eks:AccessKubernetesApi` and `eks:MutateViaKubernetesApi` IAM permissions.

For example, Amazon SageMaker HyperPod uses these permissions to enable model deployment from SageMaker AI Studio. For more information, see [Set up optional JavaScript SDK permissions](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-model-deployment-setup.html#sagemaker-hyperpod-model-deployment-setup-optional-js) in the Amazon SageMaker AI Developer Guide.

**Important**  
Write operations such as create, update, and delete require both permissions. If either permission is missing, write operations fail.
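A minimal identity-based policy granting both permissions for a single cluster could look like the following sketch. The Region, account ID, and cluster name are placeholders; scope the `Resource` element to your own cluster ARN:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKubernetesApiWrites",
            "Effect": "Allow",
            "Action": [
                "eks:AccessKubernetesApi",
                "eks:MutateViaKubernetesApi"
            ],
            "Resource": "arn:aws:eks:us-west-2:111122223333:cluster/my-cluster"
        }
    ]
}
```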

## CloudTrail visibility
<a name="cloudtrail-visibility"></a>

When you perform write operations on Kubernetes resources, you see the following operation names in your CloudTrail logs:
+  `createKubernetesObject` - When creating new resources
+  `updateKubernetesObject` - When modifying existing resources
+  `deleteKubernetesObject` - When removing resources

These CloudTrail events provide detailed audit trails of all modifications made to your Kubernetes resources.

**Note**  
These operation names appear in CloudTrail logs for auditing purposes only. They are not IAM actions and cannot be used in IAM policy statements. To control write access to Kubernetes resources through IAM policies, use the `eks:MutateViaKubernetesApi` permission as shown in the [Required permissions](#mutate-kubernetes-resources-permissions) section.

# Connect kubectl to an EKS cluster by creating a kubeconfig file
<a name="create-kubeconfig"></a>

**Tip**  
 [Register](https://aws-experience.com/emea/smb/events/series/get-hands-on-with-amazon-eks?trk=4a9b4147-2490-4c63-bc9f-f8a84b122c8c&sc_channel=el) for upcoming Amazon EKS workshops.

In this topic, you create a `kubeconfig` file for your cluster (or update an existing one).

The `kubectl` command-line tool uses configuration information in `kubeconfig` files to communicate with the API server of a cluster. For more information, see [Organizing Cluster Access Using kubeconfig Files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) in the Kubernetes documentation.

Amazon EKS uses the `aws eks get-token` command with `kubectl` for cluster authentication. By default, the AWS CLI uses the same credentials that are returned with the following command:

```
aws sts get-caller-identity
```
+ An existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md).
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ An IAM user or role with permission to use the `eks:DescribeCluster` API action for the cluster that you specify. For more information, see [Amazon EKS identity-based policy examples](security-iam-id-based-policy-examples.md). If you use an identity from your own OpenID Connect provider to access your cluster, then see [Using kubectl](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-kubectl) in the Kubernetes documentation to create or update your `kubeconfig` file.

## Create `kubeconfig` file automatically
<a name="create-kubeconfig-automatically"></a>
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ Permission to use the `eks:DescribeCluster` API action for the cluster that you specify. For more information, see [Amazon EKS identity-based policy examples](security-iam-id-based-policy-examples.md).

  1. Create or update a `kubeconfig` file for your cluster. Replace *region-code* with the AWS Region that your cluster is in and replace *my-cluster* with the name of your cluster.

     ```
     aws eks update-kubeconfig --region region-code --name my-cluster
     ```

     By default, the resulting configuration file is created at the default `kubeconfig` path (`.kube/config`) in your home directory or merged with an existing `config` file at that location. You can specify another path with the `--kubeconfig` option.

     You can specify an IAM role ARN with the `--role-arn` option to use for authentication when you issue `kubectl` commands. Otherwise, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) in your default AWS CLI or SDK credential chain is used. You can view your default AWS CLI or SDK identity by running the `aws sts get-caller-identity` command.

     For all available options, run the `aws eks update-kubeconfig help` command or see [update-kubeconfig](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html) in the *AWS CLI Command Reference*.
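     For example, the following sketch writes the configuration to an alternate file and pins authentication to a specific role. The role ARN and file path are placeholders:

     ```
     aws eks update-kubeconfig \
         --region region-code \
         --name my-cluster \
         --role-arn arn:aws:iam::111122223333:role/my-cluster-admin-role \
         --kubeconfig ~/my-cluster-config
     ```

     When you use a non-default path, set the `KUBECONFIG` environment variable or pass `--kubeconfig` to `kubectl` so it can find the file.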

  1. Test your configuration.

     ```
     kubectl get svc
     ```

     An example output is as follows.

     ```
     NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
     svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m
     ```

     If you receive any authorization or resource type errors, see [Unauthorized or access denied (`kubectl`)](troubleshooting.md#unauthorized) in the troubleshooting topic.

# Grant Kubernetes workloads access to AWS using Kubernetes Service Accounts
<a name="service-accounts"></a>[Managing Service Accounts](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin)[IAM roles for service accounts](iam-roles-for-service-accounts.md)[Learn how EKS Pod Identity grants pods access to AWS services](pod-identities.md)

## Service account tokens
<a name="service-account-tokens"></a>

The [BoundServiceAccountTokenVolume](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume) feature is enabled by default in current Kubernetes versions. This feature improves the security of service account tokens by allowing workloads running on Kubernetes to request JSON web tokens that are audience, time, and key bound. Service account tokens expire after one hour. In earlier Kubernetes versions, the tokens didn’t have an expiration. This means that clients that rely on these tokens must refresh them within an hour. The following [Kubernetes client SDKs](https://kubernetes.io/docs/reference/using-api/client-libraries/) refresh tokens automatically within the required time frame:
+ Go version `0.15.7` and later
+ Python version `12.0.0` and later
+ Java version `9.0.0` and later
+ JavaScript version `0.10.3` and later
+ Ruby `master` branch
+ Haskell version `0.3.0.0` 
+ C# version `7.0.5` and later

If your workload is using an earlier client version, then you must update it. To enable a smooth migration of clients to the newer time-bound service account tokens, Kubernetes adds an extended expiry period to the service account token over the default one hour. For Amazon EKS clusters, the extended expiry period is 90 days. Your Amazon EKS cluster’s Kubernetes API server rejects requests with tokens that are greater than 90 days old. We recommend that you check your applications and their dependencies to make sure that the Kubernetes client SDKs are the same or later than the versions listed previously.

When the API server receives requests with tokens that are greater than one hour old, it annotates the API audit log event with `annotations.authentication.k8s.io/stale-token`. The value of the annotation looks like the following example:

```
subject: system:serviceaccount:common:fluent-bit, seconds after warning threshold: 4185802.
```

If your cluster has [control plane logging](control-plane-logs.md) enabled, then the annotations are in the audit logs. You can use the following [CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) query to identify all the Pods in your Amazon EKS cluster that are using stale tokens:

```
fields @timestamp
|filter @logStream like /kube-apiserver-audit/
|filter @message like /seconds after warning threshold/
|parse @message "subject: *, seconds after warning threshold:*\"" as subject, elapsedtime
```

The `subject` refers to the service account that the Pod used. The `elapsedtime` indicates the elapsed time (in seconds) since the latest token was read. Requests to the API server are denied when `elapsedtime` exceeds 90 days (7,776,000 seconds). You should proactively update your applications’ Kubernetes client SDKs to one of the versions listed previously, which refresh the token automatically. If the service account token in use is close to 90 days old and you don’t have sufficient time to update your client SDK versions before it expires, you can terminate existing Pods and create new ones. This refetches the service account token, giving you an additional 90 days to update your client SDK versions.

If the Pod is part of a deployment, the suggested way to terminate Pods while keeping high availability is to perform a roll out with the following command. Replace *my-deployment* with the name of your deployment.

```
kubectl rollout restart deployment/my-deployment
```

## Cluster add-ons
<a name="boundserviceaccounttoken-validated-add-on-versions"></a>

The following cluster add-ons have been updated to use the Kubernetes client SDKs that automatically refetch service account tokens. We recommend making sure that the listed versions, or later versions, are installed on your cluster.
+ Amazon VPC CNI plugin for Kubernetes and metrics helper plugins version `1.8.0` and later. To check your current version or update it, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md) and [cni-metrics-helper](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/cni-metrics-helper/README.md).
+ CoreDNS version `1.8.4` and later. To check your current version or update it, see [Manage CoreDNS for DNS in Amazon EKS clusters](managing-coredns.md).
+  AWS Load Balancer Controller version `2.0.0` and later. To check your current version or update it, see [Route internet traffic with AWS Load Balancer Controller](aws-load-balancer-controller.md).
+ A current `kube-proxy` version. To check your current version or update it, see [Manage `kube-proxy` in Amazon EKS clusters](managing-kube-proxy.md).
+  AWS for Fluent Bit version `2.25.0` or later. To update your current version, see [Releases](https://github.com/aws/aws-for-fluent-bit/releases) on GitHub.
+ Fluentd image version [1.14.6-1.2](https://hub.docker.com/r/fluent/fluentd/tags?page=1&name=v1.14.6-1.2) or later and Fluentd filter plugin for Kubernetes metadata version [2.11.1](https://rubygems.org/gems/fluent-plugin-kubernetes_metadata_filter/versions/2.11.1) or later.

## Granting AWS Identity and Access Management permissions to workloads on Amazon Elastic Kubernetes Service clusters
<a name="service-accounts-iam"></a>

Amazon EKS provides two ways to grant AWS Identity and Access Management permissions to workloads that run in Amazon EKS clusters: *IAM roles for service accounts*, and *EKS Pod Identities*.

 **IAM roles for service accounts**   
 *IAM roles for service accounts (IRSA)* configures Kubernetes applications running on AWS with fine-grained IAM permissions to access various other AWS resources such as Amazon S3 buckets, Amazon DynamoDB tables, and more. You can run multiple applications together in the same Amazon EKS cluster, and ensure that each application has only the minimum set of permissions that it needs. IRSA was built to support the various Kubernetes deployment options supported by AWS, such as Amazon EKS, Amazon EKS Anywhere, Red Hat OpenShift Service on AWS, and self-managed Kubernetes clusters on Amazon EC2 instances. Thus, IRSA was built on foundational AWS services such as IAM, and doesn’t take a direct dependency on the Amazon EKS service or the EKS API. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md).

 **EKS Pod Identities**   
EKS Pod Identity offers cluster administrators a simplified workflow for authenticating applications to access various other AWS resources such as Amazon S3 buckets, Amazon DynamoDB tables, and more. EKS Pod Identity is for EKS only, and as a result, it simplifies how cluster administrators configure Kubernetes applications to obtain IAM permissions. These permissions can be configured with fewer steps, directly through the AWS Management Console, EKS API, and AWS CLI, and there isn’t any action to take inside the cluster on any Kubernetes objects. Cluster administrators don’t need to switch between the EKS and IAM services, or use privileged IAM operations to configure the permissions that your applications require. IAM roles can be used across multiple clusters without the need to update the role trust policy when creating new clusters. IAM credentials supplied by EKS Pod Identity include role session tags, with attributes such as cluster name, namespace, and service account name. Role session tags enable administrators to author a single role that can work across service accounts by allowing access to AWS resources based on matching tags. For more information, see [Learn how EKS Pod Identity grants pods access to AWS services](pod-identities.md).
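As a concrete sketch of this workflow, a Pod Identity association is a single CLI call. The cluster, namespace, service account, and role names below are placeholders, and the role's trust policy must already trust the `pods.eks.amazonaws.com` service principal:

```
aws eks create-pod-identity-association \
    --cluster-name my-cluster \
    --namespace default \
    --service-account my-service-account \
    --role-arn arn:aws:iam::111122223333:role/my-pod-identity-role
```

After the association exists, Pods that use the service account receive the role's credentials automatically through the AWS SDK credential chain.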

### Comparing EKS Pod Identity and IRSA
<a name="service-accounts-iam-compare"></a>

At a high level, both EKS Pod Identity and IRSA enable you to grant IAM permissions to applications running on Kubernetes clusters, but they are fundamentally different in how you configure them, the limits supported, and the features enabled. Below, we compare some of the key facets of both solutions.

**Note**  
 AWS recommends using EKS Pod Identities to grant access to AWS resources to your pods whenever possible. For more information, see [Learn how EKS Pod Identity grants pods access to AWS services](pod-identities.md).


| Attribute | EKS Pod Identity | IRSA | 
| --- | --- | --- | 
|  Role extensibility  |  You have to set up each role once to establish trust with the Amazon EKS service principal `pods.eks.amazonaws.com`. After this one-time step, you don’t need to update the role’s trust policy each time that it is used in a new cluster.  |  You have to update the IAM role’s trust policy with the new EKS cluster OIDC provider endpoint each time that you want to use the role in a new cluster.  | 
|  Cluster scalability  |  EKS Pod Identity doesn’t require you to set up an IAM OIDC provider, so this limit doesn’t apply.  |  Each EKS cluster has an OpenID Connect (OIDC) issuer URL associated with it. To use IRSA, a unique OIDC provider needs to be created in IAM for each EKS cluster. IAM has a default global limit of 100 OIDC providers for each AWS account. If you plan to have more than 100 EKS clusters in an AWS account with IRSA, you will reach the IAM OIDC provider limit.  | 
|  Role scalability  |  EKS Pod Identity doesn’t require you to define the trust relationship between an IAM role and a service account in the trust policy, so this limit doesn’t apply.  |  In IRSA, you define the trust relationship between an IAM role and a service account in the role’s trust policy. By default, the trust policy size limit is `2048` characters. This means that you can typically define 4 trust relationships in a single trust policy. While you can request an increase to the trust policy size limit, you are typically limited to a maximum of 8 trust relationships within a single trust policy.  | 
|  STS API Quota Usage  |  EKS Pod Identity simplifies delivery of AWS credentials to your pods, and doesn’t require your code to call the AWS Security Token Service (AWS STS) directly. The EKS service handles role assumption and delivers credentials to applications written with the AWS SDK in your pods, without your pods communicating with AWS STS or using STS API quota.  |  In IRSA, applications written with the AWS SDK in your pods use tokens to call the `AssumeRoleWithWebIdentity` API on the AWS Security Token Service (STS). Depending on the logic of your code on the AWS SDK, it is possible for your code to make unnecessary calls to AWS STS and receive throttling errors.  | 
|  Role reusability  |  AWS STS temporary credentials supplied by EKS Pod Identity include role session tags, such as cluster name, namespace, and service account name. Role session tags enable administrators to author a single IAM role that can be used with multiple service accounts, with different effective permissions, by allowing access to AWS resources based on tags attached to them. This is also called attribute-based access control (ABAC). For more information, see [Grant Pods access to AWS resources based on tags](pod-id-abac.md).  |  AWS STS session tags are not supported. You can reuse a role between clusters, but every pod receives all of the permissions of the role.  | 
|  Environments supported  |  EKS Pod Identity is only available on Amazon EKS.  |  IRSA can be used with Amazon EKS, Amazon EKS Anywhere, Red Hat OpenShift Service on AWS, and self-managed Kubernetes clusters on Amazon EC2 instances.  | 
|  EKS versions supported  |  All of the supported EKS cluster versions. For the specific platform versions, see [EKS Pod Identity cluster versions](pod-identities.md#pod-id-cluster-versions).  |  All of the supported EKS cluster versions.  | 

# IAM roles for service accounts
<a name="iam-roles-for-service-accounts"></a>


Applications in a Pod’s containers can use an AWS SDK or the AWS CLI to make API requests to AWS services using AWS Identity and Access Management (IAM) permissions. Applications must sign their AWS API requests with AWS credentials. **IAM roles for service accounts (IRSA)** provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance’s role, you associate an IAM role with a Kubernetes service account and configure your Pods to use the service account. You can’t use IAM roles for service accounts with [local clusters for Amazon EKS on AWS Outposts](eks-outposts-local-cluster-overview.md).

IAM roles for service accounts provide the following benefits:
+  **Least privilege** – You can scope IAM permissions to a service account, and only Pods that use that service account have access to those permissions. This feature also eliminates the need for third-party solutions such as `kiam` or `kube2iam`.
+  **Credential isolation** – When access to the [Amazon EC2 Instance Metadata Service (IMDS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html) is restricted, a Pod’s containers can only retrieve credentials for the IAM role that’s associated with the service account that the container uses. A container never has access to credentials that are used by other containers in other Pods. If IMDS is not restricted, the Pod’s containers also have access to the [Amazon EKS node IAM role](create-node-role.md) and the containers may be able to gain access to credentials of IAM roles of other Pods on the same node. For more information, see [Restrict access to the instance profile assigned to the worker node](https://docs.aws.amazon.com/eks/latest/best-practices/identity-and-access-management.html#_identities_and_credentials_for_eks_pods_recommendations).

**Note**  
Pods configured with `hostNetwork: true` will always have IMDS access, but the AWS SDKs and CLI will use IRSA credentials when enabled.
+  **Auditability** – Access and event logging is available through AWS CloudTrail to help ensure retrospective auditing.

**Important**  
Containers are not a security boundary, and the use of IAM roles for service accounts does not change this. Pods assigned to the same node will share a kernel and potentially other resources depending on your Pod configuration. While Pods running on separate nodes will be isolated at the compute layer, there are node applications that have additional permissions in the Kubernetes API beyond the scope of an individual instance. Some examples are `kubelet`, `kube-proxy`, CSI storage drivers, or your own Kubernetes applications.

Enable IAM roles for service accounts by completing the following procedures:

1.  [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md) – You only complete this procedure once for each cluster.
**Note**  
If you enabled the EKS VPC endpoint, the EKS OIDC service endpoint can’t be accessed from inside that VPC. Consequently, operations such as creating an OIDC provider with `eksctl` from within the VPC don’t work, and result in a timeout when attempting to request `https://oidc.eks.region.amazonaws.com`. An example error message follows:  

   ```
   server can't find oidc.eks.region.amazonaws.com: NXDOMAIN
   ```
To complete this step, you can run the command outside the VPC, for example in AWS CloudShell or on a computer connected to the internet. Alternatively, you can create a split-horizon conditional resolver in the VPC, such as Route 53 Resolver, to use a different resolver for the OIDC issuer URL instead of the VPC DNS. For an example of conditional forwarding in CoreDNS, see the [Amazon EKS feature request](https://github.com/aws/containers-roadmap/issues/2038) on GitHub.

1.  [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md) – Complete this procedure for each unique set of permissions that you want an application to have.

1.  [Configure Pods to use a Kubernetes service account](pod-configuration.md) – Complete this procedure for each Pod that needs access to AWS services.

1.  [Use IRSA with the AWS SDK](iam-roles-for-service-accounts-minimum-sdk.md) – Confirm that the workload uses an AWS SDK of a supported version and that the workload uses the default credential chain.

## IAM, Kubernetes, and OpenID Connect (OIDC) background information
<a name="irsa-oidc-background"></a>

In 2014, AWS Identity and Access Management added support for federated identities using OpenID Connect (OIDC). This feature allows you to authenticate AWS API calls with supported identity providers and receive a valid OIDC JSON web token (JWT). You can pass this token to the AWS STS `AssumeRoleWithWebIdentity` API operation and receive IAM temporary role credentials. You can use these credentials to interact with any AWS service, including Amazon S3 and DynamoDB.

Each JWT is signed by a signing key pair. The keys are served on the OIDC provider managed by Amazon EKS, and the private key rotates every 7 days. Amazon EKS keeps the public keys until they expire. If you connect external OIDC clients, be aware that you need to refresh the signing keys before the public key expires. Learn how to [Fetch signing keys to validate OIDC tokens](irsa-fetch-keys.md).

Kubernetes has long used service accounts as its own internal identity system. Pods can authenticate with the Kubernetes API server using an auto-mounted token (which was a non-OIDC JWT) that only the Kubernetes API server could validate. These legacy service account tokens don’t expire, and rotating the signing key is a difficult process. In Kubernetes version `1.12`, support was added for a new `ProjectedServiceAccountToken` feature. This feature is an OIDC JSON web token that also contains the service account identity and supports a configurable audience.
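
These projected tokens are ordinary JWTs: three base64url-encoded segments separated by dots. As a sketch, you can inspect one by decoding its payload segment. The sample payload below is illustrative; on a live Pod you would read the mounted token file instead.

```
# Build an illustrative token from a sample payload (a real token's payload
# carries the same iss/aud/sub claims, plus expiry fields).
payload='{"iss":"https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE","aud":["sts.amazonaws.com"],"sub":"system:serviceaccount:default:my-service-account"}'
token="header.$(printf '%s' "$payload" | base64 | tr -d '=\n' | tr '+/' '-_').signature"

# Extract the middle (payload) segment and restore base64 padding before decoding.
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#seg} % 4 )) in 2) seg="${seg}==" ;; 3) seg="${seg}=" ;; esac
decoded=$(printf '%s' "$seg" | base64 -d)
echo "$decoded"
```

The `sub` claim is what IAM matches against the trust policy condition when the token is exchanged for credentials.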

Amazon EKS hosts a public OIDC discovery endpoint for each cluster that contains the signing keys for the `ProjectedServiceAccountToken` JSON web tokens so external systems, such as IAM, can validate and accept the OIDC tokens that are issued by Kubernetes.
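
The discovery endpoint follows the standard OIDC layout relative to the issuer URL. A sketch of the two paths involved (the issuer URL shown is a placeholder; retrieve yours with `aws eks describe-cluster --query cluster.identity.oidc.issuer`, and the key-set path is the one the discovery document advertises as `jwks_uri`):

```
# The cluster's OIDC issuer URL (placeholder value).
issuer="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

discovery_url="$issuer/.well-known/openid-configuration"  # provider metadata
jwks_url="$issuer/keys"                                   # signing keys (JWKS)
echo "$discovery_url"
echo "$jwks_url"
```

External verifiers such as IAM fetch the JWKS from this endpoint to validate token signatures.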

# Create an IAM OIDC provider for your cluster
<a name="enable-iam-roles-for-service-accounts"></a>

Your cluster has an [OpenID Connect](https://openid.net/connect/) (OIDC) issuer URL associated with it. To use AWS Identity and Access Management (IAM) roles for service accounts, an IAM OIDC provider must exist for your cluster’s OIDC issuer URL.

## Prerequisites
<a name="_prerequisites"></a>
+ An existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md).
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ An existing `kubectl` `config` file that contains your cluster configuration. To create a `kubectl` `config` file, see [Connect kubectl to an EKS cluster by creating a kubeconfig file](create-kubeconfig.md).

You can create an IAM OIDC provider for your cluster using `eksctl` or the AWS Management Console.

## Create OIDC provider (eksctl)
<a name="_create_oidc_provider_eksctl"></a>

1. Version `0.215.0` or later of the `eksctl` command line tool installed on your device or AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

1. Determine the OIDC issuer ID for your cluster.

   Retrieve your cluster’s OIDC issuer ID and store it in a variable. Replace `<my-cluster>` with your own value.

   ```
   cluster_name=<my-cluster>
   oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
   echo $oidc_id
   ```
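
   The `cut` in the previous command selects the fifth `/`-separated field of the issuer URL, which is the issuer ID. A quick sanity check of that logic against a placeholder URL:

   ```
   # Sample issuer URL (placeholder value) and the same field extraction
   # used in the step above.
   sample_issuer="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
   sample_id=$(echo "$sample_issuer" | cut -d '/' -f 5)
   echo "$sample_id"
   ```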

1. Determine whether an IAM OIDC provider with your cluster’s issuer ID is already in your account.

   ```
   aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4
   ```

   If output is returned, then you already have an IAM OIDC provider for your cluster and you can skip the next step. If no output is returned, then you must create an IAM OIDC provider for your cluster.

1. Create an IAM OIDC identity provider for your cluster with the following command.

   ```
   eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve
   ```
**Note**  
If you enabled the EKS VPC endpoint, the EKS OIDC service endpoint can’t be accessed from inside that VPC. Consequently, operations such as creating an OIDC provider with `eksctl` from within the VPC don’t work, and result in a timeout. An example error message follows:  

   ```
   server can't find oidc.eks.<region-code>.amazonaws.com: NXDOMAIN
   ```

   To complete this step, you can run the command outside the VPC, for example in AWS CloudShell or on a computer connected to the internet. Alternatively, you can create a split-horizon conditional resolver in the VPC, such as Route 53 Resolver, to use a different resolver for the OIDC issuer URL instead of the VPC DNS. For an example of conditional forwarding in CoreDNS, see the [Amazon EKS feature request](https://github.com/aws/containers-roadmap/issues/2038) on GitHub.

## Create OIDC provider (AWS Console)
<a name="create_oidc_provider_shared_aws_console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left pane, select **Clusters**, and then select the name of your cluster on the **Clusters** page.

1. In the **Details** section on the **Overview** tab, note the value of the **OpenID Connect provider URL**.

1. Open the IAM console at https://console.aws.amazon.com/iam/.

1. In the left navigation pane, choose **Identity Providers** under **Access management**. If a **Provider** is listed that matches the URL for your cluster, then you already have a provider for your cluster. If a provider isn’t listed that matches the URL for your cluster, then you must create one.

1. To create a provider, choose **Add provider**.

1. For **Provider type**, select **OpenID Connect**.

1. For **Provider URL**, enter the OIDC provider URL for your cluster.

1. For **Audience**, enter `sts.amazonaws.com`.

1. (Optional) Add any tags, for example a tag to identify which cluster this provider is for.

1. Choose **Add provider**.

Next step: [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md) 

# Assign IAM roles to Kubernetes service accounts
<a name="associate-service-account-role"></a>

This topic covers how to configure a Kubernetes service account to assume an AWS Identity and Access Management (IAM) role. Any Pods that are configured to use the service account can then access any AWS service that the role has permissions to access.

## Prerequisites
<a name="_prerequisites"></a>
+ An existing cluster. If you don’t have one, you can create one by following one of the guides in [Get started with Amazon EKS](getting-started.md).
+ An existing IAM OpenID Connect (OIDC) provider for your cluster. To learn if you already have one or how to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ An existing `kubectl` `config` file that contains your cluster configuration. To create a `kubectl` `config` file, see [Connect kubectl to an EKS cluster by creating a kubeconfig file](create-kubeconfig.md).

## Step 1: Create IAM Policy
<a name="irsa-associate-role-procedure"></a>

If you want to associate an existing IAM policy with your IAM role, skip to the next step.

1. Create an IAM policy. You can create your own policy, or copy an AWS managed policy that already grants some of the permissions that you need and customize it to your specific requirements. For more information, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

1. Create a file that includes the permissions for the AWS services that you want your Pods to access. For a list of all actions for all AWS services, see the [Service Authorization Reference](https://docs.aws.amazon.com/service-authorization/latest/reference/).

   You can run the following command to create an example policy file that allows read-only access to objects in an Amazon S3 bucket. You can optionally store configuration information or a bootstrap script in this bucket, and the containers in your Pod can read the file from the bucket and load it into your application. If you want to create this example policy, copy the following contents to your device, replace *my-pod-secrets-bucket* with your bucket name, and run the command. Because `s3:GetObject` acts on objects, the `Resource` is the bucket ARN with `/*` appended.

   ```
   cat >my-policy.json <<EOF
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "s3:GetObject",
               "Resource": "arn:aws:s3:::my-pod-secrets-bucket/*"
           }
       ]
   }
   EOF
   ```

1. Create the IAM policy.

   ```
   aws iam create-policy --policy-name my-policy --policy-document file://my-policy.json
   ```
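
   The ARN of the policy created above, which later steps attach to the IAM role, follows a predictable shape. A sketch with a placeholder account ID (later steps retrieve yours with `aws sts get-caller-identity`):

   ```
   # Compose the policy ARN from the account ID and policy name (placeholders).
   account_id=111122223333
   policy_arn="arn:aws:iam::${account_id}:policy/my-policy"
   echo "$policy_arn"
   ```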

## Step 2: Create and associate IAM Role
<a name="_step_2_create_and_associate_iam_role"></a>

Create an IAM role and associate it with a Kubernetes service account. You can use either `eksctl` or the AWS CLI.

### Create and associate role (eksctl)
<a name="_create_and_associate_role_eksctl"></a>

This `eksctl` command creates a Kubernetes service account in the specified namespace, creates an IAM role (if it doesn’t exist) with the specified name, attaches an existing IAM policy ARN to the role, and annotates the service account with the IAM role ARN. Be sure to replace the sample placeholder values in this command with your specific values. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

```
eksctl create iamserviceaccount --name my-service-account --namespace default --cluster my-cluster --role-name my-role \
    --attach-policy-arn arn:aws:iam::111122223333:policy/my-policy --approve
```

**Important**  
If the role or service account already exists, the previous command might fail. `eksctl` has different options that you can provide in those situations. For more information, run `eksctl create iamserviceaccount --help`.

### Create and associate role (AWS CLI)
<a name="create_and_associate_role_shared_aws_cli"></a>

If you have an existing Kubernetes service account that you want to assume an IAM role, then you can skip this step.

1. Create a Kubernetes service account. Copy the following contents to your device. Replace *my-service-account* with your desired name and *default* with a different namespace, if necessary. If you change *default*, the namespace must already exist.

   ```
   cat >my-service-account.yaml <<EOF
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: my-service-account
     namespace: default
   EOF
   kubectl apply -f my-service-account.yaml
   ```

1. Set your AWS account ID to an environment variable with the following command.

   ```
   account_id=$(aws sts get-caller-identity --query "Account" --output text)
   ```

1. Set your cluster’s OIDC identity provider to an environment variable with the following command. Replace *my-cluster* with the name of your cluster.

   ```
   oidc_provider=$(aws eks describe-cluster --name my-cluster --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
   ```

1. Set variables for the namespace and name of the service account. Replace *my-service-account* with the Kubernetes service account that you want to assume the role. Replace *default* with the namespace of the service account.

   ```
   export namespace=default
   export service_account=my-service-account
   ```

1. Run the following command to create a trust policy file for the IAM role. If you want to allow all service accounts within a namespace to use the role, replace *StringEquals* with `StringLike` and replace the *$service_account* value with `*`. You can add multiple entries in the `StringEquals` or `StringLike` conditions to allow multiple service accounts or namespaces to assume the role. To allow roles from an AWS account that’s different from the account that your cluster is in to assume the role, see [Authenticate to another account with IRSA](cross-account-access.md) for more information.

   ```
   cat >trust-relationship.json <<EOF
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Federated": "arn:aws:iam::$account_id:oidc-provider/$oidc_provider"
         },
         "Action": "sts:AssumeRoleWithWebIdentity",
         "Condition": {
           "StringEquals": {
             "$oidc_provider:aud": "sts.amazonaws.com",
             "$oidc_provider:sub": "system:serviceaccount:$namespace:$service_account"
           }
         }
       }
     ]
   }
   EOF
   ```
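
   The `sub` condition in the trust policy must exactly match the subject claim in the Pod’s token. A quick way to preview that string from the variables set in the previous steps (defaults shown):

   ```
   # Compose the OIDC subject claim that the trust policy matches on.
   namespace=default
   service_account=my-service-account
   sub="system:serviceaccount:${namespace}:${service_account}"
   echo "$sub"
   ```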

1. Create the role. Replace *my-role* with a name for your IAM role, and *my-role-description* with a description for your role.

   ```
   aws iam create-role --role-name my-role --assume-role-policy-document file://trust-relationship.json --description "my-role-description"
   ```

1. Attach an IAM policy to your role. Replace *my-role* with the name of your IAM role and *my-policy* with the name of an existing policy that you created.

   ```
   aws iam attach-role-policy --role-name my-role --policy-arn=arn:aws:iam::$account_id:policy/my-policy
   ```

1. Annotate your service account with the Amazon Resource Name (ARN) of the IAM role that you want the service account to assume. Replace *my-role* with the name of your existing IAM role. If you allowed a role from a different AWS account than the account that your cluster is in to assume the role in a previous step, then make sure to specify the AWS account and role from the other account. For more information, see [Authenticate to another account with IRSA](cross-account-access.md).

   ```
   kubectl annotate serviceaccount -n $namespace $service_account eks.amazonaws.com/role-arn=arn:aws:iam::$account_id:role/my-role
   ```

1. (Optional) [Configure the AWS Security Token Service endpoint for a service account](configure-sts-endpoint.md). AWS recommends using a regional AWS STS endpoint instead of the global endpoint. This reduces latency, provides built-in redundancy, and increases session token validity.

## Step 3: Confirm configuration
<a name="irsa-confirm-role-configuration"></a>

1. Confirm that the IAM role’s trust policy is configured correctly.

   ```
   aws iam get-role --role-name my-role --query Role.AssumeRolePolicyDocument
   ```

   An example output is as follows.

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
               },
               "Action": "sts:AssumeRoleWithWebIdentity",
               "Condition": {
                   "StringEquals": {
                       "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:default:my-service-account",
                       "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
                   }
               }
           }
       ]
   }
   ```

1. Confirm that the policy that you attached to your role in a previous step is attached to the role.

   ```
   aws iam list-attached-role-policies --role-name my-role --query "AttachedPolicies[].PolicyArn" --output text
   ```

   An example output is as follows.

   ```
   arn:aws:iam::111122223333:policy/my-policy
   ```

1. Set a variable to store the Amazon Resource Name (ARN) of the policy that you want to use. Replace *my-policy* with the name of the policy that you want to confirm permissions for.

   ```
   export policy_arn=arn:aws:iam::111122223333:policy/my-policy
   ```

1. View the default version of the policy.

   ```
   aws iam get-policy --policy-arn $policy_arn
   ```

   An example output is as follows.

   ```
   {
       "Policy": {
           "PolicyName": "my-policy",
           "PolicyId": "EXAMPLEBIOWGLDEXAMPLE",
           "Arn": "arn:aws:iam::111122223333:policy/my-policy",
           "Path": "/",
           "DefaultVersionId": "v1",
           [...]
       }
   }
   ```

1. View the policy contents to make sure that the policy includes all the permissions that your Pod needs. If necessary, replace *v1* in the following command with the version that’s returned in the previous output.

   ```
   aws iam get-policy-version --policy-arn $policy_arn --version-id v1
   ```

   An example output is as follows.

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "s3:GetObject",
               "Resource": "arn:aws:s3:::my-pod-secrets-bucket"
           }
       ]
   }
   ```

   If you created the example policy in a previous step, then your output is the same. If you created a different policy, then the content is different.

1. Confirm that the Kubernetes service account is annotated with the role.

   ```
   kubectl describe serviceaccount my-service-account -n default
   ```

   An example output is as follows.

   ```
   Name:                my-service-account
   Namespace:           default
   Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role
   Image pull secrets:  <none>
   Mountable secrets:   my-service-account-token-qqjfl
   Tokens:              my-service-account-token-qqjfl
   [...]
   ```

## Next steps
<a name="_next_steps"></a>
+  [Configure Pods to use a Kubernetes service account](pod-configuration.md) 

# Configure Pods to use a Kubernetes service account
<a name="pod-configuration"></a>

If a Pod needs to access AWS services, then you must configure it to use a Kubernetes service account. The service account must be associated with an AWS Identity and Access Management (IAM) role that has permissions to access the AWS services. This procedure has the following prerequisites:
+ An existing cluster. If you don’t have one, you can create one using one of the guides in [Get started with Amazon EKS](getting-started.md).
+ An existing IAM OpenID Connect (OIDC) provider for your cluster. To learn if you already have one or how to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
+ An existing Kubernetes service account that’s associated with an IAM role. The service account must be annotated with the Amazon Resource Name (ARN) of the IAM role. The role must have an associated IAM policy that contains the permissions that you want your Pods to have to use AWS services. For more information about how to create the service account and role, and configure them, see [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md).
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ An existing `kubectl` `config` file that contains your cluster configuration. To create a `kubectl` `config` file, see [Connect kubectl to an EKS cluster by creating a kubeconfig file](create-kubeconfig.md).

  1. Use the following command to create a deployment manifest that deploys a Pod you can use to confirm the configuration. Replace the example values with your own values.

     ```
     cat >my-deployment.yaml <<EOF
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: my-app
     spec:
       selector:
         matchLabels:
           app: my-app
       template:
         metadata:
           labels:
             app: my-app
         spec:
           serviceAccountName: my-service-account
           containers:
           - name: my-app
             image: public.ecr.aws/nginx/nginx:X.XX
     EOF
     ```

  1. Deploy the manifest to your cluster.

     ```
     kubectl apply -f my-deployment.yaml
     ```

  1. Confirm that the required environment variables exist for your Pod.

     1. View the Pods that were deployed with the deployment in the previous step.

        ```
        kubectl get pods | grep my-app
        ```

        An example output is as follows.

        ```
        my-app-6f4dfff6cb-76cv9   1/1     Running   0          3m28s
        ```

     1. View the ARN of the IAM role that the Pod is using.

        ```
        kubectl describe pod my-app-6f4dfff6cb-76cv9 | grep AWS_ROLE_ARN:
        ```

        An example output is as follows.

        ```
        AWS_ROLE_ARN:                 arn:aws:iam::111122223333:role/my-role
        ```

        The role ARN must match the role ARN that you annotated the existing service account with. For more about annotating the service account, see [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md).

     1. Confirm that the Pod has a web identity token file mount.

        ```
        kubectl describe pod my-app-6f4dfff6cb-76cv9 | grep AWS_WEB_IDENTITY_TOKEN_FILE:
        ```

        An example output is as follows.

        ```
        AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
        ```

        The `kubelet` requests and stores the token on behalf of the Pod. By default, the `kubelet` refreshes the token if the token is older than 80 percent of its total time to live or older than 24 hours. You can modify the expiration duration for any account other than the default service account by using the settings in your Pod spec. For more information, see [Service Account Token Volume Projection](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection) in the Kubernetes documentation.

        The [Amazon EKS Pod Identity Webhook](https://github.com/aws/amazon-eks-pod-identity-webhook#amazon-eks-pod-identity-webhook) on the cluster watches for Pods that use a service account with the following annotation:

        ```
        eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role
        ```

        The webhook applies the previous environment variables to those Pods. Your cluster doesn’t need to use the webhook to configure the environment variables and token file mounts. You can manually configure Pods to have these environment variables. The [supported versions of the AWS SDK](iam-roles-for-service-accounts-minimum-sdk.md) look for these environment variables first in the credential provider chain. The role credentials are used for Pods that meet these criteria.
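
        As a debugging aid, you can check for that wiring from inside a container. A minimal sketch (the variable names are the standard ones the webhook injects; the script prints a result either way, so it is safe to run anywhere):

        ```
        # Detect whether the IRSA environment variables and token file are
        # present in the current environment.
        status="not detected"
        if [ -n "${AWS_ROLE_ARN:-}" ] && [ -s "${AWS_WEB_IDENTITY_TOKEN_FILE:-/nonexistent}" ]; then
          status="detected"
        fi
        echo "IRSA environment $status: ${AWS_ROLE_ARN:-<none>}"
        ```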

  1. Confirm that your Pods can interact with the AWS services using the permissions that you assigned in the IAM policy attached to your role.
**Note**  
When a Pod uses AWS credentials from an IAM role that’s associated with a service account, the AWS CLI or other SDKs in the containers for that Pod use the credentials that are provided by that role. If you don’t restrict access to the credentials that are provided to the [Amazon EKS node IAM role](create-node-role.md), the Pod still has access to these credentials. For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

     If your Pods can’t interact with the services as you expected, complete the following steps to confirm that everything is properly configured.

     1. Confirm that your Pods use an AWS SDK version that supports assuming an IAM role through an OpenID Connect web identity token file. For more information, see [Use IRSA with the AWS SDK](iam-roles-for-service-accounts-minimum-sdk.md).

     1. Confirm that the deployment is using the service account.

        ```
        kubectl describe deployment my-app | grep "Service Account"
        ```

        An example output is as follows.

        ```
        Service Account:  my-service-account
        ```

     1. If your Pods still can’t access services, review the [steps](associate-service-account-role.md#irsa-confirm-role-configuration) that are described in [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md) to confirm that your role and service account are configured properly.
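The verification steps above hinge on the environment variables that the webhook injects. As a sketch of how a supported SDK discovers them, the following Python mimics the first step of a web-identity credential provider: it checks for the `AWS_ROLE_ARN` and `AWS_WEB_IDENTITY_TOKEN_FILE` environment variables. The helper name and the simulated environment below are hypothetical; real SDKs perform this lookup internally.

```python
import os

def find_irsa_config(env=os.environ):
    """Return the (role ARN, token file) pair that an SDK's web-identity
    provider looks for, or None if the webhook (or you) did not set them."""
    role_arn = env.get("AWS_ROLE_ARN")
    token_file = env.get("AWS_WEB_IDENTITY_TOKEN_FILE")
    if role_arn and token_file:
        return role_arn, token_file
    return None

# Simulated Pod environment, as injected by the Amazon EKS Pod Identity Webhook.
pod_env = {
    "AWS_ROLE_ARN": "arn:aws:iam::111122223333:role/my-role",
    "AWS_WEB_IDENTITY_TOKEN_FILE": "/var/run/secrets/eks.amazonaws.com/serviceaccount/token",
}
print(find_irsa_config(pod_env))
```

If this returns `None` in your own diagnostics, the webhook didn’t mutate the Pod, which usually points at a missing or misspelled service account annotation.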

# Configure the AWS Security Token Service endpoint for a service account
<a name="configure-sts-endpoint"></a>

If you’re using a Kubernetes service account with [IAM roles for service accounts](iam-roles-for-service-accounts.md), then you can configure the type of AWS Security Token Service endpoint that’s used by the service account.

 AWS recommends using the regional AWS STS endpoints instead of the global endpoint. This reduces latency, provides built-in redundancy, and increases session token validity. The AWS Security Token Service must be active in the AWS Region where the Pod runs, and your application must be able to fail over to a different AWS Region if the service becomes unavailable in that Region. For more information, see [Managing AWS STS in an AWS Region](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) in the IAM User Guide.
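To make the difference concrete, the following sketch shows the endpoint hostname a client would target in each mode. The helper is hypothetical and purely illustrative; the SDKs and AWS CLI select the endpoint for you based on the `AWS_STS_REGIONAL_ENDPOINTS` setting.

```python
def sts_endpoint(region=None):
    """Return the STS endpoint a client would use: the regional endpoint
    when a Region is configured, otherwise the global endpoint."""
    if region:
        return f"https://sts.{region}.amazonaws.com"
    return "https://sts.amazonaws.com"

print(sts_endpoint("us-west-2"))  # https://sts.us-west-2.amazonaws.com
print(sts_endpoint(None))         # https://sts.amazonaws.com
```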
Before you begin, make sure that you have the following prerequisites:
+ An existing cluster. If you don’t have one, you can create one using one of the guides in [Get started with Amazon EKS](getting-started.md).
+ An existing IAM OIDC provider for your cluster. For more information, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
+ An existing Kubernetes service account configured for use with the [Amazon EKS IAM for service accounts](iam-roles-for-service-accounts.md) feature.

The following examples all use the aws-node Kubernetes service account used by the [Amazon VPC CNI plugin](cni-iam-role.md). You can replace the *example values* with your own service accounts, Pods, namespaces, and other resources.

1. Select a Pod that uses a service account that you want to change the endpoint for. Determine which AWS Region the Pod runs in. Replace *aws-node-6mfgv* with your Pod name and *kube-system* with your Pod’s namespace.

   ```
   kubectl describe pod aws-node-6mfgv -n kube-system |grep Node:
   ```

   An example output is as follows.

   ```
   ip-192-168-79-166.us-west-2/192.168.79.166
   ```

   In the previous output, the Pod is running on a node in the us-west-2 AWS Region.

1. Determine the endpoint type that the Pod’s service account is using.

   ```
   kubectl describe pod aws-node-6mfgv -n kube-system |grep AWS_STS_REGIONAL_ENDPOINTS
   ```

   An example output is as follows.

   ```
   AWS_STS_REGIONAL_ENDPOINTS: regional
   ```

   If the current endpoint is global, then `global` is returned in the output. If no output is returned, then the default endpoint type is in use and has not been overridden.

1. If your cluster or platform version is the same or later than those listed in the table, then you can change the endpoint type used by your service account from the default type to a different type with one of the following commands. Replace *aws-node* with the name of your service account and *kube-system* with the namespace for your service account.
   + If your default or current endpoint type is global and you want to change it to regional:

     ```
     kubectl annotate serviceaccount -n kube-system aws-node eks.amazonaws.com/sts-regional-endpoints=true
     ```

     If you’re using [IAM roles for service accounts](iam-roles-for-service-accounts.md) to generate pre-signed S3 URLs in your application running in Pods' containers, the format of the URL for regional endpoints is similar to the following example:

     ```
     https://bucket.s3.us-west-2.amazonaws.com/path?...&X-Amz-Credential=your-access-key-id/date/us-west-2/s3/aws4_request&...
     ```
   + If your default or current endpoint type is regional and you want to change it to global:

     ```
     kubectl annotate serviceaccount -n kube-system aws-node eks.amazonaws.com/sts-regional-endpoints=false
     ```

     If your application is explicitly making requests to AWS STS global endpoints and you don’t override the default behavior of using regional endpoints in Amazon EKS clusters, then requests will fail with an error. For more information, see [Pod containers receive the following error: `An error occurred (SignatureDoesNotMatch) when calling the GetCallerIdentity operation: Credential should be scoped to a valid region`](security-iam-troubleshoot.md#security-iam-troubleshoot-wrong-sts-endpoint).

     If you’re using [IAM roles for service accounts](iam-roles-for-service-accounts.md) to generate pre-signed S3 URLs in your application running in Pods' containers, the format of the URL for global endpoints is similar to the following example:

     ```
     https://bucket.s3.amazonaws.com/path?...&X-Amz-Credential=your-access-key-id/date/us-west-2/s3/aws4_request&...
     ```

   If you have automation that expects the pre-signed URL in a certain format or if your application or downstream dependencies that use pre-signed URLs have expectations for the AWS Region targeted, then make the necessary changes to use the appropriate AWS STS endpoint.

1. Delete and re-create any existing Pods that are associated with the service account to apply the credential environment variables. The mutating webhook doesn’t apply them to Pods that are already running. You can replace *pods*, *kube-system*, and *-l k8s-app=aws-node* with the information for the Pods that you set your annotation for.

   ```
   kubectl delete pods -n kube-system -l k8s-app=aws-node
   ```

1. Confirm that all of the Pods restarted.

   ```
   kubectl get pods -n kube-system -l k8s-app=aws-node
   ```

1. View the environment variables for one of the Pods. Verify that the `AWS_STS_REGIONAL_ENDPOINTS` value is what you set it to in a previous step.

   ```
   kubectl describe pod aws-node-kzbtr -n kube-system |grep AWS_STS_REGIONAL_ENDPOINTS
   ```

   An example output is as follows.

   ```
   AWS_STS_REGIONAL_ENDPOINTS:  regional
   ```

# Authenticate to another account with IRSA
<a name="cross-account-access"></a>

You can configure cross-account IAM permissions either by creating an identity provider from another account’s cluster or by using chained `AssumeRole` operations. In the following examples, *Account A* owns an Amazon EKS cluster that supports IAM roles for service accounts. Pods that are running on that cluster must assume IAM permissions from *Account B*.
+  **Option 1** is simpler but requires Account B to create and manage an OIDC identity provider for Account A’s cluster.
+  **Option 2** keeps OIDC management in Account A but requires role chaining through two `AssumeRole` calls.

## Option 1: Create an identity provider from another account’s cluster
<a name="_option_1_create_an_identity_provider_from_another_accounts_cluster"></a>

In this example, Account A provides Account B with the OpenID Connect (OIDC) issuer URL from their cluster. Account B follows the instructions in [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md) and [Assign IAM roles to Kubernetes service accounts](associate-service-account-role.md) using the OIDC issuer URL from Account A’s cluster. Then, a cluster administrator annotates the service account in Account A’s cluster to use the role from Account B (*444455556666*).

```
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::444455556666:role/account-b-role
```

## Option 2: Use chained `AssumeRole` operations
<a name="_option_2_use_chained_assumerole_operations"></a>

In this approach, each account creates an IAM role. Account B’s role trusts Account A, and Account A’s role uses OIDC federation to get credentials from the cluster. The Pod then chains the two roles together using AWS CLI profiles.

### Step 1: Create the target role in Account B
<a name="_step_1_create_the_target_role_in_account_b"></a>

Account B (*444455556666*) creates an IAM role with the permissions that Pods in Account A’s cluster need. Account B attaches the desired permission policy to this role, then adds the following trust policy.

 **Trust policy for Account B’s role** — This policy allows Account A’s specific IRSA role to assume this role.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
```

**Important**  
For least privilege, replace the `Principal` ARN with the specific role ARN from Account A instead of using the account root (`arn:aws:iam::111122223333:root`). Using the account root allows *any* IAM principal in Account A to assume this role.
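For example, a least-privilege version of the preceding trust policy might look like the following sketch. It assumes that Account A’s IRSA role (created in Step 2) is named `account-a-role`.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/account-a-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```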

### Step 2: Create the IRSA role in Account A
<a name="_step_2_create_the_irsa_role_in_account_a"></a>

Account A (*111122223333*) creates a role with a trust policy that gets credentials from the identity provider created with the cluster’s OIDC issuer address.

 **Trust policy for Account A’s role (OIDC federation)** — This policy allows the EKS cluster’s OIDC provider to issue credentials for this role.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

**Important**  
For least privilege, add a `StringEquals` condition for the `sub` claim to restrict this role to a specific Kubernetes service account. Without a `sub` condition, any service account in the cluster can assume this role. The `sub` value uses the format `system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT_NAME`. For example, to restrict to a service account named `my-service-account` in the `default` namespace:  

```
"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:default:my-service-account"
```
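Combined with the `aud` condition shown earlier, the complete `Condition` block might look like the following sketch, using the same placeholder OIDC provider ID.

```
"Condition": {
  "StringEquals": {
    "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com",
    "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:default:my-service-account"
  }
}
```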

### Step 3: Attach the AssumeRole permission to Account A’s role
<a name="_step_3_attach_the_assumerole_permission_to_account_as_role"></a>

Account A attaches a permission policy to the role created in Step 2. This policy allows the role to assume Account B’s role.

 **Permission policy for Account A’s role** — This policy grants `sts:AssumeRole` on Account B’s target role.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::444455556666:role/account-b-role"
        }
    ]
}
```

### Step 4: Configure the Pod to chain roles
<a name="_step_4_configure_the_pod_to_chain_roles"></a>

To assume Account B’s role, the application code in the Pod uses two profiles: `account_b_role` and `account_a_role`. The `account_b_role` profile uses the `account_a_role` profile as its source. For the AWS CLI, the `~/.aws/config` file is similar to the following.

```
[profile account_b_role]
source_profile = account_a_role
role_arn=arn:aws:iam::444455556666:role/account-b-role

[profile account_a_role]
web_identity_token_file = /var/run/secrets/eks.amazonaws.com/serviceaccount/token
role_arn=arn:aws:iam::111122223333:role/account-a-role
```

To specify chained profiles for other AWS SDKs, consult the documentation for the SDK that you’re using. For more information, see [Tools to Build on AWS](https://aws.amazon.com/developer/tools/).
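As a sketch of how this chain resolves, the following Python walks the `source_profile` links in a config like the one above using only the standard library. The helper is hypothetical; the AWS CLI and SDKs perform this resolution internally.

```python
import configparser

# A hypothetical ~/.aws/config with the two chained profiles from this section.
config_text = """
[profile account_b_role]
source_profile = account_a_role
role_arn = arn:aws:iam::444455556666:role/account-b-role

[profile account_a_role]
web_identity_token_file = /var/run/secrets/eks.amazonaws.com/serviceaccount/token
role_arn = arn:aws:iam::111122223333:role/account-a-role
"""

def resolve_role_chain(text, profile):
    """Walk source_profile links and return the role ARNs in the order
    they would be assumed (innermost source first)."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    chain = []
    section = f"profile {profile}"
    while section and section in parser:
        chain.append(parser[section]["role_arn"])
        source = parser[section].get("source_profile")
        section = f"profile {source}" if source else None
    return list(reversed(chain))

print(resolve_role_chain(config_text, "account_b_role"))
```

The output order mirrors the runtime behavior: Account A’s role is assumed first with the web identity token, and its temporary credentials are then used to assume Account B’s role.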

# Use IRSA with the AWS SDK
<a name="iam-roles-for-service-accounts-minimum-sdk"></a>

**Using the credentials**  
To use the credentials from IAM roles for service accounts (IRSA), your code can use any AWS SDK to create a client for an AWS service. By default, the SDK searches a chain of locations for AWS Identity and Access Management credentials to use. The IAM roles for service accounts credentials are used if you don’t specify a credential provider when you create the client or otherwise initialize the SDK.

This works because IAM roles for service accounts have been added as a step in the default credential chain. If your workloads currently use credentials that are earlier in the chain, those credentials continue to be used even if you configure IAM roles for service accounts for the same workload.

The SDK automatically exchanges the service account OIDC token for temporary credentials from AWS Security Token Service by using the `AssumeRoleWithWebIdentity` action. Amazon EKS and this SDK action continue to rotate the temporary credentials by renewing them before they expire.

When using [IAM roles for service accounts](iam-roles-for-service-accounts.md), the containers in your Pods must use an AWS SDK version that supports assuming an IAM role through an OpenID Connect web identity token file. Make sure that you’re using the following versions, or later, for your AWS SDK:
+ Java (Version 2) – [2.10.11](https://github.com/aws/aws-sdk-java-v2/releases/tag/2.10.11) 
+ Java – [1.12.782](https://github.com/aws/aws-sdk-java/releases/tag/1.12.782) 
+  AWS SDK for Go v1 – [1.23.13](https://github.com/aws/aws-sdk-go/releases/tag/v1.23.13) 
+  AWS SDK for Go v2 – All versions are supported
+ Python (Boto3) – [1.9.220](https://github.com/boto/boto3/releases/tag/1.9.220) 
+ Python (botocore) – [1.12.200](https://github.com/boto/botocore/releases/tag/1.12.200) 
+  AWS CLI – [1.16.232](https://github.com/aws/aws-cli/releases/tag/1.16.232) 
+ Node – [2.525.0](https://github.com/aws/aws-sdk-js/releases/tag/v2.525.0) and [3.27.0](https://github.com/aws/aws-sdk-js-v3/releases/tag/v3.27.0) 
+ Ruby – [3.58.0](https://github.com/aws/aws-sdk-ruby/blob/version-3/gems/aws-sdk-core/CHANGELOG.md#3580-2019-07-01) 
+ C++ – [1.7.174](https://github.com/aws/aws-sdk-cpp/releases/tag/1.7.174) 
+ .NET – [3.3.659.1](https://github.com/aws/aws-sdk-net/releases/tag/3.3.659.1) – You must also include `AWSSDK.SecurityToken`.
+ PHP – [3.110.7](https://github.com/aws/aws-sdk-php/releases/tag/3.110.7) 

Many popular Kubernetes add-ons, such as the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler), the [AWS Load Balancer Controller](aws-load-balancer-controller.md), and the [Amazon VPC CNI plugin for Kubernetes](cni-iam-role.md), support IAM roles for service accounts.

To ensure that you’re using a supported SDK, follow the installation instructions for your preferred SDK at [Tools to Build on AWS](https://aws.amazon.com/tools/) when you build your containers.

## Considerations
<a name="_considerations"></a>

### Java
<a name="_java"></a>

When using Java, you *must* include the `sts` module on the classpath. For more information, see [WebIdentityTokenFileCredentialsProvider](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/auth/credentials/WebIdentityTokenFileCredentialsProvider.html) in the Java SDK docs.

# Fetch signing keys to validate OIDC tokens
<a name="irsa-fetch-keys"></a>

Kubernetes issues a `ProjectedServiceAccountToken` to each Kubernetes service account. This token is an OIDC token, which is a type of JSON Web Token (JWT). Amazon EKS hosts a public OIDC endpoint for each cluster that contains the signing keys for the token so that external systems can validate it.

To validate a `ProjectedServiceAccountToken`, you need to fetch the OIDC public signing keys, also called the JSON Web Key Set (JWKS). Use these keys in your application to validate the token. For example, you can use the [PyJWT Python library](https://pyjwt.readthedocs.io/en/latest/) to validate tokens using these keys. For more information on the `ProjectedServiceAccountToken`, see [IAM, Kubernetes, and OpenID Connect (OIDC) background information](iam-roles-for-service-accounts.md#irsa-oidc-background).

## Prerequisites
<a name="_prerequisites"></a>
+ An existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. To determine whether you already have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
+  ** AWS CLI** — A command line tool for working with AWS services, including Amazon EKS. For more information, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide. After installing the AWS CLI, we recommend that you also configure it. For more information, see [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the AWS Command Line Interface User Guide.

## Procedure
<a name="_procedure"></a>

1. Retrieve the OIDC URL for your Amazon EKS cluster using the AWS CLI.

   ```
   $ aws eks describe-cluster --name my-cluster --query 'cluster.identity.oidc.issuer'
   "https://oidc.eks.us-west-2.amazonaws.com/id/8EBDXXXX00BAE"
   ```

1. Retrieve the public signing key using curl, or a similar tool. The result is a [JSON Web Key Set (JWKS)](https://www.rfc-editor.org/rfc/rfc7517#section-5).
**Important**  
Amazon EKS throttles calls to the OIDC endpoint. You should cache the public signing key. Respect the `cache-control` header included in the response.
**Important**  
Amazon EKS rotates the OIDC signing key every seven days.

   ```
   $ curl https://oidc.eks.us-west-2.amazonaws.com/id/8EBDXXXX00BAE/keys
   {"keys":[{"kty":"RSA","kid":"2284XXXX4a40","use":"sig","alg":"RS256","n":"wklbXXXXMVfQ","e":"AQAB"}]}
   ```
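As a sketch of the validation step, the following Python selects the signing key from a JWKS document by matching the `kid` (key ID) found in a token’s header, using only the standard library. In practice, a JWT library such as PyJWT handles this lookup and the signature check for you; the helper name and placeholder key values below are hypothetical.

```python
import json

# A JWKS document shaped like the curl output above (values are placeholders).
jwks_json = ('{"keys":[{"kty":"RSA","kid":"2284XXXX4a40","use":"sig",'
             '"alg":"RS256","n":"wklbXXXXMVfQ","e":"AQAB"}]}')

def signing_key_for(jwks, kid):
    """Return the JWK whose key ID matches the `kid` from a token's header,
    or None if the key set doesn't contain it (for example, after rotation)."""
    keys = json.loads(jwks)["keys"]
    return next((key for key in keys if key["kid"] == kid), None)

key = signing_key_for(jwks_json, "2284XXXX4a40")
print(key["alg"])  # RS256
```

A `None` result is a signal to refetch the JWKS, because Amazon EKS rotates the signing key every seven days.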

# Learn how EKS Pod Identity grants pods access to AWS services
<a name="pod-identities"></a>

Applications in a Pod’s containers can use an AWS SDK or the AWS CLI to make API requests to AWS services using AWS Identity and Access Management (IAM) permissions. Applications must sign their AWS API requests with AWS credentials.

 *EKS Pod Identities* provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance’s role, you associate an IAM role with a Kubernetes service account and configure your Pods to use the service account.

[![AWS Videos](https://img.youtube.com/vi/aUjJSorBE70/0.jpg)](https://www.youtube.com/watch?v=aUjJSorBE70)


Each EKS Pod Identity association maps a role to a service account in a namespace in the specified cluster. If you have the same application in multiple clusters, you can make identical associations in each cluster without modifying the trust policy of the role.

If a pod uses a service account that has an association, Amazon EKS sets environment variables in the containers of the pod. The environment variables configure the AWS SDKs, including the AWS CLI, to use the EKS Pod Identity credentials.

## Benefits of EKS Pod Identities
<a name="pod-id-benefits"></a>

EKS Pod Identities provide the following benefits:
+  **Least privilege** – You can scope IAM permissions to a service account, and only Pods that use that service account have access to those permissions. This feature also eliminates the need for third-party solutions such as `kiam` or `kube2iam`.
+  **Credential isolation** – When access to the [Amazon EC2 Instance Metadata Service (IMDS)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html) is restricted, a Pod’s containers can only retrieve credentials for the IAM role that’s associated with the service account that the container uses. A container never has access to credentials that are used by other containers in other Pods. If IMDS is not restricted, the Pod’s containers also have access to the [Amazon EKS node IAM role](create-node-role.md) and the containers may be able to gain access to credentials of IAM roles of other Pods on the same node. For more information, see [Restrict access to the instance profile assigned to the worker node](https://docs.aws.amazon.com/eks/latest/best-practices/identity-and-access-management.html#_identities_and_credentials_for_eks_pods_recommendations).

**Note**  
Pods configured with `hostNetwork: true` will always have IMDS access, but the AWS SDKs and CLI will use Pod Identity credentials when enabled.
+  **Auditability** – Access and event logging is available through AWS CloudTrail to help facilitate retrospective auditing.

**Important**  
Containers are not a security boundary, and the use of Pod Identity does not change this. Pods assigned to the same node will share a kernel and potentially other resources depending on your Pod configuration. While Pods running on separate nodes will be isolated at the compute layer, there are node applications that have additional permissions in the Kubernetes API beyond the scope of an individual instance. Some examples are `kubelet`, `kube-proxy`, CSI storage drivers, or your own Kubernetes applications.

EKS Pod Identity is a simpler method than [IAM roles for service accounts](iam-roles-for-service-accounts.md), as this method doesn’t use OIDC identity providers. EKS Pod Identity has the following enhancements:
+  **Independent operations** – In many organizations, creating OIDC identity providers is the responsibility of a different team than the one that administers the Kubernetes clusters. EKS Pod Identity has a clean separation of duties: all configuration of EKS Pod Identity associations is done in Amazon EKS, and all configuration of the IAM permissions is done in IAM.
+  **Reusability** – EKS Pod Identity uses a single IAM principal instead of the separate principals for each cluster that IAM roles for service accounts use. Your IAM administrator adds the following principal to the trust policy of any role to make it usable by EKS Pod Identities.

  ```
              "Principal": {
                  "Service": "pods.eks.amazonaws.com"
              }
  ```
+  **Scalability** — Each set of temporary credentials is assumed by the EKS Auth service in EKS Pod Identity, instead of by each AWS SDK that you run in each pod. Then, the Amazon EKS Pod Identity Agent that runs on each node issues the credentials to the SDKs. Thus, the load is reduced to once for each node and isn’t duplicated in each pod. For more details about the process, see [Understand how EKS Pod Identity works](pod-id-how-it-works.md).

For more information to compare the two alternatives, see [Grant Kubernetes workloads access to AWS using Kubernetes Service Accounts](service-accounts.md).

## Overview of setting up EKS Pod Identities
<a name="pod-id-setup-overview"></a>

Turn on EKS Pod Identities by completing the following procedures:

1.  [Set up the Amazon EKS Pod Identity Agent](pod-id-agent-setup.md) — You only complete this procedure once for each cluster. You do not need to complete this step if EKS Auto Mode is enabled on your cluster.

1.  [Assign an IAM role to a Kubernetes service account](pod-id-association.md) — Complete this procedure for each unique set of permissions that you want an application to have.

1.  [Configure Pods to access AWS services with service accounts](pod-id-configure-pods.md) — Complete this procedure for each Pod that needs access to AWS services.

1.  [Use pod identity with the AWS SDK](pod-id-minimum-sdk.md) — Confirm that the workload uses an AWS SDK of a supported version and that the workload uses the default credential chain.

## Limits
<a name="pod-id-limits"></a>
+ You can create up to 5,000 EKS Pod Identity associations per cluster to map IAM roles to Kubernetes service accounts.

## Considerations
<a name="pod-id-considerations"></a>
+  **IAM Role Association**: Each Kubernetes service account in a cluster can be associated with one IAM role from the same AWS account as the cluster. To change the role, edit the EKS Pod Identity association. For cross-account access, delegate access to the role using IAM roles. To learn more, see [Delegate access across AWS accounts using IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html) in the *IAM User Guide*.
+  **EKS Pod Identity Agent**: The Pod Identity Agent is required to use EKS Pod Identity. The agent runs as a Kubernetes `DaemonSet` on cluster nodes, providing credentials only to Pods on the same node. It uses the node’s `hostNetwork`, occupying ports `80` and `2703` on a link-local address (`169.254.170.23` for IPv4, `[fd00:ec2::23]` for IPv6). If IPv6 is disabled in your cluster, disable IPv6 for the Pod Identity Agent. To learn more, see [Disable IPv6 in the EKS Pod Identity Agent](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-config-ipv6.html).
+  **Eventual Consistency**: EKS Pod Identity associations are eventually consistent, with potential delays of several seconds after API calls. Avoid creating or updating associations in critical, high-availability code paths. Instead, perform these actions in separate, less frequent initialization or setup routines. To learn more, see [Security Groups Per Pod](https://docs.aws.amazon.com/eks/latest/best-practices/sgpp.html) in the *EKS Best Practices Guide*.
+  **Proxy and Security Group Considerations**: For Pods using a proxy, add `169.254.170.23` (IPv4) and `[fd00:ec2::23]` (IPv6) to the `no_proxy`/`NO_PROXY` environment variables to prevent failed requests to the EKS Pod Identity Agent. If you use Security Groups for Pods with the AWS VPC CNI, set the `ENABLE_POD_ENI` flag to `true` and the `POD_SECURITY_GROUP_ENFORCING_MODE` flag to `standard`. To learn more, see [Assign security groups to individual Pods](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html).

### EKS Pod Identity cluster versions
<a name="pod-id-cluster-versions"></a>

To use EKS Pod Identity, the cluster must have a platform version that is the same or later than the version listed in the following table, or a Kubernetes version that is later than the versions listed in the table. To find the suggested version of the Amazon EKS Pod Identity Agent for a Kubernetes version, see [Verify Amazon EKS add-on version compatibility with a cluster](addon-compat.md).


| Kubernetes version | Platform version | 
| --- | --- | 
|  Kubernetes versions not listed  |  All platform versions  | 
|   `1.28`   |   `eks.4`   | 

### EKS Pod Identity restrictions
<a name="pod-id-restrictions"></a>

EKS Pod Identities are available on the following:
+ Amazon EKS cluster versions listed in the previous topic [EKS Pod Identity cluster versions](#pod-id-cluster-versions).
+ Worker nodes in the cluster that are Linux Amazon EC2 instances.

EKS Pod Identities aren’t available on the following:
+  AWS Outposts.
+ Amazon EKS Anywhere.
+ Kubernetes clusters that you create and run on Amazon EC2. The EKS Pod Identity components are only available on Amazon EKS.

You can’t use EKS Pod Identities with:
+ Pods that run anywhere except Linux Amazon EC2 instances. Linux and Windows pods that run on AWS Fargate (Fargate) aren’t supported. Pods that run on Windows Amazon EC2 instances aren’t supported.

# Understand how EKS Pod Identity works
<a name="pod-id-how-it-works"></a>

Amazon EKS Pod Identity associations provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances.

Amazon EKS Pod Identity provides credentials to your workloads with an additional *EKS Auth* API and an agent pod that runs on each node.

For add-ons, such as *Amazon EKS add-ons* and self-managed controllers, operators, and other add-ons, the author needs to update their software to use the latest AWS SDKs. For the list of compatibility between EKS Pod Identity and the add-ons produced by Amazon EKS, see the previous section [EKS Pod Identity restrictions](pod-identities.md#pod-id-restrictions).

## Using EKS Pod Identities in your code
<a name="pod-id-credentials"></a>

In your code, you can use the AWS SDKs to access AWS services. You write code to create a client for an AWS service with an SDK, and by default the SDK searches in a chain of locations for AWS Identity and Access Management credentials to use. After valid credentials are found, the search is stopped. For more information about the default locations used, see the [Credential provider chain](https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html#credentialProviderChain) in the AWS SDKs and Tools Reference Guide.

EKS Pod Identities have been added to the *Container credential provider* which is searched in a step in the default credential chain. If your workloads currently use credentials that are earlier in the chain of credentials, those credentials will continue to be used even if you configure an EKS Pod Identity association for the same workload. This way you can safely migrate from other types of credentials by creating the association first, before removing the old credentials.

The container credential provider provides temporary credentials from an agent that runs on each node. In Amazon EKS, the agent is the Amazon EKS Pod Identity Agent, and on Amazon Elastic Container Service the agent is the `amazon-ecs-agent`. The SDKs use environment variables to locate the agent to connect to.
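As a rough sketch of that lookup, the following Python builds (but doesn’t send) the HTTP request that a container credential provider would make to the agent, using the two environment variables shown in the next section. The temporary file path, token value, and helper name are hypothetical; real SDKs handle this exchange internally.

```python
import tempfile
import urllib.request

# Simulate the projected token file and the two variables that
# EKS Pod Identity injects into the Pod's containers.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("example-projected-token")
    token_path = f.name

pod_env = {
    "AWS_CONTAINER_CREDENTIALS_FULL_URI": "http://169.254.170.23/v1/credentials",
    "AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE": token_path,
}

def build_credentials_request(env):
    """Build (but do not send) the HTTP request that an SDK's container
    credential provider makes to the Pod Identity Agent on the node."""
    with open(env["AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE"]) as fh:
        token = fh.read()
    return urllib.request.Request(
        env["AWS_CONTAINER_CREDENTIALS_FULL_URI"],
        headers={"Authorization": token},
    )

req = build_credentials_request(pod_env)
print(req.full_url)
```

The agent answers this request with temporary credentials for the role in the Pod’s association, so the SDK never needs to call AWS STS itself.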

In contrast, *IAM roles for service accounts* provides a *web identity* token that the AWS SDK must exchange with AWS Security Token Service by using `AssumeRoleWithWebIdentity`.

## How EKS Pod Identity Agent works with a Pod
<a name="pod-id-agent-pod"></a>

1. When Amazon EKS starts a new pod that uses a service account with an EKS Pod Identity association, the cluster adds the following content to the Pod manifest:

   ```
       env:
       - name: AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE
         value: "/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token"
       - name: AWS_CONTAINER_CREDENTIALS_FULL_URI
         value: "http://169.254.170.23/v1/credentials"
       volumeMounts:
       - mountPath: "/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/"
         name: eks-pod-identity-token
     volumes:
     - name: eks-pod-identity-token
       projected:
         defaultMode: 420
         sources:
         - serviceAccountToken:
             audience: pods.eks.amazonaws.com
             expirationSeconds: 86400 # 24 hours
             path: eks-pod-identity-token
   ```

1. Kubernetes selects which node to run the pod on. Then, the Amazon EKS Pod Identity Agent on the node uses the [AssumeRoleForPodIdentity](https://docs.aws.amazon.com/eks/latest/APIReference/API_auth_AssumeRoleForPodIdentity.html) action to retrieve temporary credentials from the EKS Auth API.

1. The EKS Pod Identity Agent makes these credentials available for the AWS SDKs that you run inside your containers.

1. You use the SDK in your application without specifying a credential provider to use the default credential chain. Or, you specify the container credential provider. For more information about the default locations used, see the [Credential provider chain](https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html#credentialProviderChain) in the AWS SDKs and Tools Reference Guide.

1. The SDK uses the environment variables to connect to the EKS Pod Identity Agent and retrieve the credentials.
**Note**  
If your workloads currently use credentials that are earlier in the chain of credentials, those credentials will continue to be used even if you configure an EKS Pod Identity association for the same workload.

# Set up the Amazon EKS Pod Identity Agent
<a name="pod-id-agent-setup"></a>

Amazon EKS Pod Identity associations provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances.

Amazon EKS Pod Identity provides credentials to your workloads with an additional *EKS Auth* API and an agent pod that runs on each node.

**Tip**  
You do not need to install the EKS Pod Identity Agent on EKS Auto Mode Clusters. This capability is built into EKS Auto Mode.

## Considerations
<a name="pod-id-agent-considerations"></a>
+ By default, the EKS Pod Identity Agent is pre-installed on EKS Auto Mode clusters. To learn more, see [Automate cluster infrastructure with EKS Auto Mode](automode.md).
+ By default, the EKS Pod Identity Agent listens on an `IPv4` and `IPv6` address for pods to request credentials. The agent uses the link-local IP address `169.254.170.23` for `IPv4` and the local IP address `[fd00:ec2::23]` for `IPv6`.
+ If you disable `IPv6` addresses, or otherwise prevent localhost `IPv6` IP addresses, the agent can’t start. To start the agent on nodes that can’t use `IPv6`, follow the steps in [Disable `IPv6` in the EKS Pod Identity Agent](pod-id-agent-config-ipv6.md) to disable the `IPv6` configuration.

## Creating the Amazon EKS Pod Identity Agent
<a name="pod-id-agent-add-on-create"></a>

### Agent prerequisites
<a name="pod-id-agent-prereqs"></a>

**Important**  
The nodes where the EKS Pod Identity Agent runs must be able to access the EKS Auth API. If you are using private subnets for your nodes, you must set up an AWS PrivateLink interface endpoint for the EKS Auth API. For more information, see [Access Amazon EKS using AWS PrivateLink](vpc-interface-endpoints.md).
+ An existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md). The cluster version and platform version must be the same or later than the versions listed in [EKS Pod Identity cluster versions](pod-identities.md#pod-id-cluster-versions).
+ The node role must have permissions for the agent to perform the `AssumeRoleForPodIdentity` action in the EKS Auth API. You can use the [AWS managed policy: AmazonEKSWorkerNodePolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazoneksworkernodepolicy) or add a custom policy similar to the following:

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "eks-auth:AssumeRoleForPodIdentity"
              ],
              "Resource": "*"
          }
      ]
  }
  ```

  This action can be limited by tags to restrict which roles can be assumed by pods that use the agent.
+ The nodes can reach and download images from Amazon ECR. The container image for the add-on is in the registries listed in [View Amazon container image registries for Amazon EKS add-ons](add-ons-images.md).

  Note that you can change the image location and provide `imagePullSecrets` for EKS add-ons in the **Optional configuration settings** in the AWS Management Console, and with the `--configuration-values` option in the AWS CLI.

### Set up the agent with the AWS console
<a name="setup_agent_with_shared_aws_console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the EKS Pod Identity Agent add-on for.

1. Choose the **Add-ons** tab.

1. Choose **Get more add-ons**.

1. Select the box in the top right of the add-on box for EKS Pod Identity Agent and then choose **Next**.

1. On the **Configure selected add-ons settings** page, select the version that you want to use in the **Version** dropdown list.

1. (Optional) Expand **Optional configuration settings** to enter additional configuration. For example, you can provide an alternative container image location and `ImagePullSecrets`. The JSON Schema with accepted keys is shown in **Add-on configuration schema**.

   Enter the configuration keys and values in **Configuration values**.

1. Choose **Next**.

1. Confirm that the EKS Pod Identity Agent pods are running on your cluster.

   ```
   kubectl get pods -n kube-system | grep 'eks-pod-identity-agent'
   ```

   An example output is as follows.

   ```
   eks-pod-identity-agent-gmqp7                                          1/1     Running   1 (24h ago)   24h
   eks-pod-identity-agent-prnsh                                          1/1     Running   1 (24h ago)   24h
   ```

   You can now use EKS Pod Identity associations in your cluster. For more information, see [Assign an IAM role to a Kubernetes service account](pod-id-association.md).

### Set up the agent with the AWS CLI
<a name="setup_agent_with_shared_aws_cli"></a>

1. Run the following AWS CLI command. Replace `my-cluster` with the name of your cluster.

   ```
   aws eks create-addon --cluster-name my-cluster --addon-name eks-pod-identity-agent --addon-version v1.0.0-eksbuild.1
   ```
**Note**  
The EKS Pod Identity Agent doesn’t use the `service-account-role-arn` for *IAM roles for service accounts*. You must provide the EKS Pod Identity Agent with permissions in the node role.

1. Confirm that the EKS Pod Identity Agent pods are running on your cluster.

   ```
   kubectl get pods -n kube-system | grep 'eks-pod-identity-agent'
   ```

   An example output is as follows.

   ```
   eks-pod-identity-agent-gmqp7                                          1/1     Running   1 (24h ago)   24h
   eks-pod-identity-agent-prnsh                                          1/1     Running   1 (24h ago)   24h
   ```

   You can now use EKS Pod Identity associations in your cluster. For more information, see [Assign an IAM role to a Kubernetes service account](pod-id-association.md).

# Assign an IAM role to a Kubernetes service account
<a name="pod-id-association"></a>

This topic covers how to configure a Kubernetes service account to assume an AWS Identity and Access Management (IAM) role with EKS Pod Identity. Any Pods that are configured to use the service account can then access any AWS service that the role has permissions to access.

Creating an EKS Pod Identity association is a single step: you create the association in EKS through the AWS Management Console, AWS CLI, AWS SDKs, AWS CloudFormation, or other tools. There isn’t any data or metadata about the associations inside the cluster in any Kubernetes objects, and you don’t add any annotations to the service accounts.

 **Prerequisites** 
+ An existing cluster. If you don’t have one, you can create one by following one of the guides in [Get started with Amazon EKS](getting-started.md).
+ The IAM principal that is creating the association must have `iam:PassRole`.
+ The latest version of the AWS CLI installed and configured on your device or AWS CloudShell. You can check your current version with `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the AWS Command Line Interface User Guide. The AWS CLI version installed in the AWS CloudShell may also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the AWS CloudShell User Guide.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ An existing `kubectl` `config` file that contains your cluster configuration. To create a `kubectl` `config` file, see [Connect kubectl to an EKS cluster by creating a kubeconfig file](create-kubeconfig.md).
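The `kubectl` version-skew rule in the prerequisites above can be sketched as a small check. This is an illustrative helper, not an official tool; versions are given as `<major>.<minor>` strings.

```python
# Sketch of the kubectl version-skew rule: kubectl may be up to one
# minor version earlier or later than the cluster's Kubernetes version.

def kubectl_compatible(cluster: str, kubectl: str) -> bool:
    c_major, c_minor = (int(x) for x in cluster.split("."))
    k_major, k_minor = (int(x) for x in kubectl.split("."))
    return c_major == k_major and abs(c_minor - k_minor) <= 1
```

For a `1.29` cluster, this accepts `kubectl` `1.28`, `1.29`, and `1.30`, and rejects `1.27`.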

## Create a Pod Identity association (AWS Console)
<a name="pod-id-association-create"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to create the association in.

1. Choose the **Access** tab.

1. In the **Pod Identity associations** section, choose **Create**.

1. For the **IAM role**, select the IAM role with the permissions that you want the workload to have.
**Note**  
The list only contains roles that have the following trust policy, which allows EKS Pod Identity to use them.

   ```
   {
        "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
               "Effect": "Allow",
               "Principal": {
                    "Service": "pods.eks.amazonaws.com"
               },
               "Action": [
                   "sts:AssumeRole",
                   "sts:TagSession"
               ]
           }
       ]
   }
   ```

    `sts:AssumeRole` — EKS Pod Identity uses `AssumeRole` to assume the IAM role before passing the temporary credentials to your pods.

    `sts:TagSession` — EKS Pod Identity uses `TagSession` to include *session tags* in the requests to AWS STS.

   You can use these tags in the *condition keys* in the trust policy to restrict which service accounts, namespaces, and clusters can use this role.

   For a list of Amazon EKS condition keys, see [Conditions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-policy-keys) in the *Service Authorization Reference*. To learn which actions and resources you can use a condition key with, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions).

1. For the **Kubernetes namespace**, select the Kubernetes namespace that contains the service account and workload. Optionally, you can specify a namespace by name that doesn’t exist in the cluster.

1. For the **Kubernetes service account**, select the Kubernetes service account to use. The manifest for your Kubernetes workload must specify this service account. Optionally, you can specify a service account by name that doesn’t exist in the cluster.

1. (Optional) Select **Disable session tags** to disable the default session tags that Pod Identity automatically adds when it assumes the role.

1. (Optional) Toggle **Configure session policy** to configure an IAM policy to apply additional restrictions to this Pod Identity association beyond the permissions defined in the IAM policy attached to the IAM role.
**Note**  
A session policy can only be applied when the **Disable session tags** setting is checked.

1. (Optional) For the **Tags**, choose **Add tag** to add metadata in a key and value pair. These tags are applied to the association and can be used in IAM policies.

   You can repeat this step to add multiple tags.

1. Choose **Create**.
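The way session tags narrow a trust policy, as described in the steps above, can be sketched as follows. This is an illustrative evaluation of a `StringEquals` condition against session tags, not IAM's actual policy engine; the tag keys and values are examples.

```python
# Sketch: a trust-policy StringEquals condition on session tags allows
# assumption only when every required tag matches the tags that EKS
# Pod Identity sends with the AssumeRole request.

def condition_allows(required: dict, session_tags: dict) -> bool:
    return all(session_tags.get(k) == v for k, v in required.items())

required = {
    "kubernetes-namespace": "default",
    "kubernetes-service-account": "my-service-account",
}
```

A pod in the `default` namespace using `my-service-account` satisfies the condition; a pod in any other namespace does not.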

## Create a Pod Identity association (AWS CLI)
<a name="create_a_pod_identity_association_shared_aws_cli"></a>

1. If you want to attach an existing IAM policy to your IAM role, skip to the next step.

   Create an IAM policy. You can create your own policy, or copy an AWS managed policy that already grants some of the permissions that you need and customize it to your specific requirements. For more information, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

   1. Create a file that includes the permissions for the AWS services that you want your Pods to access. For a list of all actions for all AWS services, see the [Service Authorization Reference](https://docs.aws.amazon.com/service-authorization/latest/reference/).

      The following example policy allows read-only access to objects in an Amazon S3 bucket. You can optionally store configuration information or a bootstrap script in this bucket, and the containers in your Pod can read the file from the bucket and load it into your application. If you want to create this example policy, copy the following contents to a file named `my-policy.json`. Replace *my-pod-secrets-bucket* with your bucket name.

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::my-pod-secrets-bucket/*"
              }
          ]
      }
      ```

   1. Create the IAM policy.

      ```
      aws iam create-policy --policy-name my-policy --policy-document file://my-policy.json
      ```

1. Create an IAM role and associate it with a Kubernetes service account.

   1. If you have an existing Kubernetes service account that you want to assume an IAM role, then you can skip this step.

      Create a Kubernetes service account. Copy the following contents to your device. Replace *my-service-account* with your desired name and *default* with a different namespace, if necessary. If you change *default*, the namespace must already exist.

      ```
      cat >my-service-account.yaml <<EOF
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: my-service-account
        namespace: default
      EOF
      ```

      Run the following command.

      ```
      kubectl apply -f my-service-account.yaml
      ```

   1. Create a file named `trust-relationship.json` that contains the following trust policy for the IAM role.

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "pods.eks.amazonaws.com"
                  },
                  "Action": [
                      "sts:AssumeRole",
                      "sts:TagSession"
                  ]
              }
          ]
      }
      ```

   1. Create the role. Replace *my-role* with a name for your IAM role, and *my-role-description* with a description for your role.

      ```
      aws iam create-role --role-name my-role --assume-role-policy-document file://trust-relationship.json --description "my-role-description"
      ```

   1. Attach an IAM policy to your role. Replace *my-role* with the name of your IAM role and *my-policy* with the name of an existing policy that you created.

      ```
      aws iam attach-role-policy --role-name my-role --policy-arn=arn:aws:iam::111122223333:policy/my-policy
      ```
**Note**  
Unlike IAM roles for service accounts, EKS Pod Identity doesn’t use an annotation on the service account.

   1. Run the following command to create the association. Replace `my-cluster` with the name of the cluster, replace *my-service-account* with your desired name and *default* with a different namespace, if necessary.

      ```
      aws eks create-pod-identity-association --cluster-name my-cluster --role-arn arn:aws:iam::111122223333:role/my-role --namespace default --service-account my-service-account
      ```

      An example output is as follows.

      ```
      {
          "association": {
              "clusterName": "my-cluster",
              "namespace": "default",
              "serviceAccount": "my-service-account",
              "roleArn": "arn:aws:iam::111122223333:role/my-role",
              "associationArn": "arn:aws:eks:region-code:111122223333:podidentityassociation/my-cluster/a-abcdefghijklmnop1",
              "associationId": "a-abcdefghijklmnop1",
              "tags": {},
              "createdAt": 1700862734.922,
              "modifiedAt": 1700862734.922
          }
      }
      ```
**Note**  
You can specify a namespace and service account by name that doesn’t exist in the cluster. You must create the namespace, service account, and the workload that uses the service account for the EKS Pod Identity association to function.

## Confirm configuration
<a name="pod-id-confirm-role-configuration"></a>

1. Confirm that the IAM role’s trust policy is configured correctly.

   ```
   aws iam get-role --role-name my-role --query Role.AssumeRolePolicyDocument
   ```

   An example output is as follows.

   ```
   {
        "Version": "2012-10-17",
       "Statement": [
           {
                "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
               "Effect": "Allow",
               "Principal": {
                   "Service": "pods.eks.amazonaws.com"
               },
               "Action": [
                   "sts:AssumeRole",
                   "sts:TagSession"
               ]
           }
       ]
   }
   ```

1. Confirm that the policy that you attached to your role in a previous step is attached to the role.

   ```
   aws iam list-attached-role-policies --role-name my-role --query 'AttachedPolicies[].PolicyArn' --output text
   ```

   An example output is as follows.

   ```
   arn:aws:iam::111122223333:policy/my-policy
   ```

1. Set a variable to store the Amazon Resource Name (ARN) of the policy that you want to use. Replace *my-policy* with the name of the policy that you want to confirm permissions for.

   ```
   export policy_arn=arn:aws:iam::111122223333:policy/my-policy
   ```

1. View the default version of the policy.

   ```
   aws iam get-policy --policy-arn $policy_arn
   ```

   An example output is as follows.

   ```
   {
       "Policy": {
           "PolicyName": "my-policy",
           "PolicyId": "EXAMPLEBIOWGLDEXAMPLE",
           "Arn": "arn:aws:iam::111122223333:policy/my-policy",
           "Path": "/",
           "DefaultVersionId": "v1",
           [...]
       }
   }
   ```

1. View the policy contents to make sure that the policy includes all the permissions that your Pod needs. If necessary, replace *v1* in the following command with the version that’s returned in the previous output.

   ```
   aws iam get-policy-version --policy-arn $policy_arn --version-id v1
   ```

   An example output is as follows.

   ```
   {
        "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my-pod-secrets-bucket/*"
           }
       ]
   }
   ```

   If you created the example policy in a previous step, then your output is the same. If you created a different policy, then your output differs from this example.

## Next Steps
<a name="_next_steps"></a>

 [Configure Pods to access AWS services with service accounts](pod-id-configure-pods.md) 

# Access AWS Resources using EKS Pod Identity Target IAM Roles
<a name="pod-id-assign-target-role"></a>

When running applications on Amazon Elastic Kubernetes Service (Amazon EKS), you might need to access AWS resources that exist in different AWS accounts. This guide shows you how to set up cross-account access using EKS Pod Identity, which enables your Kubernetes pods to access AWS resources in other accounts using target roles.

## Prerequisites
<a name="_prerequisites"></a>

Before you begin, ensure you have completed the following steps:
+  [Set up the Amazon EKS Pod Identity Agent](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html) 
+  [Create an EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) 

## How It Works
<a name="_how_it_works"></a>

Pod Identity enables applications in your EKS cluster to access AWS resources across accounts through a process called role chaining.

When creating a Pod Identity association, you can provide two IAM roles: an [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) in the same account as your EKS cluster, and a Target IAM Role from the account that contains the AWS resources you wish to access (like S3 buckets or RDS databases).

The [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) must be in your EKS cluster’s account due to [IAM PassRole](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_iam-passrole-service.html) requirements, while the Target IAM Role can be in any AWS account. PassRole enables an AWS entity to delegate role assumption to another service. EKS Pod Identity uses PassRole to connect a role to a Kubernetes service account, requiring both the role and the identity passing it to be in the same AWS account as the EKS cluster.

When your application pod needs to access AWS resources, it requests credentials from Pod Identity. Pod Identity then automatically performs two role assumptions in sequence: first assuming the [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html), then using those credentials to assume the Target IAM Role. This process provides your pod with temporary credentials that have the permissions defined in the target role, allowing secure access to resources in other AWS accounts.
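The two sequential role assumptions described above can be sketched as a toy simulation. The function and credential strings are placeholders standing in for `sts:AssumeRole`, and the account IDs and role names are examples, not real resources.

```python
# Illustrative simulation of role chaining: the agent's credentials
# assume the EKS Pod Identity role, and those credentials in turn
# assume the Target IAM Role in another account.

def assume_role(caller_creds: str, role_arn: str) -> str:
    """Return pretend temporary credentials scoped to role_arn."""
    return f"tmp-creds-for:{role_arn}"

def chain(pod_identity_role: str, target_role: str) -> str:
    step1 = assume_role("agent-creds", pod_identity_role)  # same account as the cluster
    step2 = assume_role(step1, target_role)                # may be a different account
    return step2

creds = chain(
    "arn:aws:iam::111122223333:role/eks-pod-identity-primary-role",
    "arn:aws:iam::222233334444:role/eks-pod-identity-aws-resources",
)
```

The credentials the pod ultimately receives carry the permissions of the target role, which is the point of the chaining.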

## Caching considerations
<a name="_caching_considerations"></a>

Due to caching mechanisms, updates to an IAM role in an existing Pod Identity association may not take effect immediately in the pods running on your EKS cluster. The Pod Identity Agent caches IAM credentials based on the association’s configuration at the time the credentials are fetched. If the association includes only an [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) and no Target IAM Role, the cached credentials last for 6 hours. If the association includes both the [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) ARN and a Target IAM Role, the cached credentials last for 59 minutes. Modifying an existing association, such as updating the [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) ARN or adding a Target IAM Role, does not reset the existing cache. As a result, the agent will not recognize updates until the cached credentials refresh. To apply changes sooner, you can recreate the existing pods; otherwise, you will need to wait for the cache to expire.
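The cache-duration rule above can be summarized in a tiny helper. This is an illustrative restatement of the documented durations, not agent source code.

```python
# Sketch of the documented cache lifetimes: 6 hours for an association
# with only an EKS Pod Identity role, 59 minutes when a Target IAM Role
# is also configured.

def cache_ttl_minutes(has_target_role: bool) -> int:
    return 59 if has_target_role else 6 * 60
```

Until that TTL elapses (or the pods are recreated), the agent keeps serving the previously fetched credentials, which is why association updates are not picked up immediately.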

## Step 1: Create and associate a Target IAM Role
<a name="_step_1_create_and_associate_a_target_iam_role"></a>

In this step, you will establish a secure trust chain by creating and configuring a Target IAM Role. To demonstrate, we will create a new Target IAM Role that establishes a trust chain between two AWS accounts: the [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) (e.g., `eks-pod-identity-primary-role`) in the EKS cluster’s AWS account gains permission to assume the Target IAM Role (e.g., `eks-pod-identity-aws-resources`) in your target account, enabling access to AWS resources like Amazon S3 buckets.

### Create the Target IAM Role
<a name="_create_the_target_iam_role"></a>

1. Open the [Amazon IAM console](https://console.aws.amazon.com/iam/home).

1. In the top navigation bar, verify that you are signed into the account containing the AWS resources (like S3 buckets or DynamoDB tables) for your Target IAM Role.

1. In the left navigation pane, choose **Roles**.

1. Choose the **Create role** button, then choose **AWS account** under **Trusted entity type**.

1. Choose **Another AWS account**, enter your AWS account number (the account where your [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) exists), then choose **Next**.

1. Add the permission policies that you would like to attach to the role (e.g., AmazonS3FullAccess), then choose **Next**.

1. Enter a role name, such as `MyCustomIAMTargetRole`, then choose **Create role**.

### Update the Target IAM Role trust policy
<a name="_update_the_target_iam_role_trust_policy"></a>

1. After creating the role, you’ll be returned to the **Roles** list. Find and select the new role you created in the previous step (e.g., `MyCustomIAMTargetRole`).

1. Select the **Trust relationships** tab.

1. Click **Edit trust policy** on the right side.

1. In the policy editor, replace the default JSON with your trust policy. In the IAM role ARN, replace the placeholder role name and `111122223333` with your role name and the AWS account ID that hosts your EKS cluster. You can also optionally use principal tags in the role trust policy to authorize only specific service accounts from a given cluster and namespace to assume your target role. For example:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ],
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/eks-cluster-arn": "arn:aws:eks:us-east-1:111122223333:cluster/example-cluster",
          "aws:RequestTag/kubernetes-namespace": "ExampleNameSpace",
          "aws:RequestTag/kubernetes-service-account": "ExampleServiceAccountName"
        },
        "ArnEquals": {
          "aws:PrincipalARN": "arn:aws:iam::111122223333:role/eks-pod-identity-primary-role"
        }
      }
    }
  ]
}
```

The above policy lets the role `eks-pod-identity-primary-role` from AWS account 111122223333, with the relevant [EKS Pod Identity Session Tags](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-abac.html), assume this role.

If you [disabled session tags](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-abac.html#pod-id-abac-tags) for your EKS Pod Identity association, EKS Pod Identity sets the `sts:ExternalId` with information about the cluster, namespace, and service account of a pod when assuming a target role. For example:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "region/111122223333/cluster-name/namespace/service-account-name"
        },
        "ArnEquals": {
          "aws:PrincipalARN": "arn:aws:iam::111122223333:role/eks-pod-identity-primary-role"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:TagSession"
    }
  ]
}
```

The above policy helps ensure that only the expected cluster, namespace and service account can assume the target role.
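The `sts:ExternalId` format shown in the condition above can be sketched with a hypothetical helper; the function name and the example region, account, and cluster values are illustrative.

```python
# Sketch of the ExternalId layout used in the trust-policy condition:
# region/account-id/cluster-name/namespace/service-account-name.

def external_id(region, account, cluster, namespace, service_account):
    return "/".join([region, account, cluster, namespace, service_account])

eid = external_id("us-east-1", "111122223333", "example-cluster",
                  "default", "my-service-account")
```

A `StringEquals` condition on `sts:ExternalId` then matches only a pod from that exact cluster, namespace, and service account.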

### Update the permission policy for EKS Pod Identity role
<a name="_update_the_permission_policy_for_eks_pod_identity_role"></a>

In this step, you will update the permission policy of the [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) associated with your Amazon EKS cluster by adding the Target IAM Role ARN as a resource.

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of your EKS cluster.

1. Choose the **Access** tab.

1. Under **Pod Identity associations**, select your [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html).

1. Choose **Permissions**, **Add permissions**, then **Create inline policy**.

1. Choose **JSON** on the right side.

1. In the policy editor, replace the default JSON with your permission policy. In the IAM role ARN, replace the placeholder role name and `222233334444` with the name and AWS account ID of your Target IAM Role. For example:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ],
            "Resource": "arn:aws:iam::222233334444:role/eks-pod-identity-aws-resources"
        }
    ]
}
```

## Step 2: Associate the Target IAM Role to a Kubernetes service account
<a name="_step_2_associate_the_target_iam_role_to_a_kubernetes_service_account"></a>

In this step, you will create an association between the Target IAM role and the Kubernetes service account in your EKS cluster.

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to add the association to.

1. Choose the **Access** tab.

1. In the **Pod Identity associations** section, choose **Create**.

1. For **IAM role**, choose the [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) for your workloads to assume.

1. For **Target IAM role**, choose the Target IAM Role that the [EKS Pod Identity role](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-role.html) will assume.

1. In the **Kubernetes namespace** field, enter the name of the namespace where you want to create the association (e.g., `my-app-namespace`). This defines where the service account resides.

1. In the **Kubernetes service account** field, enter the name of the service account (e.g., `my-service-account`) that will use the IAM credentials. This links the IAM role to the service account.

1. (Optional) Select **Disable session tags** to disable the default session tags that Pod Identity automatically adds when it assumes the role.

1. (Optional) Toggle **Configure session policy** to apply an IAM session policy that places additional restrictions on this Pod Identity association beyond the permissions defined in the IAM policy attached to the **Target IAM role**.
**Note**  
1. A session policy can only be applied when the **Disable session tags** setting is checked.
1. If you specify a session policy, the policy restrictions apply to the **Target IAM role**'s permissions, not to the **IAM role** associated with this Pod Identity association.

1. Choose **Create** to create the association.
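The console steps above can also be performed with a single AWS CLI command. This is a sketch; all names and ARNs are example values, and the `--target-role-arn` flag requires a recent AWS CLI version:

```
# Remove the leading "echo" to run the command (requires AWS credentials).
echo aws eks create-pod-identity-association \
    --cluster-name my-cluster \
    --namespace my-app-namespace \
    --service-account my-service-account \
    --role-arn arn:aws:iam::111122223333:role/eks-pod-identity-role \
    --target-role-arn arn:aws:iam::222233334444:role/eks-pod-identity-aws-resources
```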

# Configure Pods to access AWS services with service accounts
<a name="pod-id-configure-pods"></a>

If a Pod needs to access AWS services, then you must configure it to use a Kubernetes service account. The service account must be associated with an AWS Identity and Access Management (IAM) role that has permissions to access those AWS services. Before you begin, make sure that you have the following:
+ An existing cluster. If you don’t have one, you can create one using one of the guides in [Get started with Amazon EKS](getting-started.md).
+ An existing Kubernetes service account and an EKS Pod Identity association that associates the service account with an IAM role. The role must have an associated IAM policy that contains the permissions that you want your Pods to have to use AWS services. For more information about how to create the service account and role, and configure them, see [Assign an IAM role to a Kubernetes service account](pod-id-association.md).
+ The latest version of the AWS CLI installed and configured on your device or AWS CloudShell. You can check your current version with `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the AWS Command Line Interface User Guide. The AWS CLI version installed in the AWS CloudShell may also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the AWS CloudShell User Guide.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ An existing `kubectl` `config` file that contains your cluster configuration. To create a `kubectl` `config` file, see [Connect kubectl to an EKS cluster by creating a kubeconfig file](create-kubeconfig.md).

  1. Use the following command to create a deployment manifest that you can deploy a Pod to confirm configuration with. Replace the example values with your own values.

     ```
     cat >my-deployment.yaml <<EOF
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: my-app
     spec:
       selector:
         matchLabels:
           app: my-app
       template:
         metadata:
           labels:
             app: my-app
         spec:
           serviceAccountName: my-service-account
           containers:
           - name: my-app
             image: public.ecr.aws/nginx/nginx:X.XX
     EOF
     ```

  1. Deploy the manifest to your cluster.

     ```
     kubectl apply -f my-deployment.yaml
     ```

  1. Confirm that the required environment variables exist for your Pod.

     1. View the Pods that were deployed with the deployment in the previous step.

        ```
        kubectl get pods | grep my-app
        ```

        An example output is as follows.

        ```
        my-app-6f4dfff6cb-76cv9   1/1     Running   0          3m28s
        ```

     1. Confirm that the Pod has a service account token file mount.

        ```
        kubectl describe pod my-app-6f4dfff6cb-76cv9 | grep AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE:
        ```

        An example output is as follows.

        ```
        AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE:  /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
        ```

  1. Confirm that your Pods can interact with the AWS services using the permissions that you assigned in the IAM policy attached to your role.
**Note**  
When a Pod uses AWS credentials from an IAM role that’s associated with a service account, the AWS CLI or other SDKs in the containers for that Pod use the credentials that are provided by that role. If you don’t restrict access to the credentials that are provided to the [Amazon EKS node IAM role](create-node-role.md), the Pod still has access to these credentials. For more information, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

     If your Pods can’t interact with the services as you expected, complete the following steps to confirm that everything is properly configured.

     1. Confirm that your Pods use an AWS SDK version that supports assuming an IAM role through an EKS Pod Identity association. For more information, see [Use pod identity with the AWS SDK](pod-id-minimum-sdk.md).

     1. Confirm that the deployment is using the service account.

        ```
        kubectl describe deployment my-app | grep "Service Account"
        ```

        An example output is as follows.

        ```
        Service Account:  my-service-account
        ```

# Grant Pods access to AWS resources based on tags
<a name="pod-id-abac"></a>

Attribute-based access control (ABAC) grants rights to users through policies that combine attributes. EKS Pod Identity attaches tags with attributes such as cluster name, namespace, and service account name to the temporary credentials for each Pod. These role session tags enable administrators to author a single role that works across service accounts by allowing access to AWS resources based on matching tags. By adding support for role session tags, you can enforce tighter security boundaries between clusters, and between workloads within clusters, while reusing the same IAM roles and IAM policies.

## Sample policy with tags
<a name="_sample_policy_with_tags"></a>

Below is an IAM policy example that grants `s3:GetObject` permissions when the corresponding object is tagged with the EKS cluster name.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectTagging"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "s3:ExistingObjectTag/eks-cluster-name": "${aws:PrincipalTag/eks-cluster-name}"
                }
            }
        }
    ]
}
```
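The `StringEquals` condition above compares the object's `eks-cluster-name` tag with the session tag of the same name on the Pod's credentials. A toy sketch of that matching logic (not the real IAM evaluator):

```
# Access is allowed only when the S3 object's eks-cluster-name tag matches
# the eks-cluster-name session tag attached to the Pod's credentials.
def get_object_allowed(object_tags: dict, principal_tags: dict) -> bool:
    return object_tags.get("eks-cluster-name") == principal_tags.get("eks-cluster-name")

session_tags = {"eks-cluster-name": "my-cluster", "kubernetes-namespace": "my-app-namespace"}
print(get_object_allowed({"eks-cluster-name": "my-cluster"}, session_tags))    # True
print(get_object_allowed({"eks-cluster-name": "other-cluster"}, session_tags)) # False
```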

## Enable or disable session tags
<a name="pod-id-abac-tags"></a>

EKS Pod Identity adds a pre-defined set of session tags when it assumes the role. These session tags enable administrators to author a single role that can work across resources by allowing access to AWS resources based on matching tags.

### Enable session tags
<a name="_enable_session_tags"></a>

Session tags are automatically enabled with EKS Pod Identity; no action is required on your part. By default, EKS Pod Identity attaches the following predefined tags to your session. To reference a tag in a policy, use `${aws:PrincipalTag/` followed by the tag key and a closing brace. For example, `${aws:PrincipalTag/kubernetes-namespace}`.
+  `eks-cluster-arn` 
+  `eks-cluster-name` 
+  `kubernetes-namespace` 
+  `kubernetes-service-account` 
+  `kubernetes-pod-name` 
+  `kubernetes-pod-uid` 
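For example, a policy statement could scope `s3:ListBucket` to key prefixes that match the Pod's namespace. This is a sketch; the bucket name is a placeholder:

```
{
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
    "Condition": {
        "StringLike": {
            "s3:prefix": "${aws:PrincipalTag/kubernetes-namespace}/*"
        }
    }
}
```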

### Disable session tags
<a name="_disable_session_tags"></a>

 AWS compresses inline session policies, managed policy ARNs, and session tags into a packed binary format that has a separate limit. If you receive a `PackedPolicyTooLarge` error indicating the packed binary format has exceeded the size limit, you can attempt to reduce the size by disabling the session tags added by EKS Pod Identity. To disable these session tags, follow these steps:

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to modify.

1. Choose the **Access** tab.

1. Under **Pod Identity associations**, select the **Association ID** of the association that you want to modify, then choose **Edit**.

1. Under **Session tags**, choose **Disable session tags**.

1. Choose **Save changes**.

## Cross-account tags
<a name="pod-id-abac-chaining"></a>

All of the session tags that are added by EKS Pod Identity are *transitive*; the tag keys and values are passed to any `AssumeRole` actions that your workloads use to switch roles into another account. You can use these tags in policies in other accounts to limit access in cross-account scenarios. For more information, see [Chaining roles with session tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_role-chaining) in the *IAM User Guide*.

## Custom tags
<a name="pod-id-abac-custom-tags"></a>

EKS Pod Identity can’t add additional custom tags to the `AssumeRole` action that it performs. However, tags that you apply to the IAM role are always available through the same format: `${aws:PrincipalTag/` followed by the key, for example `${aws:PrincipalTag/MyCustomTag}`.

**Note**  
Tags added to the session through the `sts:AssumeRole` request take precedence in the case of a conflict. For example, suppose that:
+ Amazon EKS adds the key `eks-cluster-name` with the value `my-cluster` to the session when EKS assumes the customer role, and
+ you add an `eks-cluster-name` tag to the IAM role with the value `my-own-cluster`.

In this case, the former takes precedence and the value of the `eks-cluster-name` tag is `my-cluster`.

# Use pod identity with the AWS SDK
<a name="pod-id-minimum-sdk"></a>

## Using EKS Pod Identity credentials
<a name="pod-id-using-creds"></a>

To use the credentials from an EKS Pod Identity association, your code can use any AWS SDK to create a client for an AWS service. By default, the SDK searches a chain of locations for AWS Identity and Access Management credentials. The EKS Pod Identity credentials are used if you don’t specify a credential provider when you create the client or otherwise initialize the SDK.

This works because EKS Pod Identity has been added to the *container credential provider*, which is searched in a step in the default credential chain. If your workloads currently use credentials that are earlier in the chain, those credentials will continue to be used even if you configure an EKS Pod Identity association for the same workload.

For more information about how EKS Pod Identities work, see [Understand how EKS Pod Identity works](pod-id-how-it-works.md).
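Concretely, the EKS Pod Identity Agent injects environment variables that the container credential provider reads; no credential configuration is needed in your code. The sketch below prints those variables, falling back to illustrative values when run outside a Pod:

```
import os

# Inside a Pod with a Pod Identity association, the EKS Pod Identity Agent sets
# these variables; AWS SDKs read them through the container credential provider.
# The fallback values are illustrative of what the agent sets.
token_file = os.environ.get(
    "AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE",
    "/var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token",
)
credentials_uri = os.environ.get(
    "AWS_CONTAINER_CREDENTIALS_FULL_URI",
    "http://169.254.170.23/v1/credentials",
)
print(token_file)
print(credentials_uri)
```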

When using [Learn how EKS Pod Identity grants pods access to AWS services](pod-identities.md), the containers in your Pods must use an AWS SDK version that supports assuming an IAM role from the EKS Pod Identity Agent. Make sure that you’re using the following versions, or later, for your AWS SDK:
+ Java (Version 2) – [2.21.30](https://github.com/aws/aws-sdk-java-v2/releases/tag/2.21.30) 
+ Java – [1.12.746](https://github.com/aws/aws-sdk-java/releases/tag/1.12.746) 
+ Go v1 – [v1.47.11](https://github.com/aws/aws-sdk-go/releases/tag/v1.47.11) 
+ Go v2 – [release-2023-11-14](https://github.com/aws/aws-sdk-go-v2/releases/tag/release-2023-11-14) 
+ Python (Boto3) – [1.34.41](https://github.com/boto/boto3/releases/tag/1.34.41) 
+ Python (botocore) – [1.34.41](https://github.com/boto/botocore/releases/tag/1.34.41) 
+  AWS CLI v1 – [1.30.0](https://github.com/aws/aws-cli/releases/tag/1.30.0) 
+  AWS CLI v2 – [2.15.0](https://github.com/aws/aws-cli/releases/tag/2.15.0) 
+ JavaScript v2 – [2.1550.0](https://github.com/aws/aws-sdk-js/releases/tag/v2.1550.0) 
+ JavaScript v3 – [v3.458.0](https://github.com/aws/aws-sdk-js-v3/releases/tag/v3.458.0) 
+ Kotlin – [v1.0.1](https://github.com/awslabs/aws-sdk-kotlin/releases/tag/v1.0.1) 
+ Ruby – [3.188.0](https://github.com/aws/aws-sdk-ruby/blob/version-3/gems/aws-sdk-core/CHANGELOG.md#31880-2023-11-22) 
+ Rust – [release-2024-03-13](https://github.com/awslabs/aws-sdk-rust/releases/tag/release-2024-03-13) 
+ C++ – [1.11.263](https://github.com/aws/aws-sdk-cpp/releases/tag/1.11.263) 
+ .NET – [3.7.734.0](https://github.com/aws/aws-sdk-net/releases/tag/3.7.734.0) 
+ PowerShell – [4.1.502](https://www.powershellgallery.com/packages/AWS.Tools.Common/4.1.502) 
+ PHP – [3.287.1](https://github.com/aws/aws-sdk-php/releases/tag/3.287.1) 

To ensure that you’re using a supported SDK, follow the installation instructions for your preferred SDK at [Tools to Build on AWS](https://aws.amazon.com/tools/) when you build your containers.

For a list of add-ons that support EKS Pod Identity, see [Pod Identity Support Reference](retreive-iam-info.md#pod-id-add-on-versions).

# Disable `IPv6` in the EKS Pod Identity Agent
<a name="pod-id-agent-config-ipv6"></a>

## AWS Management Console
<a name="pod-id-console"></a>

1. To disable `IPv6` in the EKS Pod Identity Agent, add the following configuration to the **Optional configuration settings** of the EKS Add-on.

   1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

   1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the add-on for.

   1. Choose the **Add-ons** tab.

   1. Select the box in the top right of the EKS Pod Identity Agent add-on box and then choose **Edit**.

   1. On the **Configure EKS Pod Identity Agent** page:

      1. Select the **Version** that you’d like to use. We recommend that you keep the same version as the previous step, and update the version and configuration in separate actions.

      1. Expand the **Optional configuration settings**.

      1. Enter the JSON key `"agent":` and value of a nested JSON object with a key `"additionalArgs":` in **Configuration values**. The resulting text must be a valid JSON object. If this key and value are the only data in the text box, surround the key and value with curly braces `{ }`. The following example binds the agent to the `IPv4` link-local address only:

         ```
         {
             "agent": {
                 "additionalArgs": {
                     "-b": "169.254.170.23"
                 }
             }
         }
         ```

         This configuration sets the `IPv4` address to be the only address used by the agent.

   1. To apply the new configuration by replacing the EKS Pod Identity Agent pods, choose **Save changes**.

      Amazon EKS applies changes to the EKS Add-ons by using a *rollout* of the Kubernetes `DaemonSet` for EKS Pod Identity Agent. You can track the status of the rollout in the **Update history** of the add-on in the AWS Management Console and with `kubectl rollout status daemonset/eks-pod-identity-agent --namespace kube-system`.

       `kubectl rollout` has the following commands:

      ```
      $ kubectl rollout
      
      history  -- View rollout history
      pause    -- Mark the provided resource as paused
      restart  -- Restart a resource
      resume   -- Resume a paused resource
      status   -- Show the status of the rollout
      undo     -- Undo a previous rollout
      ```

      If the rollout takes too long, Amazon EKS will undo the rollout, and a message with the type of **Addon Update** and a status of **Failed** will be added to the **Update history** of the add-on. To investigate any issues, start from the history of the rollout, and run `kubectl logs` on an EKS Pod Identity Agent pod to see its logs.

1. If the new entry in the **Update history** has a status of **Successful**, then the rollout has completed and the add-on is using the new configuration in all of the EKS Pod Identity Agent pods.

## AWS CLI
<a name="pod-id-cli"></a>

1. To disable `IPv6` in the EKS Pod Identity Agent, add the following configuration to the **configuration values** of the EKS Add-on.

   Run the following AWS CLI command. Replace `my-cluster` with the name of your cluster.

   ```
   aws eks update-addon --cluster-name my-cluster --addon-name eks-pod-identity-agent \
       --resolve-conflicts PRESERVE --configuration-values '{"agent":{"additionalArgs": { "-b": "169.254.170.23"}}}'
   ```

   This configuration sets the `IPv4` address to be the only address used by the agent.

   Amazon EKS applies changes to the EKS Add-ons by using a *rollout* of the Kubernetes DaemonSet for EKS Pod Identity Agent. You can track the status of the rollout in the **Update history** of the add-on in the AWS Management Console and with `kubectl rollout status daemonset/eks-pod-identity-agent --namespace kube-system`.

    `kubectl rollout` has the following commands:

   ```
   kubectl rollout
   
   history  -- View rollout history
   pause    -- Mark the provided resource as paused
   restart  -- Restart a resource
   resume   -- Resume a paused resource
   status   -- Show the status of the rollout
   undo     -- Undo a previous rollout
   ```

   If the rollout takes too long, Amazon EKS will undo the rollout, and a message with the type of **Addon Update** and a status of **Failed** will be added to the **Update history** of the add-on. To investigate any issues, start from the history of the rollout, and run `kubectl logs` on an EKS Pod Identity Agent pod to see its logs.

# Create IAM role with trust policy required by EKS Pod Identity
<a name="pod-id-role"></a>

For your Pods to receive credentials through EKS Pod Identity, the IAM role must trust the EKS Pod Identity service principal. Create the role with the following trust policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}
```

 ** `sts:AssumeRole` **   
EKS Pod Identity uses `AssumeRole` to assume the IAM role before passing the temporary credentials to your pods.

 ** `sts:TagSession` **   
EKS Pod Identity uses `TagSession` to include *session tags* in the requests to AWS STS.

 **Setting Conditions**   
You can use these tags in *condition keys* in the trust policy to restrict which service accounts, namespaces, and clusters can use this role. For the list of request tags that Pod Identity adds, see [Enable or disable session tags](pod-id-abac.md#pod-id-abac-tags).  
For example, you can restrict a Pod Identity IAM role to a specific `ServiceAccount` and `Namespace` with the following trust policy that adds a `Condition`:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/kubernetes-namespace": [
                        "Namespace"
                    ],
                    "aws:RequestTag/kubernetes-service-account": [
                        "ServiceAccount"
                    ]
                }
            }
        }
    ]
}
```
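Similarly, you can pin the role to a single cluster by matching the `eks-cluster-arn` request tag. A sketch of only the added `Condition` (the cluster ARN is a placeholder):

```
"Condition": {
    "StringEquals": {
        "aws:RequestTag/eks-cluster-arn": "arn:aws:eks:us-west-2:111122223333:cluster/my-cluster"
    }
}
```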

For a list of Amazon EKS condition keys, see [Conditions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-policy-keys) in the *Service Authorization Reference*. To learn which actions and resources you can use a condition key with, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions).