


# Grant IAM users access to Kubernetes with EKS access entries
<a name="access-entries"></a>

This section shows you how to manage IAM principal access to Kubernetes clusters in Amazon Elastic Kubernetes Service (EKS) using access entries and policies. You’ll find details on changing authentication modes, migrating from legacy `aws-auth` `ConfigMap` entries, creating, updating, and deleting access entries, associating policies with entries, reviewing predefined policy permissions, and key prerequisites and considerations for secure access management.

## Overview
<a name="_overview"></a>

EKS access entries are the recommended way to grant users access to the Kubernetes API. For example, you can use access entries to grant developers access to use `kubectl`. Fundamentally, an EKS access entry associates a set of Kubernetes permissions with an IAM identity, such as an IAM role. A developer might assume an IAM role and use it to authenticate to an EKS cluster.

## Features
<a name="_features"></a>
+  **Centralized Authentication and Authorization**: Controls access to Kubernetes clusters directly via Amazon EKS APIs, eliminating the need to switch between AWS and Kubernetes APIs for user permissions.
+  **Granular Permissions Management**: Uses access entries and policies to define fine-grained permissions for AWS IAM principals, including modifying or revoking cluster-admin access from the creator.
+  **IaC Tool Integration**: Supports infrastructure as code tools like AWS CloudFormation, Terraform, and AWS CDK to define access configurations during cluster creation.
+  **Misconfiguration Recovery**: Allows restoring cluster access through the Amazon EKS API without direct Kubernetes API access.
+  **Reduced Overhead and Enhanced Security**: Centralizes operations to lower overhead while leveraging AWS IAM features like CloudTrail audit logging and multi-factor authentication.
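Because access entries are managed through the Amazon EKS API, changes to them are recorded by AWS CloudTrail like any other AWS API call. For example, assuming CloudTrail is enabled in the current Region, you can audit recent access entry creations:

```
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateAccessEntry \
    --max-results 5
```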

## How to attach permissions
<a name="_how_to_attach_permissions"></a>

You can attach Kubernetes permissions to access entries in two ways:
+ Use an access policy. Access policies are pre-defined Kubernetes permissions templates maintained by AWS. For more information, see [Review access policy permissions](access-policy-permissions.md).
+ Reference a Kubernetes group. If you associate an IAM identity with a Kubernetes group, you can create Kubernetes resources that grant the group permissions. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
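If you take the Kubernetes group approach, the group name on the access entry is granted permissions through standard RBAC objects. For example, the following sketch binds the built-in `view` `ClusterRole` to a hypothetical group named `my-viewers`; any access entry that lists `my-viewers` in its Kubernetes groups then has read-only access cluster-wide:

```
kubectl create clusterrolebinding my-viewers-view \
    --clusterrole=view \
    --group=my-viewers
```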

## Considerations
<a name="_considerations"></a>

When enabling EKS access entries on existing clusters, keep the following in mind:
+  **Legacy Cluster Behavior**: For clusters created before the introduction of access entries (those with initial platform versions earlier than specified in [Platform version requirements](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html)), EKS automatically creates an access entry reflecting pre-existing permissions. This entry includes the IAM identity that originally created the cluster and the administrative permissions granted to that identity during cluster creation.
+  **Handling Legacy `aws-auth` ConfigMap**: If your cluster relies on the legacy `aws-auth` ConfigMap for access management, only the access entry for the original cluster creator is automatically created upon enabling access entries. Additional roles or permissions added to the ConfigMap (e.g., custom IAM roles for developers or services) are not automatically migrated. To address this, manually create corresponding access entries.
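Before enabling access entries on an existing cluster, you can confirm which authentication mode the cluster currently uses. Replace *my-cluster* with the name of your cluster:

```
aws eks describe-cluster --name my-cluster \
    --query 'cluster.accessConfig.authenticationMode' --output text
```

The output is `CONFIG_MAP`, `API_AND_CONFIG_MAP`, or `API`.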

## Get started
<a name="_get_started"></a>

1. Determine the IAM identity and access policy you want to use.
   +  [Review access policy permissions](access-policy-permissions.md) 

1. Enable EKS access entries on your cluster. Confirm that you have a supported platform version.
   +  [Change authentication mode to use access entries](setting-up-access-entries.md) 

1. Create an access entry that associates an IAM identity with Kubernetes permissions.
   +  [Create access entries](creating-access-entries.md) 

1. Authenticate to the cluster using the IAM identity.
   +  [Set up AWS CLI](install-awscli.md) 
   +  [Set up `kubectl` and `eksctl`](install-kubectl.md) 
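Put together, the steps above might look like the following sketch, which uses a hypothetical IAM role named *my-developer-role* and the predefined `AmazonEKSViewPolicy` access policy. Replace *my-cluster*, *111122223333*, and the role name with your own values:

```
aws eks create-access-entry --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-developer-role

aws eks associate-access-policy --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-developer-role \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
    --access-scope type=cluster

aws eks update-kubeconfig --name my-cluster
kubectl get pods --all-namespaces
```

Run the last two commands after assuming *my-developer-role*, so that `kubectl` authenticates with that identity.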

# Associate access policies with access entries
<a name="access-policies"></a>

You can assign one or more access policies to *access entries* of *type* `STANDARD`. Amazon EKS automatically grants the other types of access entries the permissions required to function properly in your cluster. Amazon EKS access policies include Kubernetes permissions, not IAM permissions. Before associating an access policy to an access entry, make sure that you’re familiar with the Kubernetes permissions included in each access policy. For more information, see [Review access policy permissions](access-policy-permissions.md). If none of the access policies meet your requirements, then don’t associate an access policy to an access entry. Instead, specify one or more *group names* for the access entry and create and manage Kubernetes role-based access control objects. For more information, see [Create access entries](creating-access-entries.md).

Before you begin, you need the following:
+ An existing access entry. To create one, see [Create access entries](creating-access-entries.md).
+ An AWS Identity and Access Management role or user with the following permissions: `ListAccessEntries`, `DescribeAccessEntry`, `UpdateAccessEntry`, `ListAccessPolicies`, `AssociateAccessPolicy`, and `DisassociateAccessPolicy`. For more information, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions) in the *Service Authorization Reference*.

Before associating access policies with access entries, consider the following requirements:
+ You can associate multiple access policies to each access entry, but you can only associate each policy to an access entry once. If you associate multiple access policies, the access entry’s IAM principal has all permissions included in all associated access policies.
+ You can scope an access policy to all resources on a cluster or by specifying the name of one or more Kubernetes namespaces. You can use wildcard characters for a namespace name. For example, if you want to scope an access policy to all namespaces that start with `dev-`, you can specify `dev-*` as a namespace name. Make sure that the namespaces exist on your cluster and that your spelling matches the actual namespace name on the cluster. Amazon EKS doesn’t confirm the spelling or existence of the namespaces on your cluster.
+ You can change the *access scope* for an access policy after you associate it to an access entry. If you’ve scoped the access policy to Kubernetes namespaces, you can add and remove namespaces for the association, as necessary.
+ If you associate an access policy to an access entry that also has *group names* specified, then the IAM principal has all the permissions in all associated access policies. It also has all the permissions in any Kubernetes `Role` or `ClusterRole` object that is specified in any Kubernetes `Role` and `RoleBinding` objects that specify the group names.
+ If you run the `kubectl auth can-i --list` command, you won’t see any Kubernetes permissions assigned by access policies associated with an access entry for the IAM principal you’re using when you run the command. The command only shows Kubernetes permissions if you’ve granted them in Kubernetes `Role` or `ClusterRole` objects that you’ve bound to the group names or username that you specified for an access entry.
+ If you impersonate a Kubernetes user or group when interacting with Kubernetes objects on your cluster, such as using the `kubectl` command with `--as username` or `--as-group group-name`, you force the use of Kubernetes RBAC authorization. As a result, the IAM principal has no permissions assigned by any access policies associated to the access entry. The only Kubernetes permissions that the impersonated user or group has are those granted in Kubernetes `Role` or `ClusterRole` objects bound to the group names or username specified for the access entry. For your IAM principal to have the permissions in associated access policies, don’t impersonate a Kubernetes user or group. The IAM principal will still also have any permissions granted through `Role` or `ClusterRole` objects bound to the group names or username specified for the access entry. For more information, see [User impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) in the Kubernetes documentation.
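Note that the `kubectl auth can-i --list` limitation above applies only to the list form. Checking a single verb and resource issues a `SelfSubjectAccessReview`, which is evaluated by the cluster's authorizers and so should reflect permissions granted by associated access policies:

```
kubectl auth can-i get pods --namespace my-namespace1
```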

You can associate an access policy to an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-associate-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that has an access entry that you want to associate an access policy to.

1. Choose the **Access** tab.

1. If the type of the access entry is **Standard**, you can associate or disassociate Amazon EKS **access policies**. If the type of your access entry is anything other than **Standard**, then this option isn’t available.

1. Choose **Associate access policy**.

1. For **Policy name**, select the policy with the permissions you want the IAM principal to have. To view the permissions included in each policy, see [Review access policy permissions](access-policy-permissions.md).

1. For **Access scope**, choose an access scope. If you choose **Cluster**, the permissions in the access policy are granted to the IAM principal for resources in all Kubernetes namespaces. If you choose **Kubernetes namespace**, you can then choose **Add new namespace**. In the **Namespace** field that appears, you can enter the name of a Kubernetes namespace on your cluster. If you want the IAM principal to have the permissions across multiple namespaces, then you can enter multiple namespaces.

1. Choose **Add access policy**.

## AWS CLI
<a name="access-associate-cli"></a>

1. Confirm that version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) is installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.

1. View the available access policies.

   ```
   aws eks list-access-policies --output table
   ```

   An example output is as follows.

   ```
   ---------------------------------------------------------------------------------------------------------
   |                                          ListAccessPolicies                                           |
   +-------------------------------------------------------------------------------------------------------+
   ||                                           accessPolicies                                            ||
   |+---------------------------------------------------------------------+-------------------------------+|
   ||                                 arn                                 |             name              ||
   |+---------------------------------------------------------------------+-------------------------------+|
   ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy         |  AmazonEKSAdminPolicy         ||
   ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy  |  AmazonEKSClusterAdminPolicy  ||
   ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy          |  AmazonEKSEditPolicy          ||
   ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy          |  AmazonEKSViewPolicy          ||
   |+---------------------------------------------------------------------+-------------------------------+|
   ```

   To view the permissions included in each policy, see [Review access policy permissions](access-policy-permissions.md).

1. View your existing access entries. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks list-access-entries --cluster-name my-cluster
   ```

   An example output is as follows.

   ```
   {
       "accessEntries": [
           "arn:aws:iam::111122223333:role/my-role",
           "arn:aws:iam::111122223333:user/my-user"
       ]
   }
   ```

1. Associate an access policy to an access entry. The following example associates the `AmazonEKSViewPolicy` access policy to an access entry. Whenever the *my-role* IAM role attempts to access Kubernetes objects on the cluster, Amazon EKS will authorize the role to use the permissions in the policy to access Kubernetes objects in the *my-namespace1* and *my-namespace2* Kubernetes namespaces only. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *my-role* with the name of the IAM role that you want Amazon EKS to authorize access to Kubernetes cluster objects for.

   ```
   aws eks associate-access-policy --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role \
       --access-scope type=namespace,namespaces=my-namespace1,my-namespace2 --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
   ```

   If you want the IAM principal to have the permissions cluster-wide, replace `type=namespace,namespaces=my-namespace1,my-namespace2` with `type=cluster`. If you want to associate multiple access policies to the access entry, run the command multiple times, each with a unique access policy. Each associated access policy has its own scope.
**Note**  
If you later want to change the scope of an associated access policy, run the previous command again with the new scope. For example, if you wanted to remove *my-namespace2*, you’d run the command again using `type=namespace,namespaces=my-namespace1` only. If you wanted to change the scope from `namespace` to `cluster`, you’d run the command again using `type=cluster`, removing `type=namespace,namespaces=my-namespace1,my-namespace2`.

1. Determine which access policies are associated to an access entry.

   ```
   aws eks list-associated-access-policies --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role
   ```

   An example output is as follows.

   ```
   {
       "clusterName": "my-cluster",
       "principalArn": "arn:aws:iam::111122223333:role/my-role",
       "associatedAccessPolicies": [
           {
               "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy",
               "accessScope": {
                   "type": "cluster",
                   "namespaces": []
               },
               "associatedAt": "2023-04-17T15:25:21.675000-04:00",
               "modifiedAt": "2023-04-17T15:25:21.675000-04:00"
           },
           {
               "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy",
               "accessScope": {
                   "type": "namespace",
                   "namespaces": [
                       "my-namespace1",
                       "my-namespace2"
                   ]
               },
               "associatedAt": "2023-04-17T15:02:06.511000-04:00",
               "modifiedAt": "2023-04-17T15:02:06.511000-04:00"
           }
       ]
   }
   ```

   In the previous example, the IAM principal for this access entry has view permissions across all namespaces on the cluster, and administrator permissions to two Kubernetes namespaces.

1. Disassociate an access policy from an access entry. In this example, the `AmazonEKSAdminPolicy` policy is disassociated from an access entry. The IAM principal retains the permissions in the `AmazonEKSViewPolicy` access policy for objects in the *my-namespace1* and *my-namespace2* namespaces, however, because that access policy is not disassociated from the access entry.

   ```
   aws eks disassociate-access-policy --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role \
       --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy
   ```

To list available access policies, see [Review access policy permissions](access-policy-permissions.md).

# Migrate existing `aws-auth` `ConfigMap` entries to access entries
<a name="migrating-access-entries"></a>

If you’ve added entries to the `aws-auth` `ConfigMap` on your cluster, we recommend that you create access entries for the existing entries in your `aws-auth` `ConfigMap`. After creating the access entries, you can remove the entries from your `ConfigMap`. You can’t associate [access policies](access-policies.md) to entries in the `aws-auth` `ConfigMap`. If you want to associate access policies to your IAM principals, create access entries.

**Important**  
When a cluster is in `API_AND_CONFIGMAP` authentication mode and there’s a mapping for the same IAM role in both the `aws-auth` `ConfigMap` and in access entries, the role will use the access entry’s mapping for authentication. Access entries take precedence over `ConfigMap` entries for the same IAM principal.
Before removing existing `aws-auth` `ConfigMap` entries that Amazon EKS created when you added a [managed node group](managed-node-groups.md) or a [Fargate profile](fargate-profile.md) to your cluster, double-check that the equivalent access entries for those specific resources exist in your Amazon EKS cluster. If you remove entries that Amazon EKS created in the `ConfigMap` without having the equivalent access entries, your cluster won’t function properly.
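Before deleting anything, you can compare both sources side by side. Replace *my-cluster* with the name of your cluster:

```
aws eks list-access-entries --cluster-name my-cluster

kubectl get configmap aws-auth -n kube-system -o yaml
```

The first command lists the entries that Amazon EKS manages through the access entry API; the second shows the legacy mappings still present in the `aws-auth` `ConfigMap`.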

## Prerequisites
<a name="migrating_access_entries_prereq"></a>
+ Familiarity with access entries and access policies. For more information, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md) and [Associate access policies with access entries](access-policies.md).
+ An existing cluster with a platform version that is at or later than the versions listed in the Prerequisites of the [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md) topic.
+ Version `0.215.0` or later of the `eksctl` command line tool installed on your device or AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.
+ Kubernetes permissions to modify the `aws-auth` `ConfigMap` in the `kube-system` namespace.
+ An AWS Identity and Access Management role or user with the following permissions: `CreateAccessEntry` and `ListAccessEntries`. For more information, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions) in the Service Authorization Reference.

## `eksctl`
<a name="migrating_access_entries_eksctl"></a>

1. View the existing entries in your `aws-auth` `ConfigMap`. Replace *my-cluster* with the name of your cluster.

   ```
   eksctl get iamidentitymapping --cluster my-cluster
   ```

   An example output is as follows.

   ```
   ARN                                                                                             USERNAME                                GROUPS                                                  ACCOUNT
   arn:aws:iam::111122223333:role/EKS-my-cluster-Admins                                            Admins                                  system:masters
   arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers                              my-namespace-Viewers                    Viewers
   arn:aws:iam::111122223333:role/EKS-my-cluster-self-managed-ng-1                                 system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   arn:aws:iam::111122223333:user/my-user                                                          my-user
   arn:aws:iam::111122223333:role/EKS-my-cluster-fargateprofile1                                   system:node:{{SessionName}}             system:bootstrappers,system:nodes,system:node-proxier
   arn:aws:iam::111122223333:role/EKS-my-cluster-managed-ng                                        system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   ```

1.  [Create access entries](creating-access-entries.md) for any of the `ConfigMap` entries that you created and that were returned in the previous output. When creating the access entries, specify the same values for `ARN`, `USERNAME`, `GROUPS`, and `ACCOUNT` that were returned in your output. In the example output, you would create access entries for all entries except the last two, because those entries were created by Amazon EKS for a Fargate profile and a managed node group.
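   For example, the *EKS-my-cluster-my-namespace-Viewers* mapping in the example output could be recreated as an access entry as follows. Note that group names with the `system:` prefix can't be specified on an access entry, so a mapping to `system:masters` (such as the *Admins* entry) is better replaced by associating the `AmazonEKSClusterAdminPolicy` access policy instead.

   ```
   aws eks create-access-entry --cluster-name my-cluster \
       --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers \
       --kubernetes-groups Viewers --username my-namespace-Viewers
   ```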

1. Delete the entries from the `ConfigMap` for any access entries that you created. If you don’t delete the entry from the `ConfigMap`, the settings for the access entry for the IAM principal ARN override the `ConfigMap` entry. Replace *111122223333* with your AWS account ID and *EKS-my-cluster-my-namespace-Viewers* with the name of the role in the entry in your `ConfigMap`. If the entry you’re removing is for an IAM user, rather than an IAM role, replace `role` with `user` and *EKS-my-cluster-my-namespace-Viewers* with the user name.

   ```
   eksctl delete iamidentitymapping --arn arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers --cluster my-cluster
   ```

# Review access policy permissions
<a name="access-policy-permissions"></a>

Access policies include `rules` that contain Kubernetes `verbs` (permissions) and `resources`. Access policies don’t include IAM permissions or resources. Similar to Kubernetes `Role` and `ClusterRole` objects, access policies only include `allow` `rules`. You can’t modify the contents of an access policy. You can’t create your own access policies. If the permissions in the access policies don’t meet your needs, then create Kubernetes RBAC objects and specify *group names* for your access entries. For more information, see [Create access entries](creating-access-entries.md). The permissions contained in access policies are similar to the permissions in the Kubernetes user-facing cluster roles. For more information, see [User-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) in the Kubernetes documentation.
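If you use *group names* instead, a minimal RBAC sketch (the names here are placeholders) that grants a hypothetical group `my-readers` read access to pods in a single namespace looks like this:

```
kubectl create role pod-reader --verb=get,list,watch \
    --resource=pods --namespace my-namespace1

kubectl create rolebinding pod-readers --role=pod-reader \
    --group=my-readers --namespace my-namespace1
```

The group `my-readers` would be listed in the Kubernetes groups of an access entry so that the IAM principal inherits these permissions.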

## List all policies
<a name="access-policies-cli-command"></a>

Use any one of the access policies listed on this page, or retrieve a list of all available access policies using the AWS CLI:

```
aws eks list-access-policies
```

The output should look similar to the following (abbreviated):

```
{
    "accessPolicies": [
        {
            "name": "AmazonAIOpsAssistantPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonAIOpsAssistantPolicy"
        },
        {
            "name": "AmazonARCRegionSwitchScalingPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonARCRegionSwitchScalingPolicy"
        },
        {
            "name": "AmazonEKSAdminPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
        },
        {
            "name": "AmazonEKSAdminViewPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy"
        },
        {
            "name": "AmazonEKSAutoNodePolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy"
        }
        // Additional policies omitted
    ]
}
```

## AmazonEKSAdminPolicy
<a name="access-policy-permissions-amazoneksadminpolicy"></a>

This access policy includes permissions that grant an IAM principal most permissions to resources. When associated to an access entry, its access scope is typically one or more Kubernetes namespaces. If you want an IAM principal to have administrator access to all resources on your cluster, associate the [AmazonEKSClusterAdminPolicy](#access-policy-permissions-amazoneksclusteradminpolicy) access policy to your access entry instead.

**ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy`


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `apps`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `replicasets`, `replicasets/scale`, `statefulsets`, `statefulsets/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `apps`   |   `controllerrevisions`, `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `replicasets`, `replicasets/scale`, `replicasets/status`, `statefulsets`, `statefulsets/scale`, `statefulsets/status`   |   `get`, `list`, `watch`   | 
|   `authorization.k8s.io`   |   `localsubjectaccessreviews`   |   `create`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`, `horizontalpodautoscalers/status`   |   `get`, `list`, `watch`   | 
|   `batch`   |   `cronjobs`, `jobs`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `batch`   |   `cronjobs`, `cronjobs/status`, `jobs`, `jobs/status`   |   `get`, `list`, `watch`   | 
|   `discovery.k8s.io`   |   `endpointslices`   |   `get`, `list`, `watch`   | 
|   `extensions`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `ingresses`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicationcontrollers/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `extensions`   |   `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `ingresses`, `ingresses/status`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicasets/status`, `replicationcontrollers/scale`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `ingresses/status`, `networkpolicies`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `networkpolicies`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `policy`   |   `poddisruptionbudgets`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `policy`   |   `poddisruptionbudgets`, `poddisruptionbudgets/status`   |   `get`, `list`, `watch`   | 
|   `rbac.authorization.k8s.io`   |   `rolebindings`, `roles`   |   `create`, `delete`, `deletecollection`, `get`, `list`, `patch`, `update`, `watch`   | 
|  |   `configmaps`, `endpoints`, `persistentvolumeclaims`, `persistentvolumeclaims/status`, `pods`, `replicationcontrollers`, `replicationcontrollers/scale`, `serviceaccounts`, `services`, `services/status`   |   `get`, `list`, `watch`   | 
|  |   `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`, `secrets`, `services/proxy`   |   `get`, `list`, `watch`   | 
|  |   `configmaps`, `events`, `persistentvolumeclaims`, `replicationcontrollers`, `replicationcontrollers/scale`, `secrets`, `serviceaccounts`, `services`, `services/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `pods`, `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `serviceaccounts`   |   `impersonate`   | 
|  |   `bindings`, `events`, `limitranges`, `namespaces/status`, `pods/log`, `pods/status`, `replicationcontrollers/status`, `resourcequotas`, `resourcequotas/status`   |   `get`, `list`, `watch`   | 
|  |   `namespaces`   |   `get`, `list`, `watch`   | 

## AmazonEKSClusterAdminPolicy
<a name="access-policy-permissions-amazoneksclusteradminpolicy"></a>

This access policy includes permissions that grant an IAM principal administrator access to a cluster. When associated to an access entry, its access scope is typically the cluster, rather than a Kubernetes namespace. If you want an IAM principal to have a more limited administrative scope, consider associating the [AmazonEKSAdminPolicy](#access-policy-permissions-amazoneksadminpolicy) access policy to your access entry instead.

**ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy`


| Kubernetes API groups | Kubernetes nonResourceURLs | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | --- | 
|   `*`   |  |   `*`   |   `*`   | 
|  |   `*`   |  |   `*`   | 
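For example, to give an existing IAM role (the role name here is a placeholder) full administrative access, you would associate this policy with a cluster-wide scope:

```
aws eks associate-access-policy --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-admin-role \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
    --access-scope type=cluster
```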

## AmazonEKSAdminViewPolicy
<a name="access-policy-permissions-amazoneksadminviewpolicy"></a>

This access policy includes permissions that grant an IAM principal access to list and view all resources in a cluster. Note that this includes [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/).

**ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy`


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `*`   |   `*`   |   `get`, `list`, `watch`   | 

## AmazonEKSEditPolicy
<a name="access-policy-permissions-amazonekseditpolicy"></a>

This access policy includes permissions that allow an IAM principal to edit most Kubernetes resources.

**ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy`


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `apps`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `replicasets`, `replicasets/scale`, `statefulsets`, `statefulsets/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `apps`   |   `controllerrevisions`, `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `replicasets`, `replicasets/scale`, `replicasets/status`, `statefulsets`, `statefulsets/scale`, `statefulsets/status`   |   `get`, `list`, `watch`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`, `horizontalpodautoscalers/status`   |   `get`, `list`, `watch`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `batch`   |   `cronjobs`, `jobs`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `batch`   |   `cronjobs`, `cronjobs/status`, `jobs`, `jobs/status`   |   `get`, `list`, `watch`   | 
|   `discovery.k8s.io`   |   `endpointslices`   |   `get`, `list`, `watch`   | 
|   `extensions`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `ingresses`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicationcontrollers/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `extensions`   |   `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `ingresses`, `ingresses/status`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicasets/status`, `replicationcontrollers/scale`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `networkpolicies`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `networking.k8s.io`   |   `ingresses`, `ingresses/status`, `networkpolicies`   |   `get`, `list`, `watch`   | 
|   `policy`   |   `poddisruptionbudgets`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `policy`   |   `poddisruptionbudgets`, `poddisruptionbudgets/status`   |   `get`, `list`, `watch`   | 
|  |   `namespaces`   |   `get`, `list`, `watch`   | 
|  |   `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`, `secrets`, `services/proxy`   |   `get`, `list`, `watch`   | 
|  |   `serviceaccounts`   |   `impersonate`   | 
|  |   `pods`, `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `configmaps`, `events`, `persistentvolumeclaims`, `replicationcontrollers`, `replicationcontrollers/scale`, `secrets`, `serviceaccounts`, `services`, `services/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `configmaps`, `endpoints`, `persistentvolumeclaims`, `persistentvolumeclaims/status`, `pods`, `replicationcontrollers`, `replicationcontrollers/scale`, `serviceaccounts`, `services`, `services/status`   |   `get`, `list`, `watch`   | 
|  |   `bindings`, `events`, `limitranges`, `namespaces/status`, `pods/log`, `pods/status`, `replicationcontrollers/status`, `resourcequotas`, `resourcequotas/status`   |   `get`, `list`, `watch`   | 
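
For example, to grant a developer role edit access in a single namespace, you might associate this policy with a namespace-scoped access entry. This is a sketch; the cluster, account ID, role, and namespace names below are placeholders:

```shell
# Placeholder names -- substitute your own cluster, account ID, role, and namespace.
CLUSTER=my-cluster
PRINCIPAL_ARN=arn:aws:iam::111122223333:role/my-developer-role

# Create the access entry for the IAM role (type defaults to STANDARD).
aws eks create-access-entry \
  --cluster-name "$CLUSTER" \
  --principal-arn "$PRINCIPAL_ARN"

# Associate AmazonEKSEditPolicy, scoped to the "dev" namespace only.
aws eks associate-access-policy \
  --cluster-name "$CLUSTER" \
  --principal-arn "$PRINCIPAL_ARN" \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
  --access-scope type=namespace,namespaces=dev
```

To grant edit access across the whole cluster instead, use `--access-scope type=cluster`.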

## AmazonEKSViewPolicy
<a name="access-policy-permissions-amazoneksviewpolicy"></a>

This access policy includes permissions that allow an IAM principal to view most Kubernetes resources.

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `apps`   |   `controllerrevisions`, `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `replicasets`, `replicasets/scale`, `replicasets/status`, `statefulsets`, `statefulsets/scale`, `statefulsets/status`   |   `get`, `list`, `watch`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`, `horizontalpodautoscalers/status`   |   `get`, `list`, `watch`   | 
|   `batch`   |   `cronjobs`, `cronjobs/status`, `jobs`, `jobs/status`   |   `get`, `list`, `watch`   | 
|   `discovery.k8s.io`   |   `endpointslices`   |   `get`, `list`, `watch`   | 
|   `extensions`   |   `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `ingresses`, `ingresses/status`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicasets/status`, `replicationcontrollers/scale`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `ingresses/status`, `networkpolicies`   |   `get`, `list`, `watch`   | 
|   `policy`   |   `poddisruptionbudgets`, `poddisruptionbudgets/status`   |   `get`, `list`, `watch`   | 
|  |   `configmaps`, `endpoints`, `persistentvolumeclaims`, `persistentvolumeclaims/status`, `pods`, `replicationcontrollers`, `replicationcontrollers/scale`, `serviceaccounts`, `services`, `services/status`   |   `get`, `list`, `watch`   | 
|  |   `bindings`, `events`, `limitranges`, `namespaces/status`, `pods/log`, `pods/status`, `replicationcontrollers/status`, `resourcequotas`, `resourcequotas/status`   |   `get`, `list`, `watch`   | 
|  |   `namespaces`   |   `get`, `list`, `watch`   | 
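
A read-only association typically uses cluster scope so that the principal can view resources in every namespace. A sketch, assuming an access entry for the role already exists and using placeholder cluster and role names:

```shell
# Placeholder names -- replace with your own cluster, account ID, and role.
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/read-only-role \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster
```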

## AmazonEKSSecretReaderPolicy
<a name="_amazonekssecretreaderpolicy"></a>

This access policy includes permissions that allow an IAM principal to read [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/). 

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSSecretReaderPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|  |   `secrets`   |   `get`, `list`, `watch`   | 
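
Because `AmazonEKSViewPolicy` doesn't include Secrets, a common pattern is to pair it with this policy only in the namespaces where secret access is needed. A sketch with placeholder names:

```shell
# Placeholder names -- replace with your own cluster, account ID, role, and namespace.
CLUSTER=my-cluster
PRINCIPAL_ARN=arn:aws:iam::111122223333:role/app-operator

# Allow reading Secrets, but only in the "payments" namespace.
aws eks associate-access-policy \
  --cluster-name "$CLUSTER" \
  --principal-arn "$PRINCIPAL_ARN" \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSSecretReaderPolicy \
  --access-scope type=namespace,namespaces=payments

# Verify which policies are now associated with the role.
aws eks list-associated-access-policies \
  --cluster-name "$CLUSTER" \
  --principal-arn "$PRINCIPAL_ARN"
```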

## AmazonEKSAutoNodePolicy
<a name="_amazoneksautonodepolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy` 

This policy includes permissions that allow the following Amazon EKS components to complete these tasks:
+  `kube-proxy` – Monitor network endpoints and services, and manage related events. This enables cluster-wide network proxy functionality.
+  `ipamd` – Manage AWS VPC networking resources and container network interfaces (CNI). This allows the IP address management daemon to handle pod networking.
+  `coredns` – Access service discovery resources like endpoints and services. This enables DNS resolution within the cluster.
+  `ebs-csi-driver` – Work with storage-related resources for Amazon EBS volumes. This allows dynamic provisioning and attachment of persistent volumes.
+  `neuron` – Monitor nodes and pods for AWS Neuron devices. This enables management of AWS Inferentia and Trainium accelerators.
+  `node-monitoring-agent` – Access node diagnostics and events. This enables cluster health monitoring and diagnostics collection.

Each component uses a dedicated service account and is restricted to only the permissions required for its specific function.

If you manually specify a Node IAM role in a NodeClass, you need to create an access entry that associates the new Node IAM role with this access policy.
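
For example, if a hypothetical NodeClass references a custom role named `my-auto-node-role`, the access entry might look like the following sketch (all names are placeholders):

```shell
# Placeholder names -- replace with your own cluster, account ID, and role.
# Create an access entry of type EC2 for the custom Node IAM role.
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-auto-node-role \
  --type EC2

# Associate the Auto Mode node policy with cluster scope.
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-auto-node-role \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy \
  --access-scope type=cluster
```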

## AmazonEKSBlockStoragePolicy
<a name="_amazoneksblockstoragepolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSBlockStoragePolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election and coordination resources for storage operations:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS storage components to coordinate their activities across the cluster through a leader election mechanism.

The policy is scoped to specific lease resources used by the EKS storage components to prevent conflicting access to other coordination resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the block storage capability to function properly.

## AmazonEKSLoadBalancingPolicy
<a name="_amazoneksloadbalancingpolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSLoadBalancingPolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election resources for load balancing:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS load balancing components to coordinate activities across multiple replicas by electing a leader.

The policy is scoped specifically to load balancing lease resources to ensure proper coordination while preventing access to other lease resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the networking capability to function properly.

## AmazonEKSNetworkingPolicy
<a name="_amazoneksnetworkingpolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSNetworkingPolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election resources for networking:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS networking components to coordinate IP address allocation activities by electing a leader.

The policy is scoped specifically to networking lease resources to ensure proper coordination while preventing access to other lease resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the networking capability to function properly.

## AmazonEKSComputePolicy
<a name="_amazonekscomputepolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSComputePolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election resources for compute operations:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS compute components to coordinate node scaling activities by electing a leader.

The policy is scoped specifically to compute management lease resources while allowing basic read access (`get`, `watch`) to all lease resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the compute management capability to function properly.

## AmazonEKSBlockStorageClusterPolicy
<a name="_amazoneksblockstorageclusterpolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSBlockStorageClusterPolicy` 

This policy grants permissions necessary for the block storage capability of Amazon EKS Auto Mode. It enables efficient management of block storage resources within Amazon EKS clusters. The policy includes the following permissions:

CSI Driver Management:
+ Create, read, update, and delete CSI drivers, specifically for block storage.

Volume Management:
+ List, watch, create, update, patch, and delete persistent volumes.
+ List, watch, and update persistent volume claims.
+ Patch persistent volume claim statuses.

Node and Pod Interaction:
+ Read node and pod information.
+ Manage events related to storage operations.

Storage Classes and Attributes:
+ Read storage classes and CSI nodes.
+ Read volume attribute classes.

Volume Attachments:
+ List, watch, and modify volume attachments and their statuses.

Snapshot Operations:
+ Manage volume snapshots, snapshot contents, and snapshot classes.
+ Handle operations for volume group snapshots and related resources.

This policy is designed to support comprehensive block storage management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including provisioning, attaching, resizing, and snapshotting of block storage volumes.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the block storage capability to function properly.

## AmazonEKSComputeClusterPolicy
<a name="_amazonekscomputeclusterpolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSComputeClusterPolicy` 

This policy grants permissions necessary for the compute management capability of Amazon EKS Auto Mode. It enables efficient orchestration and scaling of compute resources within Amazon EKS clusters. The policy includes the following permissions:

Node Management:
+ Create, read, update, delete, and manage status of NodePools and NodeClaims.
+ Manage NodeClasses, including creation, modification, and deletion.

Scheduling and Resource Management:
+ Read access to pods, nodes, persistent volumes, persistent volume claims, replication controllers, and namespaces.
+ Read access to storage classes, CSI nodes, and volume attachments.
+ List and watch deployments, daemon sets, replica sets, and stateful sets.
+ Read pod disruption budgets.

Event Handling:
+ Create, read, and manage cluster events.

Node Deprovisioning and Pod Eviction:
+ Update, patch, and delete nodes.
+ Create pod evictions and delete pods when necessary.

Custom Resource Definition (CRD) Management:
+ Create new CRDs.
+ Manage specific CRDs related to node management (NodeClasses, NodePools, NodeClaims, and NodeDiagnostics).

This policy is designed to support comprehensive compute management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including node provisioning, scheduling, scaling, and resource optimization.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the compute management capability to function properly.

## AmazonEKSLoadBalancingClusterPolicy
<a name="_amazoneksloadbalancingclusterpolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSLoadBalancingClusterPolicy` 

This policy grants permissions necessary for the load balancing capability of Amazon EKS Auto Mode. It enables efficient management and configuration of load balancing resources within Amazon EKS clusters. The policy includes the following permissions:

Event and Resource Management:
+ Create and patch events.
+ Read access to pods, nodes, endpoints, and namespaces.
+ Update pod statuses.

Service and Ingress Management:
+ Full management of services and their statuses.
+ Comprehensive control over ingresses and their statuses.
+ Read access to endpoint slices and ingress classes.

Target Group Bindings:
+ Create and modify target group bindings and their statuses.
+ Read access to ingress class parameters.

Custom Resource Definition (CRD) Management:
+ Create and read all CRDs.
+ Specific management of the `targetgroupbindings.eks.amazonaws.com` and `ingressclassparams.eks.amazonaws.com` CRDs.

Webhook Configuration:
+ Create and read mutating and validating webhook configurations.
+ Manage the eks-load-balancing-webhook configuration.

This policy is designed to support comprehensive load balancing management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including service exposure, ingress routing, and integration with AWS load balancing services.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the load balancing capability to function properly.

## AmazonEKSNetworkingClusterPolicy
<a name="_amazoneksnetworkingclusterpolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSNetworkingClusterPolicy` 


This policy grants permissions necessary for the networking capability of Amazon EKS Auto Mode. It enables efficient management and configuration of networking resources within Amazon EKS clusters. The policy includes the following permissions:

Node and Pod Management:
+ Read access to NodeClasses and their statuses.
+ Read access to NodeClaims and their statuses.
+ Read access to pods.

CNI Node Management:
+ Permissions for CNINodes and their statuses, including create, read, update, delete, and patch.

Custom Resource Definition (CRD) Management:
+ Create and read all CRDs.
+ Specific management (update, patch, delete) of the `cninodes.eks.amazonaws.com` CRD.

Event Management:
+ Create and patch events.

This policy is designed to support comprehensive networking management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including node networking configuration, CNI (Container Network Interface) management, and related custom resource handling.

The policy allows the networking components to interact with node-related resources, manage CNI-specific node configurations, and handle custom resources critical for networking operations in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the networking capability to function properly.

## AmazonEKSHybridPolicy
<a name="access-policy-permissions-amazonekshybridpolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

This access policy includes permissions that grant Amazon EKS access to the nodes of a cluster. When associated with an access entry, its access scope is typically the cluster, rather than a Kubernetes namespace. This policy is used by Amazon EKS hybrid nodes.

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSHybridPolicy` 


| Kubernetes API groups | Kubernetes nonResourceURLs | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | --- | 
|   `*`   |  |   `nodes`   |   `list`   | 

## AmazonEKSClusterInsightsPolicy
<a name="access-policy-permissions-AmazonEKSClusterInsightsPolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterInsightsPolicy` 

This policy grants read-only permissions for Amazon EKS Cluster Insights functionality. The policy includes the following permissions:

Node Access:
+ List and view cluster nodes.
+ Read node status information.

DaemonSet Access:
+ Read access to `kube-proxy` configuration.

This policy is automatically managed by the EKS service for Cluster Insights. For more information, see [Prepare for Kubernetes version upgrades and troubleshoot misconfigurations with cluster insights](cluster-insights.md).

## AWSBackupFullAccessPolicyForBackup
<a name="_awsbackupfullaccesspolicyforbackup"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AWSBackupFullAccessPolicyForBackup` 


This policy grants the permissions necessary for AWS Backup to manage and create backups of the EKS Cluster. This policy includes the following permissions:


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `*`   |   `*`   |   `list`, `get`   | 

## AWSBackupFullAccessPolicyForRestore
<a name="_awsbackupfullaccesspolicyforrestore"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AWSBackupFullAccessPolicyForRestore` 


This policy grants the permissions necessary for AWS Backup to manage and restore backups of the EKS Cluster. This policy includes the following permissions:


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `*`   |   `*`   |   `list`, `get`, `create`   | 

## AmazonEKSACKPolicy
<a name="_amazoneksackpolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSACKPolicy` 

This policy grants permissions necessary for the AWS Controllers for Kubernetes (ACK) capability to manage AWS resources from Kubernetes. The policy includes the following permissions:

ACK Custom Resource Management:
+ Full access to all ACK service custom resources across more than 50 AWS services, including Amazon S3, RDS, DynamoDB, Lambda, and EC2.
+ Create, read, update, and delete ACK custom resource definitions.

Namespace Access:
+ Read access to namespaces for resource organization.

Leader Election:
+ Create and read coordination leases for leader election.
+ Update and delete specific ACK service controller leases.

Event Management:
+ Create and patch events for ACK operations.

This policy is designed to support comprehensive AWS resource management through Kubernetes APIs. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the ACK capability is created.


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|  |   `namespaces`   |   `get`, `watch`, `list`   | 
|   `services.k8s.aws`, `acm.services.k8s.aws`, `acmpca.services.k8s.aws`, `apigateway.services.k8s.aws`, `apigatewayv2.services.k8s.aws`, `applicationautoscaling.services.k8s.aws`, `athena.services.k8s.aws`, `bedrock.services.k8s.aws`, `bedrockagent.services.k8s.aws`, `bedrockagentcorecontrol.services.k8s.aws`, `cloudfront.services.k8s.aws`, `cloudtrail.services.k8s.aws`, `cloudwatch.services.k8s.aws`, `cloudwatchlogs.services.k8s.aws`, `codeartifact.services.k8s.aws`, `cognitoidentityprovider.services.k8s.aws`, `documentdb.services.k8s.aws`, `dynamodb.services.k8s.aws`, `ec2.services.k8s.aws`, `ecr.services.k8s.aws`, `ecrpublic.services.k8s.aws`, `ecs.services.k8s.aws`, `efs.services.k8s.aws`, `eks.services.k8s.aws`, `elasticache.services.k8s.aws`, `elbv2.services.k8s.aws`, `emrcontainers.services.k8s.aws`, `eventbridge.services.k8s.aws`, `iam.services.k8s.aws`, `kafka.services.k8s.aws`, `keyspaces.services.k8s.aws`, `kinesis.services.k8s.aws`, `kms.services.k8s.aws`, `lambda.services.k8s.aws`, `memorydb.services.k8s.aws`, `mq.services.k8s.aws`, `networkfirewall.services.k8s.aws`, `opensearchservice.services.k8s.aws`, `organizations.services.k8s.aws`, `pipes.services.k8s.aws`, `prometheusservice.services.k8s.aws`, `ram.services.k8s.aws`, `rds.services.k8s.aws`, `recyclebin.services.k8s.aws`, `route53.services.k8s.aws`, `route53resolver.services.k8s.aws`, `s3.services.k8s.aws`, `s3control.services.k8s.aws`, `sagemaker.services.k8s.aws`, `secretsmanager.services.k8s.aws`, `ses.services.k8s.aws`, `sfn.services.k8s.aws`, `sns.services.k8s.aws`, `sqs.services.k8s.aws`, `ssm.services.k8s.aws`, `wafv2.services.k8s.aws`   |   `*`   |   `*`   | 
|   `coordination.k8s.io`   |   `leases`   |   `create`, `get`, `list`, `watch`   | 
|   `coordination.k8s.io`   |   `leases` (specific ACK service controller leases only)  |   `delete`, `update`, `patch`   | 
|  |   `events`   |   `create`, `patch`   | 
|   `apiextensions.k8s.io`   |   `customresourcedefinitions`   |   `*`   | 

## AmazonEKSArgoCDClusterPolicy
<a name="_amazoneksargocdclusterpolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSArgoCDClusterPolicy` 

This policy grants cluster-level permissions necessary for the Argo CD capability to discover resources and manage cluster-scoped objects. The policy includes the following permissions:

Namespace Management:
+ Create, read, update, and delete namespaces for application namespace management.

Custom Resource Definition Management:
+ Manage Argo CD-specific CRDs (Applications, AppProjects, ApplicationSets).

API Discovery:
+ Read access to Kubernetes API endpoints for resource discovery.

This policy is designed to support cluster-level Argo CD operations including namespace management and CRD installation. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the Argo CD capability is created.


| Kubernetes API groups | Kubernetes nonResourceURLs | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | --- | 
|  |  |   `namespaces`   |   `create`, `get`, `update`, `patch`, `delete`   | 
|   `apiextensions.k8s.io`   |  |   `customresourcedefinitions`   |   `create`   | 
|   `apiextensions.k8s.io`   |  |   `customresourcedefinitions` (Argo CD CRDs only)  |   `get`, `update`, `patch`, `delete`   | 
|  |   `/api`, `/api/*`, `/apis`, `/apis/*`   |  |   `get`   | 

## AmazonEKSArgoCDPolicy
<a name="_amazoneksargocdpolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSArgoCDPolicy` 

This policy grants namespace-level permissions necessary for the Argo CD capability to deploy and manage applications. The policy includes the following permissions:

Secret Management:
+ Full access to secrets for Git credentials and cluster secrets.

ConfigMap Access:
+ Read access to ConfigMaps to send warnings if customers try to use unsupported Argo CD ConfigMaps.

Event Management:
+ Read and create events for application lifecycle tracking.

Argo CD Resource Management:
+ Full access to Applications, ApplicationSets, and AppProjects.
+ Manage finalizers and status for Argo CD resources.

This policy is designed to support namespace-level Argo CD operations including application deployment and management. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the Argo CD capability is created, scoped to the Argo CD namespace.


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|  |   `secrets`   |   `*`   | 
|  |   `configmaps`   |   `get`, `list`, `watch`   | 
|  |   `events`   |   `get`, `list`, `watch`, `patch`, `create`   | 
|   `argoproj.io`   |   `applications`, `applications/finalizers`, `applications/status`, `applicationsets`, `applicationsets/finalizers`, `applicationsets/status`, `appprojects`, `appprojects/finalizers`, `appprojects/status`   |   `*`   | 

## AmazonEKSKROPolicy
<a name="_amazonekskropolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSKROPolicy` 

This policy grants permissions necessary for the kro (Kube Resource Orchestrator) capability to create and manage custom Kubernetes APIs. The policy includes the following permissions:

kro Resource Management:
+ Full access to all kro resources including ResourceGraphDefinitions and custom resource instances.

Custom Resource Definition Management:
+ Create, read, update, and delete CRDs for custom APIs defined by ResourceGraphDefinitions.

Leader Election:
+ Create and read coordination leases for leader election.
+ Update and delete the kro controller lease.

Event Management:
+ Create and patch events for kro operations.

This policy is designed to support comprehensive resource composition and custom API management through kro. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the kro capability is created.


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `kro.run`   |   `*`   |   `*`   | 
|   `apiextensions.k8s.io`   |   `customresourcedefinitions`   |   `*`   | 
|   `coordination.k8s.io`   |   `leases`   |   `create`, `get`, `list`, `watch`   | 
|   `coordination.k8s.io`   |   `leases` (kro controller lease only)  |   `delete`, `update`, `patch`   | 
|  |   `events`   |   `create`, `patch`   | 

## Access policy updates
<a name="access-policy-updates"></a>

View details about updates to access policies since they were introduced. To receive automatic alerts about changes to this page, subscribe to the RSS feed in [Document history](doc-history.md).


| Change | Description | Date | 
| --- | --- | --- | 
|  Add policies for EKS Capabilities  |  Publish `AmazonEKSACKPolicy`, `AmazonEKSArgoCDClusterPolicy`, `AmazonEKSArgoCDPolicy`, and `AmazonEKSKROPolicy` for managing EKS Capabilities  |  November 22, 2025  | 
|  Add `AmazonEKSSecretReaderPolicy`   |  Add a new policy for read-only access to secrets  |  November 6, 2025  | 
|  Add policy for EKS Cluster Insights  |  Publish `AmazonEKSClusterInsightsPolicy`   |  December 2, 2024  | 
|  Add policies for Amazon EKS Hybrid  |  Publish `AmazonEKSHybridPolicy`   |  December 2, 2024  | 
|  Add policies for Amazon EKS Auto Mode  |  These access policies give the Cluster IAM Role and Node IAM Role permission to call Kubernetes APIs. AWS uses these to automate routine tasks for storage, compute, and networking resources.  |  December 2, 2024  | 
|  Add `AmazonEKSAdminViewPolicy`   |  Add a new policy for expanded view access, including resources like Secrets.  |  April 23, 2024  | 
|  Access policies introduced.  |  Amazon EKS introduced access policies.  |  May 29, 2023  | 

# Change authentication mode to use access entries
<a name="setting-up-access-entries"></a>

To begin using access entries, you must change the authentication mode of the cluster to either `API_AND_CONFIG_MAP` or `API`. This enables the access entries API for the cluster.

## AWS Console
<a name="access-entries-setup-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to create an access entry in.

1. Choose the **Access** tab.

1. The **Authentication mode** shows the current authentication mode of the cluster. If the mode says EKS API, you can already add access entries and you can skip the remaining steps.

1. Choose **Manage access**.

1. For **Cluster authentication mode**, select a mode with the EKS API. Note that you can’t change the authentication mode back to a mode that removes the EKS API and access entries.

1. Choose **Save changes**. Amazon EKS begins to update the cluster, the status of the cluster changes to Updating, and the change is recorded in the **Update history** tab.

1. Wait for the status of the cluster to return to Active. When the cluster is Active, you can follow the steps in [Create access entries](creating-access-entries.md) to add access to the cluster for IAM principals.

## AWS CLI
<a name="access-setup-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the *AWS Command Line Interface User Guide*.

1. Run the following command. Replace *my-cluster* with the name of your cluster. If you want to disable the `ConfigMap` method permanently, replace `API_AND_CONFIG_MAP` with `API`.

   Amazon EKS begins to update the cluster, and the status of the cluster changes to UPDATING. The change is recorded in the output of the `aws eks list-updates` AWS CLI command.

   ```
   aws eks update-cluster-config --name my-cluster --access-config authenticationMode=API_AND_CONFIG_MAP
   ```

1. Wait for the status of the cluster to return to Active. When the cluster is Active, you can follow the steps in [Create access entries](creating-access-entries.md) to add access to the cluster for IAM principals.
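
To confirm that the change took effect, you can wait for the update to finish and then query the cluster's access configuration (the cluster name below is a placeholder):

```shell
# Block until the cluster returns to ACTIVE, then print the authentication mode.
aws eks wait cluster-active --name my-cluster
aws eks describe-cluster --name my-cluster \
  --query 'cluster.accessConfig.authenticationMode' --output text
```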

## Required platform version
<a name="_required_platform_version"></a>

To use *access entries*, the cluster must have a platform version that is the same or later than the version listed in the following table, or a Kubernetes version that is later than the versions listed in the table. If your Kubernetes version is not listed, all platform versions support access entries.


| Kubernetes version | Platform version | 
| --- | --- | 
|  Not Listed  |  All Supported  | 
|   `1.30`   |   `eks.2`   | 
|   `1.29`   |   `eks.1`   | 
|   `1.28`   |   `eks.6`   | 

For more information, see [Amazon EKS platform versions](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html).
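
You can check both versions for an existing cluster with a query such as the following (the cluster name is a placeholder):

```shell
# Print the Kubernetes version and platform version of the cluster.
aws eks describe-cluster --name my-cluster \
  --query '{kubernetesVersion: cluster.version, platformVersion: cluster.platformVersion}' \
  --output table
```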

# Create access entries
<a name="creating-access-entries"></a>

Before creating access entries, consider the following:
+ A properly set authentication mode. See [Change authentication mode to use access entries](setting-up-access-entries.md).
+ An *access entry* includes the Amazon Resource Name (ARN) of one, and only one, existing IAM principal. An IAM principal can’t be included in more than one access entry. Additional considerations for the ARN that you specify:
  + IAM best practices recommend accessing your cluster using IAM *roles* that have short-term credentials, rather than IAM *users* that have long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.
  + If the ARN is for an IAM role, it *can* include a path. ARNs in `aws-auth` `ConfigMap` entries *can't* include a path. For example, your ARN can be `arn:aws:iam::<111122223333>:role/<development/apps/my-role>` or `arn:aws:iam::<111122223333>:role/<my-role>`.
  + If the type of the access entry is anything other than `STANDARD` (see next consideration about types), the ARN must be in the same AWS account that your cluster is in. If the type is `STANDARD`, the ARN can be in the same, or different, AWS account than the account that your cluster is in.
  + You can’t change the IAM principal after the access entry is created.
  + If you ever delete the IAM principal with this ARN, the access entry isn’t automatically deleted. We recommend that you delete the access entry with an ARN for an IAM principal that you delete. If you don’t delete the access entry and ever recreate the IAM principal, even if it has the same ARN, the access entry won’t work. This is because even though the ARN is the same for the recreated IAM principal, the `roleID` or `userID` (you can see this with the `aws sts get-caller-identity` AWS CLI command) is different for the recreated IAM principal than it was for the original IAM principal. Even though you don’t see the IAM principal’s `roleID` or `userID` for an access entry, Amazon EKS stores it with the access entry.
+ Each access entry has a *type*. The type depends on the type of resource the access entry is associated with; it doesn’t define the entry’s permissions. If you don’t specify a type, Amazon EKS automatically sets the type to `STANDARD`.
  +  `EC2_LINUX` - For an IAM role used with Linux or Bottlerocket self-managed nodes
  +  `EC2_WINDOWS` - For an IAM role used with Windows self-managed nodes
  +  `FARGATE_LINUX` - For an IAM role used with AWS Fargate (Fargate)
  +  `HYBRID_LINUX` - For an IAM role used with hybrid nodes
  +  `STANDARD` - Default type if none specified
  +  `EC2` - For EKS Auto Mode custom node classes. For more information, see [Create node class access entry](create-node-class.md#auto-node-access-entry).
  + You can’t change the type after the access entry is created.
+ You don’t need to create an access entry for an IAM role that’s used for a managed node group or a Fargate profile. Amazon EKS creates the access entry for you (if access entries are enabled), or updates the `aws-auth` `ConfigMap` (if access entries are unavailable).
+ If the type of the access entry is `STANDARD`, you can specify a *username* for the access entry. If you don’t specify a value for username, Amazon EKS sets one of the following values for you, depending on the type of the access entry and whether the IAM principal that you specified is an IAM role or IAM user. Unless you have a specific reason for specifying your own username, we recommend that you don’t specify one and let Amazon EKS auto-generate it for you. If you specify your own username:
  + It can’t start with `system:`, `eks:`, `aws:`, `amazon:`, or `iam:`.
  + If the username is for an IAM role, we recommend that you add `{{SessionName}}` or `{{SessionNameRaw}}` to the end of your username. If you add either `{{SessionName}}` or `{{SessionNameRaw}}` to your username, the username must include a colon *before* `{{SessionName}}`. When the role is assumed, the AWS STS session name that is specified when assuming the role is automatically passed to the cluster and appears in CloudTrail logs. For example, you can’t have a username of `john{{SessionName}}`. The username would have to be `:john{{SessionName}}` or `jo:hn{{SessionName}}`. The colon only has to appear somewhere before `{{SessionName}}`. The usernames generated by Amazon EKS include an ARN; because an ARN includes colons, they meet this requirement. The colon isn’t required if you don’t include `{{SessionName}}` in your username. Note that with `{{SessionName}}`, the special character "@" is replaced with "-" in the session name. `{{SessionNameRaw}}` keeps all special characters in the session name.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/creating-access-entries.html)

    You can change the username after the access entry is created.
+ If an access entry’s type is `STANDARD`, and you want to use Kubernetes RBAC authorization, you can add one or more *group names* to the access entry. After you create an access entry you can add and remove group names. For the IAM principal to have access to Kubernetes objects on your cluster, you must create and manage Kubernetes role-based authorization (RBAC) objects. Create Kubernetes `RoleBinding` or `ClusterRoleBinding` objects on your cluster that specify the group name as a `subject` for `kind: Group`. Kubernetes authorizes the IAM principal access to any cluster objects that you’ve specified in a Kubernetes `Role` or `ClusterRole` object that you’ve also specified in your binding’s `roleRef`. If you specify group names, we recommend that you’re familiar with the Kubernetes role-based authorization (RBAC) objects. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
**Important**  
Amazon EKS doesn’t confirm that any Kubernetes RBAC objects that exist on your cluster include any of the group names that you specify. For example, if you create an access entry for a group that doesn’t currently exist, Amazon EKS accepts the configuration without returning an error, but the IAM principal won’t have any permissions until matching Kubernetes RBAC resources are created.

  Instead of, or in addition to, Kubernetes authorizing the IAM principal access to Kubernetes objects on your cluster, you can associate Amazon EKS *access policies* to an access entry. Amazon EKS authorizes IAM principals to access Kubernetes objects on your cluster with the permissions in the access policy. You can scope an access policy’s permissions to Kubernetes namespaces that you specify. Using access policies doesn’t require you to manage Kubernetes RBAC objects. For more information, see [Associate access policies with access entries](access-policies.md).
+ If you create an access entry with type `EC2_LINUX` or `EC2_WINDOWS`, the IAM principal creating the access entry must have the `iam:PassRole` permission. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.
+ Similar to standard [IAM behavior](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency), access entry creation and updates are eventually consistent, and may take several seconds to take effect after the initial API call returns successfully. You must design your applications to account for these potential delays. We recommend that you don’t include access entry creates or updates in the critical, high-availability code paths of your application. Instead, make changes in a separate initialization or setup routine that you run less frequently. Also, verify that the changes have propagated before production workflows depend on them.
+ Access entries do not support [service linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html). You cannot create access entries where the principal ARN is a service linked role. You can identify service linked roles by their ARN, which is in the format ` arn:aws:iam::*:role/aws-service-role/*`.
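As a quick guard, a role ARN can be screened locally for the service-linked pattern before you call `create-access-entry`. The `is_service_linked_role` helper below is a hypothetical sketch, not an AWS CLI command:

```shell
# Hypothetical helper: service-linked roles have ARNs of the form
# arn:aws:iam::*:role/aws-service-role/* and can't be used in access entries.
is_service_linked_role() {
  case "$1" in
    arn:aws:iam::*:role/aws-service-role/*) return 0 ;;
    *) return 1 ;;
  esac
}

if is_service_linked_role "arn:aws:iam::111122223333:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"; then
  echo "service-linked role: not supported by access entries"
fi
```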

You can create an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-create-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to create an access entry in.

1. Choose the **Access** tab.

1. Choose **Create access entry**.

1. For **IAM principal**, select an existing IAM role or user. IAM best practices recommend accessing your cluster using IAM *roles* that have short-term credentials, rather than IAM *users* that have long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

1. For **Type**, if the access entry is for the node role used for self-managed Amazon EC2 nodes, select **EC2 Linux** or **EC2 Windows**. Otherwise, accept the default (**Standard**).

1. If the **Type** you chose is **Standard** and you want to specify a **Username**, enter the username.

1. If the **Type** you chose is **Standard** and you want to use Kubernetes RBAC authorization for the IAM principal, specify one or more names for **Groups**. If you don’t specify any group names and want to use Amazon EKS authorization, you can associate an access policy in a later step, or after the access entry is created.

1. (Optional) For **Tags**, assign labels to the access entry. For example, to make it easier to find all resources with the same tag.

1. Choose **Next**.

1. On the **Add access policy** page, if the type you chose was **Standard** and you want Amazon EKS to authorize the IAM principal to have permissions to the Kubernetes objects on your cluster, complete the following steps. Otherwise, choose **Next**.

   1. For **Policy name**, choose an access policy. You can’t view the permissions of the access policies, but they include similar permissions to those in the Kubernetes user-facing `ClusterRole` objects. For more information, see [User-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) in the Kubernetes documentation.

   1. Choose one of the following options:
      +  **Cluster** – Choose this option if you want Amazon EKS to authorize the IAM principal to have the permissions in the access policy for all Kubernetes objects on your cluster.
      +  **Kubernetes namespace** – Choose this option if you want Amazon EKS to authorize the IAM principal to have the permissions in the access policy for all Kubernetes objects in a specific Kubernetes namespace on your cluster. For **Namespace**, enter the name of the Kubernetes namespace on your cluster. If you want to add additional namespaces, choose **Add new namespace** and enter the namespace name.

   1. If you want to add additional policies, choose **Add policy**. You can scope each policy differently, but you can add each policy only once.

   1. Choose **Next**.

1. Review the configuration for your access entry. If anything looks incorrect, choose **Previous** to go back through the steps and correct the error. If the configuration is correct, choose **Create**.

## AWS CLI
<a name="access-create-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

1. To create an access entry, use any of the following examples:
   + Create an access entry for a self-managed Amazon EC2 Linux node group. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *EKS-my-cluster-self-managed-ng-1* with the name of your [node IAM role](create-node-role.md). If your node group is a Windows node group, then replace *EC2_LINUX* with `EC2_WINDOWS`.

     ```
     aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-self-managed-ng-1 --type EC2_LINUX
     ```

     You can’t use the `--kubernetes-groups` option when you specify a type other than `STANDARD`. You can’t associate an access policy to this access entry, because its type is a value other than `STANDARD`.
   + Create an access entry for an IAM role that isn’t used for an Amazon EC2 self-managed node group and whose access to your cluster you want Kubernetes to authorize. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *my-role* with the name of your IAM role. Replace *Viewers* with the name of a group that you’ve specified in a Kubernetes `RoleBinding` or `ClusterRoleBinding` object on your cluster.

     ```
     aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role --type STANDARD --username Viewers --kubernetes-groups Viewers
     ```
   + Create an access entry that allows an IAM user to authenticate to your cluster. This example is provided because this is possible, though IAM best practices recommend accessing your cluster using IAM *roles* that have short-term credentials, rather than IAM *users* that have long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

     ```
     aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:user/my-user --type STANDARD --username my-user
     ```

     If you want this user to have more access to your cluster than the permissions in the Kubernetes API discovery roles, then you need to associate an access policy to the access entry, since the `--kubernetes-groups` option isn’t used. For more information, see [Associate access policies with access entries](access-policies.md) and [API discovery roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles) in the Kubernetes documentation.
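After creating an entry with any of the examples above, you can confirm that it exists. The cluster name, account ID, and role name below are the same placeholders used in the examples:

```shell
# List all access entries on the cluster, then inspect a single entry.
CLUSTER_NAME="my-cluster"
aws eks list-access-entries --cluster-name "$CLUSTER_NAME"
aws eks describe-access-entry \
  --cluster-name "$CLUSTER_NAME" \
  --principal-arn "arn:aws:iam::111122223333:role/my-role"
```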

# Update access entries
<a name="updating-access-entries"></a>

You can update an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-update-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to update an access entry in.

1. Choose the **Access** tab.

1. Choose the access entry that you want to update.

1. Choose **Edit**.

1. For **Username**, you can change the existing value.

1. For **Groups**, you can remove existing group names or add new group names. If the following groups names exist, don’t remove them: **system:nodes** or **system:bootstrappers**. Removing these groups can cause your cluster to function improperly. If you don’t specify any group names and want to use Amazon EKS authorization, associate an [access policy](access-policies.md) in a later step.

1. For **Tags**, you can assign labels to the access entry. For example, to make it easier to find all resources with the same tag. You can also remove existing tags.

1. Choose **Save changes**.

1. If you want to associate an access policy to the entry, see [Associate access policies with access entries](access-policies.md).

## AWS CLI
<a name="access-update-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

1. To update an access entry, replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *EKS-my-cluster-my-namespace-Viewers* with the name of an IAM role.

   ```
   aws eks update-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers --kubernetes-groups Viewers
   ```

   You can’t use the `--kubernetes-groups` option if the type of the access entry is a value other than `STANDARD`. You also can’t associate an access policy to an access entry with a type other than `STANDARD`.
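To confirm that the update took effect (keeping the eventual-consistency behavior of access entries in mind), you can read the entry back. The placeholders match the update example:

```shell
# Read back the groups on the updated entry and any associated policies.
CLUSTER_NAME="my-cluster"
PRINCIPAL_ARN="arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers"
aws eks describe-access-entry --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$PRINCIPAL_ARN" --query 'accessEntry.kubernetesGroups'
aws eks list-associated-access-policies --cluster-name "$CLUSTER_NAME" \
  --principal-arn "$PRINCIPAL_ARN"
```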

# Delete access entries
<a name="deleting-access-entries"></a>

If you discover that you deleted an access entry in error, you can always recreate it. If the access entry that you’re deleting is associated to any access policies, the associations are automatically deleted. You don’t have to disassociate access policies from an access entry before deleting the access entry.

You can delete an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-delete-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to delete an access entry from.

1. Choose the **Access** tab.

1. In the **Access entries** list, choose the access entry that you want to delete.

1. Choose **Delete**.

1. In the confirmation dialog box, choose **Delete**.

## AWS CLI
<a name="access-delete-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

1. To delete an access entry, replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *my-role* with the name of the IAM role that you no longer want to have access to your cluster.

   ```
   aws eks delete-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role
   ```
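To verify the deletion, list the remaining entries and filter for the deleted principal; the result should be empty:

```shell
# List remaining access entries; filter for the deleted role's ARN.
CLUSTER_NAME="my-cluster"
aws eks list-access-entries --cluster-name "$CLUSTER_NAME" \
  --query "accessEntries[?contains(@, 'my-role')]"
```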

# Set a custom username for EKS access entries
<a name="set-custom-username"></a>

When creating access entries for Amazon EKS, you can either use the automatically generated username or specify a custom username. This page explains both options and guides you through setting a custom username.

## Overview
<a name="_overview"></a>

The username in an access entry is used to identify the IAM principal in Kubernetes logs and audit trails. By default, Amazon EKS generates a username based on the IAM identity’s ARN, but you can specify a custom username if needed.

## Default username generation
<a name="_default_username_generation"></a>

If you don’t specify a value for username, Amazon EKS automatically generates a username based on the IAM Identity:
+  **For IAM Users**:
  + EKS sets the Kubernetes username to the ARN of the IAM user.
  + Example:

    ```
    arn:aws:iam::<111122223333>:user/<my-user>
    ```
+  **For IAM Roles**:
  + EKS sets the Kubernetes username to the STS ARN of the role when it’s assumed, with `{{SessionName}}` appended. If the ARN of the role that you specified contains a path, Amazon EKS removes the path in the generated username.
  + Example:

    ```
    arn:aws:sts::<111122223333>:assumed-role/<my-role>/{{SessionName}}
    ```

Unless you have a specific reason for specifying your own username, we recommend that you don’t specify one and let Amazon EKS auto-generate it for you.

## Setting a custom username
<a name="_setting_a_custom_username"></a>

When creating an access entry, you can specify a custom username using the `--username` parameter:

```
aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --type STANDARD --username <custom-username>
```

### Requirements for custom usernames
<a name="_requirements_for_custom_usernames"></a>

If you specify a custom username:
+ The username can’t start with `system:`, `eks:`, `aws:`, `amazon:`, or `iam:`.
+ If the username is for an IAM role, we recommend that you add `{{SessionName}}` or `{{SessionNameRaw}}` to the end of your username.
  + If you add either `{{SessionName}}` or `{{SessionNameRaw}}` to your username, the username must include a colon *before* `{{SessionName}}`.
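These rules can be checked locally before calling `create-access-entry`. The following sketch encodes the reserved-prefix and colon rules; the `validate_username` helper is hypothetical, not part of any AWS tooling:

```shell
# Hypothetical local validator for access entry usernames; encodes the
# documented rules: no reserved prefixes, and a colon somewhere before
# a trailing {{SessionName}} or {{SessionNameRaw}}.
validate_username() {
  case "$1" in
    system:*|eks:*|aws:*|amazon:*|iam:*) echo "invalid: reserved prefix" ;;
    *:*"{{SessionName}}"|*:*"{{SessionNameRaw}}") echo "valid" ;;
    *"{{SessionName}}"|*"{{SessionNameRaw}}") echo "invalid: colon required before session name" ;;
    *) echo "valid" ;;
  esac
}

validate_username "admin:{{SessionName}}"   # valid
validate_username "john{{SessionName}}"     # invalid: colon required before session name
validate_username "system:admin"            # invalid: reserved prefix
```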

# Create an access entry for an IAM role or user using an access policy and the AWS CLI
<a name="create-standard-access-entry-policy"></a>

Create Amazon EKS access entries that use AWS-managed EKS access policies to grant IAM identities standardized permissions for accessing and managing Kubernetes clusters.

## Overview
<a name="_overview"></a>

Access entries in Amazon EKS define how IAM identities (users and roles) can access and interact with your Kubernetes clusters. By creating access entries with EKS access policies, you can:
+ Grant specific IAM users or roles permission to access your EKS cluster
+ Control permissions using AWS-managed EKS access policies that provide standardized, predefined permission sets
+ Scope permissions to specific namespaces or cluster-wide
+ Simplify access management without modifying the `aws-auth` ConfigMap or creating Kubernetes RBAC resources
+ Use AWS-integrated approach to Kubernetes access control that covers common use cases while maintaining security best practices

This approach is recommended for most use cases because it provides AWS-managed, standardized permissions without requiring manual Kubernetes RBAC configuration. EKS access policies eliminate the need to manually configure Kubernetes RBAC resources and offer predefined permission sets that cover common use cases.

## Prerequisites
<a name="_prerequisites"></a>
+ The *authentication mode* of your cluster must be configured to enable *access entries*. For more information, see [Change authentication mode to use access entries](setting-up-access-entries.md).
+ Install and configure the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

## Step 1: Define access entry
<a name="ap1-s1"></a>

1. Find the ARN of the IAM identity, such as a user or role, that you want to grant permissions to.
   + Each IAM identity can have only one EKS access entry.

1. Determine if you want the Amazon EKS access policy permissions to apply to only a specific Kubernetes namespace, or across the entire cluster.
   + If you want to limit the permissions to a specific namespace, make note of the namespace name.

1. Select the EKS access policy you want for the IAM identity. This policy gives in-cluster permissions. Note the ARN of the policy.
   + For a list of policies, see [available access policies](access-policy-permissions.md).

1. Determine if the auto-generated username is appropriate for the access entry, or if you need to manually specify a username.
   +  AWS auto-generates this value based on the IAM identity. You can set a custom username. This is visible in Kubernetes logs.
   + For more information, see [Set a custom username for EKS access entries](set-custom-username.md).
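To look up the principal ARN for step 1, you can either describe the role directly or check the identity you’re currently signed in as. `my-role` is a placeholder role name:

```shell
# Print the ARN of a specific IAM role.
ROLE_NAME="my-role"
aws iam get-role --role-name "$ROLE_NAME" --query 'Role.Arn' --output text

# Or print the ARN of the identity making this call.
aws sts get-caller-identity --query 'Arn' --output text
```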

## Step 2: Create access entry
<a name="ap1-s2"></a>

After planning the access entry, use the AWS CLI to create it.

The following example covers most use cases. [View the CLI reference for all configuration options](https://docs.aws.amazon.com/cli/latest/reference/eks/create-access-entry.html).

You will attach the access policy in the next step.

```
aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --type STANDARD
```

## Step 3: Associate access policy
<a name="_step_3_associate_access_policy"></a>

The command differs based on whether you want the policy to be limited to a specified Kubernetes namespace.

You need the ARN of the access policy. Review the [available access policies](access-policy-permissions.md).

### Create policy without namespace scope
<a name="_create_policy_without_namespace_scope"></a>

```
aws eks associate-access-policy --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --access-scope type=cluster --policy-arn <access-policy-arn>
```

### Create with namespace scope
<a name="_create_with_namespace_scope"></a>

```
aws eks associate-access-policy --cluster-name <cluster-name> --principal-arn <iam-identity-arn> \
    --access-scope type=namespace,namespaces=my-namespace1,my-namespace2 --policy-arn <access-policy-arn>
```

## Next steps
<a name="_next_steps"></a>
+  [Create a kubeconfig so you can use kubectl with an IAM identity](create-kubeconfig.md) 

# Create an access entry using Kubernetes groups with the AWS CLI
<a name="create-k8s-group-access-entry"></a>

Create Amazon EKS access entries that use Kubernetes groups for authorization and require manual RBAC configuration.

**Note**  
For most use cases, we recommend using EKS Access Policies instead of the Kubernetes groups approach described on this page. EKS Access Policies provide a simpler, more AWS-integrated way to manage access without requiring manual RBAC configuration. Use the Kubernetes groups approach only when you need more granular control than what EKS Access Policies offer.

## Overview
<a name="_overview"></a>

Access entries define how IAM identities (users and roles) access your Kubernetes clusters. The Kubernetes groups approach grants IAM users or roles permission to access your EKS cluster through standard Kubernetes RBAC groups. This method requires creating and managing Kubernetes RBAC resources (Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings) and is recommended when you need highly customized permission sets, complex authorization requirements, or want to maintain consistent access control patterns across hybrid Kubernetes environments.

This topic does not cover creating access entries for IAM identities used for Amazon EC2 instances to join EKS clusters.

## Prerequisites
<a name="_prerequisites"></a>
+ The *authentication mode* of your cluster must be configured to enable *access entries*. For more information, see [Change authentication mode to use access entries](setting-up-access-entries.md).
+ Install and configure the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.
+ Familiarity with Kubernetes RBAC is recommended. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.

## Step 1: Define access entry
<a name="k8s-group-s1"></a>

1. Find the ARN of the IAM identity, such as a user or role, that you want to grant permissions to.
   + Each IAM identity can have only one EKS access entry.

1. Determine which Kubernetes groups you want to associate with this IAM identity.
   + You will need to create or use existing Kubernetes `Role`/`ClusterRole` and `RoleBinding`/`ClusterRoleBinding` resources that reference these groups.

1. Determine if the auto-generated username is appropriate for the access entry, or if you need to manually specify a username.
   +  AWS auto-generates this value based on the IAM identity. You can set a custom username. This is visible in Kubernetes logs.
   + For more information, see [Set a custom username for EKS access entries](set-custom-username.md).

## Step 2: Create access entry with Kubernetes groups
<a name="k8s-group-s2"></a>

After planning the access entry, use the AWS CLI to create it with the appropriate Kubernetes groups.

```
aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --type STANDARD --kubernetes-groups <groups>
```

Replace:
+  `<cluster-name>` with your EKS cluster name
+  `<iam-identity-arn>` with the ARN of the IAM user or role
+  `<groups>` with a comma-separated list of Kubernetes groups (e.g., "my-developers,my-readers")

 [View the CLI reference for all configuration options](https://docs.aws.amazon.com/cli/latest/reference/eks/create-access-entry.html).

## Step 3: Configure Kubernetes RBAC
<a name="_step_3_configure_kubernetes_rbac"></a>

For the IAM principal to have access to Kubernetes objects on your cluster, you must create and manage Kubernetes role-based access control (RBAC) objects:

1. Create Kubernetes `Role` or `ClusterRole` objects that define the permissions.

1. Create Kubernetes `RoleBinding` or `ClusterRoleBinding` objects on your cluster that specify the group name as a `subject` for `kind: Group`.

For detailed information about configuring groups and permissions in Kubernetes, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
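For example, assuming the access entry was created with `--kubernetes-groups my-viewers`, the following sketch binds that group to the built-in read-only `view` ClusterRole and then checks the resulting permissions. `my-viewers`, `my-viewers-binding`, and `test-user` are placeholder names:

```shell
# Bind the Kubernetes group from the access entry to the built-in
# read-only "view" ClusterRole.
GROUP_NAME="my-viewers"
kubectl create clusterrolebinding my-viewers-binding \
  --clusterrole=view \
  --group="$GROUP_NAME"

# Impersonate the group to confirm what it can (and can't) do.
kubectl auth can-i list pods --as="test-user" --as-group="$GROUP_NAME"
kubectl auth can-i create deployments --as="test-user" --as-group="$GROUP_NAME"
```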

## Next steps
<a name="_next_steps"></a>
+  [Create a kubeconfig so you can use kubectl with an IAM identity](create-kubeconfig.md) 