


# Security considerations for Kubernetes
<a name="security-k8s"></a>

The following are considerations for security in the cloud, as they affect Kubernetes in Amazon EKS clusters. For an in-depth review of security controls and practices in Kubernetes, see [Cloud Native Security and Kubernetes](https://kubernetes.io/docs/concepts/security/cloud-native-security/) in the Kubernetes documentation.

**Topics**
+ [Secure workloads with Kubernetes certificates](cert-signing.md)
+ [Understand Amazon EKS created RBAC roles and users](default-roles-users.md)
+ [Encrypt Kubernetes secrets with KMS on existing clusters](enable-kms.md)
+ [Use AWS Secrets Manager secrets with Amazon EKS Pods](manage-secrets.md)
+ [Default envelope encryption for all Kubernetes API Data](envelope-encryption.md)
+ [Harden Kubernetes RBAC in Amazon EKS](rbac-hardening.md)

# Secure workloads with Kubernetes certificates
<a name="cert-signing"></a>

The Kubernetes Certificates API automates [X.509](https://www.itu.int/rec/T-REC-X.509) credential provisioning. The API features a command line interface for Kubernetes API clients to request and obtain [X.509 certificates](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/) from a Certificate Authority (CA). You can use the `CertificateSigningRequest` (CSR) resource to request that a denoted signer sign the certificate. Your requests are either approved or denied before they’re signed. Kubernetes supports both built-in signers and custom signers with well-defined behaviors. This way, clients can predict what happens to their CSRs. To learn more about certificate signing, see [signing requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/).

One of the built-in signers is `kubernetes.io/legacy-unknown`. The `v1beta1` API of the CSR resource honored this legacy-unknown signer. However, the stable `v1` CSR API doesn’t allow the `signerName` to be set to `kubernetes.io/legacy-unknown`.

If you want to use the Amazon EKS CA to generate certificates on your clusters, you must use a custom signer. To use the CSR `v1` API version and generate a new certificate, you must migrate any existing manifests and API clients. Existing certificates that were created with the `v1beta1` API remain valid and function until they expire. The Amazon EKS custom signer has the following characteristics:
+ Trust distribution: None. There’s no standard trust or distribution for this signer in a Kubernetes cluster.
+ Permitted subjects: Any
+ Permitted x509 extensions: Honors subjectAltName and key usage extensions and discards other extensions
+ Permitted key usages: Must not include usages beyond ["key encipherment", "digital signature", "server auth"]
**Note**  
Client certificate signing is not supported.
+ Expiration/certificate lifetime: 1 year (default and maximum)
+ CA bit allowed/disallowed: Not allowed

## Example CSR generation with signerName
<a name="csr-example"></a>

These steps show how to generate a serving certificate for the DNS name `myserver.default.svc` using `signerName: beta.eks.amazonaws.com/app-serving`. Use this procedure as a guide for your own environment.

1. Run the following command to generate an RSA private key.

   ```
   openssl genrsa -out myserver.key 2048
   ```

1. Run the following command to generate a certificate request.

   ```
   openssl req -new -key myserver.key -out myserver.csr -subj "/CN=myserver.default.svc"
   ```

1. Generate a `base64` value for the CSR request and store it in a variable for use in a later step.

   ```
   base_64=$(cat myserver.csr | base64 -w 0 | tr -d "\n")
   ```

1. Run the following command to create a file named `mycsr.yaml`. In the following example, `beta.eks.amazonaws.com/app-serving` is the `signerName`.

   ```
   cat >mycsr.yaml <<EOF
   apiVersion: certificates.k8s.io/v1
   kind: CertificateSigningRequest
   metadata:
     name: myserver
   spec:
     request: $base_64
   signerName: beta.eks.amazonaws.com/app-serving
     usages:
       - digital signature
       - key encipherment
       - server auth
   EOF
   ```

1. Submit the CSR.

   ```
   kubectl apply -f mycsr.yaml
   ```

1. Approve the serving certificate.

   ```
   kubectl certificate approve myserver
   ```

1. Verify that the certificate was issued.

   ```
   kubectl get csr myserver
   ```

   An example output is as follows.

   ```
   NAME       AGE     SIGNERNAME                           REQUESTOR          CONDITION
   myserver   3m20s   beta.eks.amazonaws.com/app-serving   kubernetes-admin   Approved,Issued
   ```

1. Export the issued certificate.

   ```
   kubectl get csr myserver -o jsonpath='{.status.certificate}' | base64 -d > myserver.crt
   ```
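
As an optional check, you can confirm that an issued certificate and its private key belong together by comparing their public-key digests. The following self-contained sketch uses a locally generated self-signed certificate as a stand-in for the `myserver.crt` and `myserver.key` files from the steps above, so it can be run anywhere:

```shell
# Illustrative only: demo.crt/demo.key stand in for myserver.crt/myserver.key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
    -days 1 -subj "/CN=myserver.default.svc"
# A certificate matches a key when both report the same public-key digest.
cert_digest=$(openssl x509 -in demo.crt -noout -pubkey | openssl sha256)
key_digest=$(openssl pkey -in demo.key -pubout | openssl sha256)
[ "$cert_digest" = "$key_digest" ] && echo "certificate matches key"
```

Running the same two digest commands against `myserver.crt` and `myserver.key` after the export step confirms that the certificate corresponds to your private key.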

# Understand Amazon EKS created RBAC roles and users
<a name="default-roles-users"></a>

When you create a Kubernetes cluster, several default Kubernetes identities are created on that cluster for the proper functioning of Kubernetes. Amazon EKS creates Kubernetes identities for each of its default components. The identities provide Kubernetes role-based access control (RBAC) for the cluster components. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
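
As a brief illustration of the RBAC objects this topic covers, the following hypothetical manifest pairs a namespaced `Role` with a `RoleBinding` that grants it to a user. The names `pod-reader`, `read-pods`, and `example-user` are placeholders for illustration, not identities that Amazon EKS creates:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader        # hypothetical Role name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods         # hypothetical RoleBinding name
  namespace: default
subjects:
  - kind: User
    name: example-user    # hypothetical subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

`ClusterRole` and `ClusterRoleBinding` follow the same shape without the `namespace` field, and their permissions apply across all namespaces.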

When you install optional [add-ons](eks-add-ons.md) to your cluster, additional Kubernetes identities might be added to your cluster. For more information about identities not addressed by this topic, see the documentation for the add-on.

You can view the list of Amazon EKS created Kubernetes identities on your cluster using the AWS Management Console or `kubectl` command line tool. All of the user identities appear in the `kube` audit logs available to you through Amazon CloudWatch.

## AWS Management Console
<a name="default-role-users-console"></a>

### Prerequisite
<a name="_prerequisite"></a>

The [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that you use must have the permissions described in [Required permissions](view-kubernetes-resources.md#view-kubernetes-resources-permissions).

### To view Amazon EKS created identities using the AWS Management Console
<a name="to_view_amazon_eks_created_identities_using_the_shared_consolelong"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the **Clusters** list, choose the cluster that contains the identities that you want to view.

1. Choose the **Resources** tab.

1. Under **Resource types**, choose **Authorization**.

1. Choose **ClusterRoles**, **ClusterRoleBindings**, **Roles**, or **RoleBindings**. All resources prefaced with **eks** are created by Amazon EKS. Additional Amazon EKS created identity resources are:
   + The **ClusterRole** and **ClusterRoleBinding** named **aws-node**. The **aws-node** resources support the [Amazon VPC CNI plugin for Kubernetes](managing-vpc-cni.md), which Amazon EKS installs on all clusters.
   + A **ClusterRole** named **vpc-resource-controller-role** and a **ClusterRoleBinding** named **vpc-resource-controller-rolebinding**. These resources support the [Amazon VPC resource controller](https://github.com/aws/amazon-vpc-resource-controller-k8s), which Amazon EKS installs on all clusters.

   In addition to the resources that you see in the console, the following special user identities exist on your cluster, though they’re not visible in the cluster’s configuration:
   + **`eks:cluster-bootstrap`** – Used for `kubectl` operations during cluster bootstrap.
   + **`eks:support-engineer`** – Used for cluster management operations.

1. Choose a specific resource to view details about it. By default, you’re shown information in **Structured view**. In the top-right corner of the details page you can choose **Raw view** to see all information for the resource.

## Kubectl
<a name="default-role-users-kubectl"></a>

### Prerequisite
<a name="_prerequisite_2"></a>

The entity that you use (AWS Identity and Access Management (IAM) or OpenID Connect (OIDC)) to list the Kubernetes resources on the cluster must be authenticated by IAM or your OIDC identity provider. The entity must be granted permissions to use the Kubernetes `get` and `list` verbs for the `Role`, `ClusterRole`, `RoleBinding`, and `ClusterRoleBinding` resources on your cluster that you want the entity to work with. For more information about granting IAM entities access to your cluster, see [Grant IAM users and roles access to Kubernetes APIs](grant-k8s-access.md). For more information about granting entities authenticated by your own OIDC provider access to your cluster, see [Grant users access to Kubernetes with an external OIDC provider](authenticate-oidc-identity-provider.md).

### To view Amazon EKS created identities using `kubectl`
<a name="_to_view_amazon_eks_created_identities_using_kubectl"></a>

Run the command for the type of resource that you want to see. All returned resources that are prefaced with **eks** are created by Amazon EKS. In addition to the resources returned in the output from the commands, the following special user identities exist on your cluster, though they’re not visible in the cluster’s configuration:
+ **`eks:cluster-bootstrap`** – Used for `kubectl` operations during cluster bootstrap.
+ **`eks:support-engineer`** – Used for cluster management operations.

 **ClusterRoles** – `ClusterRoles` are scoped to your cluster, so any permission granted to a role applies to resources in any Kubernetes namespace on the cluster.

The following command returns all of the Amazon EKS created Kubernetes `ClusterRoles` on your cluster.

```
kubectl get clusterroles | grep eks
```

In addition to the `ClusterRoles` returned in the output that are prefaced with `eks:`, the following `ClusterRoles` exist.
+ **`aws-node`** – This `ClusterRole` supports the [Amazon VPC CNI plugin for Kubernetes](managing-vpc-cni.md), which Amazon EKS installs on all clusters.
+ **`vpc-resource-controller-role`** – This `ClusterRole` supports the [Amazon VPC resource controller](https://github.com/aws/amazon-vpc-resource-controller-k8s), which Amazon EKS installs on all clusters.

To see the specification for a `ClusterRole`, replace *eks:k8s-metrics* in the following command with a `ClusterRole` returned in the output of the previous command. The following example returns the specification for the *eks:k8s-metrics* `ClusterRole`.

```
kubectl describe clusterrole eks:k8s-metrics
```

An example output is as follows.

```
Name:         eks:k8s-metrics
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources         Non-Resource URLs  Resource Names  Verbs
  ---------         -----------------  --------------  -----
                    [/metrics]         []              [get]
  endpoints         []                 []              [list]
  nodes             []                 []              [list]
  pods              []                 []              [list]
  deployments.apps  []                 []              [list]
```

 **ClusterRoleBindings** – `ClusterRoleBindings` are scoped to your cluster.

The following command returns all of the Amazon EKS created Kubernetes `ClusterRoleBindings` on your cluster.

```
kubectl get clusterrolebindings | grep eks
```

In addition to the `ClusterRoleBindings` returned in the output, the following `ClusterRoleBindings` exist.
+ **`aws-node`** – This `ClusterRoleBinding` supports the [Amazon VPC CNI plugin for Kubernetes](managing-vpc-cni.md), which Amazon EKS installs on all clusters.
+ **`vpc-resource-controller-rolebinding`** – This `ClusterRoleBinding` supports the [Amazon VPC resource controller](https://github.com/aws/amazon-vpc-resource-controller-k8s), which Amazon EKS installs on all clusters.

To see the specification for a `ClusterRoleBinding`, replace *eks:k8s-metrics* in the following command with a `ClusterRoleBinding` returned in the output of the previous command. The following example returns the specification for the *eks:k8s-metrics* `ClusterRoleBinding`.

```
kubectl describe clusterrolebinding eks:k8s-metrics
```

An example output is as follows.

```
Name:         eks:k8s-metrics
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  eks:k8s-metrics
Subjects:
  Kind  Name             Namespace
  ----  ----             ---------
  User  eks:k8s-metrics
```

 **Roles** – `Roles` are scoped to a Kubernetes namespace. All Amazon EKS created `Roles` are scoped to the `kube-system` namespace.

The following command returns all of the Amazon EKS created Kubernetes `Roles` on your cluster.

```
kubectl get roles -n kube-system | grep eks
```

To see the specification for a `Role`, replace *eks:k8s-metrics* in the following command with the name of a `Role` returned in the output of the previous command. The following example returns the specification for the *eks:k8s-metrics* `Role`.

```
kubectl describe role eks:k8s-metrics -n kube-system
```

An example output is as follows.

```
Name:         eks:k8s-metrics
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources         Non-Resource URLs  Resource Names             Verbs
  ---------         -----------------  --------------             -----
  daemonsets.apps   []                 [aws-node]                 [get]
  deployments.apps  []                 [vpc-resource-controller]  [get]
```

 **RoleBindings** – `RoleBindings` are scoped to a Kubernetes namespace. All Amazon EKS created `RoleBindings` are scoped to the `kube-system` namespace.

The following command returns all of the Amazon EKS created Kubernetes `RoleBindings` on your cluster.

```
kubectl get rolebindings -n kube-system | grep eks
```

To see the specification for a `RoleBinding`, replace *eks:k8s-metrics* in the following command with a `RoleBinding` returned in the output of the previous command. The following example returns the specification for the *eks:k8s-metrics* `RoleBinding`.

```
kubectl describe rolebinding eks:k8s-metrics -n kube-system
```

An example output is as follows.

```
Name:         eks:k8s-metrics
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  eks:k8s-metrics
Subjects:
  Kind  Name             Namespace
  ----  ----             ---------
  User  eks:k8s-metrics
```

# Encrypt Kubernetes secrets with KMS on existing clusters
<a name="enable-kms"></a>

**Important**  
This procedure only applies to EKS clusters running Kubernetes version 1.27 or lower. If you are running Kubernetes version 1.28 or higher, your Kubernetes secrets are protected with envelope encryption by default. For more information, see [Default envelope encryption for all Kubernetes API Data](envelope-encryption.md).

If you enable [secrets encryption](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/), the Kubernetes secrets are encrypted using the AWS KMS key that you select. The KMS key must meet the following conditions:
+ Symmetric
+ Can encrypt and decrypt data
+ Created in the same AWS Region as the cluster
+ If the KMS key was created in a different account, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) must have access to the KMS key.

For more information, see [Allowing IAM principals in other accounts to use a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the *[AWS Key Management Service Developer Guide](https://docs.aws.amazon.com/kms/latest/developerguide/)*.

**Warning**  
You can’t disable secrets encryption after enabling it. This action is irreversible.

## eksctl

You can enable encryption in two ways:
+ Add encryption to your cluster with a single command.

  To automatically re-encrypt your secrets, run the following command.

  ```
  eksctl utils enable-secrets-encryption \
      --cluster my-cluster \
      --key-arn arn:aws:kms:region-code:account:key/key
  ```

  To opt-out of automatically re-encrypting your secrets, run the following command.

  ```
  eksctl utils enable-secrets-encryption \
      --cluster my-cluster \
      --key-arn arn:aws:kms:region-code:account:key/key \
      --encrypt-existing-secrets=false
  ```
+ Add encryption to your cluster with a `kms-cluster.yaml` file.

  ```
  apiVersion: eksctl.io/v1alpha5
  kind: ClusterConfig
  
  metadata:
    name: my-cluster
    region: region-code
  
  secretsEncryption:
    keyARN: arn:aws:kms:region-code:account:key/key
  ```

  To automatically re-encrypt your secrets, run the following command.

  ```
  eksctl utils enable-secrets-encryption -f kms-cluster.yaml
  ```

  To opt out of automatically re-encrypting your secrets, run the following command.

  ```
  eksctl utils enable-secrets-encryption -f kms-cluster.yaml --encrypt-existing-secrets=false
  ```  
## AWS Management Console


  1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

  1. Choose the cluster that you want to add KMS encryption to.

  1. Choose the **Overview** tab (this is selected by default).

  1. Scroll down to the **Secrets encryption** section and choose **Enable**.

  1. Select a key from the dropdown list and choose the **Enable** button. If no keys are listed, you must create one first. For more information, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html).

  1. Choose the **Confirm** button to use the chosen key.  
## AWS CLI


  1. Associate the [secrets encryption](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) configuration with your cluster using the following AWS CLI command. Replace the example values with your own.

     ```
     aws eks associate-encryption-config \
         --cluster-name my-cluster \
         --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:region-code:account:key/key"}}]'
     ```

     An example output is as follows.

     ```
     {
       "update": {
         "id": "3141b835-8103-423a-8e68-12c2521ffa4d",
         "status": "InProgress",
         "type": "AssociateEncryptionConfig",
         "params": [
           {
             "type": "EncryptionConfig",
             "value": "[{\"resources\":[\"secrets\"],\"provider\":{\"keyArn\":\"arn:aws:kms:region-code:account:key/key\"}}]"
           }
         ],
         "createdAt": 1613754188.734,
         "errors": []
       }
     }
     ```

  1. You can monitor the status of your encryption update with the following command. Use the cluster name and the update ID that were returned in the previous output. When a `Successful` status is displayed, the update is complete.

     ```
     aws eks describe-update \
         --region region-code \
         --name my-cluster \
         --update-id 3141b835-8103-423a-8e68-12c2521ffa4d
     ```

     An example output is as follows.

     ```
     {
       "update": {
         "id": "3141b835-8103-423a-8e68-12c2521ffa4d",
         "status": "Successful",
         "type": "AssociateEncryptionConfig",
         "params": [
           {
             "type": "EncryptionConfig",
             "value": "[{\"resources\":[\"secrets\"],\"provider\":{\"keyArn\":\"arn:aws:kms:region-code:account:key/key\"}}]"
           }
         ],
         "createdAt": 1613754188.734,
         "errors": []
       }
     }
     ```

  1. To verify that encryption is enabled in your cluster, run the `describe-cluster` command. The response contains an `EncryptionConfig` string.

     ```
     aws eks describe-cluster --region region-code --name my-cluster
     ```

After you enable encryption on your cluster, you must encrypt all existing secrets with the new key:

**Note**  
If you use `eksctl`, running the following command is necessary only if you opt out of re-encrypting your secrets automatically.

```
kubectl get secrets --all-namespaces -o json | kubectl annotate --overwrite -f - kms-encryption-timestamp="time value"
```

**Warning**  
If you enable [secrets encryption](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) for an existing cluster and the KMS key that you use is ever deleted, then there’s no way to recover the cluster. If you delete the KMS key, you permanently put the cluster in a degraded state. For more information, see [Deleting AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html).

**Note**  
By default, the `create-key` command creates a [symmetric encryption KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) with a key policy that gives the account root admin access on AWS KMS actions and resources. If you want to scope down the permissions, make sure that the `kms:DescribeKey` and `kms:CreateGrant` actions are permitted on the policy for the principal that calls the `create-cluster` API.  
For clusters using KMS Envelope Encryption, `kms:CreateGrant` permissions are required. The condition `kms:GrantIsForAWSResource` is not supported for the CreateCluster action, and should not be used in KMS policies to control `kms:CreateGrant` permissions for users performing CreateCluster.
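
As a sketch of what that scoped-down policy could look like, the following key policy statement permits the two required actions for a hypothetical cluster-creating role (the role name and account ID are placeholders):

```json
{
  "Sid": "AllowEKSClusterCreatorToUseKey",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/EKSClusterCreatorRole"
  },
  "Action": [
    "kms:DescribeKey",
    "kms:CreateGrant"
  ],
  "Resource": "*"
}
```

Per the note above, don’t attach a `kms:GrantIsForAWSResource` condition to this statement for principals that call `CreateCluster`.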

# Use AWS Secrets Manager secrets with Amazon EKS Pods
<a name="manage-secrets"></a>

To show secrets from Secrets Manager and parameters from Parameter Store as files mounted in Amazon EKS Pods, you can use the AWS Secrets and Configuration Provider (ASCP) for the [Kubernetes Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/).

With the ASCP, you can store and manage your secrets in Secrets Manager and then retrieve them through your workloads running on Amazon EKS. You can use IAM roles and policies to limit access to your secrets to specific Kubernetes Pods in a cluster. The ASCP retrieves the Pod identity and exchanges the identity for an IAM role. ASCP assumes the IAM role of the Pod, and then it can retrieve secrets from Secrets Manager that are authorized for that role.

If you use Secrets Manager automatic rotation for your secrets, you can also use the Secrets Store CSI Driver rotation reconciler feature to ensure you are retrieving the latest secret from Secrets Manager.
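
As a sketch of how a secret is surfaced to a Pod, a `SecretProviderClass` names the Secrets Manager objects to mount. The class name and secret name below are hypothetical; see the linked Secrets Manager guide for the full procedure:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-aws-secrets                     # hypothetical name
  namespace: default
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "MyAppCredentials"     # hypothetical Secrets Manager secret
        objectType: "secretsmanager"
```

A Pod then references this class through a `csi` volume that uses the `secrets-store.csi.k8s.io` driver, and the secret appears as a file at the volume’s mount path.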

**Note**  
 AWS Fargate (Fargate) node groups are not supported.

For more information, see [Using Secrets Manager secrets in Amazon EKS](https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_csi_driver.html) in the AWS Secrets Manager User Guide.

# Default envelope encryption for all Kubernetes API Data
<a name="envelope-encryption"></a>

Amazon Elastic Kubernetes Service (Amazon EKS) provides default envelope encryption for all Kubernetes API data in EKS clusters running Kubernetes version 1.28 or higher.

Envelope encryption protects the data you store with the Kubernetes API server. For example, envelope encryption applies to the configuration of your Kubernetes cluster, such as `ConfigMaps`. Envelope encryption does not apply to data on nodes or EBS volumes. EKS previously supported encrypting Kubernetes secrets, and now this envelope encryption extends to all Kubernetes API data.

This provides a managed, default experience that implements defense-in-depth for your Kubernetes applications and doesn’t require any action on your part.

Amazon EKS uses AWS [Key Management Service (KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) with [Kubernetes KMS provider v2](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#configuring-the-kms-provider-kms-v2) for this additional layer of security with an [Amazon Web Services owned key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-cmk), and the option for you to bring your own [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) (CMK) from AWS KMS.

## Understanding envelope encryption
<a name="_understanding_envelope_encryption"></a>

Envelope encryption is the process of encrypting plain text data with a data encryption key (DEK) before it’s sent to the datastore (etcd), and then encrypting the DEK with a root KMS key that is stored in a remote, centrally managed KMS system (AWS KMS). This is a defense-in-depth strategy because it protects the data with an encryption key (DEK), and then adds another security layer by protecting that DEK with a separate, securely stored encryption key called a key encryption key (KEK).
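
The same two-layer pattern can be sketched locally with `openssl`. This is illustrative only; in Amazon EKS the API server and AWS KMS perform these steps for you, and the KEK never leaves AWS KMS:

```shell
# Encrypt data with a DEK, then wrap the DEK with a KEK (a stand-in for the
# AWS KMS key). Only data.enc and dek.enc would be persisted.
echo "kind: ConfigMap example" > plaintext.txt
openssl rand -hex 32 > dek.key     # data encryption key
openssl rand -hex 32 > kek.key     # key encryption key
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek.key -in plaintext.txt -out data.enc
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek.key -in dek.key -out dek.enc
# Decryption unwraps the DEK with the KEK, then decrypts the data with it.
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:kek.key -in dek.enc -out dek.plain
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:dek.plain -in data.enc -out recovered.txt
```

Compromising the stored data requires both the encrypted data and the KEK, which is held in a separate, centrally managed system.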

## How Amazon EKS enables default envelope encryption with KMS v2 and AWS KMS
<a name="how_amazon_eks_enables_default_envelope_encryption_with_kms_v2_and_shared_aws_kms"></a>

Amazon EKS uses [KMS v2](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#kms-v2) to implement default envelope encryption for all API data in the managed Kubernetes control plane before it’s persisted in the [etcd](https://etcd.io/docs/v3.5/faq/) database. At startup, the cluster API server generates a data encryption key (DEK) from a secret seed combined with randomly generated data. Also at startup, the API server makes a call to the KMS plugin to encrypt the DEK seed using a remote key encryption key (KEK) from AWS KMS. This is a one-time call executed at startup of the API server and on KEK rotation. The API server then caches the encrypted DEK seed. After this, the API server uses the cached DEK seed to generate other single use DEKs based on a Key Derivation Function (KDF). Each of these generated DEKs is then used only once to encrypt a single Kubernetes resource before it’s stored in etcd. With the use of an encrypted cached DEK seed in KMS v2, the process of encrypting Kubernetes resources in the API server is both more performant and cost effective.

 **By default, this KEK is owned by AWS, but you can optionally bring your own from AWS KMS.** 
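
The seed-and-derivation idea can also be illustrated in shell: from one cached seed, a fresh single-use key is derived per resource by mixing in per-resource randomness. This uses HMAC as a loose stand-in for the KDF and is not the actual algorithm the API server uses:

```shell
# One cached seed; each resource gets its own derived single-use key.
seed=$(openssl rand -hex 32)
for resource in configmap-a secret-b; do
  nonce=$(openssl rand -hex 12)   # fresh randomness per resource
  dek=$(printf '%s' "$nonce" | openssl dgst -sha256 -hmac "$seed" -r | cut -d' ' -f1)
  echo "$resource: dek=$dek"
done
```

Because derivation is local, no KMS call is needed per resource, which is what makes the KMS v2 approach more performant and cost effective than encrypting each resource with a remote call.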

The diagram below depicts the generation and encryption of a DEK at the startup of the API server.

![Diagram depicting the generation and encryption of a DEK at the startup of the API server.](http://docs.aws.amazon.com/eks/latest/userguide/images/security-generate-dek.png)


The high-level diagram below depicts the encryption of a Kubernetes resource before it’s stored in etcd.

![High-level diagram depicting the encryption of a Kubernetes resource before it’s stored in etcd.](http://docs.aws.amazon.com/eks/latest/userguide/images/security-encrypt-request.png)


## Frequently asked questions
<a name="_frequently_asked_questions"></a>

### How does default envelope encryption improve the security posture of my EKS cluster?
<a name="_how_does_default_envelope_encryption_improve_the_security_posture_of_my_eks_cluster"></a>

This feature reduces the surface area and period of time in which metadata and customer content are unencrypted. With default envelope encryption, metadata and customer content are only ever in a temporarily unencrypted state in the kube-apiserver’s memory before being stored in etcd. The kube-apiserver’s memory is secured through the [Nitro system](https://docs.aws.amazon.com/whitepapers/latest/security-design-of-aws-nitro-system/the-components-of-the-nitro-system.html). Amazon EKS only uses [Nitro-based EC2 instances](https://docs.aws.amazon.com/whitepapers/latest/security-design-of-aws-nitro-system/security-design-of-aws-nitro-system.html) for the managed Kubernetes control plane. These instances have security control designs that prevent any system or person from accessing their memory.

### Which version of Kubernetes do I need to run in order to have this feature?
<a name="_which_version_of_kubernetes_do_i_need_to_run_in_order_to_have_this_feature"></a>

For default envelope encryption to be enabled, your Amazon EKS cluster has to be running Kubernetes version 1.28 or later.

### Is my data still secure if I’m running a Kubernetes cluster version that doesn’t support this feature?
<a name="_is_my_data_still_secure_if_im_running_a_kubernetes_cluster_version_that_doesnt_support_this_feature"></a>

Yes. At AWS, [security is our highest priority](https://aws.amazon.com/security/). We base all our digital transformation and innovation on the highest security operational practices, and stay committed to raising that bar.

All of the data stored in etcd is encrypted at the disk level for every EKS cluster, irrespective of the Kubernetes version being run. EKS uses root keys to generate volume encryption keys, which are managed by the EKS service. Additionally, every Amazon EKS cluster runs in an isolated VPC using cluster-specific virtual machines. Because of this architecture, and our practices around operational security, Amazon EKS has [achieved multiple compliance ratings and standards](https://docs.aws.amazon.com/eks/latest/userguide/compliance.html), including SOC 1, 2, and 3, PCI-DSS, ISO, and HIPAA eligibility. These compliance ratings and standards are maintained for all EKS clusters, with or without default envelope encryption.

### How does envelope encryption work in Amazon EKS?
<a name="_how_does_envelope_encryption_work_in_amazon_eks"></a>

At startup, the cluster API server generates a data encryption key (DEK) from a secret seed combined with randomly generated data. Also at startup, the API server makes a call to the KMS plugin to encrypt the DEK seed using a remote key encryption key (KEK) from AWS KMS. This is a one-time call executed at startup of the API server and on KEK rotation. The API server then caches the encrypted DEK seed. After this, the API server uses the cached DEK seed to generate other single use DEKs based on a Key Derivation Function (KDF). Each of these generated DEKs is then used only once to encrypt a single Kubernetes resource before it’s stored in etcd.

It’s important to note that there are additional calls made from the API server to verify the health and normal functionality of the AWS KMS integration. These additional health checks are visible in your AWS CloudTrail.

### Do I have to do anything or change any permissions for this feature to work in my EKS cluster?
<a name="_do_i_have_to_do_anything_or_change_any_permissions_for_this_feature_to_work_in_my_eks_cluster"></a>

No, you don’t have to take any action. Envelope encryption in Amazon EKS is now a default configuration that is enabled in all clusters running Kubernetes version 1.28 or higher. The AWS KMS integration is established by the Kubernetes API server managed by AWS. This means you do not need to configure any permissions to start using KMS encryption for your cluster.

### How can I know if default envelope encryption is enabled on my cluster?
<a name="_how_can_i_know_if_default_envelope_encryption_is_enabled_on_my_cluster"></a>

If you migrate to use your own CMK, then you will see the ARN of the KMS key associated with your cluster. Additionally, you can view the AWS CloudTrail event logs associated with the use of your cluster’s CMK.

If your cluster uses an AWS owned key, this is indicated in the EKS console, although the key's ARN is not displayed.
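
You can also check from the command line by describing the cluster (the cluster name and Region below are placeholders):

```bash
# Prints the encryption configuration; the KMS key ARN appears here if you
# associated your own CMK with the cluster.
aws eks describe-cluster \
  --name my-cluster \
  --region us-west-2 \
  --query 'cluster.encryptionConfig'
```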

### Can AWS access the AWS owned key used for default envelope encryption in Amazon EKS?
<a name="can_shared_aws_access_the_shared_aws_owned_key_used_for_default_envelope_encryption_in_amazon_eks"></a>

No. AWS has stringent security controls in Amazon EKS that prevent any person from accessing any plaintext encryption keys used for securing data in the etcd database. These security measures are also applied to the AWS owned KMS key.

### Is default envelope encryption enabled in my existing EKS cluster?
<a name="_is_default_envelope_encryption_enabled_in_my_existing_eks_cluster"></a>

If you are running an Amazon EKS cluster with Kubernetes version 1.28 or higher, then envelope encryption of all Kubernetes API data is enabled. For existing clusters, Amazon EKS uses the `eks:kms-storage-migrator` RBAC ClusterRole to migrate data that was previously not envelope encrypted in etcd to this new encryption state.

### What does this mean if I already enabled envelope encryption for Secrets in my EKS cluster?
<a name="_what_does_this_mean_if_i_already_enabled_envelope_encryption_for_secrets_in_my_eks_cluster"></a>

If you have an existing customer managed key (CMK) in KMS that was used to envelope encrypt your Kubernetes Secrets, that same key will be used as the KEK for envelope encryption of all Kubernetes API data types in your cluster.

### Is there any additional cost to running an EKS cluster with default envelope encryption?
<a name="_is_there_any_additional_cost_to_running_an_eks_cluster_with_default_envelope_encryption"></a>

There is no additional cost for the managed Kubernetes control plane if you are using an [AWS owned key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-cmk) for the default envelope encryption. By default, every EKS cluster running Kubernetes version 1.28 or later uses an AWS owned key. However, if you use your own AWS KMS key, normal [KMS pricing](https://aws.amazon.com/kms/pricing/) applies.

### How much does it cost to use my own AWS KMS key to encrypt Kubernetes API data in my cluster?
<a name="how_much_does_it_cost_to_use_my_own_shared_aws_kms_key_to_encrypt_kubernetes_api_data_in_my_cluster"></a>

You pay $1 per month to store any custom key that you create or import to KMS. KMS also charges for encryption and decryption requests. There is a free tier of 20,000 requests per month per account, and you pay $0.03 per 10,000 requests above the free tier. The free tier applies across all KMS usage in an account, so the cost of using your own AWS KMS key on your cluster is affected by the usage of that key on other clusters or AWS resources within your account.

### Will my KMS charges be higher now that my customer managed key (CMK) is being used to envelope encrypt all Kubernetes API data and not just Secrets?
<a name="_will_my_kms_charges_be_higher_now_that_my_customer_managed_key_cmk_is_being_used_to_envelope_encrypt_all_kubernetes_api_data_and_not_just_secrets"></a>

No. Our implementation with KMS v2 significantly reduces the number of calls made to AWS KMS. This will in turn reduce the costs associated with your CMK irrespective of the additional Kubernetes data being encrypted or decrypted in your EKS cluster.

As detailed above, the generated DEK seed used for encryption of Kubernetes resources is stored locally in the Kubernetes API server’s cache after it has been encrypted with the remote KEK. If the encrypted DEK seed is not in the API server’s cache, the API server will call AWS KMS to encrypt the DEK seed. The API server then caches the encrypted DEK seed for future use in the cluster without calling KMS. Similarly, for decrypt requests, the API server will call AWS KMS for the first decrypt request, after which the decrypted DEK seed will be cached and used for future decrypt operations.

For more information, see [KEP-3299: KMS v2 Improvements](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/3299-kms-v2-improvements) in the Kubernetes Enhancements on GitHub.

### Can I use the same CMK for multiple Amazon EKS clusters?
<a name="_can_i_use_the_same_cmk_key_for_multiple_amazon_eks_clusters"></a>

Yes. You can reuse a key by associating its ARN with another cluster in the same AWS Region during cluster creation. However, if you use the same CMK for multiple EKS clusters, you should put measures in place to prevent arbitrary disablement of the CMK. Otherwise, disabling a CMK that is associated with multiple EKS clusters has a correspondingly wider scope of impact.
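
For example, an existing key can be associated at cluster creation by passing its ARN to the AWS CLI (all names, IDs, and ARNs below are placeholders):

```bash
aws eks create-cluster \
  --name my-second-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-0123456789abcdef0,subnet-0123456789abcdef1 \
  --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"}}]'
```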

### What happens to my EKS cluster if my CMK becomes unavailable after default envelope encryption is enabled?
<a name="_what_happens_to_my_eks_cluster_if_my_cmk_becomes_unavailable_after_default_envelope_encryption_is_enabled"></a>

If you disable a KMS key, it cannot be used in any [cryptographic operation](https://docs.aws.amazon.com/kms/latest/developerguide/kms-cryptography.html#cryptographic-operations). Without access to the CMK, the API server can no longer encrypt and persist newly created Kubernetes objects, or decrypt previously encrypted objects stored in etcd. If the CMK is disabled, the cluster is immediately placed in an unhealthy/degraded state, at which point we will be unable to fulfill our [Service Commitment](https://aws.amazon.com/eks/sla/) until you re-enable the associated CMK.

When a CMK is disabled, you will receive notifications about the degraded health of your EKS cluster and the need to re-enable your CMK within 30 days of disabling it to ensure successful restoration of your Kubernetes control plane resources.

### How can I protect my EKS cluster from the impact of a disabled/deleted CMK?
<a name="_how_can_i_protect_my_eks_cluster_from_the_impact_of_a_disableddeleted_cmk"></a>

To protect your EKS clusters from such an occurrence, your key administrators should manage access to KMS key operations using IAM policies with a least privilege principle to reduce the risk of any arbitrary disablement or deletion of keys associated with EKS clusters. Additionally, you can set a [CloudWatch alarm](https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys-creating-cloudwatch-alarm.html) to be notified about the state of your CMK.
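
As one example of such a guardrail, a service control policy (SCP) along these lines denies key disablement and deletion to every principal except a designated key administration role (the account ID, key ARN, and role name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProtectEksClusterKey",
      "Effect": "Deny",
      "Action": [
        "kms:DisableKey",
        "kms:ScheduleKeyDeletion"
      ],
      "Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
      "Condition": {
        "ArnNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::111122223333:role/kms-key-admin"
        }
      }
    }
  ]
}
```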

### Will my EKS cluster be restored if I re-enable the CMK?
<a name="_will_my_eks_cluster_be_restored_if_i_re_enable_the_cmk"></a>

To ensure successful restoration of your EKS cluster, we strongly recommend re-enabling your CMK within the first 30 days of it being disabled. However, successful restoration also depends on whether the cluster undergoes any API-breaking changes due to an automatic Kubernetes upgrade that may take place while the cluster is in an unhealthy/degraded state.

### Why is my EKS cluster placed in an unhealthy/degraded state after disabling the CMK?
<a name="_why_is_my_eks_cluster_placed_in_an_unhealthydegraded_state_after_disabling_the_cmk"></a>

The EKS control plane's API server encrypts all objects during create and update operations, before they're stored in etcd, using a DEK that is encrypted and cached in the API server's memory. When an existing object is retrieved from etcd, the API server uses the same cached DEK to decrypt it. If you disable the CMK, the API server sees no immediate impact because of the cached DEK in its memory. However, when an API server instance restarts, it no longer has a cached DEK and must call AWS KMS for encrypt and decrypt operations. Without a usable CMK, these calls fail with a KMS_KEY_DISABLED error code, preventing the API server from booting successfully.

### What happens to my EKS cluster if I delete my CMK?
<a name="_what_happens_to_my_eks_cluster_if_i_delete_my_cmk"></a>

Deleting the CMK associated with your EKS cluster degrades its health beyond recovery. Without your cluster's CMK, the API server can no longer encrypt and persist new Kubernetes objects, or decrypt previously encrypted objects stored in the etcd database. Only delete the CMK for your EKS cluster when you are sure that you no longer need the cluster.

Note that if the CMK is not found (KMS_KEY_NOT_FOUND) or the grants for the CMK associated with your cluster are revoked (KMS_GRANT_REVOKED), your cluster will not be recoverable. For more information about cluster health and error codes, see [Cluster health FAQs and error codes with resolution paths](https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html#cluster-health-status).

### Will I still be charged for a degraded/unhealthy EKS cluster because I disabled or deleted my CMK?
<a name="_will_i_still_be_charged_for_a_degradedunhealthy_eks_cluster_because_i_disabled_or_deleted_my_cmk"></a>

Yes. Although the EKS control plane will not be usable in the event of a disabled CMK, AWS will still be running dedicated infrastructure resources allocated to the EKS cluster until the customer deletes it. Additionally, our [Service Commitment](https://aws.amazon.com/eks/sla/) will not apply in such a circumstance, because the degraded health and operation of the cluster results from a voluntary action or inaction by the customer.

### Can my EKS cluster be automatically upgraded when it’s in an unhealthy/degraded state because of a disabled CMK?
<a name="_can_my_eks_cluster_be_automatically_upgraded_when_its_in_an_unhealthydegraded_state_because_of_a_disabled_cmk"></a>

Yes. However, if your cluster has a disabled CMK, you have a 30-day period to re-enable it, during which your Kubernetes cluster will not be automatically upgraded. If this period lapses and you have not re-enabled the CMK, the cluster is automatically upgraded to the next version (n+1) that is in standard support, following the Kubernetes version lifecycle in EKS.

We strongly recommend re-enabling a disabled CMK as soon as you become aware of an impacted cluster. Note that although EKS will automatically upgrade these impacted clusters, there is no guarantee that they will recover successfully, especially if the cluster undergoes multiple automatic upgrades, since these may include changes to the Kubernetes API and unexpected behavior in the API server's bootstrap process.

### Can I use a KMS key alias?
<a name="_can_i_use_a_kms_key_alias"></a>

Yes. Amazon EKS [supports using KMS key aliases](https://docs.aws.amazon.com/eks/latest/APIReference/API_EncryptionConfig.html#API_EncryptionConfig_Contents). An alias is a friendly name for an [AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#kms_keys). For example, an alias lets you refer to a KMS key as **my-key** instead of `1234abcd-12ab-34cd-56ef-1234567890ab`.

### Can I still back up and restore my cluster resources using my own Kubernetes backup solution?
<a name="_can_i_still_backup_and_restore_my_cluster_resources_using_my_own_kubernetes_backup_solution"></a>

Yes. You can use a Kubernetes backup solution (like [Velero](https://velero.io/)) for Kubernetes cluster disaster recovery, data migration, and data protection. If you run a Kubernetes backup solution that accesses the cluster resources through the API server, any data that the application retrieves will be decrypted before reaching the client. This will allow you to recover the cluster resources in another Kubernetes cluster.

# Harden Kubernetes RBAC in Amazon EKS
<a name="rbac-hardening"></a>

Kubernetes role-based access control (RBAC) controls what actions identities can perform inside a cluster. Many cluster components, including CSI drivers and other add-ons installed as DaemonSets, require broad permissions to function. Reviewing and scoping these permissions reduces the potential scope of any unintended access.

This topic describes the permission considerations for common cluster components and the recommended controls.

## DaemonSet service account permissions
<a name="_daemonset_service_account_permissions"></a>

DaemonSet Pods run on every node in the cluster, so their service account tokens and the RBAC permissions those tokens grant are present on every node.

An unauthorized process on a node may be able to access the service account tokens of other Pods running on the same node, including DaemonSet Pods. The RBAC permissions granted to DaemonSet service accounts are the same on every node in the cluster.

Components commonly deployed as DaemonSets include:
+ CSI node drivers (`ebs-csi-node`, `efs-csi-node`, `mountpoint-s3-csi-node`)
+ The Amazon VPC CNI plugin (`aws-node`)
+ `kube-proxy`

If a DaemonSet Pod has AWS IAM credentials through EKS Pod Identity or IAM Roles for Service Accounts (IRSA), a process that gains access outside its container on the same node may also access those credentials. This extends the scope of impact beyond Kubernetes RBAC to any AWS API permissions granted to a DaemonSet’s IAM role.

**Important**  
When reviewing permissions, treat the Kubernetes RBAC permissions and the IAM permissions of every DaemonSet service account as accessible from every node in the cluster.

## CSI driver RBAC scope
<a name="_csi_driver_rbac_scope"></a>

CSI drivers commonly hold broad RBAC grants because they interact with nodes, persistent volumes, and storage APIs.

### Node object permissions
<a name="_node_object_permissions"></a>

CSI drivers may require RBAC permissions to modify Node objects to support features such as taint removal or other node management tasks. Due to Kubernetes RBAC limitations, these permissions apply to *all* Node objects in the cluster, not only the local node the driver is running on.

For the EBS CSI driver, the Helm chart provides a parameter (`node.serviceAccount.disableMutation`) that removes the node modification permission from the `ebs-csi-node` service account. Enabling this disables the taint removal feature.
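
If you install the driver with Helm, the setting can be applied with `--set` (the repository alias and release name below are commonly used defaults, not requirements):

```bash
helm upgrade --install aws-ebs-csi-driver \
  aws-ebs-csi-driver/aws-ebs-csi-driver \
  --namespace kube-system \
  --set node.serviceAccount.disableMutation=true
```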

### Service account token exposure
<a name="_service_account_token_exposure"></a>

CSI driver Pods may use projected service account tokens for authentication. On a node where an unauthorized process has gained access outside its container, those tokens may be accessible through the container filesystem or the kubelet API. If the service account is also associated with an IAM role through EKS Pod Identity or IRSA, an exposed token can be used to obtain AWS IAM credentials.

## Recommended controls
<a name="_recommended_controls"></a>

### Scope RBAC to least privilege
<a name="_scope_rbac_to_least_privilege"></a>
+ Review the ClusterRoles bound to CSI driver and DaemonSet service accounts. Remove permissions that are not required for your workloads.
+ For the EBS CSI driver, set `node.serviceAccount.disableMutation` to `true` if you don’t use the taint removal feature.
+ Use `kubectl auth can-i --list --as=system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT` to audit effective permissions.
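
For example, to audit the EBS CSI node DaemonSet (the service account name `ebs-csi-node-sa` is the chart default and may differ in your cluster):

```bash
kubectl auth can-i --list \
  --as=system:serviceaccount:kube-system:ebs-csi-node-sa
```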

### Enforce Pod security standards
<a name="_enforce_pod_security_standards"></a>

Apply the [Kubernetes Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/) using the built-in Pod Security Admission controller or a policy engine. At minimum, enforce the `baseline` profile cluster-wide and the `restricted` profile for workload namespaces. This limits the ability to create privileged containers outside of system namespaces.
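
With the built-in Pod Security Admission controller, profiles are applied through namespace labels. For example (the namespace name is a placeholder):

```bash
# Enforce the restricted profile in a workload namespace.
kubectl label namespace my-app \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=restricted
```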

### Use network policies
<a name="_use_network_policies"></a>

Apply network policies to restrict egress from CSI driver and DaemonSet Pods to only the endpoints they need (for example, the Kubernetes API server and AWS service endpoints). This limits what a compromised token or Pod can reach.
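
A sketch of such a policy, assuming your network policy engine enforces egress rules (the Pod selector label is a placeholder, and the wide-open HTTPS rule should be tightened to your API server and VPC endpoint CIDRs):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-csi-node-egress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app: ebs-csi-node
  policyTypes:
    - Egress
  egress:
    # Allow DNS resolution.
    - ports:
        - protocol: UDP
          port: 53
    # Allow HTTPS to the Kubernetes API server and AWS service endpoints.
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
```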

### Monitor RBAC activity
<a name="_monitor_rbac_activity"></a>

Enable Kubernetes audit logging and monitor for unexpected API calls from DaemonSet service accounts. Look for:
+ Node modifications from CSI driver service accounts
+ Pod creation in system namespaces
+ Unusual `get` or `list` calls on Secrets
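
If you send audit logs to CloudWatch Logs, a CloudWatch Logs Insights query along these lines can surface Node modifications by a CSI driver service account (the service account name is a placeholder):

```
fields @timestamp, user.username, verb, objectRef.resource, objectRef.name
| filter @logStream like /audit/
| filter user.username = "system:serviceaccount:kube-system:ebs-csi-node-sa"
| filter objectRef.resource = "nodes" and verb in ["patch", "update"]
| sort @timestamp desc
```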

For more information, see [Send control plane logs to CloudWatch Logs](control-plane-logs.md).