
Create local Amazon EKS clusters on AWS Outposts for high availability

You can use local clusters to run your entire Amazon EKS cluster locally on AWS Outposts. This helps mitigate the risk of application downtime that might result from temporary network disconnects to the cloud. These disconnects can be caused by fiber cuts or weather events. Because the entire Kubernetes cluster runs locally on Outposts, applications remain available. You can perform cluster operations during network disconnects to the cloud. For more information, see Prepare local Amazon EKS clusters on AWS Outposts for network disconnects. The following diagram shows a local cluster deployment.

Diagram: Outpost local cluster deployment

Local clusters are generally available for use with Outposts racks.

Supported AWS Regions

You can create local clusters in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Middle East (Bahrain), and South America (São Paulo). For detailed information about supported features, see Comparing the deployment options.

Topics

    Create an Amazon EKS local cluster

    You can create a local cluster with either of the tools described on this page:

    You can also use the AWS CLI, the Amazon EKS API, the AWS SDKs, AWS CloudFormation, or Terraform to create clusters on Outposts.
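
    For example, the following is a minimal sketch of the AWS CLI route. The cluster name, role ARN, subnet ID, and Outpost ARN are placeholders, and the --outpost-config flag mirrors the outpostConfig field of the Amazon EKS CreateCluster API; confirm the exact flag names for your CLI version with aws eks create-cluster help.

      # Hedged sketch only: create a local cluster from the command line.
      # All identifiers below (my-cluster, region-code, the role, subnet, and Outpost ARNs) are placeholders.
      aws eks create-cluster --region region-code --name my-cluster --kubernetes-version 1.24 \
        --role-arn arn:aws:iam::111122223333:role/myAmazonEKSLocalClusterRole \
        --resources-vpc-config subnetIds=subnet-ExampleID1 \
        --outpost-config controlPlaneInstanceType=m5.large,outpostArns=arn:aws:outposts:region-code:111122223333:outpost/op-uniqueid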

    eksctl

    To create a local cluster with eksctl

    1. Install version 0.194.0 or later of the eksctl command line tool on your device or AWS CloudShell. To install or update eksctl, see Installation in the eksctl documentation.

    2. Copy the contents that follow to your device. Replace the following values and then run the modified command to create the outpost-control-plane.yaml file:

      • Replace region-code with the supported AWS Region that you want to create your cluster in.

      • Replace my-cluster with a name for your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.

      • Replace vpc-ExampleID1 and subnet-ExampleID1 with the IDs of your existing VPC and subnet. The VPC and subnet must meet the requirements in Create a VPC and subnets for Amazon EKS clusters on AWS Outposts.

      • Replace uniqueid with the ID of your Outpost.

      • Replace m5.large with an instance type available on your Outpost. Before choosing an instance type, see Select instance types and placement groups for Amazon EKS clusters on AWS Outposts based on capacity considerations. Three control plane instances are deployed. You can’t change this number.

        cat >outpost-control-plane.yaml <<EOF
        apiVersion: eksctl.io/v1alpha5
        kind: ClusterConfig

        metadata:
          name: my-cluster
          region: region-code
          version: "1.24"

        vpc:
          clusterEndpoints:
            privateAccess: true
          id: "vpc-ExampleID1"
          subnets:
            private:
              outpost-subnet-1:
                id: "subnet-ExampleID1"

        outpost:
          controlPlaneOutpostARN: arn:aws:outposts:region-code:111122223333:outpost/op-uniqueid
          controlPlaneInstanceType: m5.large
        EOF

        For a complete list of all available options and defaults, see AWS Outposts Support and Config file schema in the eksctl documentation.

    3. Create the cluster using the configuration file that you created in the previous step. eksctl deploys the cluster into the VPC and Outpost subnet that you specified in the configuration file.

      eksctl create cluster -f outpost-control-plane.yaml

      Cluster provisioning takes several minutes. While the cluster is being created, several lines of output appear. The last line of output is similar to the following example line.

      [✓] EKS cluster "my-cluster" in "region-code" region is ready
      Tip

      To see most of the options that you can specify when creating a cluster with eksctl, use the eksctl create cluster --help command. To see all the available options, use a config file. For more information, see Using config files and the config file schema in the eksctl documentation. You can find config file examples on GitHub.

      The eksctl command automatically created an access entry for the IAM principal (user or role) that created the cluster and granted the IAM principal administrator permissions to Kubernetes objects on the cluster. If you don’t want the cluster creator to have administrator access to Kubernetes objects on the cluster, add the following text to the previous configuration file: bootstrapClusterCreatorAdminPermissions: false (at the same level as metadata, vpc, and outpost). If you added the option, then after cluster creation, you need to create an access entry for at least one IAM principal, or no IAM principals will have access to Kubernetes objects on the cluster.
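
      To confirm what eksctl set up, you can list the access entries on the new cluster; a quick, read-only check:

        # Optional check: list access entries to confirm that the cluster creator's entry exists.
        aws eks list-access-entries --cluster-name my-cluster --region region-code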

    AWS Management Console

    To create your cluster with the AWS Management Console

    1. You need an existing VPC and subnet that meet Amazon EKS requirements. For more information, see Create a VPC and subnets for Amazon EKS clusters on AWS Outposts.

    2. If you already have a local cluster IAM role, or you’re going to create your cluster with eksctl, then you can skip this step. By default, eksctl creates a role for you. Otherwise, create the local cluster IAM role with the following sub-steps.

      1. Run the following command to create an IAM trust policy JSON file.

        cat >eks-local-cluster-role-trust-policy.json <<EOF
        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": "ec2.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        }
        EOF
      2. Create the Amazon EKS cluster IAM role. To create an IAM role, the IAM principal that is creating the role must be assigned the iam:CreateRole action (permission).

        aws iam create-role --role-name myAmazonEKSLocalClusterRole --assume-role-policy-document file://"eks-local-cluster-role-trust-policy.json"
      3. Attach the Amazon EKS managed policy named AmazonEKSLocalOutpostClusterPolicy to the role. To attach an IAM policy to an IAM principal, the principal that is attaching the policy must be assigned one of the following IAM actions (permissions): iam:AttachUserPolicy or iam:AttachRolePolicy.

        aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSLocalOutpostClusterPolicy --role-name myAmazonEKSLocalClusterRole
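
        As an optional check, you can confirm that the role exists and that the managed policy is attached before moving on:

        # Read-only verification of the role and its attached policies.
        aws iam get-role --role-name myAmazonEKSLocalClusterRole --query Role.Arn
        aws iam list-attached-role-policies --role-name myAmazonEKSLocalClusterRole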
    3. Open the Amazon EKS console.

    4. At the top of the console screen, make sure that you have selected a supported AWS Region.

    5. Choose Add cluster and then choose Create.

    6. On the Configure cluster page, enter or select values for the following fields:

      • Kubernetes control plane location – Choose AWS Outposts.

      • Outpost ID – Choose the ID of the Outpost that you want to create your control plane on.

      • Instance type – Select an instance type. Only the instance types available in your Outpost are displayed. In the dropdown list, each instance type describes how many nodes the instance type is recommended for. Before choosing an instance type, see Select instance types and placement groups for Amazon EKS clusters on AWS Outposts based on capacity considerations. All replicas are deployed using the same instance type. You can’t change the instance type after your cluster is created. Three control plane instances are deployed. You can’t change this number.

      • Name – A name for your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.

      • Kubernetes version – Choose the Kubernetes version that you want to use for your cluster. We recommend selecting the latest version, unless you need to use an earlier version.

      • Cluster service role – Choose the Amazon EKS cluster IAM role that you created in a previous step to allow the Kubernetes control plane to manage AWS resources.

      • Kubernetes cluster administrator access – If you want the IAM principal (role or user) that’s creating the cluster to have administrator access to the Kubernetes objects on the cluster, accept the default (allow). Amazon EKS creates an access entry for the IAM principal and grants cluster administrator permissions to the access entry. For more information about access entries, see Grant IAM users access to Kubernetes with EKS access entries.

        If you want a different IAM principal than the one creating the cluster to have administrator access to Kubernetes cluster objects, choose the disallow option. After cluster creation, any IAM principal that has IAM permissions to create access entries can add an access entry for any IAM principal that needs access to Kubernetes cluster objects (for one way to do this with the AWS CLI, see the sketch after this list). For more information about the required IAM permissions, see Actions defined by Amazon Elastic Kubernetes Service in the Service Authorization Reference. If you choose the disallow option and don’t create any access entries, then no IAM principals will have access to the Kubernetes objects on the cluster.

      • Tags – (Optional) Add any tags to your cluster. For more information, see Organize Amazon EKS resources with tags. When you’re done with this page, choose Next.
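
        If you chose the disallow option, the following hedged sketch shows one way to grant a single IAM principal administrator access after the cluster is created. The role name my-admin-role is a hypothetical example, and AmazonEKSClusterAdminPolicy is the EKS access policy that grants cluster-wide administrator permissions.

        # Sketch: create an access entry and associate an admin access policy with it.
        # Replace my-cluster and the principal ARN (my-admin-role is a made-up example).
        aws eks create-access-entry --cluster-name my-cluster \
          --principal-arn arn:aws:iam::111122223333:role/my-admin-role
        aws eks associate-access-policy --cluster-name my-cluster \
          --principal-arn arn:aws:iam::111122223333:role/my-admin-role \
          --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
          --access-scope type=cluster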

    7. On the Specify networking page, select values for the following fields:

      • VPC – Choose an existing VPC. The VPC must have a sufficient number of IP addresses available for the cluster, any nodes, and other Kubernetes resources that you want to create. Your VPC must meet the requirements in VPC requirements and considerations.

      • Subnets – By default, all available subnets in the VPC specified in the previous field are preselected. The subnets that you choose must meet the requirements in Subnet requirements and considerations.

      • Security groups – (Optional) Specify one or more security groups that you want Amazon EKS to associate with the network interfaces that it creates. Amazon EKS automatically creates a security group that enables communication between your cluster and your VPC. Amazon EKS associates this security group, and any that you choose, to the network interfaces that it creates. For more information about the cluster security group that Amazon EKS creates, see View Amazon EKS security group requirements for clusters. You can modify the rules in the cluster security group that Amazon EKS creates. If you choose to add your own security groups, you can’t change them after cluster creation. For on-premises hosts to communicate with the cluster endpoint, you must allow inbound traffic from the cluster security group. For clusters that don’t have an ingress and egress internet connection (also known as private clusters), you must do one of the following:

        • Add the security group associated with required VPC endpoints. For more information about the required endpoints, see Using interface VPC endpoints in Subnet access to AWS services.

        • Modify the security group that Amazon EKS created to allow traffic from the security group associated with the VPC endpoints. When you’re done with this page, choose Next.
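
        As an illustration of the second option, the following sketch opens the cluster security group to traffic from the security group on your VPC endpoints. Both group IDs (sg-cluster-example and sg-vpce-example) are placeholders that you would look up in your account first.

          # Sketch only: allow all traffic from the VPC endpoint security group into the
          # cluster security group that Amazon EKS created.
          aws ec2 authorize-security-group-ingress --group-id sg-cluster-example \
            --ip-permissions 'IpProtocol=-1,UserIdGroupPairs=[{GroupId=sg-vpce-example}]'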

    8. On the Configure observability page, you can optionally choose which Metrics and Control plane logging options you want to turn on. By default, each log type is turned off.
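
      If you leave logging off here, the same settings can also be changed after creation. A small sketch with the AWS CLI, assuming the api and audit log types are the ones you want:

      # Sketch: enable two control plane log types on an existing cluster.
      aws eks update-cluster-config --region region-code --name my-cluster \
        --logging '{"clusterLogging":[{"types":["api","audit"],"enabled":true}]}'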

    9. On the Review and create page, review the information that you entered or selected on the previous pages. If you need to make changes, choose Edit. When you’re satisfied, choose Create. The Status field shows CREATING while the cluster is provisioned.

      Cluster provisioning takes several minutes.
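
      You can also watch the status from the command line until the cluster is ready:

      # Check the current status, or block until the cluster reports ACTIVE.
      aws eks describe-cluster --region region-code --name my-cluster --query cluster.status
      aws eks wait cluster-active --region region-code --name my-cluster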

    View your Amazon EKS local cluster

    1. After your cluster is created, you can view the Amazon EC2 control plane instances that were created.

      aws ec2 describe-instances --query 'Reservations[].Instances[].{Name:Tags[?Key==`Name`]|[0].Value}' | grep my-cluster-control-plane

      An example output is as follows.

      "Name": "my-cluster-control-plane-id1" "Name": "my-cluster-control-plane-id2" "Name": "my-cluster-control-plane-id3"

      Each instance is tainted with node-role.eks-local.amazonaws.com/control-plane so that no workloads are ever scheduled on the control plane instances. For more information about taints, see Taints and Tolerations in the Kubernetes documentation. Amazon EKS continuously monitors the state of local clusters. We perform automatic management actions, such as security patches and repairing unhealthy instances. When local clusters are disconnected from the cloud, we complete actions to ensure that the cluster is repaired to a healthy state upon reconnect.
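
      If the control plane instances appear as nodes in your cluster (as the taint implies), you can confirm the taint once kubectl is configured in the next step. The output formatting below is illustrative:

      # Illustrative check: list node names alongside their taint keys.
      kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'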

    2. If you created your cluster using eksctl, then you can skip this step. eksctl completes this step for you. Enable kubectl to communicate with your cluster by adding a new context to the kubectl config file. For instructions on how to create and update the file, see Connect kubectl to an EKS cluster by creating a kubeconfig file.

      aws eks update-kubeconfig --region region-code --name my-cluster

      An example output is as follows.

      Added new context arn:aws:eks:region-code:111122223333:cluster/my-cluster to /home/username/.kube/config
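
      A quick way to confirm that kubectl now targets the new cluster:

      # The current context should reference the cluster ARN added above.
      kubectl config current-context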
    3. To connect to your local cluster’s Kubernetes API server, you must have access to the local gateway for the subnet, or connect from within the VPC. For more information about connecting an Outpost rack to your on-premises network, see How local gateways for racks work in the AWS Outposts User Guide. If you use Direct VPC Routing and the Outpost subnet has a route to your local gateway, the private IP addresses of the Kubernetes control plane instances are automatically broadcasted over your local network. The local cluster’s Kubernetes API server endpoint is hosted in Amazon Route 53 (Route 53). The API service endpoint can be resolved by public DNS servers to the Kubernetes API servers' private IP addresses.

      Local clusters' Kubernetes control plane instances are configured with static elastic network interfaces with fixed private IP addresses that don’t change throughout the cluster lifecycle. Machines that interact with the Kubernetes API server might not have connectivity to Route 53 during network disconnects. If this is the case, we recommend configuring /etc/hosts with the static private IP addresses for continued operations (see the sketch after this step for one way to gather them). We also recommend setting up local DNS servers and connecting them to your Outpost. For more information, see the AWS Outposts documentation. Run the following command to confirm that communication with your cluster is established.

      kubectl get svc

      An example output is as follows.

      NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
      kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   28h
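
      As mentioned above, you might pre-populate /etc/hosts with the control plane's fixed private IP addresses so that API access keeps working if Route 53 is unreachable during a disconnect. A hedged sketch of one way to gather the pieces; the Name tag filter and the sample hosts line are illustrative:

      # Look up the cluster endpoint host name and the control plane private IP addresses.
      aws eks describe-cluster --region region-code --name my-cluster --query cluster.endpoint --output text
      aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=my-cluster-control-plane-*" \
        --query 'Reservations[].Instances[].PrivateIpAddress' --output text
      # Then add lines like the following to /etc/hosts (values are examples):
      # 10.0.1.10  my-cluster-endpoint-host-name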
    4. (Optional) Test authentication to your local cluster when it’s in a disconnected state from the AWS Cloud. For instructions, see Prepare local Amazon EKS clusters on AWS Outposts for network disconnects.

    Internal resources

    Amazon EKS creates the following resources on your cluster. The resources are for Amazon EKS internal use. For proper functioning of your cluster, don’t edit or modify these resources.

    • The following mirror Pods:

      • aws-iam-authenticator-node-hostname

      • eks-certificates-controller-node-hostname

      • etcd-node-hostname

      • kube-apiserver-node-hostname

      • kube-controller-manager-node-hostname

      • kube-scheduler-node-hostname

    • The following self-managed add-ons:

      • kube-system/coredns

      • kube-system/kube-proxy (not created until you add your first node)

      • kube-system/aws-node (not created until you add your first node). Local clusters use the Amazon VPC CNI plugin for Kubernetes for cluster networking. Do not change the configuration for control plane instances (Pods named aws-node-controlplane-*). There are configuration variables that you can use to change the default value for when the plugin creates new network interfaces. For more information, see the documentation on GitHub.

    • The following services:

      • default/kubernetes

      • kube-system/kube-dns

    • A PodSecurityPolicy named eks.system

    • A ClusterRole named eks:system:podsecuritypolicy

    • A ClusterRoleBinding named eks:system

    • A default PodSecurityPolicy

    • In addition to the cluster security group, Amazon EKS creates a security group in your AWS account that’s named eks-local-internal-do-not-use-or-edit-cluster-name-uniqueid. This security group allows traffic to flow freely between Kubernetes components running on the control plane instances.
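
    A read-only way to view most of these internal components on a running cluster (remember not to modify them):

      # List the system Pods, Services, and DaemonSets described above.
      kubectl get pods -n kube-system -o wide
      kubectl get svc --all-namespaces
      kubectl get daemonsets -n kube-system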

    Recommended next steps: