

# Containers & microservices
<a name="containersandmicroservices-pattern-list"></a>

**Topics**
+ [Access an Amazon Neptune database from an Amazon EKS container](access-amazon-neptune-database-from-amazon-eks-container.md)
+ [Access container applications privately on Amazon ECS by using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Access container applications privately on Amazon ECS by using AWS Fargate, AWS PrivateLink, and a Network Load Balancer](access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.md)
+ [Access container applications privately on Amazon EKS using AWS PrivateLink and a Network Load Balancer](access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer.md)
+ [Automate backups for Amazon RDS for PostgreSQL DB instances by using AWS Batch](automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch.md)
+ [Automate deployment of Node Termination Handler in Amazon EKS by using a CI/CD pipeline](automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline.md)
+ [Automatically build and deploy a Java application to Amazon EKS using a CI/CD pipeline](automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.md)
+ [Copy Amazon ECR container images across AWS accounts and AWS Regions](copy-ecr-container-images-across-accounts-regions.md)
+ [Create an Amazon ECS task definition and mount a file system on EC2 instances using Amazon EFS](create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.md)
+ [Deploy Lambda functions with container images](deploy-lambda-functions-with-container-images.md)
+ [Deploy Java microservices on Amazon ECS using AWS Fargate](deploy-java-microservices-on-amazon-ecs-using-aws-fargate.md)
+ [Deploy Kubernetes resources and packages using Amazon EKS and a Helm chart repository in Amazon S3](deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3.md)
+ [Deploy a CockroachDB cluster in Amazon EKS by using Terraform](deploy-cockroachdb-on-eks-using-terraform.md)
+ [Deploy a sample Java microservice on Amazon EKS and expose the microservice using an Application Load Balancer](deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer.md)
+ [Deploy a gRPC-based application on an Amazon EKS cluster and access it with an Application Load Balancer](deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.md)
+ [Deploy containerized applications on AWS IoT Greengrass V2 running as a Docker container](deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.md)
+ [Deploy containers by using Elastic Beanstalk](deploy-containers-by-using-elastic-beanstalk.md)
+ [Generate a static outbound IP address using a Lambda function, Amazon VPC, and a serverless architecture](generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.md)
+ [Identify duplicate container images automatically when migrating to an Amazon ECR repository](identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.md)
+ [Install SSM Agent on Amazon EKS worker nodes by using Kubernetes DaemonSet](install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset.md)
+ [Install the SSM Agent and CloudWatch agent on Amazon EKS worker nodes using preBootstrapCommands](install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands.md)
+ [Migrate NGINX Ingress Controllers when enabling Amazon EKS Auto Mode](migrate-nginx-ingress-controller-eks-auto-mode.md)
+ [Migrate your container workloads from Azure Red Hat OpenShift (ARO) to Red Hat OpenShift Service on AWS (ROSA)](migrate-container-workloads-from-aro-to-rosa.md)
+ [Run Amazon ECS tasks on Amazon WorkSpaces with Amazon ECS Anywhere](run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere.md)
+ [Run an ASP.NET Core web API Docker container on an Amazon EC2 Linux instance](run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.md)
+ [Run stateful workloads with persistent data storage by using Amazon EFS on Amazon EKS with AWS Fargate](run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate.md)
+ [Set up event-driven auto scaling in Amazon EKS by using Amazon EKS Pod Identity and KEDA](event-driven-auto-scaling-with-eks-pod-identity-and-keda.md)
+ [Streamline PostgreSQL deployments on Amazon EKS by using PGO](streamline-postgresql-deployments-amazon-eks-pgo.md)
+ [Simplify application authentication with mutual TLS in Amazon ECS by using Application Load Balancer](simplify-application-authentication-with-mutual-tls-in-amazon-ecs.md)
+ [More patterns](containersandmicroservices-more-patterns-pattern-list.md)

# Access an Amazon Neptune database from an Amazon EKS container
<a name="access-amazon-neptune-database-from-amazon-eks-container"></a>

*Ramakrishnan Palaninathan, Amazon Web Services*

## Summary
<a name="access-amazon-neptune-database-from-amazon-eks-container-summary"></a>

This pattern shows how to access Amazon Neptune, a fully managed graph database, from a container that runs on Amazon Elastic Kubernetes Service (Amazon EKS), a container orchestration service. Neptune DB clusters are confined within a virtual private cloud (VPC) on AWS. For this reason, accessing Neptune requires careful configuration of the VPC to enable connectivity.

Unlike Amazon Relational Database Service (Amazon RDS) for PostgreSQL, Neptune doesn't rely on typical database access credentials. Instead, it uses AWS Identity and Access Management (IAM) roles for authentication. Therefore, connecting to Neptune from Amazon EKS involves setting up an IAM role with the necessary permissions to access Neptune.

Furthermore, Neptune endpoints are accessible only within the VPC where the cluster resides. This means that you have to configure network settings to facilitate communication between Amazon EKS and Neptune. Depending on your specific requirements and networking preferences, there are [various approaches to configuring the VPC](https://docs.aws.amazon.com/neptune/latest/userguide/get-started-vpc.html) to enable seamless connectivity between Neptune and Amazon EKS. Each method offers distinct advantages and considerations, which provide flexibility in designing your database architecture to suit your application's needs.

## Prerequisites and limitations
<a name="access-amazon-neptune-database-from-amazon-eks-container-prereqs"></a>

**Prerequisites**
+ Install the latest version of **kubectl** (see [instructions](https://kubernetes.io/docs/tasks/tools/#kubectl)). To check your version, run: 

  ```
  kubectl version --short
  ```
+ Install the latest version of **eksctl** (see [instructions](https://eksctl.io/installation/)). To check your version, run: 

  ```
  eksctl info
  ```
+ Install the latest version of the AWS Command Line Interface (AWS CLI) version 2 (see [instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)). To check your version, run: 

  ```
  aws --version
  ```
+ Create a Neptune DB cluster (see [instructions](https://docs.aws.amazon.com/neptune/latest/userguide/get-started-cfn-create.html)). Make sure to establish communications between the cluster's VPC and Amazon EKS through [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-getting-started.html), or another method. Also make sure that the status of the cluster is "available" and that its security group has an inbound rule that allows traffic on port 8182.
+ Configure an IAM OpenID Connect (OIDC) provider on an existing Amazon EKS cluster (see [instructions](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html)).

**Product versions**
+ [Amazon EKS 1.27](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html)
+ [Amazon Neptune engine version 1.3.0.0 (2023-11-15)](https://docs.aws.amazon.com/neptune/latest/userguide/engine-releases-1.3.0.0.html)

## Architecture
<a name="access-amazon-neptune-database-from-amazon-eks-container-architecture"></a>

The following diagram shows the connection between Kubernetes pods in an Amazon EKS cluster and Neptune to provide access to a Neptune database.

![\[Connecting pods in a Kubernetes node with Amazon Neptune.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2fcf9e00-1664-462a-825e-b0fdd962f478/images/86da67e5-340e-4b29-acc6-2da416ce57eb.png)


**Automation and scale**

You can use the Amazon EKS [Horizontal Pod Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html) to scale this solution.
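For example, a Horizontal Pod Autoscaler manifest for the workload might look like the following sketch. The Deployment name `neptune-client` and the CPU threshold are illustrative assumptions, not part of this pattern:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: neptune-client-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: neptune-client   # hypothetical Deployment that queries Neptune
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```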

## Tools
<a name="access-amazon-neptune-database-from-amazon-eks-container-tools"></a>

**Services**
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Neptune](https://docs.aws.amazon.com/neptune/latest/userguide/intro.html) is a graph database service that helps you build and run applications that work with highly connected datasets.

## Best practices
<a name="access-amazon-neptune-database-from-amazon-eks-container-best-practices"></a>

For best practices, see [Identity and Access Management](https://aws.github.io/aws-eks-best-practices/security/docs/iam/) in the *Amazon EKS Best Practices Guides*.

## Epics
<a name="access-amazon-neptune-database-from-amazon-eks-container-epics"></a>

### Set environment variables
<a name="set-environment-variables"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify the cluster context. | Before you interact with your Amazon EKS cluster by using Helm or other command-line tools, you must define environment variables that encapsulate your cluster's details. These variables are used in subsequent commands to ensure that they target the correct cluster and resources. First, confirm that you are operating within the correct cluster context. This ensures that any subsequent commands are sent to the intended Kubernetes cluster. To verify the current context, run the following command.<pre>kubectl config current-context</pre> | AWS administrator, Cloud administrator | 
| Define the `CLUSTER_NAME` variable. | Define the `CLUSTER_NAME` environment variable for your Amazon EKS cluster. In the following command, replace the sample value `us-west-2` with the correct AWS Region for your cluster. Replace the sample value `eks-workshop` with your existing cluster name.<pre>export CLUSTER_NAME=$(aws eks describe-cluster --region us-west-2 --name eks-workshop --query "cluster.name" --output text)</pre> | AWS administrator, Cloud administrator | 
| Validate output. | To validate that the variables have been set properly, run the following command.<pre>echo $CLUSTER_NAME</pre>Verify that the output of this command matches the input you specified in the previous step. | AWS administrator, Cloud administrator | 

### Create IAM role and associate it with Kubernetes
<a name="create-iam-role-and-associate-it-with-kubernetes"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a service account. | You use [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html?sc_channel=el&sc_campaign=appswave&sc_content=eks-integrate-secrets-manager&sc_geo=mult&sc_country=mult&sc_outcome=acq) to map your Kubernetes service accounts to IAM roles, which enables fine-grained permissions management for your applications that run on Amazon EKS. You can use [eksctl](https://eksctl.io/) to create and associate an IAM role with a specific Kubernetes service account within your Amazon EKS cluster. The AWS managed policy `NeptuneFullAccess` allows read and write access to your specified Neptune cluster. You must have an [OIDC endpoint](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html?sc_channel=el&sc_campaign=appswave&sc_content=eks-integrate-secrets-manager&sc_geo=mult&sc_country=mult&sc_outcome=acq) associated with your cluster before you run these commands. Create a service account that you want to associate with an AWS managed policy named `NeptuneFullAccess`.<pre>eksctl create iamserviceaccount --name eks-neptune-sa --namespace default --cluster $CLUSTER_NAME --attach-policy-arn arn:aws:iam::aws:policy/NeptuneFullAccess --approve --override-existing-serviceaccounts</pre>where `eks-neptune-sa` is the name of the service account that you want to create. Upon completion, this command displays the following response:<pre>2024-02-07 01:12:39 [ℹ] created serviceaccount "default/eks-neptune-sa"</pre> | AWS administrator, Cloud administrator | 
| Verify that the account is set up properly. | Make sure that the `eks-neptune-sa` service account is set up correctly in the default namespace in your cluster.<pre>kubectl get sa eks-neptune-sa -o yaml</pre>The output should look like this:<pre>apiVersion: v1<br />kind: ServiceAccount<br />metadata:<br />  annotations:<br />    eks.amazonaws.com/role-arn: arn:aws:iam::123456789123:role/eksctl-eks-workshop-addon-iamserviceaccount-d-Role1-Q35yKgdQOlmM<br />  creationTimestamp: "2024-02-07T01:12:39Z"<br />  labels:<br />    app.kubernetes.io/managed-by: eksctl<br />  name: eks-neptune-sa<br />  namespace: default<br />  resourceVersion: "5174750"<br />  uid: cd6ba2f7-a0f5-40e1-a6f4-4081e0042316</pre> | AWS administrator, Cloud administrator | 
| Check connectivity. | Deploy a sample pod called `pod-util` and check connectivity with Neptune.<pre>apiVersion: v1<br />kind: Pod<br />metadata:<br />  name: pod-util<br />  namespace: default<br />spec:<br />  serviceAccountName: eks-neptune-sa<br />  containers:<br />  - name: pod-util<br />    image: public.ecr.aws/patrickc/troubleshoot-util<br />    command:<br />      - sleep<br />      - "3600"<br />    imagePullPolicy: IfNotPresent</pre><pre>kubectl apply -f pod-util.yaml</pre><pre>kubectl exec --stdin --tty pod-util -- /bin/bash<br />bash-5.1# curl -X POST -d '{"gremlin":"g.V().limit(1)"}' https://db-neptune-1.cluster-xxxxxxxxxxxx.us-west-2.neptune.amazonaws.com:8182/gremlin<br />{"requestId":"a4964f2d-12b1-4ed3-8a14-eff511431a0e","status":{"message":"","code":200,"attributes":{"@type":"g:Map","@value":[]}},"result":{"data":{"@type":"g:List","@value":[]},"meta":{"@type":"g:Map","@value":[]}}}<br />bash-5.1# exit<br />exit</pre> | AWS administrator, Cloud administrator | 
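The `curl` call in the connectivity check targets the cluster's Gremlin HTTP endpoint on port 8182 with a JSON body. As a minimal sketch (the cluster endpoint name is a placeholder), the request URL and body are built like this:

```python
import json

def gremlin_request(cluster_endpoint, query, port=8182):
    """Build the URL and JSON body for a Neptune Gremlin HTTP request."""
    url = f"https://{cluster_endpoint}:{port}/gremlin"
    body = json.dumps({"gremlin": query})
    return url, body

url, body = gremlin_request(
    "db-neptune-1.cluster-example.us-west-2.neptune.amazonaws.com",
    "g.V().limit(1)")
```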

### Validate connection activity
<a name="validate-connection-activity"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable IAM database authentication. | By default, IAM database authentication is disabled when you create a Neptune DB cluster. You can enable or disable IAM database authentication by using the AWS Management Console. Follow the steps in the AWS documentation to [enable IAM database authentication in Neptune](https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-enable.html). | AWS administrator, Cloud administrator | 
| Verify connections. | In this step, you interact with the `pod-util` container, which is already running, to install **awscurl** and verify the connection.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-amazon-neptune-database-from-amazon-eks-container.html) | AWS administrator, Cloud administrator | 

## Troubleshooting
<a name="access-amazon-neptune-database-from-amazon-eks-container-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Can't access the Neptune database. | Review the IAM policy that's attached to the service account. Make sure that it allows the necessary actions (for example, `neptune:Connect` and `neptune:DescribeDBInstances`) for the operations you want to run. | 

## Related resources
<a name="access-amazon-neptune-database-from-amazon-eks-container-resources"></a>
+ [Grant Kubernetes workloads access to AWS using Kubernetes Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/service-accounts.html) (Amazon EKS documentation)
+ [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) (Amazon EKS documentation)
+ [Creating a new Neptune DB cluster](https://docs.aws.amazon.com/neptune/latest/userguide/get-started-create-cluster.html) (Amazon Neptune documentation)

# Access container applications privately on Amazon ECS by using AWS PrivateLink and a Network Load Balancer
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-summary"></a>

This pattern describes how to privately host a Docker container application on Amazon Elastic Container Service (Amazon ECS) behind a Network Load Balancer, and access the application by using AWS PrivateLink. You can then use a private network to securely access services on the Amazon Web Services (AWS) Cloud. Amazon Relational Database Service (Amazon RDS) hosts the relational database for the application running on Amazon ECS with high availability (HA). Amazon Elastic File System (Amazon EFS) is used if the application requires persistent storage.

The Amazon ECS service running the Docker applications, with a Network Load Balancer at the front end, can be associated with a virtual private cloud (VPC) endpoint for access through AWS PrivateLink. This VPC endpoint service can then be shared with other VPCs by using their VPC endpoints.

You can also use [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) instead of an Amazon EC2 Auto Scaling group. For more information, see [Access container applications privately on Amazon ECS by using AWS Fargate, AWS PrivateLink, and a Network Load Balancer](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html?did=pg_card).

## Prerequisites and limitations
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), installed and configured on Linux, macOS, or Windows
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows
+ An application running on Docker

## Architecture
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-architecture"></a>

![\[Using AWS PrivateLink to access a container app on Amazon ECS behind a Network Load Balancer.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a316bf46-24db-4514-957d-abc60f8f6962/images/573951ed-74bb-4023-9d9c-43e77e4f8eda.png)


 

**Technology stack**
+ Amazon CloudWatch
+ Amazon Elastic Compute Cloud (Amazon EC2)
+ Amazon EC2 Auto Scaling
+ Amazon Elastic Container Registry (Amazon ECR)
+ Amazon ECS
+ Amazon RDS
+ Amazon Simple Storage Service (Amazon S3)
+ AWS Lambda
+ AWS PrivateLink
+ AWS Secrets Manager
+ Application Load Balancer
+ Network Load Balancer
+ VPC

**Automation and scale**
+ You can use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create this pattern by using [Infrastructure as Code](https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html).

## Tools
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-tools"></a>
+ [Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) – Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud.
+ [Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html) – Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application.
+ [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage containers on a cluster.
+ [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) – Amazon Elastic Container Registry (Amazon ECR) is a managed AWS container image registry service that is secure, scalable, and reliable.
+ [Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) – Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – Lambda is a compute service for running code without provisioning or managing servers.
+ [Amazon RDS](https://docs.aws.amazon.com/rds/) – Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet. It is designed to make web-scale computing easier for developers.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) – Secrets Manager helps you replace hardcoded credentials in your code, including passwords, by providing an API call to Secrets Manager to retrieve the secret programmatically.
+ [Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) – Amazon Virtual Private Cloud (Amazon VPC) helps you launch AWS resources into a virtual network that you've defined.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) – Elastic Load Balancing distributes incoming application or network traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, in multiple Availability Zones.
+ [Docker](https://www.docker.com/) – Docker helps developers to pack, ship, and run any application as a lightweight, portable, and self-sufficient container.

## Epics
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-epics"></a>

### Create networking components
<a name="create-networking-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create the load balancers
<a name="create-the-load-balancers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Network Load Balancer.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Application Load Balancer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an Amazon EFS file system
<a name="create-an-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EFS file system. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create mount targets for the subnets. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Verify that the subnets are mounted as targets.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an S3 bucket
<a name="create-an-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket.  | Open the Amazon S3 console and create an S3 bucket to store your application’s static assets, if required. | Cloud administrator | 

### Create a Secrets Manager secret
<a name="create-a-secrets-manager-secret"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS KMS key to encrypt the Secrets Manager secret. | Open the AWS Key Management Service (AWS KMS) console and create a KMS key. | Cloud administrator | 
| Create a Secrets Manager secret to store the Amazon RDS password. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an Amazon RDS instance
<a name="create-an-amazon-rds-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a DB subnet group.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Amazon RDS instance. | Create and configure an Amazon RDS instance within the private subnets. Make sure that **Multi-AZ** is turned on for HA. | Cloud administrator | 
| Load data to the Amazon RDS instance.  | Load the relational data required by your application into your Amazon RDS instance. This process will vary depending on your application's needs, as well as how your database schema is defined and designed. | Cloud administrator, DBA | 

### Create the Amazon ECS components
<a name="create-the-amazon-ecs-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an ECS cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create the Docker images.  | Create the Docker images by following the instructions in the *Related resources* section. | Cloud administrator | 
| Create Amazon ECR repositories. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator, DevOps engineer | 
| Authenticate your Docker client for the Amazon ECR repository.  | To authenticate your Docker client for the Amazon ECR repository, run the `aws ecr get-login-password` command in the AWS CLI. | Cloud administrator | 
| Push the Docker images to the Amazon ECR repository.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Amazon ECS task definition.  | A task definition is required to run Docker containers in Amazon ECS. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) For help with setting up your task definition, see "Creating a task definition" in the *Related resources* section. Make sure you provide the Docker images that you pushed to Amazon ECR. | Cloud administrator | 
| Create an Amazon ECS service.  | Create an Amazon ECS service by using the ECS cluster you created earlier. Make sure you choose Amazon EC2 as the launch type, and choose the task definition created in the previous step, as well as the target group of the Application Load Balancer. | Cloud administrator | 
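As an illustration of the steps above, a minimal EC2 launch type task definition might look like the following. The family name, account ID, repository, and ports are placeholders; `hostPort: 0` requests dynamic port mapping so that the Application Load Balancer target group can route to containers on any instance port:

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "web-app",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/web-app:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}
```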

### Create an Amazon EC2 Auto Scaling group
<a name="create-an-amazon-ec2-auto-scaling-group"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a launch configuration. | Open the Amazon EC2 console, and create a launch configuration. Make sure that the user data has the code to allow the EC2 instances to join the desired ECS cluster. For an example of the code required, see the *Related resources* section. | Cloud administrator | 
| Create an Amazon EC2 Auto Scaling group.  | Return to the Amazon EC2 console and under **Auto Scaling**, choose **Auto Scaling groups**. Set up an Amazon EC2 Auto Scaling group. Make sure you choose the private subnets and launch configuration that you created earlier. | Cloud administrator | 

### Set up AWS PrivateLink
<a name="set-up-aws-privatelink"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the AWS PrivateLink endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html) For more information, see the *Related resources* section. | Cloud administrator | 

### Create a VPC endpoint
<a name="create-a-vpc-endpoint"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC endpoint. | Create a VPC endpoint for the AWS PrivateLink endpoint that you created earlier. The VPC endpoint's fully qualified domain name (FQDN) points to the AWS PrivateLink endpoint's FQDN. This creates an elastic network interface to the VPC endpoint service that the DNS endpoints can access. | Cloud administrator | 
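From the consumer side, the equivalent AWS CLI call might look like the following sketch. The service name, VPC, subnet, and security group IDs are placeholders for your environment.

```shell
# In the consumer VPC, create an interface endpoint that connects
# to the endpoint service created earlier (all IDs are placeholders).
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0abc123 \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef \
  --subnet-ids subnet-0abc123 subnet-0def456 \
  --security-group-ids sg-0abc123
```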

### Create the Lambda function
<a name="create-the-lambda-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Lambda function. | On the AWS Lambda console, create a Lambda function to update the Application Load Balancer IP addresses as targets for the Network Load Balancer. For more information on this, see the [Using AWS Lambda to enable static IP addresses for Application Load Balancers](https://aws.amazon.com/blogs/networking-and-content-delivery/using-aws-lambda-to-enable-static-ip-addresses-for-application-load-balancers/) blog post. | App developer | 

## Related resources
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer-resources"></a>

**Create the load balancers:**
+ [Use a Network Load Balancer for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/nlb.html)
+ [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html)
+ [Use an Application Load Balancer for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/alb.html)
+ [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html)

**Create an Amazon EFS file system:**
+ [Create an Amazon EFS file system](https://docs.aws.amazon.com/efs/latest/ug/creating-using-create-fs.html)
+ [Create mount targets in Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html)

**Create an S3 bucket:**
+ [Create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html#creating-bucket)

**Create a Secrets Manager secret:**
+ [Create keys in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html)
+ [Create a secret in AWS Secrets Manager ](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html)

**Create an Amazon RDS instance:**
+ [Create an Amazon RDS DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html)

**Create the Amazon ECS components:**
+ [Create an Amazon ECS cluster ](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-ec2-cluster-console-v2.html)
+ [Create a Docker image](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html)
+ [Create an Amazon ECR repository ](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html)
+ [Authenticate Docker with Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth)
+ [Push an image to an Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html)
+ [Create Amazon ECS task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html)
+ [Create an Amazon ECS service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)

**Create an Amazon EC2 Auto Scaling group:**
+ [Create a launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-config.html)
+ [Create an Auto Scaling group using a launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg.html)
+ [Bootstrap container instances with Amazon EC2 user data](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html)

**Set up AWS PrivateLink:**
+ [VPC endpoint services (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-share-your-services.html)

**Create a VPC endpoint:**
+ [Interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html)

**Create the Lambda function:**
+ [Create a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html)

**Other resources:**
+ [Using static IP addresses for Application Load Balancers](https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/)
+ [Securely accessing services over AWS PrivateLink](https://d1.awsstatic.com/whitepapers/aws-privatelink.pdf)

# Access container applications privately on Amazon ECS by using AWS Fargate, AWS PrivateLink, and a Network Load Balancer
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-summary"></a>

This pattern describes how to privately host a Docker container application on the Amazon Web Services (AWS) Cloud by using Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type, behind a Network Load Balancer, and access the application by using AWS PrivateLink. Amazon Relational Database Service (Amazon RDS) hosts the relational database for the application running on Amazon ECS with high availability (HA). You can use Amazon Elastic File System (Amazon EFS) if the application requires persistent storage.

This pattern uses a [Fargate launch type](https://docs.aws.amazon.com/AmazonECS/latest/userguide/launch_types.html) for the Amazon ECS service running the Docker applications, with a Network Load Balancer at the front end. It can then be associated with a virtual private cloud (VPC) endpoint for access through AWS PrivateLink. This VPC endpoint service can then be shared with other VPCs by using their VPC endpoints.

You can use Fargate with Amazon ECS to run containers without having to manage servers or clusters of Amazon Elastic Compute Cloud (Amazon EC2) instances. You can also use an Amazon EC2 Auto Scaling group instead of Fargate. For more information, see [Access container applications privately on Amazon ECS by using AWS PrivateLink and a Network Load Balancer](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-privatelink-and-a-network-load-balancer.html?did=pg_card&trk=pg_card).

## Prerequisites and limitations
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), installed and configured on Linux, macOS, or Windows
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows
+ An application running on Docker

## Architecture
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-architecture"></a>

![\[Using PrivateLink to access a container app on Amazon ECS with an AWS Fargate launch type.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/31cca5e2-8d8b-45ec-b872-a06b0dd97007/images/57cc9995-45f4-4039-a0bf-2d2b3d6a05de.png)


**Technology stack**
+ Amazon CloudWatch
+ Amazon Elastic Container Registry (Amazon ECR)
+ Amazon ECS
+ Amazon EFS
+ Amazon RDS
+ Amazon Simple Storage Service (Amazon S3)
+ AWS Fargate
+ AWS PrivateLink
+ AWS Secrets Manager
+ Application Load Balancer
+ Network Load Balancer
+ VPC

**Automation and scale**
+ You can use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create this pattern by using [Infrastructure as Code](https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html).

## Tools
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-tools"></a>

**AWS services**
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed AWS container image registry service that is secure, scalable, and reliable.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage containers on a cluster.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
+ [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances.
+ [Amazon Relational Database Service (Amazon RDS)](https://docs.aws.amazon.com/rds/index.html) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is storage for the internet. It is designed to make web-scale computing easier for developers.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/) helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you've defined.
+ [Elastic Load Balancing (ELB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in multiple Availability Zones.

**Other tools**
+ [Docker](https://www.docker.com/) helps developers to easily pack, ship, and run any application as a lightweight, portable, and self-sufficient container.

## Epics
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-epics"></a>

### Create networking components
<a name="create-networking-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create the load balancers
<a name="create-the-load-balancers"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Network Load Balancer.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) For help with this and the other tasks in this epic, see the *Related resources* section. | Cloud administrator | 
| Create an Application Load Balancer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an Amazon EFS file system
<a name="create-an-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EFS file system. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create mount targets for the subnets. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Verify that the mount targets were created in the subnets.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create an S3 bucket
<a name="create-an-s3-bucket"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket. | Open the Amazon S3 console and [create an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html#creating-bucket) to store your application’s static assets, if required. | Cloud administrator | 
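With the AWS CLI, this step might look like the following sketch. Bucket names are globally unique, so the name shown is a placeholder.

```shell
# Create a bucket for the application's static assets
# (the bucket name is a placeholder and must be globally unique).
aws s3api create-bucket \
  --bucket my-app-static-assets-example \
  --region us-east-1
```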

### Create a Secrets Manager secret
<a name="create-a-secrets-manager-secret"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Create an AWS KMS key to encrypt the Secrets Manager secret. | Open the AWS Key Management Service (AWS KMS) console and create a KMS key. | Cloud administrator | 
|  Create a Secrets Manager secret to store the Amazon RDS password. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
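The two tasks above can be sketched with the AWS CLI as follows. The key description, secret name, and credential values are placeholders; never commit real credentials to scripts.

```shell
# Create a customer managed KMS key, then store the Amazon RDS
# credentials as a secret encrypted with it (values are placeholders).
KEY_ID=$(aws kms create-key \
  --description "Key for the RDS secret" \
  --query KeyMetadata.KeyId --output text)

aws secretsmanager create-secret \
  --name my-app/rds-password \
  --kms-key-id "$KEY_ID" \
  --secret-string '{"username":"appuser","password":"EXAMPLE-PASSWORD"}'
```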

### Create an Amazon RDS instance
<a name="create-an-amazon-rds-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a DB subnet group.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Amazon RDS instance. | Create and configure an Amazon RDS instance within the private subnets. Make sure that **Multi-AZ** is turned on for high availability (HA). | Cloud administrator | 
| Load data to the Amazon RDS instance.  | Load the relational data required by your application into your Amazon RDS instance. This process varies depending on your application's requirements and your database schema design. | DBA | 
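An AWS CLI sketch of the Multi-AZ instance creation, assuming a PostgreSQL engine and the DB subnet group created in the previous task; the identifier, instance class, storage size, and credentials are placeholders.

```shell
# Create a Multi-AZ PostgreSQL DB instance in the private subnets
# (identifier, class, subnet group, and credentials are placeholders).
aws rds create-db-instance \
  --db-instance-identifier my-app-db \
  --engine postgres \
  --db-instance-class db.t3.medium \
  --allocated-storage 100 \
  --multi-az \
  --db-subnet-group-name my-private-db-subnets \
  --master-username appuser \
  --master-user-password EXAMPLE-PASSWORD
```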

### Create the Amazon ECS components
<a name="create-the-amazon-ecs-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an ECS cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create the Docker images. | Create the Docker images by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html). | Cloud administrator | 
| Create an Amazon ECR repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator, DevOps engineer | 
| Push the Docker images to the Amazon ECR repository.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
| Create an Amazon ECS task definition.  | A task definition is required to run Docker containers in Amazon ECS. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) For help with setting up your task definition, see "Creating a task definition" in the *Related resources* section. Make sure that you provide the Docker images that you pushed to Amazon ECR. | Cloud administrator | 
| Create an ECS service and choose Fargate as the launch type. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 
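With the AWS CLI, creating the Fargate service might look like the following sketch. Fargate tasks require the `awsvpc` network mode, so a network configuration is supplied; the cluster, task definition, subnet, security group, and target group values are placeholders.

```shell
# Create the ECS service on the Fargate launch type in private subnets
# (all names, IDs, and ARNs are placeholders).
aws ecs create-service \
  --cluster my-fargate-cluster \
  --service-name my-app-service \
  --task-definition my-app-task:1 \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc123],securityGroups=[sg-0abc123],assignPublicIp=DISABLED}" \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-alb-tg/0123456789abcdef,containerName=my-app,containerPort=80"
```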

### Set up AWS PrivateLink
<a name="set-up-aws-privatelink"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the AWS PrivateLink endpoint. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer.html) | Cloud administrator | 

### Create a VPC endpoint
<a name="create-a-vpc-endpoint"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a VPC endpoint. | [Create a VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) for the AWS PrivateLink endpoint that you created earlier. The VPC endpoint's fully qualified domain name (FQDN) points to the AWS PrivateLink endpoint's FQDN. This creates an elastic network interface to the VPC endpoint service that the Domain Name System (DNS) endpoints can access. | Cloud administrator | 

### Set the target
<a name="set-the-target"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add the Application Load Balancer as a target. | To add the Application Load Balancer as a target for the Network Load Balancer, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/application-load-balancer-target.html). | App developer | 
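This step uses an ALB-type target group on the Network Load Balancer. A hedged AWS CLI sketch, where the target group name, VPC ID, and load balancer ARNs are placeholders:

```shell
# Create an ALB-type target group for the NLB, then register the
# Application Load Balancer as its target (ARNs are placeholders).
TG_ARN=$(aws elbv2 create-target-group \
  --name my-alb-targets \
  --target-type alb \
  --protocol TCP --port 80 \
  --vpc-id vpc-0abc123 \
  --query 'TargetGroups[0].TargetGroupArn' --output text)

aws elbv2 register-targets \
  --target-group-arn "$TG_ARN" \
  --targets Id=arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef
```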

## Related resources
<a name="access-container-applications-privately-on-amazon-ecs-by-using-aws-fargate-aws-privatelink-and-a-network-load-balancer-resources"></a>

**Create the load balancers:**
+ [Use a Network Load Balancer for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/nlb.html)
+ [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html)
+ [Use an Application Load Balancer for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/alb.html)
+ [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html)

**Create an Amazon EFS file system:**
+ [Create an Amazon EFS file system](https://docs.aws.amazon.com/efs/latest/ug/creating-using-create-fs.html)
+ [Create mount targets in Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html)

**Create a Secrets Manager secret:**
+ [Create keys in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html)
+ [Create a secret in AWS Secrets Manager ](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html)

**Create an Amazon RDS instance:**
+ [Create an Amazon RDS DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html)

**Create the Amazon ECS components**
+ [Create an Amazon ECR repository ](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html)
+ [Authenticate Docker with Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth)
+ [Push an image to an Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html)
+ [Create Amazon ECS task definition ](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html)
+ [Create an Amazon ECS service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-console-v2.html)

**Other resources:**
+ [Securely accessing services over AWS PrivateLink](https://d1.awsstatic.com/whitepapers/aws-privatelink.pdf)

# Access container applications privately on Amazon EKS using AWS PrivateLink and a Network Load Balancer
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-summary"></a>

This pattern describes how to privately host a Docker container application on Amazon Elastic Kubernetes Service (Amazon EKS) behind a Network Load Balancer, and access the application by using AWS PrivateLink. You can then use a private network to securely access services on the Amazon Web Services (AWS) Cloud. 

The Amazon EKS cluster running the Docker applications, with a Network Load Balancer at the front end, can be associated with a virtual private cloud (VPC) endpoint for access through AWS PrivateLink. This VPC endpoint service can then be shared with other VPCs by using their VPC endpoints.

The setup described by this pattern is a secure way to share application access among VPCs and AWS accounts. It requires no special connectivity or routing configurations, because the connection between the consumer and provider accounts is on the global AWS backbone and doesn’t traverse the public internet.

## Prerequisites and limitations
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-prereqs"></a>

**Prerequisites**
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows.
+ An application running on Docker.
+ An active AWS account.
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html), installed and configured on Linux, macOS, or Windows.
+ An existing Amazon EKS cluster with tagged private subnets and configured to host applications. For more information, see [Subnet tagging](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging) in the Amazon EKS documentation. 
+ Kubectl, installed and configured to access resources on your Amazon EKS cluster. For more information, see [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation. 

## Architecture
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-architecture"></a>

![\[Use PrivateLink and a Network Load Balancer to access an application in an Amazon EKS container.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/ce977924-012c-4fb6-8e51-94d6e5c829a6/images/378456a3-f4d1-4a57-bb36-879c240cabfb.png)


**Technology stack**
+ Amazon EKS
+ AWS PrivateLink
+ Network Load Balancer

**Automation and scale**
+ Kubernetes manifests can be tracked and managed on a Git-based repository, and deployed by using continuous integration and continuous delivery (CI/CD) in AWS CodePipeline. 
+ You can use AWS CloudFormation to create this pattern by using infrastructure as code (IaC).

## Tools
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-tools"></a>
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – AWS Command Line Interface (AWS CLI) is an open-source tool that enables you to interact with AWS services using commands in your command-line shell.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) – Elastic Load Balancing distributes incoming application or network traffic across multiple targets, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses, in one or more Availability Zones.
+ [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) – Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
+ [Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) – Amazon Virtual Private Cloud (Amazon VPC) helps you launch AWS resources into a virtual network that you've defined.
+ [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) – Kubectl is a command line utility for running commands against Kubernetes clusters.

## Epics
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-epics"></a>

### Deploy the Kubernetes deployment and service manifest files
<a name="deploy-the-kubernetes-deployment-and-service-manifest-files"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Create the Kubernetes deployment manifest file. | Create a deployment manifest file by modifying the following sample file according to your requirements.<pre>apiVersion: apps/v1<br />kind: Deployment<br />metadata:<br />  name: sample-app<br />spec:<br />  replicas: 3<br />  selector:<br />    matchLabels:<br />      app: nginx<br />  template:<br />    metadata:<br />      labels:<br />        app: nginx<br />    spec:<br />      containers:<br />        - name: nginx<br />          image: public.ecr.aws/z9d2n7e1/nginx:1.19.5<br />          ports:<br />            - name: http<br />              containerPort: 80</pre>This sample manifest deploys NGINX by using the public NGINX Docker image. For more information, see [How to use the official NGINX Docker image](https://www.docker.com/blog/how-to-use-the-official-nginx-docker-image/) in the Docker documentation. | DevOps engineer | 
| Deploy the Kubernetes deployment manifest file. | Run the following command to apply the deployment manifest file to your Amazon EKS cluster: `kubectl apply -f <your_deployment_file_name>` | DevOps engineer | 
|  Create the Kubernetes service manifest file.  | Create a service manifest file by modifying the following sample file according to your requirements.<pre>apiVersion: v1<br />kind: Service<br />metadata:<br />  name: sample-service<br />  annotations:<br />    service.beta.kubernetes.io/aws-load-balancer-type: nlb<br />    service.beta.kubernetes.io/aws-load-balancer-internal: "true"<br />spec:<br />  ports:<br />    - port: 80<br />      targetPort: 80<br />      protocol: TCP<br />  type: LoadBalancer<br />  selector:<br />    app: nginx</pre>Make sure that you include the following `annotations` to define an internal Network Load Balancer:<pre>service.beta.kubernetes.io/aws-load-balancer-type: nlb<br />service.beta.kubernetes.io/aws-load-balancer-internal: "true"</pre> | DevOps engineer | 
| Deploy the Kubernetes service manifest file. | Run the following command to apply the service manifest file to your Amazon EKS cluster: `kubectl apply -f <your_service_file_name>` | DevOps engineer | 
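After applying both manifests, you can verify that the sample resources are healthy. These commands assume the `sample-app` and `sample-service` names from the manifests above:

```shell
# Check that the deployment rolled out and its pods are running.
kubectl get deployment sample-app
kubectl get pods -l app=nginx

# Check that the service was provisioned an internal Network Load
# Balancer (the EXTERNAL-IP column shows the NLB's DNS name).
kubectl get svc sample-service -o wide
```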

### Create the endpoints
<a name="create-the-endpoints"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Record the Network Load Balancer’s name.  | Run the following command to retrieve the name of the Network Load Balancer: `kubectl get svc sample-service -o wide` Record the Network Load Balancer’s name, which you need to create the AWS PrivateLink endpoint. | DevOps engineer | 
| Create an AWS PrivateLink endpoint. | Sign in to the AWS Management Console, open the Amazon VPC console, and then create an AWS PrivateLink endpoint. Associate this endpoint with the Network Load Balancer; this makes the application privately available to consumers. For more information, see [VPC endpoint services (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-service.html) in the Amazon VPC documentation. If the consumer account requires access to the application, add the consumer account’s [AWS account ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html) to the allowed principals list for the AWS PrivateLink endpoint configuration. For more information, see [Adding and removing permissions for your endpoint service](https://docs.aws.amazon.com/vpc/latest/userguide/add-endpoint-service-permissions.html) in the Amazon VPC documentation. | Cloud administrator  | 
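Granting a consumer account access to the endpoint service can also be done from the AWS CLI. In this sketch, the endpoint service ID and the consumer account ID are placeholders:

```shell
# Add a consumer account to the endpoint service's allowed principals
# (service ID and account ID are placeholders).
aws ec2 modify-vpc-endpoint-service-permissions \
  --service-id vpce-svc-0123456789abcdef \
  --add-allowed-principals arn:aws:iam::444455556666:root
```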
| Create a VPC endpoint. | On the Amazon VPC console, choose **Endpoints**, and then choose **Create endpoint**. Create a VPC endpoint for the AWS PrivateLink endpoint. The VPC endpoint’s fully qualified domain name (FQDN) points to the FQDN for the AWS PrivateLink endpoint. This creates an elastic network interface to the VPC endpoint service that the DNS endpoints can access.  | Cloud administrator | 

## Related resources
<a name="access-container-applications-privately-on-amazon-eks-using-aws-privatelink-and-a-network-load-balancer-resources"></a>
+ [Using the official NGINX Docker image](https://www.docker.com/blog/how-to-use-the-official-nginx-docker-image/)
+ [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html) 
+ [Creating VPC endpoint services (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-service.html) 
+ [Adding and removing permissions for your endpoint service ](https://docs.aws.amazon.com/vpc/latest/userguide/add-endpoint-service-permissions.html)

# Automate backups for Amazon RDS for PostgreSQL DB instances by using AWS Batch
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch"></a>

*Kirankumar Chandrashekar, Amazon Web Services*

## Summary
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-summary"></a>

Backing up your PostgreSQL databases is an important task and can typically be completed with the [pg\_dump utility](https://www.postgresql.org/docs/current/app-pgdump.html), which uses the COPY command by default to create a schema and data dump of a PostgreSQL database. However, this process can become repetitive if you require regular backups for multiple PostgreSQL databases. If your PostgreSQL databases are hosted in the cloud, you can also take advantage of the [automated backup](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html) feature provided by Amazon Relational Database Service (Amazon RDS) for PostgreSQL. This pattern describes how to automate regular backups for Amazon RDS for PostgreSQL DB instances by using the pg\_dump utility.

Note: The instructions assume that you're using Amazon RDS. However, you can also use this approach for PostgreSQL databases that are hosted outside Amazon RDS. To take backups, the AWS Lambda function must be able to access your databases.

A time-based Amazon CloudWatch Events event initiates a Lambda function that searches for specific backup [tags applied to the metadata](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html) of the PostgreSQL DB instances on Amazon RDS. If the PostgreSQL DB instances have the **bkp:AutomatedDBDump = Active** tag and other required backup tags, the Lambda function submits individual jobs for each database backup to AWS Batch. 

AWS Batch processes these jobs and uploads the backup data to an Amazon Simple Storage Service (Amazon S3) bucket. This pattern uses a Dockerfile and an entrypoint.sh file to build a Docker container image that is used to make backups in the AWS Batch job. After the backup process is complete, AWS Batch records the backup details to an inventory table on Amazon DynamoDB. As an additional safeguard, a CloudWatch Events event initiates an Amazon Simple Notification Service (Amazon SNS) notification if a job fails in AWS Batch. 
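The Lambda function's selection rule is simple: a DB instance qualifies for backup only when the `bkp:AutomatedDBDump` tag has the value `Active`. A minimal shell sketch of that rule follows; the real function would call the Amazon RDS API to read instance tags, and the function and tag values here are illustrative.

```shell
# Sketch of the tag check the Lambda function performs (illustrative only).
# Takes tag pairs as "Key=Value" arguments and succeeds only when the
# bkp:AutomatedDBDump tag is set to Active.
is_backup_candidate() {
  local tag
  for tag in "$@"; do
    if [ "$tag" = "bkp:AutomatedDBDump=Active" ]; then
      return 0
    fi
  done
  return 1
}
```

For example, `is_backup_candidate "Name=mydb" "bkp:AutomatedDBDump=Active"` succeeds, while an instance without the tag (or with it set to any other value) is skipped.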

## Prerequisites and limitations
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ An existing managed or unmanaged compute environment. For more information, see [Managed and unmanaged compute environments](https://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) in the AWS Batch documentation. 
+ [AWS Command Line Interface (CLI) version 2 Docker image](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-docker.html), installed and configured.
+ Existing Amazon RDS for PostgreSQL DB instances.  
+ An existing S3 bucket. 
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows.
+ Familiarity with coding in Lambda. 

## Architecture
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-architecture"></a>

![\[Architecture to back up Amazon RDS for PostgreSQL DB instances by using the pg_dump utility.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/3283f739-980b-43d4-aca0-9d77a2ce3b85/images/352e2eab-1b7d-44ec-840a-a772a175e873.png)


 

**Technology stack**
+ Amazon CloudWatch Events
+ Amazon DynamoDB
+ Amazon Elastic Container Registry (Amazon ECR)
+ Amazon RDS
+ Amazon SNS
+ Amazon S3
+ AWS Batch
+ AWS Key Management Service (AWS KMS)
+ AWS Lambda
+ AWS Secrets Manager
+ Docker

## Tools
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-tools"></a>
+ [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html) – CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) – DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
+ [Amazon ECR](https://docs.aws.amazon.com/ecr/index.html) – Amazon Elastic Container Registry (Amazon ECR) is a managed AWS container image registry service that is secure, scalable, and reliable.
+ [Amazon RDS](https://docs.aws.amazon.com/rds/index.html) – Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud.
+ [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) – Amazon Simple Notification Service (Amazon SNS) is a managed service that provides message delivery from publishers to subscribers.
+ [Amazon S3](https://docs.aws.amazon.com/s3/index.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet.
+ [AWS Batch](https://docs.aws.amazon.com/batch/index.html) – AWS Batch helps you run batch computing workloads on the AWS Cloud.
+ [AWS KMS](https://docs.aws.amazon.com/kms/index.html) – AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/index.html) – Lambda is a compute service that helps you run code without provisioning or managing servers.
+ [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/index.html) – Secrets Manager helps you replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically.
+ [Docker](https://www.docker.com/) – Docker helps developers easily pack, ship, and run any application as a lightweight, portable, and self-sufficient container.

Your PostgreSQL DB instances on Amazon RDS must have [tags applied to their metadata](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html). The Lambda function searches for tags to identify DB instances that should be backed up, and the following tags are typically used.


| Tag | Description | 
| --- | --- |
| bkp:AutomatedDBDump = Active | Identifies an Amazon RDS DB instance as a candidate for backups. | 
| bkp:AutomatedBackupSecret = <secret\_name> | Identifies the Secrets Manager secret that contains the Amazon RDS login credentials. | 
| bkp:AutomatedDBDumpS3Bucket = <s3\_bucket\_name> | Identifies the S3 bucket to send backups to. | 
| bkp:AutomatedDBDumpFrequency, bkp:AutomatedDBDumpTime | Identify the frequency and times at which databases should be backed up.  | 
| bkp:pgdumpcommand = <pg\_dump\_command> | Identifies the databases for which backups need to be taken. | 
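The `bkp:pgdumpcommand` tag holds a colon-separated list of pg\_dump argument sets, one per database. The following shell sketch shows how such a value is split (the database names `test` and `test1` are examples, matching the *Test the backup automation* epic):

```shell
# Split a bkp:pgdumpcommand tag value on ":" and print one pg_dump
# argument set per line.
split_pgdump_commands() {
  local IFS=':'
  local part
  for part in $1; do
    printf '%s\n' "$part"
  done
}
```

For example, `split_pgdump_commands "-d test:-d test1"` prints `-d test` and `-d test1` on separate lines, which is why the tag value must not contain a space after the colon.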

## Epics
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-epics"></a>

### Create an inventory table in DynamoDB
<a name="create-an-inventory-table-in-dynamodb"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a table in DynamoDB. | Sign in to the AWS Management Console, open the Amazon DynamoDB console, and create a table. For help with this and other stories, see the *Related resources* section. | Cloud administrator, Database administrator | 
| Confirm that the table was created.  | Run the `aws dynamodb describe-table --table-name <table-name> \| grep TableStatus` command. If the table exists, the command will return the `"TableStatus": "ACTIVE",` result. | Cloud administrator, Database administrator | 

### Create an SNS topic for failed job events in AWS Batch
<a name="create-an-sns-topic-for-failed-job-events-in-aws-batch"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an SNS topic. | Open the Amazon SNS console, choose **Topics**, and create an SNS topic with the name `JobFailedAlert`. Subscribe an active email address to the topic, and check your email inbox to confirm the SNS subscription email from AWS Notifications. | Cloud administrator | 
| Create a failed job event rule for AWS Batch.  | Open the Amazon CloudWatch console, choose **Events**, and then choose **Create rule**. Choose **Show advanced options**, and choose **Edit**. For **Build a pattern that selects events for processing by your targets**, replace the existing text with the "Failed job event" code from the *Additional information* section. This code defines a CloudWatch Events rule that initiates when AWS Batch has a `Failed` event. | Cloud administrator | 
| Add event rule target.  | In **Targets**, choose **Add targets**, and choose the `JobFailedAlert` SNS topic. Configure the remaining details and create the CloudWatch Events rule. | Cloud administrator | 

### Build a Docker image and push it to an Amazon ECR repository
<a name="build-a-docker-image-and-push-it-to-an-amazon-ecr-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECR repository. | Open the Amazon ECR console and choose the AWS Region in which you want to create your repository. Choose **Repositories**, and then choose **Create repository**. Configure the repository according to your requirements. | Cloud administrator | 
| Write a Dockerfile.  | Sign in to Docker and use the "Sample Dockerfile" and "Sample entrypoint.sh file" from the *Additional information* section to build a Dockerfile. | DevOps engineer | 
| Create a Docker image and push it to the Amazon ECR repository. | Build the Dockerfile into a Docker image and push it to the Amazon ECR repository. For help with this story, see the *Related resources* section. | DevOps engineer | 
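Amazon ECR image URIs follow the pattern `<account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>`. The following sketch shows how the push target is formed, followed by the typical build-and-push sequence in comments; the account ID, Region, and repository name are placeholders.

```shell
# Build the ECR image URI from its components.
ecr_image_uri() {
  echo "$1.dkr.ecr.$2.amazonaws.com/$3:$4"
}

# Typical build-and-push sequence (requires Docker and AWS credentials):
#   aws ecr get-login-password --region <region> | \
#     docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
#   docker build -t <repository> .
#   docker tag <repository>:latest "$(ecr_image_uri <account-id> <region> <repository> latest)"
#   docker push "$(ecr_image_uri <account-id> <region> <repository> latest)"
```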

### Create the AWS Batch components
<a name="create-the-aws-batch-components"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS Batch job definition. | Open the AWS Batch console and create a job definition that includes the Amazon ECR repository’s Uniform Resource Identifier (URI) as the property `Image`. | Cloud administrator | 
| Configure the AWS Batch job queue.  | On the AWS Batch console, choose **Job queues**, and then choose **Create queue**. Create a job queue that will store jobs until AWS Batch runs them on the resources within your compute environment. Important: Make sure you write logic for AWS Batch to record the backup details to the DynamoDB inventory table. | Cloud administrator | 
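
A job definition for this pattern might look like the following sketch. The image URI and all values in angle brackets are placeholders; the environment variable names match the ones read by the sample entrypoint.sh file in the *Additional information* section.

```
{
  "jobDefinitionName": "rds-postgres-dump",
  "type": "container",
  "containerProperties": {
    "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:latest",
    "vcpus": 1,
    "memory": 1024,
    "environment": [
      { "name": "PGSQL_HOST", "value": "<rds-endpoint>" },
      { "name": "SECRETID", "value": "<secret-name>" },
      { "name": "REGION", "value": "<region>" },
      { "name": "BUCKET", "value": "<s3-bucket-name>" },
      { "name": "EXECUTE_COMMAND", "value": "-d test:-d test1" },
      { "name": "RDS_INSTANCE_NAME", "value": "<db-instance-identifier>" },
      { "name": "RDS_POSTGRES_DUMP_INVENTORY_TABLE", "value": "<dynamodb-table-name>" },
      { "name": "TargetAccountId", "value": "<target-account-id>" },
      { "name": "TargetAccountRoleName", "value": "<target-account-role-name>" }
    ]
  }
}
```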

### Create and schedule a Lambda function
<a name="create-and-schedule-a-lambda-function"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Lambda function to search for tags. | Create a Lambda function that searches for tags on your PostgreSQL DB instances and identifies backup candidates. Make sure your Lambda function can identify the `bkp:AutomatedDBDump = Active` tag and all other required tags. Important: The Lambda function must also be able to add jobs to the AWS Batch job queue. | DevOps engineer | 
| Create a time-based CloudWatch Events event.  | Open the Amazon CloudWatch console and create a CloudWatch Events event that uses a cron expression to run your Lambda function on a regular schedule. Important: All scheduled events use the UTC time zone. | Cloud administrator | 
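
For example, a rule that runs the Lambda function every day at 02:00 UTC would use the following schedule expression (the time is illustrative; CloudWatch Events cron fields are minutes, hours, day-of-month, month, day-of-week, year):

```
cron(0 2 * * ? *)
```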

### Test the backup automation
<a name="test-the-backup-automation"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an AWS KMS key. | Open the AWS KMS console and create a KMS key that can be used to encrypt the Amazon RDS credentials stored in AWS Secrets Manager. | Cloud administrator | 
| Create an AWS Secrets Manager secret. | Open the AWS Secrets Manager console and store your Amazon RDS for PostgreSQL database credentials as a secret. | Cloud administrator | 
| Add the required tags to the PostgreSQL DB instances. | Open the Amazon RDS console and add tags to the PostgreSQL DB instances that you want to automatically back up. You can use the tags from the table in the *Tools* section. If you require backups from multiple PostgreSQL databases within the same Amazon RDS instance, then use `-d test:-d test1` as the value for the `bkp:pgdumpcommand` tag. `test` and `test1` are database names. Make sure that there is no space after the colon (:). | Cloud administrator | 
| Verify the backup automation.  | To verify the backup automation, you can either invoke the Lambda function or wait for the backup schedule to begin. After the backup process is complete, check that the DynamoDB inventory table contains a valid backup entry for each of your PostgreSQL DB instances. If it does, the backup automation process is successful. | Cloud administrator | 

## Related resources
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-resources"></a>

**Create an inventory table in DynamoDB**
+ [Create an Amazon DynamoDB table ](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-1.html)

 

**Create an SNS topic for failed job events in AWS Batch**
+ [Create an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-tutorial-create-topic.html)
+ [Send SNS alerts for failed job events in AWS Batch](https://docs.aws.amazon.com/batch/latest/userguide/batch_sns_tutorial.html)

 

**Build a Docker image and push it to an Amazon ECR repository**
+ [Create an Amazon ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html)    
+ [Write a Dockerfile, create a Docker image, and push it to Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html)

 

**Create the AWS Batch components**
+ [Create an AWS Batch job definition](https://docs.aws.amazon.com/batch/latest/userguide/Batch_GetStarted.html#first-run-step-1)    
+ [Configure your compute environment and AWS Batch job queue ](https://docs.aws.amazon.com/batch/latest/userguide/Batch_GetStarted.html#first-run-step-2)   
+ [Create a job queue in AWS Batch](https://docs.aws.amazon.com/batch/latest/userguide/create-job-queue.html)

 

**Create a Lambda function**
+ [Create a Lambda function and write code](https://docs.aws.amazon.com/lambda/latest/dg/getting-started-create-function.html)
+ [Use Lambda with DynamoDB](https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html)

 

**Create a CloudWatch Events event**
+ [Create a time-based CloudWatch Events event ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Scheduled-Rule.html)   
+ [Use cron expressions in Cloudwatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html)

 

**Test the backup automation**
+ [Create an AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html)    
+ [Create a Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html)
+ [Add tags to an Amazon RDS instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.html)

## Additional information
<a name="automate-backups-for-amazon-rds-for-postgresql-db-instances-by-using-aws-batch-additional"></a>

**Failed job event:**

```
{
  "detail-type": [
    "Batch Job State Change"
  ],
  "source": [
    "aws.batch"
  ],
  "detail": {
    "status": [
      "FAILED"
    ]
  }
}
```

**Sample Dockerfile:**

```
FROM alpine:latest
RUN apk --update add py-pip postgresql-client jq bash && \
pip install awscli && \
rm -rf /var/cache/apk/*
ADD entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
```

**Sample entrypoint.sh file:**

```
#!/bin/bash
# Exit on any error, including a failure in the middle of a pipeline.
set -e -o pipefail
DATETIME=$(date +"%Y-%m-%d_%H_%M")
FILENAME=RDS_PostGres_dump_${RDS_INSTANCE_NAME}
FILE=${FILENAME}_${DATETIME}

aws configure --profile new-profile set role_arn arn:aws:iam::${TargetAccountId}:role/${TargetAccountRoleName}
aws configure --profile new-profile set credential_source EcsContainer

echo "Central Account access provider IAM role is: "
aws sts get-caller-identity

echo "Target Customer Account access provider IAM role is: "
aws sts get-caller-identity --profile new-profile

securestring=$(aws secretsmanager get-secret-value --secret-id "$SECRETID" --output json --query 'SecretString' --region=$REGION --profile new-profile)

if [[ ${securestring} ]]; then
    echo "Successfully accessed Secrets Manager and got the credentials"
    # SecretString is returned as a JSON-encoded string; unwrap it, then extract the fields.
    export PGPASSWORD=$(echo $securestring | jq -r '.' | jq -r '.DB_PASSWORD')
    PGSQL_USER=$(echo $securestring | jq -r '.' | jq -r '.DB_USERNAME')
    echo "Executing pg_dump for the PostgreSQL endpoint ${PGSQL_HOST}"
    # Example: pg_dump -h $PGSQL_HOST -U $PGSQL_USER -n dms_sample | gzip -9 -c | aws s3 cp - --region=$REGION --profile new-profile s3://$BUCKET/$FILE
    # Example EXECUTE_COMMAND value: "-n public:-n private"
    # Split the colon-separated command list without changing the global IFS.
    IFS=':' read -r -a list <<< "$EXECUTE_COMMAND"
    for command in "${list[@]}";
      do
        echo $command;
        if pg_dump -h $PGSQL_HOST -U $PGSQL_USER ${command} | gzip -9 -c | aws s3 cp - --region=$REGION --profile new-profile "s3://${BUCKET}/${FILE}-${command}.sql.gz"; then
            echo "PostgreSQL dump was successfully taken for the RDS endpoint ${PGSQL_HOST} and was uploaded to s3://${BUCKET}/${FILE}-${command}.sql.gz"
        else
            echo "Error occurred in the database backup process. Exiting now....."
            exit 1
        fi
        # Write the backup details to the inventory table in the central account.
        echo "Writing to the DynamoDB inventory table"
        if aws dynamodb put-item --table-name ${RDS_POSTGRES_DUMP_INVENTORY_TABLE} --region=$REGION --item '{ "accountId": { "S": "'"${TargetAccountId}"'" }, "dumpFileUrl": {"S": "'"s3://${BUCKET}/${FILE}-${command}.sql.gz"'" }, "DumpAvailableTime": {"S": "'"$(date +"%Y-%m-%d::%H::%M::%S") UTC"'"}}'; then
            echo "Successfully wrote to the DynamoDB inventory table ${RDS_POSTGRES_DUMP_INVENTORY_TABLE}"
        else
            echo "Error occurred while putting the item into the DynamoDB inventory table. Exiting now....."
            exit 1
        fi
      done;
else
    echo "Could not retrieve the credentials from Secrets Manager. Exiting now....."
    exit 1
fi

exec "$@"
```

# Automate deployment of Node Termination Handler in Amazon EKS by using a CI/CD pipeline
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline"></a>

*Sandip Gangapadhyay, Sandeep Gawande, Viyoma Sachdeva, Pragtideep Singh, and John Vargas, Amazon Web Services*

## Summary
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-summary"></a>

**Notice**: AWS CodeCommit is no longer available to new customers. Existing customers of AWS CodeCommit can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/)

On the Amazon Web Services (AWS) Cloud, you can use [AWS Node Termination Handler](https://github.com/aws/aws-node-termination-handler), an open-source project, to handle Amazon Elastic Compute Cloud (Amazon EC2) instance shutdown within Kubernetes gracefully. AWS Node Termination Handler helps to ensure that the Kubernetes control plane responds appropriately to events that can cause your EC2 instance to become unavailable. Such events include the following:
+ [EC2 instance scheduled maintenance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html)
+ [Amazon EC2 Spot Instance interruptions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html)
+ [Auto Scaling group scale in](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroupLifecycle.html#as-lifecycle-scale-in)
+ [Auto Scaling group rebalancing](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html#AutoScalingBehavior.InstanceUsage) across Availability Zones
+ EC2 instance termination through the API or the AWS Management Console

If an event isn’t handled, your application code might not stop gracefully. It also might take longer to recover full availability, or it might accidentally schedule work to nodes that are going down. The `aws-node-termination-handler` (NTH) can operate in two different modes: Instance Metadata Service (IMDS) or Queue Processor. For more information about the two modes, see the [Readme file](https://github.com/aws/aws-node-termination-handler#readme).

This pattern uses AWS CodeCommit, and it automates the deployment of NTH by using Queue Processor through a continuous integration and continuous delivery (CI/CD) pipeline.

**Note**  
If you're using [EKS managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html), you don't need the `aws-node-termination-handler`.

## Prerequisites and limitations
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ A web browser that is supported for use with the AWS Management Console. See the [list of supported browsers](https://aws.amazon.com/premiumsupport/knowledge-center/browsers-management-console/).
+ AWS Cloud Development Kit (AWS CDK) [installed](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install).
+ `kubectl`, the Kubernetes command line tool, [installed](https://kubernetes.io/docs/tasks/tools/).
+ `eksctl`, the AWS Command Line Interface (AWS CLI) for Amazon Elastic Kubernetes Service (Amazon EKS), [installed](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html).
+ A running EKS cluster with version 1.20 or later.
+ A self-managed node group attached to the EKS cluster. To create an Amazon EKS cluster with a self-managed node group, run the following command.

  ```
  eksctl create cluster --managed=false --region <region> --name <cluster_name>
  ```

  For more information on `eksctl`, see the [eksctl documentation](https://eksctl.io/usage/creating-and-managing-clusters/).
+ AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. For more information, see [Creating an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html).

**Limitations**
+ You must use an AWS Region that supports the Amazon EKS service.

**Product versions**
+ Kubernetes version 1.20 or later
+ `eksctl` version 0.107.0 or later
+ AWS CDK version 2.27.0 or later

## Architecture
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-architecture"></a>

**Target technology stack**
+ A virtual private cloud (VPC)
+ An EKS cluster
+ Amazon Simple Queue Service (Amazon SQS)
+ IAM
+ Kubernetes

**Target architecture**

The following diagram shows the high-level view of the end-to-end steps when the node termination is started.

![\[A VPC with an Auto Scaling group, an EKS cluster with Node Termination Handler, and an SQS queue.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/970dfb73-9526-4942-a974-e8eef6416596/images/9e0125ae-d55b-49dd-ae70-ccaedf03832a.png)


The workflow shown in the diagram consists of the following high-level steps:

1. The Auto Scaling group sends the EC2 instance termination event to the SQS queue.

1. The NTH Pod monitors for new messages in the SQS queue.

1. The NTH Pod receives the new message and does the following:
   + Cordons the node so that no new pods are scheduled on it.
   + Drains the node so that the existing pods are evicted.
   + Sends a lifecycle hook signal to the Auto Scaling group so that the node can be terminated.
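
The cordon and drain steps that NTH performs are equivalent to the following commands, shown here only for illustration; NTH issues the corresponding API calls itself, and the node, hook, and group names are placeholders.

```
# Cordon: mark the node unschedulable
kubectl cordon <node-name>

# Drain: evict the existing pods
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Signal the Auto Scaling group lifecycle hook so that the instance can terminate
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name <hook-name> \
  --auto-scaling-group-name <asg-name> \
  --lifecycle-action-result CONTINUE \
  --instance-id <instance-id>
```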

**Automation and scale**
+ Code is managed and deployed by AWS CDK, backed by AWS CloudFormation nested stacks.
+ The [Amazon EKS control plane](https://docs.aws.amazon.com/eks/latest/userguide/disaster-recovery-resiliency.html) runs across multiple Availability Zones to ensure high availability.
+ For [automatic scaling](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html), Amazon EKS supports the Kubernetes [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) and [Karpenter](https://karpenter.sh/).

## Tools
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-tools"></a>

**AWS services**
+ [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/latest/guide/home.html) is a software development framework that helps you define and provision AWS Cloud infrastructure in code.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html) helps you maintain application availability and allows you to automatically add or remove Amazon EC2 instances according to conditions you define.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.

**Other tools**
+ [kubectl](https://kubernetes.io/docs/reference/kubectl/kubectl/) is a Kubernetes command line tool for running commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

**Code**

The code for this pattern is available in the [deploy-nth-to-eks](https://github.com/aws-samples/deploy-nth-to-eks) repo on GitHub.com. The code repo contains the following files and folders.
+ `nth folder` – The Helm chart, values files, and the scripts to scan and deploy the AWS CloudFormation template for Node Termination Handler.
+ `config/config.json` – The configuration parameter file for the application. This file contains all the parameters needed for CDK to be deployed.
+ `cdk` – AWS CDK source code.
+ `setup.sh` – The script used to deploy the AWS CDK application to create the required CI/CD pipeline and other required resources.
+ `uninstall.sh` – The script used to clean up the resources.

To use the example code, follow the instructions in the *Epics* section.

## Best practices
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-best-practices"></a>

For best practices when automating AWS Node Termination Handler, see the following:
+ [EKS Best Practices Guides](https://aws.github.io/aws-eks-best-practices/)
+ [Node Termination Handler - Configuration](https://github.com/aws/aws-node-termination-handler/tree/main/config/helm/aws-node-termination-handler)

## Epics
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repo. | To clone the repo by using SSH (Secure Shell), run the following command.<pre>git clone git@github.com:aws-samples/deploy-nth-to-eks.git</pre>To clone the repo by using HTTPS, run the following command.<pre>git clone https://github.com/aws-samples/deploy-nth-to-eks.git</pre>Cloning the repo creates a folder named `deploy-nth-to-eks`. Change to that directory.<pre>cd deploy-nth-to-eks</pre> | App developer, AWS DevOps, DevOps engineer | 
| Set the kubeconfig file. | Set your AWS credentials in your terminal and confirm that you have rights to assume the cluster role. You can use the following example code.<pre>aws eks update-kubeconfig --name <Cluster_Name> --region <region> --role-arn <Role_ARN></pre> | AWS DevOps, DevOps engineer, App developer | 

### Deploy the CI/CD pipeline
<a name="deploy-the-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the parameters. | In the `config/config.json` file, set up the following required parameters.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline.html) | App developer, AWS DevOps, DevOps engineer | 
| Create the CI/CD pipeline to deploy NTH. | Run the setup.sh script.<pre>./setup.sh</pre>The script deploys the AWS CDK application, which creates the CodeCommit repo with example code, the pipeline, and the CodeBuild projects based on the user input parameters in the `config/config.json` file. The script prompts for your password because it installs npm packages with the sudo command. | App developer, AWS DevOps, DevOps engineer | 
| Review the CI/CD pipeline. | Open the AWS Management Console, and review the following resources created in the stack.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline.html)After the pipeline runs successfully, the Helm release `aws-node-termination-handler` is installed in the EKS cluster, and a Pod named `aws-node-termination-handler` runs in the `kube-system` namespace. | App developer, AWS DevOps, DevOps engineer | 

### Test NTH deployment
<a name="test-nth-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Simulate an Auto Scaling group scale-in event. | To simulate an automatic scaling scale-in event, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline.html) |  | 
| Review the logs. | During the scale-in event, the NTH Pod will cordon and drain the corresponding worker node (the EC2 instance that will be terminated as part of the scale-in event). To check the logs, use the code in the *Additional information* section. | App developer, AWS DevOps, DevOps engineer | 
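
One way to trigger a scale-in event manually is to terminate a worker node through the Amazon EC2 Auto Scaling API, which exercises the same termination lifecycle that NTH handles. This is a sketch; the instance ID is a placeholder, and the instance must belong to the cluster's self-managed node group.

```
aws autoscaling terminate-instance-in-auto-scaling-group \
  --instance-id <instance-id> \
  --should-decrement-desired-capacity
```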

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up all AWS resources. | To clean up the resources created by this pattern, run the following command.<pre>./uninstall.sh</pre>The script deletes the CloudFormation stack, which removes all the resources that the pattern created. | DevOps engineer | 

## Troubleshooting
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The npm registry isn’t set correctly. | During the installation of this solution, the script runs `npm install` to download all the required packages. If, during the installation, you see a message that says "Cannot find module," the npm registry might not be set correctly. To see the current registry setting, run the following command.<pre>npm config get registry</pre>To set the registry to `https://registry.npmjs.org/`, run the following command.<pre>npm config set registry https://registry.npmjs.org</pre> | 
| Delay SQS message delivery. | As part of your troubleshooting, if you want to delay the SQS message delivery to the NTH Pod, you can adjust the SQS delivery delay parameter. For more information, see [Amazon SQS delay queues](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html). | 
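As a sketch, the delivery delay can be set with the AWS CLI. The queue URL below is a placeholder, and `DelaySeconds` accepts values from 0 to 900.

```shell
# Set a 30-second delivery delay on the queue that NTH consumes
# (queue URL is a placeholder; replace it with your own)
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-2.amazonaws.com/111122223333/nth-queue \
  --attributes DelaySeconds=30
```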

## Related resources
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-resources"></a>
+ [AWS Node Termination Handler source code](https://github.com/aws/aws-node-termination-handler)
+ [EC2 workshop](https://ec2spotworkshops.com/using_ec2_spot_instances_with_eks/070_selfmanagednodegroupswithspot/deployhandler.html)
+ [AWS CodePipeline](https://aws.amazon.com/codepipeline/)
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://aws.amazon.com/eks/)
+ [AWS Cloud Development Kit](https://aws.amazon.com/cdk/)
+ [AWS CloudFormation](https://aws.amazon.com/cloudformation/)

## Additional information
<a name="automate-deployment-of-node-termination-handler-in-amazon-eks-by-using-a-ci-cd-pipeline-additional"></a>

1. Find the NTH Pod name.

```
kubectl get pods -n kube-system |grep aws-node-termination-handler
aws-node-termination-handler-65445555-kbqc7   1/1     Running   0          26m
```

2. Check the logs. An example log looks like the following. It shows that the node has been cordoned and drained before sending the Auto Scaling group lifecycle hook completion signal.

```
kubectl -n kube-system logs aws-node-termination-handler-65445555-kbqc7
2022/07/17 20:20:43 INF Adding new event to the event store event={"AutoScalingGroupName":"eksctl-my-cluster-target-nodegroup-ng-10d99c89-NodeGroup-ZME36IGAP7O1","Description":"ASG Lifecycle Termination event received. Instance will be interrupted at 2022-07-17 20:20:42.702 +0000 UTC \n","EndTime":"0001-01-01T00:00:00Z","EventID":"asg-lifecycle-term-33383831316538382d353564362d343332362d613931352d383430666165636334333564","InProgress":false,"InstanceID":"i-0409f2a9d3085b80e","IsManaged":true,"Kind":"SQS_TERMINATE","NodeLabels":null,"NodeName":"ip-192-168-75-60.us-east-2.compute.internal","NodeProcessed":false,"Pods":null,"ProviderID":"aws:///us-east-2c/i-0409f2a9d3085b80e","StartTime":"2022-07-17T20:20:42.702Z","State":""}
2022/07/17 20:20:44 INF Requesting instance drain event-id=asg-lifecycle-term-33383831316538382d353564362d343332362d613931352d383430666165636334333564 instance-id=i-0409f2a9d3085b80e kind=SQS_TERMINATE node-name=ip-192-168-75-60.us-east-2.compute.internal provider-id=aws:///us-east-2c/i-0409f2a9d3085b80e
2022/07/17 20:20:44 INF Pods on node node_name=ip-192-168-75-60.us-east-2.compute.internal pod_names=["aws-node-qchsw","aws-node-termination-handler-65445555-kbqc7","kube-proxy-mz5x5"]
2022/07/17 20:20:44 INF Draining the node
2022/07/17 20:20:44 ??? WARNING: ignoring DaemonSet-managed Pods: kube-system/aws-node-qchsw, kube-system/kube-proxy-mz5x5
2022/07/17 20:20:44 INF Node successfully cordoned and drained node_name=ip-192-168-75-60.us-east-2.compute.internal reason="ASG Lifecycle Termination event received. Instance will be interrupted at 2022-07-17 20:20:42.702 +0000 UTC \n"
2022/07/17 20:20:44 INF Completed ASG Lifecycle Hook (NTH-K8S-TERM-HOOK) for instance i-0409f2a9d3085b80e
```

# Automatically build and deploy a Java application to Amazon EKS using a CI/CD pipeline
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline"></a>

*Mahesh Raghunandanan, Jomcy Pappachen, and James Radtke, Amazon Web Services*

## Summary
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-summary"></a>

This pattern describes how to create a continuous integration and continuous delivery (CI/CD) pipeline that automatically builds and deploys a Java application, with recommended DevSecOps practices, to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster on the AWS Cloud. This pattern uses a greeting application that was developed with the Spring Boot Java framework and that uses Apache Maven.

You can use this pattern’s approach to build the code for a Java application, package the application artifacts as a Docker image, scan the image for security vulnerabilities, and upload the image as a workload container on Amazon EKS. This approach is useful if you want to migrate from a tightly coupled monolithic architecture to a microservices architecture. It also helps you monitor and manage the entire lifecycle of a Java application, which ensures a higher level of automation and helps avoid errors or bugs.

## Prerequisites and limitations
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ AWS Command Line Interface (AWS CLI) version 2, installed and configured. For more information about this, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) in the AWS CLI documentation.

  AWS CLI version 2 must be configured with the same AWS Identity and Access Management (IAM) role that creates the Amazon EKS cluster, because only that role is authorized to add other IAM roles to the `aws-auth` `ConfigMap`. For information and steps to configure AWS CLI, see [Configuring settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) in the AWS CLI documentation.
+ IAM roles and permissions with full access to AWS CloudFormation. For more information about this, see [Controlling access with IAM](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html) in the CloudFormation documentation.
+ An existing Amazon EKS cluster, with details of the IAM role name and the Amazon Resource Name (ARN) of the IAM role for worker nodes in the EKS cluster.
+ Kubernetes Cluster Autoscaler, installed and configured in your Amazon EKS cluster. For more information, see [Scale cluster compute with Karpenter and Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) in the Amazon EKS documentation. 
+ Access to code in the GitHub repository.
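Before you begin, you can verify several of these prerequisites from the command line; the cluster name below is an example.

```shell
# Confirm that AWS CLI version 2 is installed and configured with the
# same IAM role that created the EKS cluster
aws --version
aws sts get-caller-identity

# Confirm that kubectl can reach the existing EKS cluster
# (cluster name is an example; replace it with your own)
aws eks update-kubeconfig --name my-eks-cluster
kubectl get nodes
```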

**Important**  
AWS Security Hub CSPM is enabled as part of the CloudFormation templates that are included in the code for this pattern. By default, after Security Hub CSPM is enabled, it comes with a 30-day free trial. After the trial, there is a cost associated with this AWS service. For more information about pricing, see [AWS Security Hub CSPM pricing](https://aws.amazon.com/security-hub/pricing/).

**Product versions**
+ Helm version 3.4.2 or later
+ Apache Maven version 3.6.3 or later
+ BridgeCrew Checkov version 2.2 or later
+ Aqua Security Trivy version 0.37 or later

## Architecture
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-architecture"></a>

**Technology stack**
+ AWS CodeBuild
+ AWS CodeCommit
+ Amazon CodeGuru
+ AWS CodePipeline
+ Amazon Elastic Container Registry (Amazon ECR)
+ Amazon EKS
+ Amazon EventBridge
+ AWS Security Hub CSPM
+ Amazon Simple Notification Service (Amazon SNS)

**Target architecture**

![\[Workflow for deploying a Java application to Amazon EKS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/95a5b5c2-d7fb-41eb-9089-455318c0d585/images/4f5fd8c2-2b6d-4945-aa64-fcf317521711.png)


The diagram shows the following workflow:

1. The developer updates the Java application code and creates a pull request (PR) against the base branch of the CodeCommit repository.

1. As soon as the PR is submitted, Amazon CodeGuru Reviewer automatically reviews the code, analyzes it based on best practices for Java, and gives recommendations to the developer.

1. After the PR is merged to the base branch, an Amazon EventBridge event is created.

1. The EventBridge event initiates the CodePipeline pipeline.

1. CodePipeline runs the CodeSecurity Scan stage (continuous security).

1. AWS CodeBuild starts the security scan process in which the Dockerfile and Kubernetes deployment Helm files are scanned by using Checkov, and application source code is scanned based on incremental code changes. The application source code scan is performed by the [CodeGuru Reviewer Command Line Interface (CLI) wrapper](https://github.com/aws/aws-codeguru-cli).
**Note**  
As of November 7, 2025, you can't create new repository associations in Amazon CodeGuru Reviewer. To learn about services with capabilities similar to CodeGuru Reviewer, see [Amazon CodeGuru Reviewer availability change](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/codeguru-reviewer-availability-change.html) in the CodeGuru Reviewer documentation. 

1. If the security scan stage is successful, the Build stage (continuous integration) is initiated.

1. In the Build stage, CodeBuild builds the artifact, packages the artifact to a Docker image, scans the image for security vulnerabilities by using Aqua Security Trivy, and stores the image in Amazon ECR.

1. The vulnerabilities detected from step 8 are uploaded to Security Hub CSPM for further analysis by developers or engineers. Security Hub CSPM provides an overview and recommendations for remediating the vulnerabilities.

1. Email notifications of sequential phases within the CodePipeline pipeline are sent through Amazon SNS.

1. After the continuous integration phases are complete, CodePipeline enters the Deploy stage (continuous delivery).

1. The Docker image is deployed to Amazon EKS as a container workload (pod) by using Helm charts.

1. The application pod is configured with Amazon CodeGuru Profiler agent, which sends the profiling data of the application (CPU, heap usage, and latency) to CodeGuru Profiler, which helps developers understand the behavior of the application.
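For illustration, steps 3 and 4 of the workflow rely on an EventBridge rule that matches CodeCommit repository state changes. The following is a hedged sketch of such a rule; the rule name and branch name are examples, not values taken from this pattern's templates.

```shell
# Sketch of an EventBridge rule that matches reference updates (for example,
# a PR merge) to the base branch of a CodeCommit repository
# (rule name and branch name are examples)
aws events put-rule \
  --name java-app-pipeline-trigger \
  --event-pattern '{
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "detail": {
      "event": ["referenceUpdated"],
      "referenceName": ["main"]
    }
  }'
```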

## Tools
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-tools"></a>

**AWS services**
+ [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+  [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [Amazon CodeGuru Profiler](https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html) collects runtime performance data from your live applications, and provides recommendations that can help you fine-tune your application performance.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) is a serverless event bus service that helps you connect your applications with real-time data from a variety of sources, including AWS Lambda functions, HTTP invocation endpoints using API destinations, or event buses in other AWS accounts.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security state on AWS. It also helps you check your AWS environment against security industry standards and best practices.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

**Other services**
+ [Helm](https://helm.sh/docs/) is an open-source package manager for Kubernetes.
+ [Apache Maven](https://maven.apache.org/) is a software project management and comprehension tool.
+ [BridgeCrew Checkov](https://www.checkov.io/1.Welcome/What%20is%20Checkov.html) is a static code analysis tool for scanning infrastructure as code (IaC) files for misconfigurations that might lead to security or compliance problems.
+ [Aqua Security Trivy](https://github.com/aquasecurity/trivy) is a comprehensive scanner for vulnerabilities in container images, file systems, and Git repositories, in addition to configuration issues.

**Code**

The code for this pattern is available in the GitHub [aws-codepipeline-devsecops-amazoneks](https://github.com/aws-samples/aws-codepipeline-devsecops-amazoneks) repository.

## Best practices
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-best-practices"></a>
+ This pattern follows [IAM security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) to apply the principle of least privilege for IAM entities across all phases of the solution. If you want to extend the solution with additional AWS services or third-party tools, we recommend that you review the section on [applying least-privilege permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) in the IAM documentation.
+ If you have multiple Java applications, we recommend that you create separate CI/CD pipelines for each application.
+ If you have a monolith application, we recommend that you break the application into microservices where possible. Microservices are more flexible, they make it easier to deploy applications as containers, and they provide better visibility into the overall build and deployment of the application.

## Epics
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-epics"></a>

### Set up the environment
<a name="set-up-the-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the GitHub repository. | To clone the repository, run the following command.<pre>git clone https://github.com/aws-samples/aws-codepipeline-devsecops-amazoneks</pre> | App developer, DevOps engineer | 
| Create an S3 bucket and upload the code. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS DevOps, Cloud administrator, DevOps engineer | 
| Create a CloudFormation stack. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS DevOps, DevOps engineer | 
| Validate the CloudFormation stack deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS DevOps, DevOps engineer | 
| Delete the S3 bucket. | Empty and delete the S3 bucket that you created earlier. For more information, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html) in the Amazon S3 documentation. | AWS DevOps, DevOps engineer | 
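As a sketch, the bucket can be emptied and deleted from the AWS CLI; the bucket name below is a placeholder.

```shell
# Empty the temporary bucket, then remove it (bucket name is a placeholder)
aws s3 rm s3://my-staging-bucket --recursive
aws s3 rb s3://my-staging-bucket
```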

### Configure the Helm charts
<a name="configure-the-helm-charts"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure the Helm charts of your Java application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | DevOps engineer | 
| Validate Helm charts for syntax errors. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | DevOps engineer | 
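One way to validate the charts locally is with `helm lint` and a dry-run render. The chart path below is an example and might differ in your checkout of the repository.

```shell
# Lint the application chart for syntax and structural errors
# (chart path is an example)
helm lint helm_charts/aws-proserve-java-greeting

# Render the templates without installing, to surface templating errors
helm template helm_charts/aws-proserve-java-greeting > /dev/null
```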

### Set up the Java CI/CD pipeline
<a name="set-up-the-java-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the CI/CD pipeline. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS DevOps | 

### Activate integration between Security Hub CSPM and Aqua Security
<a name="activate-integration-between-ash-and-aqua-security"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Turn on Aqua Security integration. | This step is required for uploading the Docker image vulnerability findings reported by Trivy to Security Hub CSPM. Because CloudFormation doesn’t support Security Hub CSPM integrations, this process must be done manually.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | AWS administrator, DevOps engineer | 

### Configure CodeBuild to run Helm or kubectl commands
<a name="configure-acb-to-run-helm-or-kubectl-commands"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Allow CodeBuild to run Helm or kubectl commands in the Amazon EKS cluster. | For CodeBuild to be authorized to run Helm or `kubectl` commands against the Amazon EKS cluster, you must add its IAM role to the `aws-auth` `ConfigMap`. In this case, add the ARN of the IAM role `EksCodeBuildkubeRoleARN`, which is the IAM role created for the CodeBuild service to access the Amazon EKS cluster and deploy workloads on it. This is a one-time activity. The following procedure must be completed before the deployment approval stage in CodePipeline.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html)When you finish, the `aws-auth` `ConfigMap` is configured, and access is granted. | DevOps | 
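As one possible approach, `eksctl` can add the role mapping for you instead of editing the `ConfigMap` by hand. The cluster name, account ID, and group binding below are examples; grant a narrower group than `system:masters` if your cluster uses RBAC roles scoped for deployments.

```shell
# Map the CodeBuild IAM role into the aws-auth ConfigMap
# (cluster name, account ID, and group are examples)
eksctl create iamidentitymapping \
  --cluster my-eks-cluster \
  --arn arn:aws:iam::111122223333:role/EksCodeBuildkubeRole \
  --group system:masters \
  --username codebuild
```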

### Validate the CI/CD pipeline
<a name="validate-the-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify that the CI/CD pipeline automatically initiates. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html)For more information about starting the pipeline, see [Start a pipeline in CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-about-starting.html), [Start a pipeline manually](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-rerun-manually.html), and [Start a pipeline on a schedule](https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-trigger-source-schedule.html) in the CodePipeline documentation. | DevOps | 
| Approve the deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline.html) | DevOps | 
| Validate application profiling. | After the deployment is complete and the application pod is deployed in Amazon EKS, the Amazon CodeGuru Profiler agent that is configured in the application will try to send profiling data of the application (CPU, heap summary, latency, and bottlenecks) to CodeGuru Profiler.For the initial deployment of an application, CodeGuru Profiler takes about 15 minutes to visualize the profiling data. | AWS DevOps | 

## Related resources
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-resources"></a>
+ [AWS CodePipeline documentation](https://docs.aws.amazon.com/codepipeline/index.html)
+ [Scanning images with Trivy in an AWS CodePipeline](https://aws.amazon.com/blogs/containers/scanning-images-with-trivy-in-an-aws-codepipeline/) (AWS blog post)
+ [Improving your Java applications using Amazon CodeGuru Profiler](https://aws.amazon.com/blogs/devops/improving-your-java-applications-using-amazon-codeguru-profiler) (AWS blog post)
+ [AWS Security Finding Format (ASFF) syntax](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-syntax.html)
+ [Amazon EventBridge event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html)
+ [Helm upgrade](https://helm.sh/docs/helm/helm_upgrade/)

## Additional information
<a name="automatically-build-and-deploy-a-java-application-to-amazon-eks-using-a-ci-cd-pipeline-additional"></a>
+ CodeGuru Profiler should not be confused with the AWS X-Ray service in terms of functionality. We recommend that you use CodeGuru Profiler to identify the most expensive lines of code that might cause bottlenecks or security issues, and fix them before they become a potential risk. The X-Ray service is for application performance monitoring.
+ In this pattern, event rules are associated with the default event bus. If needed, you can extend the pattern to use a custom event bus.
+ This pattern uses CodeGuru Reviewer as a static application security testing (SAST) tool for application code. You can also use this pipeline for other tools, such as SonarQube or Checkmarx. You can add the scan setup instructions for any of these tools to `buildspec/buildspec_secscan.yaml` to replace the CodeGuru scan instructions.
**Note**  
As of November 7, 2025, you can't create new repository associations in Amazon CodeGuru Reviewer. To learn about services with capabilities similar to CodeGuru Reviewer, see [Amazon CodeGuru Reviewer availability change](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/codeguru-reviewer-availability-change.html) in the CodeGuru Reviewer documentation.

# Copy Amazon ECR container images across AWS accounts and AWS Regions
<a name="copy-ecr-container-images-across-accounts-regions"></a>

*Faisal Shahdad, Amazon Web Services*

## Summary
<a name="copy-ecr-container-images-across-accounts-regions-summary"></a>

This pattern shows you how to use a serverless approach to replicate tagged images from existing Amazon Elastic Container Registry (Amazon ECR) repositories to other AWS accounts and AWS Regions. The solution uses AWS Step Functions to manage the replication workflow and AWS Lambda functions to copy large container images.

Amazon ECR uses native [cross-Region](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry-settings-examples.html#registry-settings-examples-crr-single) and [cross-account](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry-settings-examples.html#registry-settings-examples-crossaccount) replication features that replicate container images across Regions and accounts. However, these features replicate only images that are pushed after replication is turned on. There is no native mechanism to replicate images that already exist in a repository to other Regions and accounts. 

This pattern helps artificial intelligence (AI) teams distribute containerized machine learning (ML) models, frameworks (for example, PyTorch, TensorFlow, and Hugging Face), and dependencies to other accounts and Regions. This can help you overcome service limits and optimize GPU compute resources. You can also selectively replicate Amazon ECR repositories from specific source accounts and Regions. For more information, see [Cross-Region replication in Amazon ECR has landed](https://aws.amazon.com/blogs/containers/cross-region-replication-in-amazon-ecr-has-landed/).
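For comparison, native replication is configured at the registry level. The following is a minimal sketch of a cross-account rule (the account ID and Region are examples). Note that it applies only to images pushed after it is enabled, which is the gap this pattern addresses.

```shell
# Sketch of native ECR cross-account replication to another registry
# (destination account ID and Region are examples)
aws ecr put-replication-configuration \
  --replication-configuration '{
    "rules": [{
      "destinations": [{
        "region": "us-west-2",
        "registryId": "111122223333"
      }]
    }]
  }'
```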

## Prerequisites and limitations
<a name="copy-ecr-container-images-across-accounts-regions-prereqs"></a>

**Prerequisites**
+ Two or more active AWS accounts (one source account and one destination account, minimally)
+ Appropriate AWS Identity and Access Management (IAM) permissions in all accounts
+ Docker for building the Lambda container image
+ AWS Command Line Interface (AWS CLI) configured for all accounts

**Limitations**
+ **Untagged image exclusion –** The solution copies only container images that have explicit tags. It skips untagged images, which are addressable only by their `SHA256` digests.
+ **Lambda execution timeout constraints –** AWS Lambda is limited to a maximum 15-minute execution timeout, which may be insufficient to copy large container images or repositories.
+ **Manual container image management –** Changes to the `crane-app.py` Python code require rebuilding and redeploying the Lambda container image.
+ **Limited parallel processing capacity –** The `MaxConcurrency` state setting limits how many repositories you can copy at the same time. However, you can modify this setting in the source account’s AWS CloudFormation template. Note that higher concurrency values can cause you to exceed service rate limits and account-level Lambda execution quotas.

## Architecture
<a name="copy-ecr-container-images-across-accounts-regions-architecture"></a>

**Target stack**

The pattern has four main components:
+ **Source account infrastructure –** CloudFormation template that creates the orchestration components
+ **Destination account infrastructure –** CloudFormation template that creates cross-account access roles
+ **Lambda function –** Python-based function that uses Crane for efficient image copying
+ **Container image –** Docker container that packages the Lambda function with required tools

**Target architecture**

![\[alt text not found\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/787185e7-664b-4ed8-b30f-1d9507f13377/images/cc7d9823-3dc8-4090-a203-910b1ac4447c.png)


**Step Functions workflow**

The Step Functions state machine orchestrates the following steps, as shown in the diagram that follows the list:
+ `PopulateRepositoryList` – Scans Amazon ECR repositories and populates Amazon DynamoDB
+ `GetRepositoryList` – Retrieves the unique repository list from DynamoDB
+ `DeduplicateRepositories` – Ensures that there is no duplicate processing
+ `CopyRepositories` – Handles parallel copying of repositories
+ `NotifySuccess`/`NotifyFailure` – Sends Amazon Simple Notification Service (Amazon SNS) notifications based on the execution outcome

![\[alt text not found\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/787185e7-664b-4ed8-b30f-1d9507f13377/images/1b740084-ba2b-4956-aa12-ebbf52be5e7d.png)
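After both stacks are deployed, the workflow can also be started manually from the source account; the state machine ARN below is a placeholder.

```shell
# Start the replication workflow manually
# (state machine ARN is a placeholder; replace it with your own)
aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:us-east-1:111122223333:stateMachine:ecr-copy
```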


## Tools
<a name="copy-ecr-container-images-across-accounts-regions-tools"></a>

**AWS services**
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications you run on AWS in real time.
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html) is a fully managed NoSQL database service that provides fast, predictable, and scalable performance.
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/welcome.html) helps you coordinate and manage the exchange of messages between publishers and clients, including web servers and email addresses.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.
+ [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) is a serverless orchestration service that helps you combine Lambda functions and other AWS services to build business-critical applications.

**Other tools**
+ [Crane](https://michaelsauter.github.io/crane/index.html) is a Docker orchestration tool. It’s similar to Docker Compose but has additional features.
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating system level to deliver software in containers.

**Code repository**
+ The code for this pattern is available in the GitHub [sample-ecr-copy repository](https://github.com/aws-samples/sample-ecr-copy). You can use the CloudFormation template from the repository to create the underlying resources.

## Best practices
<a name="copy-ecr-container-images-across-accounts-regions-best-practices"></a>

Follow the principle of least privilege and grant the minimum permissions required to perform a task. For more information, see [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#grant-least-priv) and [Security best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the IAM documentation.

## Epics
<a name="copy-ecr-container-images-across-accounts-regions-epics"></a>

### Prepare your environment
<a name="prepare-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure AWS CLI profiles. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, Data engineer, ML engineer | 
| Gather required information. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, Data engineer, ML engineer | 
| Clone the repository. | Clone the pattern’s repository to your local workstation:<pre>git clone https://github.com/aws-samples/sample-ecr-copy</pre> | DevOps engineer, Data engineer, ML engineer | 
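The `source-account` and `destination-account` profile names used throughout this pattern can be created with `aws configure set`. The following sketch uses placeholder credentials and the example Regions from the Additional information section:

```shell
# Placeholder values; substitute your own credentials.
aws configure set aws_access_key_id <source-access-key-id> --profile source-account
aws configure set aws_secret_access_key <source-secret-access-key> --profile source-account
aws configure set region us-east-1 --profile source-account

aws configure set aws_access_key_id <destination-access-key-id> --profile destination-account
aws configure set aws_secret_access_key <destination-secret-access-key> --profile destination-account
aws configure set region us-east-2 --profile destination-account

# Confirm that each profile resolves to the expected account.
aws sts get-caller-identity --profile source-account
aws sts get-caller-identity --profile destination-account
```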

### Deploy infrastructure for the destination account
<a name="deploy-infrastructure-for-the-destination-account"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Validate the template. | Validate the CloudFormation template:<pre>aws cloudformation validate-template \<br />  --template-body file://"Destination Account cf_template.yml" \<br />  --profile destination-account</pre> | DevOps engineer, ML engineer, Data engineer | 
| Deploy the destination infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Verify the deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 
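The table above links to the full deployment details. As a sketch, the corresponding AWS CLI call for the validated template might look like the following, where the stack name is a placeholder and `SourceAccountId` comes from the Configuration parameters table:

```shell
# Deploy the destination-account stack (stack name is a placeholder).
aws cloudformation deploy \
  --template-file "Destination Account cf_template.yml" \
  --stack-name ecr-copy-destination \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides \
      SourceAccountId=111111111111 \
  --profile destination-account \
  --region us-east-2
```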

### Build and deploy the Lambda container image
<a name="build-and-deploy-the-lam-container-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare the container build. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Build the container image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Create a repository and upload the image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Verify the image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
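A typical build-and-push sequence for the Lambda container image is sketched below. The repository name matches the `LambdaImageUri` example in the Additional information section; the account ID and Region are placeholders:

```shell
ACCOUNT=111111111111
REGION=us-east-1
REPO=ecr-copy-lambda

# Build the image from the Dockerfile in the current directory.
docker build -t "$REPO:latest" .

# Create the repository (ignore the error if it already exists).
aws ecr create-repository --repository-name "$REPO" \
  --profile source-account --region "$REGION" || true

# Authenticate Docker to Amazon ECR, then tag and push the image.
aws ecr get-login-password --profile source-account --region "$REGION" \
  | docker login --username AWS --password-stdin "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com"
docker tag "$REPO:latest" "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
docker push "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
```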

### Deploy the source account infrastructure
<a name="deploy-the-source-account-infrastructure"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Prepare deployment parameters. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, DevOps engineer, ML engineer | 
| Validate the source template. | Validate the source CloudFormation template:<pre>aws cloudformation validate-template \<br />  --template-body file://"Source Account Cf template.yml" \<br />  --profile source-account</pre> | Data engineer, ML engineer, DevOps engineer | 
| Deploy the source infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
| Verify the deployment and collect outputs. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 
| Confirm your email subscription. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | Data engineer, ML engineer, DevOps engineer | 
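Using the parameter names from the Configuration parameters table in the Additional information section, the source-account deployment might be sketched as follows (the stack name and parameter values are placeholders):

```shell
# Deploy the source-account stack with the pattern's configuration parameters.
aws cloudformation deploy \
  --template-file "Source Account Cf template.yml" \
  --stack-name ecr-copy-source \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides \
      SourceAccountId=111111111111 \
      DestinationAccountId=222222222222 \
      SourceRegion=us-east-1 \
      DestinationRegion=us-east-2 \
      NotificationEmail=abc@xyz.com \
      RepositoryList=repo1,repo2,repo3 \
      LambdaImageUri=111111111111.dkr.ecr.us-east-1.amazonaws.com/ecr-copy-lambda:latest \
  --profile source-account \
  --region us-east-1
```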

### Run and monitor the copy process
<a name="run-and-monitor-the-copy-process"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run and monitor the copy process. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 
| Run the step function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 
| Monitor progress. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, ML engineer, Data engineer | 
| Check the results. | Wait for the process to complete (updated every 30 seconds):<pre>while true; do<br />  STATUS=$(aws stepfunctions describe-execution \<br />    --execution-arn $EXECUTION_ARN \<br />    --profile source-account \<br />    --region $SOURCE_REGION \<br />    --query 'status' \<br />    --output text)<br />  <br />  echo "Current status: $STATUS"<br />  <br />  if [[ "$STATUS" == "SUCCEEDED" || "$STATUS" == "FAILED" || "$STATUS" == "TIMED_OUT" || "$STATUS" == "ABORTED" ]]; then<br />    break<br />  fi<br />  <br />  sleep 30<br />done<br /><br />echo "Final execution status: $STATUS"</pre> | DevOps engineer, ML engineer, Data engineer | 
| Verify the images. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | DevOps engineer, Data engineer, ML engineer | 
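The monitoring loop in the table above assumes that `$EXECUTION_ARN` is already set. One way to obtain it is to start the state machine from the CLI; the state machine ARN below is a placeholder (take the real value from your source stack outputs):

```shell
SOURCE_REGION=us-east-1

# Placeholder ARN: use the value from the source stack outputs.
STATE_MACHINE_ARN=arn:aws:states:us-east-1:111111111111:stateMachine:ecr-copy

EXECUTION_ARN=$(aws stepfunctions start-execution \
  --state-machine-arn "$STATE_MACHINE_ARN" \
  --profile source-account \
  --region "$SOURCE_REGION" \
  --query 'executionArn' \
  --output text)

echo "Started execution: $EXECUTION_ARN"
```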

## Troubleshooting
<a name="copy-ecr-container-images-across-accounts-regions-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Step functions fail to run. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/copy-ecr-container-images-across-accounts-regions.html) | 

## Related resources
<a name="copy-ecr-container-images-across-accounts-regions-resources"></a>
+ [Crane documentation](https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md)
+ [What is Amazon Elastic Container Registry?](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html)
+ [What is AWS Lambda?](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)
+ [What is Step Functions?](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html)

## Additional information
<a name="copy-ecr-container-images-across-accounts-regions-additional"></a>

**Configuration parameters**


| Parameter | Description | Example |
| --- | --- | --- |
| `SourceAccountId` | Source AWS account ID | `111111111111` | 
| `DestinationAccountId` | Destination AWS account ID | `222222222222` | 
| `DestinationRegion` | Target AWS Region | `us-east-2` | 
| `SourceRegion` | Source AWS Region | `us-east-1` | 
| `NotificationEmail` | Email for notifications | `abc@xyz.com` | 
| `RepositoryList` | Repositories to copy | `repo1,repo2,repo3` | 
| `LambdaImageUri` | Lambda container image URI | `${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/ecr-copy-lambda:latest` | 

# Create an Amazon ECS task definition and mount a file system on EC2 instances using Amazon EFS
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs"></a>

*Durga Prasad Cheepuri, Amazon Web Services*

## Summary
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-summary"></a>

This pattern provides code samples and steps to create an Amazon Elastic Container Service (Amazon ECS) task definition that runs on Amazon Elastic Compute Cloud (Amazon EC2) instances in the Amazon Web Services (AWS) Cloud, while using Amazon Elastic File System (Amazon EFS) to mount a file system on those EC2 instances. Amazon ECS tasks that use Amazon EFS automatically mount the file systems that you specify in the task definition and make these file systems available to the task’s containers across all Availability Zones in an AWS Region.

To meet your persistent storage and shared storage requirements, you can use Amazon ECS and Amazon EFS together. For example, you can use Amazon EFS to store persistent user data and application data for your applications with active and standby ECS container pairs running in different Availability Zones for high availability. You can also use Amazon EFS to store shared data that can be accessed in parallel by ECS containers and distributed job workloads.

To use Amazon EFS with Amazon ECS, you can add one or more volume definitions to a task definition. A volume definition includes an Amazon EFS file system ID, access point ID, and a configuration for AWS Identity and Access Management (IAM) authorization or Transport Layer Security (TLS) encryption in transit. You can use container definitions within task definitions to specify the task definition volumes that get mounted when the container runs. When a task that uses an Amazon EFS file system runs, Amazon ECS ensures that the file system is mounted and available to the containers that need access to it.
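As a sketch of what such a volume definition looks like in a task definition (the file system ID, access point ID, and container values below are placeholders), consider the following fragment, which also validates the JSON locally:

```shell
# Sketch of a task definition with an Amazon EFS volume (IDs are placeholders).
cat > taskdef.json <<'EOF'
{
  "family": "efs-demo",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "app",
      "image": "nginx",
      "memory": 256,
      "mountPoints": [
        { "sourceVolume": "shared-data", "containerPath": "/mnt/efs" }
      ]
    }
  ],
  "volumes": [
    {
      "name": "shared-data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",
        "transitEncryption": "ENABLED",
        "authorizationConfig": {
          "accessPointId": "fsap-1234567890abcdef0",
          "iam": "ENABLED"
        }
      }
    }
  ]
}
EOF

# Confirm that the file is valid JSON before registering it.
python3 -m json.tool taskdef.json > /dev/null && echo "taskdef.json OK"
```

You could then register the task definition with `aws ecs register-task-definition --cli-input-json file://taskdef.json`.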

## Prerequisites and limitations
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ A virtual private cloud (VPC) with a virtual private network (VPN) endpoint or a router
+ (Recommended) [Amazon ECS container agent 1.38.0 or later](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-versions.html) for compatibility with Amazon EFS access points and IAM authorization features (For more information, see the AWS blog post [New for Amazon EFS – IAM Authorization and Access Points](https://aws.amazon.com/blogs/aws/new-for-amazon-efs-iam-authorization-and-access-points/).)

**Limitations**
+ Amazon ECS container agent versions earlier than 1.35.0 don’t support Amazon EFS file systems for tasks that use the EC2 launch type.

## Architecture
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-architecture"></a>

The following diagram shows an example of an application that uses Amazon ECS to create a task definition and mount an Amazon EFS file system on EC2 instances in ECS containers.

![\[Amazon ECS architecture with task definition, ECS service, containers, and EFS file system integration.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/090a3f03-a4c6-47e3-b1ae-b0eb5c5b269c/images/343e0f1d-44ee-4ec2-8392-aeddc0e48b83.png)


The diagram shows the following workflow:

1. Create an Amazon EFS file system.

1. Create a task definition with a container.

1. Configure the container instances to mount the Amazon EFS file system. The task definition references the volume mounts, so the container instance can use the Amazon EFS file system. ECS tasks have access to the same Amazon EFS file system, regardless of which container instance those tasks are created on.

1. Create an Amazon ECS service with three instances of the task definition.

**Technology stack**
+ Amazon EC2
+ Amazon ECS
+ Amazon EFS

## Tools
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-tools"></a>
+ [Amazon EC2](https://docs.aws.amazon.com/ec2/?id=docs_gateway) – Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. You can use Amazon EC2 to launch as many or as few virtual servers as you need, and you can scale out or scale in.
+ [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service for running, stopping, and managing containers on a cluster. You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of EC2 instances that you manage.
+ [Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) – Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
+ [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – The AWS Command Line Interface (AWS CLI) is an open-source tool for interacting with AWS services through commands in your command-line shell. With minimal configuration, you can run AWS CLI commands that implement functionality equivalent to that provided by the browser-based AWS Management Console from a command prompt.

## Epics
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-epics"></a>

### Create an Amazon EFS file system
<a name="create-an-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EFS file system by using the AWS Management Console. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.html) | AWS DevOps | 
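If you prefer the AWS CLI to the console, the file system and its mount targets can be created as follows. The creation token, file system ID, subnet ID, and security group ID are placeholders:

```shell
# Create the file system.
aws efs create-file-system \
  --creation-token ecs-efs-demo \
  --performance-mode generalPurpose \
  --tags Key=Name,Value=ecs-efs-demo

# Create a mount target in each subnet that hosts container instances.
aws efs create-mount-target \
  --file-system-id fs-12345678 \
  --subnet-id subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0
```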

### Create an Amazon ECS task definition that uses an Amazon EFS file system by using the console or the AWS CLI
<a name="create-an-amazon-ecs-task-definition-by-using-either-an-amazon-efs-file-system-or-the-aws-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a task definition using an Amazon EFS file system. | Create a task definition by using the [new Amazon ECS console](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html) or [classic Amazon ECS console](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition-classic.html) with the following configurations:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.html) | AWS DevOps | 
| Create a task definition using the AWS CLI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs.html) | AWS DevOps | 
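After the task definition is registered, the service from step 4 of the workflow (three instances of the task definition) might be created as follows; the cluster, service, and family names are placeholders:

```shell
# Run three copies of the task as an ECS service on EC2 container instances.
aws ecs create-service \
  --cluster efs-demo-cluster \
  --service-name efs-demo-service \
  --task-definition efs-demo \
  --desired-count 3 \
  --launch-type EC2
```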

## Related resources
<a name="create-an-amazon-ecs-task-definition-and-mount-a-file-system-on-ec2-instances-using-amazon-efs-resources"></a>
+ [Amazon ECS task definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html)
+ [Amazon EFS volumes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/efs-volumes.html)

## Attachments
<a name="attachments-090a3f03-a4c6-47e3-b1ae-b0eb5c5b269c"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/090a3f03-a4c6-47e3-b1ae-b0eb5c5b269c/attachments/attachment.zip)

# Deploy Lambda functions with container images
<a name="deploy-lambda-functions-with-container-images"></a>

*Ram Kandaswamy, Amazon Web Services*

## Summary
<a name="deploy-lambda-functions-with-container-images-summary"></a>

AWS Lambda supports container images as a deployment model. This pattern shows how to deploy Lambda functions through container images.

Lambda is a serverless, event-driven compute service that you can use to run code for virtually any type of application or backend service without provisioning or managing servers. With container image support for Lambda functions, you get the benefits of up to 10 GB of storage for your application artifact and the ability to use familiar container image development tools.

The example in this pattern uses Python as the underlying programming language, but you can use other languages, such as Java, Node.js, or Go. For the source, consider a Git-based system such as GitHub, GitLab, or Bitbucket, or use Amazon Simple Storage Service (Amazon S3).

## Prerequisites and limitations
<a name="deploy-lambda-functions-with-container-images-prereqs"></a>

**Prerequisites**
+ Amazon Elastic Container Registry (Amazon ECR) activated
+ Application code
+ Docker images with the runtime interface client and the latest version of Python
+ Working knowledge of Git

**Limitations**
+ Maximum image size supported is 10 GB.
+ Maximum runtime for a Lambda-based container deployment is 15 minutes.

## Architecture
<a name="deploy-lambda-functions-with-container-images-architecture"></a>

**Target architecture**

![\[Four-step process to create the Lambda function.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e421cc58-d33e-493d-b0bb-c3ffe39c2eb9/images/7f36d3d8-d161-497a-b036-26d886a16c69.png)


 

1. You create a Git repository and commit the application code to the repository.

1. The AWS CodeBuild project is triggered by commit changes.

1. The CodeBuild project creates the Docker image and publishes the built image to Amazon ECR.

1. You create the Lambda function using the image in Amazon ECR.

**Automation and scale**

This pattern can be automated by using AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), or API operations from an SDK. Lambda can automatically scale based on the number of requests, and you can tune it by using the concurrency parameters. For more information, see the [Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/lambda-concurrency.html).

## Tools
<a name="deploy-lambda-functions-with-container-images-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions. This pattern uses [AWS CloudFormation Application Composer](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/app-composer-for-cloudformation.html), which helps you visually design and edit CloudFormation templates.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [GitHub](https://docs.github.com/en/repositories/creating-and-managing-repositories/quickstart-for-repositories), [GitLab](https://docs.gitlab.com/ee/user/get_started/get_started_projects.html), and [Bitbucket](https://support.atlassian.com/bitbucket-cloud/docs/tutorial-learn-bitbucket-with-git/) are commonly used Git-based source control systems for tracking source code changes.

## Best practices
<a name="deploy-lambda-functions-with-container-images-best-practices"></a>
+ Make your function as efficient and small as possible to avoid loading unnecessary files.
+ Strive to place static layers higher in your Dockerfile, and place layers that change more often lower down. This improves caching, which improves performance.
+ The image owner is responsible for updating and patching the image. Add that update cadence to your operational processes. For more information, see the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html#function-code).

## Epics
<a name="deploy-lambda-functions-with-container-images-epics"></a>

### Create a project in CodeBuild
<a name="create-a-project-in-codebuild"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Git repository. | Create a Git repository that will contain the application source code, the Dockerfile, and the `buildspec.yaml` file.  | Developer | 
| Create a CodeBuild project. | To use a CodeBuild project to create the custom Lambda image, do the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-lambda-functions-with-container-images.html) | Developer | 
| Edit the Dockerfile. | The Dockerfile should be located in the top-level directory where you're developing the application. The Python code should be in the `src` folder. When you create the image, use one of the [official Lambda supported images](https://gallery.ecr.aws/lambda?page=1). Otherwise, a bootstrap error will occur, making the packaging process more difficult. For details, see the [Additional information](#deploy-lambda-functions-with-container-images-additional) section. | Developer | 
| Create a repository in Amazon ECR. | Create a container repository in Amazon ECR. In the following example command, the name of the repository created is `cf-demo`:<pre>aws ecr create-repository --repository-name cf-demo</pre>The repository will be referenced in the `buildspec.yaml` file. | AWS administrator, Developer | 
| Push the image to Amazon ECR. | You can use CodeBuild to perform the image-build process. CodeBuild needs permission to interact with Amazon ECR and Amazon S3. As part of the process, the Docker image is built and pushed to the Amazon ECR registry. For details on the template and the code, see the [Additional information](#deploy-lambda-functions-with-container-images-additional) section. | Developer | 
| Verify that the image is in the repository. | To verify that the image is in the repository, on the Amazon ECR console, choose **Repositories**. The image should be listed, with tags and with the results of a vulnerability scan report if that feature was turned on in the Amazon ECR settings.  For more information, see the [AWS documentation](https://docs.aws.amazon.com/cli/latest/reference/ecr/put-registry-scanning-configuration.html). | Developer | 

### Create the Lambda function to run the image
<a name="create-the-lambda-function-to-run-the-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Lambda function. | On the Lambda console, choose **Create function**, and then choose **Container image**. Enter the function name and the URI for the image that is in the Amazon ECR repository, and then choose **Create function**. For more information, see the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html). | App developer | 
| Test the Lambda function. | To invoke and test the function, choose **Test**. For more information, see the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/testing-functions.html). | App developer | 
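The console steps in this epic can also be scripted. A hedged CLI equivalent, with a placeholder execution role ARN and image URI, is:

```shell
# Create the function from the container image (values are placeholders).
aws lambda create-function \
  --function-name cf-demo-function \
  --package-type Image \
  --code ImageUri=111111111111.dkr.ecr.us-east-1.amazonaws.com/cf-demo:1 \
  --role arn:aws:iam::111111111111:role/lambda-execution-role

# Invoke the function once and inspect the response payload.
aws lambda invoke \
  --function-name cf-demo-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{}' \
  response.json
cat response.json
```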

## Troubleshooting
<a name="deploy-lambda-functions-with-container-images-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Build is not succeeding. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-lambda-functions-with-container-images.html) | 

## Related resources
<a name="deploy-lambda-functions-with-container-images-resources"></a>
+ [Base images for Lambda](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-images.html)
+ [Docker sample for CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html)
+ [Pass temporary credentials](https://aws.amazon.com/premiumsupport/knowledge-center/codebuild-temporary-credentials-docker/)

## Additional information
<a name="deploy-lambda-functions-with-container-images-additional"></a>

**Edit the Dockerfile**

The following code shows the commands that you edit in the Dockerfile:

```
FROM public.ecr.aws/lambda/python:3.xx

# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT} 
COPY requirements.txt  ${LAMBDA_TASK_ROOT} 

# install dependencies
RUN pip3 install --user -r requirements.txt

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.lambda_handler" ]
```

In the `FROM` command, use the appropriate value for the Python version that is supported by Lambda (for example, `3.12`). This is the base image that is available in the public Amazon ECR image repository. 

The `COPY app.py ${LAMBDA_TASK_ROOT}` command copies the code to the task root directory, which the Lambda function will use. This command uses the environment variable so that you don't have to hard-code the actual path. The function to be run is passed as an argument to the `CMD [ "app.lambda_handler" ]` command.

The `COPY requirements.txt ${LAMBDA_TASK_ROOT}` command copies the file that lists the dependencies required by the code. 

The `RUN pip3 install --user -r requirements.txt` command installs the dependencies in the local user directory. 

To build your image, run the following command.

```
docker build -t <image name> .
```

**Add the image in Amazon ECR**

In the following code, replace `aws_account_id` with the account number, and replace `us-east-1` if you are using a different Region. The `buildspec` file uses the CodeBuild build number to uniquely identify image versions as a tag value. You can change this to fit your requirements.

*The buildspec custom code*

```
phases:
  install:
    runtime-versions:
       python: 3.xx
  pre_build:
    commands:
      - python3 --version
      - pip3 install --upgrade pip
      - pip3 install --upgrade awscli
      - sudo docker info
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - ls
      - cd app
      - docker build -t cf-demo:$CODEBUILD_BUILD_NUMBER .
      - docker container ls
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.us-east-1.amazonaws.com
      - docker tag cf-demo:$CODEBUILD_BUILD_NUMBER aws_account_id.dkr.ecr.us-east-1.amazonaws.com/cf-demo:$CODEBUILD_BUILD_NUMBER
      - docker push aws_account_id.dkr.ecr.us-east-1.amazonaws.com/cf-demo:$CODEBUILD_BUILD_NUMBER
```

# Deploy Java microservices on Amazon ECS using AWS Fargate
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate"></a>

*Vijay Thompson and Sandeep Bondugula, Amazon Web Services*

## Summary
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-summary"></a>

This pattern provides guidance for deploying containerized Java microservices on Amazon Elastic Container Service (Amazon ECS) by using AWS Fargate. The pattern doesn't use Amazon Elastic Container Registry (Amazon ECR) for container management; instead, Docker images are pulled from Docker Hub.

## Prerequisites and limitations
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-prereqs"></a>

**Prerequisites**
+ An existing Java microservices application hosted on Docker Hub
+ A public Docker repository
+ An active AWS account
+ Familiarity with AWS services, including Amazon ECS and AWS Fargate
+ Familiarity with Docker, Java, and the Spring Boot framework
+ Amazon Relational Database Service (Amazon RDS) up and running (optional)
+ A virtual private cloud (VPC) if the application requires Amazon RDS (optional)

## Architecture
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-architecture"></a>

**Source technology stack**
+ Java microservices (for example, implemented in Spring Boot) and deployed on Docker

**Source architecture**

![\[Source architecture for Java microservices deployed on Docker\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/65185957-2b8b-43a6-964c-95ce0a45ba17/images/0a946ca8-fe37-4ede-85cb-a80a1c36105d.png)


**Target technology stack**
+ An Amazon ECS cluster that hosts each microservice by using Fargate
+ A VPC network to host the Amazon ECS cluster and associated security groups 
+ A cluster/task definition for each microservice that spins up containers by using Fargate

**Target architecture**

![\[Target architecture on Java microservices on Amazon ECS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/65185957-2b8b-43a6-964c-95ce0a45ba17/images/b21349ea-21fc-4688-b76a-1bde479858aa.png)


## Tools
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-tools"></a>

**Tools**
+ [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) eliminates the need to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines. 
+ [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html) helps you run containers without needing to manage servers or Amazon Elastic Compute Cloud (Amazon EC2) instances. It’s used in conjunction with Amazon Elastic Container Service (Amazon ECS).
+ [Docker](https://www.docker.com/) is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called *containers* that have everything the software needs to run, including libraries, system tools, code, and runtime. 

**Docker code**

The following Dockerfile specifies the Java Development Kit (JDK) version that is used, where the Java archive (JAR) file exists, the port number that is exposed, and the entry point for the application.

```
FROM openjdk:11
ADD target/Spring-docker.jar Spring-docker.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","Spring-docker.jar"]
```
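Because this pattern pulls images from Docker Hub rather than Amazon ECR, the image built from this Dockerfile must be pushed there. A sketch, using the same placeholder repository name that appears later in the task definition steps, is:

```shell
# Build the Spring Boot image from the Dockerfile above.
docker build -t spring-docker .

# Tag and push to a public Docker Hub repository (account and tag are placeholders).
docker login
docker tag spring-docker docker.io/sample-repo/sample-application:sample-tag-name
docker push docker.io/sample-repo/sample-application:sample-tag-name
```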

## Epics
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-epics"></a>

### Create new task definitions
<a name="create-new-task-definitions"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a task definition. | Running a Docker container in Amazon ECS requires a task definition. Open the Amazon ECS console at [https://console.aws.amazon.com/ecs/](https://console.aws.amazon.com/ecs/), choose **Task definitions**, and then create a new task definition. For more information, see the [Amazon ECS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html). | AWS systems administrator, App developer | 
| Choose launch type. | Choose **Fargate** as the launch type. | AWS systems administrator, App developer | 
| Configure the task. | Define a task name and configure the application with the appropriate amount of task memory and CPU. | AWS systems administrator, App developer | 
| Define the container. | Specify the container name. For the image, enter the Docker site name, the repository name, and the tag name of the Docker image (`docker.io/sample-repo/sample-application:sample-tag-name`). Set memory limits for the application, and set port mappings (`8080, 80`) for the allowed ports. | AWS systems administrator, App developer | 
| Create the task. | When the task and container configurations are in place, create the task. For detailed instructions, see the links in the *Related resources* section. | AWS systems administrator, App developer | 
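
Expressed as a task definition JSON fragment, the container settings from this epic look roughly like the following. The family name, CPU, and memory values here are illustrative placeholders; the image and port values are the sample ones used in the tasks above.

```
{
  "family": "sample-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "sample-application",
      "image": "docker.io/sample-repo/sample-application:sample-tag-name",
      "portMappings": [{ "containerPort": 8080, "hostPort": 8080 }]
    }
  ]
}
```

Note that Fargate tasks require the `awsvpc` network mode, and the host port must match the container port.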

### Configure the cluster
<a name="configure-the-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and configure a cluster. | Choose **Networking only** as the cluster type, configure the name, and then create the cluster or use an existing cluster if available. For more information, see the [Amazon ECS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html). | AWS systems administrator, App developer | 

### Configure the task
<a name="configure-task"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a task. | Inside the cluster, choose **Run new task**. | AWS systems administrator, App developer | 
| Choose launch type. | Choose **Fargate** as the launch type. | AWS systems administrator, App developer | 
| Choose task definition, revision, and platform version. | Choose the task that you want to run, the revision of the task definition, and the platform version. | AWS systems administrator, App developer | 
| Select the cluster. | Choose the cluster where you want to run the task. | AWS systems administrator, App developer | 
| Specify the number of tasks. | Configure the number of tasks that should run. If you're launching with two or more tasks, a load balancer is required to distribute the traffic among the tasks. | AWS systems administrator, App developer | 
| Specify the task group. | (Optional) Specify a task group name to identify a set of related tasks as a task group. | AWS systems administrator, App developer | 
| Configure the cluster VPC, subnets, and security groups. | Configure the cluster VPC and the subnets on which you want to deploy the application. Create or update security groups (HTTP, HTTPS, and port 8080) to provide access to inbound and outbound connections. | AWS systems administrator, App developer | 
| Configure public IP settings. | Enable or disable the public IP, depending on whether you want to use a public IP address for Fargate tasks. The default, recommended option is **Enabled**. | AWS systems administrator, App developer | 
| Review settings and create the task. | Review your settings, and then choose **Run Task**. | AWS systems administrator, App developer | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy the application URL. | When the task status has been updated to *Running*, select the task. In the Networking section, copy the public IP. | AWS systems administrator, App developer | 
| Test your application. | In your browser, enter the public IP to test the application. | AWS systems administrator, App developer | 

## Related resources
<a name="deploy-java-microservices-on-amazon-ecs-using-aws-fargate-resources"></a>
+ [Docker Basics for Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html) (Amazon ECS documentation)
+ [Amazon ECS on AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html) (Amazon ECS documentation)
+ [Creating a Task Definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html) (Amazon ECS documentation)
+ [Creating a Cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html) (Amazon ECS documentation)
+ [Configuring Basic Service Parameters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/basic-service-params.html) (Amazon ECS documentation)
+ [Configuring a Network](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-configure-network.html) (Amazon ECS documentation)
+ [Deploying Java Microservices on Amazon ECS](https://aws.amazon.com/blogs/compute/deploying-java-microservices-on-amazon-ec2-container-service/) (blog post)

# Deploy Kubernetes resources and packages using Amazon EKS and a Helm chart repository in Amazon S3
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3"></a>

*Sagar Panigrahi, Amazon Web Services*

## Summary
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-summary"></a>

This pattern helps you to manage Kubernetes applications efficiently, regardless of their complexity. The pattern integrates Helm into your existing continuous integration and continuous delivery (CI/CD) pipelines to deploy applications into a Kubernetes cluster. Helm is a Kubernetes package manager that helps you manage Kubernetes applications. Helm charts help to define, install, and upgrade complex Kubernetes applications. Charts can be versioned and stored in Helm repositories, which improves mean time to restore (MTTR) during outages.

This pattern uses Amazon Elastic Kubernetes Service (Amazon EKS) for the Kubernetes cluster. It uses Amazon Simple Storage Service (Amazon S3) as a Helm chart repository, so that the charts can be centrally managed and accessed by developers across the organization.

## Prerequisites and limitations
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-prereqs"></a>

**Prerequisites**
+ An active Amazon Web Services (AWS) account with a virtual private cloud (VPC)
+ An Amazon EKS cluster 
+ Worker nodes set up within the Amazon EKS cluster and ready to take workloads
+ Kubectl installed on the client machine and configured with the Amazon EKS kubeconfig file for the target cluster
+ AWS Identity and Access Management (IAM) access to create the S3 bucket
+ IAM (programmatic or role) access to Amazon S3 from the client machine
+ Source code management and a CI/CD pipeline

**Limitations**
+ There is no support at this time for upgrading, deleting, or managing custom resource definitions (CRDs).
+ If you are using a resource that refers to a CRD, the CRD must be installed separately (outside of the chart).

**Product versions**
+ Helm v3.6.3

## Architecture
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-architecture"></a>

**Target technology stack**
+ Amazon EKS
+ Amazon VPC
+ Amazon S3
+ Source code management
+ Helm
+ Kubectl

**Target architecture**

![\[Client Helm and Kubectl deploy a Helm chart repo in Amazon S3 for Amazon EKS clusters.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/d3f993e6-4d96-4cb9-a075-c4debe431fd7/images/2f09f7bb-440a-4c4b-b29f-08d136d1ada4.png)


 

**Automation and scale**
+ AWS CloudFormation can be used to automate the infrastructure creation. For more information, see [Creating Amazon EKS resources with AWS CloudFormation](https://docs.aws.amazon.com/eks/latest/userguide/creating-resources-with-cloudformation.html) in the Amazon EKS documentation.
+ Incorporate Helm into your existing CI/CD automation tool to automate the packaging and versioning of Helm charts (out of scope for this pattern).
+ GitVersion or Jenkins build numbers can be used to automate the versioning of charts.
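
One lightweight way to wire a CI build number into chart versioning is to rewrite the `version` field in `Chart.yaml` before packaging. The following is only a sketch: the `BUILD_NUMBER` variable stands in for your CI system's build counter, the patch-number scheme is an assumption, and tools such as GitVersion provide richer semantics.

```shell
# Stamp a CI build number into the chart version before running helm package.
# BUILD_NUMBER is a placeholder for your CI system's build counter.
BUILD_NUMBER=42
CHART_YAML=/tmp/my-nginx-Chart.yaml

# Minimal Chart.yaml, written here so the sketch is self-contained.
cat > "$CHART_YAML" <<'EOF'
apiVersion: v2
name: my-nginx
version: 0.1.0
EOF

# Replace the patch component of the version with the build number.
sed -i.bak "s/^version: \([0-9]*\.[0-9]*\)\..*/version: \1.${BUILD_NUMBER}/" "$CHART_YAML"
grep '^version:' "$CHART_YAML"
```

With `BUILD_NUMBER=42`, the `version` field becomes `0.1.42`, so each CI run packages a uniquely versioned chart.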

## Tools
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-tools"></a>

**Tools**
+ [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) – Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service for running Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
+ [Helm](https://helm.sh/docs/) – Helm is a package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster.
+ [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/gsg/GetStartedWithS3.html) – Amazon Simple Storage Service (Amazon S3) is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web.
+ [Kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) – Kubectl is a command line utility for running commands against Kubernetes clusters.

**Code**

The example code is attached.

## Epics
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-epics"></a>

### Configure and initialize Helm
<a name="configure-and-initialize-helm"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the Helm client. | To download and install the Helm client on your local system, use the following command. <pre>curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 &#124; bash</pre>The installation script invokes `sudo` itself when it needs elevated permissions, so you don't need to prefix the command with `sudo`. | DevOps engineer | 
| Validate the Helm installation. | To validate that Helm is able to communicate with the Kubernetes API server within the Amazon EKS cluster, run `helm version`. | DevOps engineer | 

### Create and install a Helm chart in the Amazon EKS cluster
<a name="create-and-install-a-helm-chart-in-the-amazon-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Helm chart for NGINX. | To create a Helm chart named `my-nginx` on the client machine, run `helm create my-nginx`. | DevOps engineer | 
| Review the structure of the chart. | To review the structure of the chart, run the tree command `tree my-nginx/`. | DevOps engineer | 
| Deactivate service account creation in the chart. | In `values.yaml`, under the `serviceAccount` section, set the `create` key to `false`. This is turned off because there is no requirement to create a service account for this pattern. | DevOps engineer | 
| Validate (lint) the modified chart for syntactical errors. | To validate the chart for any syntactical error before installing it in the target cluster, run `helm lint my-nginx/`. | DevOps engineer | 
| Install the chart to deploy Kubernetes resources. | To install the Helm chart, use the following command. <pre>helm install my-nginx-release my-nginx/ --debug --namespace helm-space --create-namespace</pre>In Helm v3, the release name is a positional argument (the v2 `--name` flag was removed). The optional `debug` flag outputs all debug messages during the installation. The `namespace` flag specifies the namespace in which the resources that are part of this chart are created, and `create-namespace` creates that namespace if it doesn't already exist. | DevOps engineer | 
| Review the resources in the Amazon EKS cluster. | To review the resources that were created as part of the Helm chart in the `helm-space` namespace, use the following command. <pre>kubectl get all -n helm-space</pre> | DevOps engineer | 
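
For reference, `helm create my-nginx` with Helm v3.6 scaffolds a layout similar to the following; your output from the `tree` command should look close to this.

```
my-nginx/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests/
│       └── test-connection.yaml
└── values.yaml
```

`values.yaml` holds the default configuration values (including the `serviceAccount` and `replicaCount` keys modified in this pattern), and the `templates/` folder holds the Kubernetes manifests that Helm renders from those values.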

### Roll back to a previous version of a Kubernetes application
<a name="roll-back-to-a-previous-version-of-a-kubernetes-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify and upgrade the release. | To modify the chart, in `values.yaml`, change the `replicaCount` value to `2`. Then upgrade the already installed release by running the following command.<pre>helm upgrade my-nginx-release my-nginx/ --namespace helm-space</pre> | DevOps engineer | 
| Review the history of the Helm release. | To list all the revisions for a specific release that have been installed using Helm, run the following command. (In Helm v3, releases are scoped to a namespace.) <pre>helm history my-nginx-release --namespace helm-space</pre> | DevOps engineer | 
| Review the details for a specific revision. | Before switching or rolling back to a working version, and for an additional layer of validation before installing a revision, view which values were passed to each of the revisions by using the following command.<pre>helm get values my-nginx-release --revision 2 --namespace helm-space</pre>In Helm v3, `helm get` requires a subcommand such as `values`, `manifest`, or `all`. | DevOps engineer | 
| Roll back to a previous version. | To roll back to a previous revision, use the following command. <pre>helm rollback my-nginx-release 1 --namespace helm-space</pre>This example rolls back to revision number 1. | DevOps engineer | 

### Initialize an S3 bucket as a Helm repository
<a name="initialize-an-s3-bucket-as-a-helm-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an S3 bucket for Helm charts. | Create a unique S3 bucket. In the bucket, create a folder called `charts`. The example in this pattern uses `s3://my-helm-charts/charts` as the target chart repository. | Cloud administrator | 
| Install the Helm plugin for Amazon S3. | To install the helm-s3 plugin on your client machine, use the following command. <pre>helm plugin install https://github.com/hypnoglow/helm-s3.git --version 0.10.0</pre>Note: Helm v3 support is available with plugin version 0.9.0 and later. | DevOps engineer | 
| Initialize the Amazon S3 Helm repository. | To initialize the target folder as a Helm repository, use the following command. <pre>helm s3 init s3://my-helm-charts/charts</pre>The command creates an `index.yaml` file in the target to track all the chart information that is stored at that location. | DevOps engineer | 
| Add the Amazon S3 repository to Helm. | To add the repository in the client machine, use the following command.<pre>helm repo add my-helm-charts s3://my-helm-charts/charts </pre>This command adds an alias to the target repository in the Helm client machine. | DevOps engineer | 
| Review the repository list. | To view the list of repositories in the Helm client machine, run `helm repo list`. | DevOps engineer | 
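
After `helm s3 init`, the `index.yaml` file that tracks the repository starts out essentially empty. Its initial contents look approximately like the following (the timestamp will differ).

```
apiVersion: v1
entries: {}
generated: "2021-01-01T00:00:00Z"
```

Each subsequent `helm s3 push` adds an entry under `entries` with the chart's name, version, and digest, which is how `helm search repo` discovers the available chart versions.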

### Package and store charts in the Amazon S3 Helm repository
<a name="package-and-store-charts-in-the-amazon-s3-helm-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Package the chart. | To package the `my-nginx` chart that you created, run `helm package ./my-nginx/`. The command packages all the contents of the `my-nginx` chart folder into an archive file, which is named using the version number that is mentioned in the `Chart.yaml` file. | DevOps engineer | 
| Store the package in the Amazon S3 Helm repository. | To upload the package to the Helm repository in Amazon S3, run the following command, using the correct name of the `.tgz` file.<pre>helm s3 push ./my-nginx-0.1.0.tgz my-helm-charts</pre> | DevOps engineer | 
| Search for the Helm chart. | To confirm that the chart appears both locally and in the Helm repository in Amazon S3, run the following command.<pre>helm search repo my-nginx</pre> | DevOps engineer | 

### Modify, version, and package a chart
<a name="modify-version-and-package-a-chart"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify and package the chart. | In `values.yaml`, set the `replicaCount` value to `1`, and change the `version` in `Chart.yaml` to `0.1.1`. Then package the chart by running `helm package ./my-nginx/`. Ideally, the version is updated through automation, using tools such as GitVersion or Jenkins build numbers in a CI/CD pipeline. Automating the version number is out of scope for this pattern. | DevOps engineer | 
| Push the new version to the Helm repository in Amazon S3. | To push the new package, version 0.1.1, to the `my-helm-charts` Helm repository in Amazon S3, run the following command.<pre>helm s3 push ./my-nginx-0.1.1.tgz my-helm-charts</pre> | DevOps engineer | 

### Search for and install a chart from the Amazon S3 Helm repository
<a name="search-for-and-install-a-chart-from-the-amazon-s3-helm-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Search for all versions of the my-nginx chart. | To view all the available versions of a chart, run the following command with the `--versions` flag.<pre>helm search repo my-nginx --versions</pre>Without the flag, Helm by default displays the latest uploaded version of a chart. | DevOps engineer | 
| Install a chart from the Amazon S3 Helm repository. | The search results from the previous task show the multiple versions of the `my-nginx` chart. To install the new version (0.1.1) from the Amazon S3 Helm repository, use the following command.<pre>helm upgrade my-nginx-release my-helm-charts/my-nginx --version 0.1.1 --namespace helm-space</pre> | DevOps engineer | 

## Related resources
<a name="deploy-kubernetes-resources-and-packages-using-amazon-eks-and-a-helm-chart-repository-in-amazon-s3-resources"></a>
+ [HELM documentation](https://helm.sh/docs/)
+ [helm-s3 plugin (MIT License)](https://github.com/hypnoglow/helm-s3.git)
+ [HELM client binary](https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3)
+ [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)

## Attachments
<a name="attachments-d3f993e6-4d96-4cb9-a075-c4debe431fd7"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/d3f993e6-4d96-4cb9-a075-c4debe431fd7/attachments/attachment.zip)

# Deploy a CockroachDB cluster in Amazon EKS by using Terraform
<a name="deploy-cockroachdb-on-eks-using-terraform"></a>

*Sandip Gangapadhyay and Kalyan Senthilnathan, Amazon Web Services*

## Summary
<a name="deploy-cockroachdb-on-eks-using-terraform-summary"></a>

This pattern provides a HashiCorp Terraform module for deploying a multi-node [CockroachDB](https://www.cockroachlabs.com/docs/stable/) cluster on Amazon Elastic Kubernetes Service (Amazon EKS) by using the [CockroachDB operator](https://www.cockroachlabs.com/docs/v25.4/cockroachdb-operator-overview). CockroachDB is a distributed SQL database that provides automatic horizontal sharding, high availability, and consistent performance across geographically distributed clusters. This pattern uses Amazon EKS as the managed Kubernetes platform and implements [cert-manager](https://cert-manager.io/docs/) for TLS-secured node communication. It also uses a [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) for traffic distribution and creates CockroachDB [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) with pods that automatically replicate data for fault tolerance and performance.

**Intended audience**

To implement this pattern, we recommend that you are familiar with the following:
+ HashiCorp Terraform concepts and infrastructure as code (IaC) practices
+ AWS services, particularly Amazon EKS
+ Kubernetes fundamentals, including StatefulSets, operators, and service configurations
+ Distributed SQL databases
+ Security concepts, such as TLS certificate management
+ DevOps practices, CI/CD workflows, and infrastructure automation

## Prerequisites and limitations
<a name="deploy-cockroachdb-on-eks-using-terraform-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ Permissions to deploy resources in an Amazon EKS cluster
+ An Amazon EKS cluster, version 1.23 or later, with nodes labeled `node=cockroachdb`
+ [Amazon Elastic Block Store Container Storage Interface (CSI) Driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) version 1.19.0 or later, installed in the Amazon EKS cluster
+ Terraform CLI version 1.0.0 or later, [installed](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli)
+ kubectl, [installed](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html)
+ Git, [installed](https://git-scm.com/install/)
+ AWS Command Line Interface (AWS CLI) version 2.9.18 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
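
Before you start, you can quickly confirm that the command-line prerequisites are on your `PATH`. This loop only reports what is installed locally and makes no AWS calls; checking the minimum versions listed above is left to you.

```shell
# Report whether each required CLI tool is installed on this machine.
for tool in terraform kubectl git aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as `MISSING` should be installed by using the links in the prerequisites list before you continue.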

**Limitations**
+ The CockroachDB Kubernetes operator does not support multiple Kubernetes clusters for multi-Region deployments. For more information about limitations, see [Orchestrate CockroachDB Across Multiple Kubernetes Clusters](https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html#eks) (CockroachDB documentation) and [CockroachDB Kubernetes Operator](https://github.com/cockroachdb/cockroach-operator) (GitHub).
+ Automatic pruning of persistent volume claims (PVCs) is currently disabled by default. This means that after decommissioning and removing a node, the operator will not remove the persistent volume that was mounted to its pod. For more information, see [Automatic PVC pruning](https://www.cockroachlabs.com/docs/stable/scale-cockroachdb-kubernetes.html#automatic-pvc-pruning) in the CockroachDB documentation.

**Product versions**
+ CockroachDB version 22.2.2

## Architecture
<a name="deploy-cockroachdb-on-eks-using-terraform-architecture"></a>

**Target architecture**

The following diagram shows a highly available CockroachDB deployment across three AWS Availability Zones within a virtual private cloud (VPC). The CockroachDB pods are managed through Amazon EKS. The architecture illustrates how users access the database through a Network Load Balancer, which distributes traffic to the CockroachDB pods. The pods run on Amazon Elastic Compute Cloud (Amazon EC2) instances in each Availability Zone, which provides resilience and fault tolerance.

![\[A highly available CockroachDB deployment across three AWS Availability Zones within a VPC.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e22d81ab-b85c-4709-8579-4c9cdb4afdb6/images/4b163abf-6fdc-4310-840c-bda621ab25dd.png)


**Resources created**

Deploying the Terraform module used in this pattern creates the following resources:

1. **Network Load Balancer** – This resource serves as the entry point for client requests and evenly distributes traffic across the CockroachDB instances.

1. **CockroachDB StatefulSet** – The StatefulSet defines the desired state of the CockroachDB deployment within the Amazon EKS cluster. It manages the ordered deployment, scaling, and updates of CockroachDB pods.

1. **CockroachDB pods** – These pods are instances of CockroachDB running as containers within Kubernetes pods. These pods store and manage the data across the distributed cluster.

1. **CockroachDB database** – This is the distributed database that is managed by CockroachDB, spanning multiple pods. It replicates data for high availability, fault tolerance, and performance.

## Tools
<a name="deploy-cockroachdb-on-eks-using-terraform-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.

**Other tools**
+ [HashiCorp Terraform](https://www.terraform.io/docs) is an infrastructure as code (IaC) tool that helps you use code to provision and manage cloud infrastructure and resources.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/) is a command-line interface that helps you run commands against Kubernetes clusters.

**Code repository**

The code for this pattern is available in the GitHub [Deploy a CockroachDB cluster in Amazon EKS using Terraform](https://github.com/aws-samples/crdb-cluster-eks-terraform) repository. The code repository contains the following files and folders for Terraform:
+ `modules` folder – This folder contains the Terraform module for CockroachDB.
+ `main` folder – This folder contains the root module that calls the CockroachDB child module to create the CockroachDB database cluster.
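
In the root module, the child module is typically wired up with a `module` block along the following lines. The source path and input variable names here are illustrative only; check the repository's `variables.tf` for the module's actual interface.

```
module "cockroachdb" {
  # Hypothetical path to the child module within the repository
  source = "./modules/cockroachdb"

  # Illustrative inputs; the real variable names may differ
  cluster_name  = "my-eks-cluster"
  namespace     = "cockroachdb"
  replica_count = 3
}
```

Running `terraform init` and `terraform apply` from the root module then provisions the CockroachDB resources through the child module.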

## Best practices
<a name="deploy-cockroachdb-on-eks-using-terraform-best-practices"></a>
+ Do not scale down to fewer than three nodes. This is considered an anti-pattern on CockroachDB and can cause errors. For more information, see [Cluster Scaling](https://www.cockroachlabs.com/docs/stable/scale-cockroachdb-kubernetes.html) in the CockroachDB documentation.
+ Implement Amazon EKS autoscaling by using Karpenter or Cluster Autoscaler. This allows the CockroachDB cluster to scale horizontally by adding new nodes automatically. For more information, see [Scale cluster compute with Karpenter and Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html) in the Amazon EKS documentation.
**Note**  
Due to the `podAntiAffinity` Kubernetes scheduling rule, only one CockroachDB pod can be scheduled on each Amazon EKS node.
+ For Amazon EKS security best practices, see [Best Practices for Security](https://docs.aws.amazon.com/eks/latest/best-practices/security.html) in the Amazon EKS documentation.
+ For SQL performance best practices for CockroachDB, see [SQL Performance Best Practices](https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html) in the CockroachDB documentation.
+ For more information about setting up an Amazon Simple Storage Service (Amazon S3) remote backend for the Terraform state file, see [Amazon S3](https://developer.hashicorp.com/terraform/language/backend/s3) in the Terraform documentation.

## Epics
<a name="deploy-cockroachdb-on-eks-using-terraform-epics"></a>

### Set up your environment
<a name="set-up-your-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the code repository. | Enter the following command to clone the repository:<pre>git clone https://github.com/aws-samples/crdb-cluster-eks-terraform.git</pre> | DevOps engineer, Git | 
| Update the Terraform variables. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | DevOps engineer, Terraform | 

### Deploy the resources
<a name="deploy-the-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the infrastructure. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | DevOps engineer, Terraform | 

### Verify the deployment
<a name="verify-the-deployment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Verify resource creation. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | DevOps engineer | 
| (Optional) Scale up or down. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | DevOps engineer, Terraform | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete the infrastructure. | Scaling nodes to `0` will reduce compute costs. However, you will still incur charges for the persistent Amazon EBS volumes that were created by this module. To eliminate storage costs, follow these steps to delete all volumes:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | Terraform | 

## Troubleshooting
<a name="deploy-cockroachdb-on-eks-using-terraform-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Error validating provider credentials | When you run the Terraform `apply` or `destroy` command, you might encounter the following error: `Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 123456a9-fbc1-40ed-b8d8-513d0133ba7f, api error InvalidClientTokenId: The security token included in the request is invalid.` This error is caused by the expiration of the security token for the credentials used in your local machine’s configuration. For instructions on how to resolve the error, see [Set and view configuration settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-methods) in the AWS CLI documentation. | 
| CockroachDB pods in pending state | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-cockroachdb-on-eks-using-terraform.html) | 

## Related resources
<a name="deploy-cockroachdb-on-eks-using-terraform-resources"></a>
+ [Deploy CockroachDB in a Single Kubernetes Cluster](https://www.cockroachlabs.com/docs/dev/deploy-cockroachdb-with-kubernetes.html) (CockroachDB documentation)
+ [Orchestrate CockroachDB Across Multiple Kubernetes Clusters](https://www.cockroachlabs.com/docs/dev/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html) (CockroachDB documentation)
+ [AWS Provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) (Terraform documentation)

## Attachments
<a name="attachments-e22d81ab-b85c-4709-8579-4c9cdb4afdb6"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/e22d81ab-b85c-4709-8579-4c9cdb4afdb6/attachments/attachment.zip)

# Deploy a sample Java microservice on Amazon EKS and expose the microservice using an Application Load Balancer
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer"></a>

*Vijay Thompson and Akkamahadevi Hiremath, Amazon Web Services*

## Summary
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-summary"></a>

This pattern describes how to deploy a sample Java microservice as a containerized application on Amazon Elastic Kubernetes Service (Amazon EKS) by using the `eksctl` command line utility and Amazon Elastic Container Registry (Amazon ECR). You can use an Application Load Balancer to load balance the application traffic.

## Prerequisites and limitations
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ The AWS Command Line Interface (AWS CLI) version 1.7 or later, installed and configured on macOS, Linux, or Windows
+ A running [Docker daemon](https://docs.docker.com/config/daemon/)
+ The `eksctl` command line utility, installed and configured on macOS, Linux, or Windows (For more information, see [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) in the Amazon EKS documentation.)
+ The `kubectl` command line utility, installed and configured on macOS, Linux, or Windows (For more information, see [Installing or updating kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation.)

**Limitations**
+ This pattern doesn’t cover the installation of an SSL certificate for the Application Load Balancer.

## Architecture
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-architecture"></a>

**Target technology stack**
+ Amazon ECR
+ Amazon EKS
+ Elastic Load Balancing

**Target architecture**

The following diagram shows an architecture for containerizing a Java microservice on Amazon EKS.

![\[A Java microservice deployed as a containerized application on Amazon EKS.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/e1dd8ab0-9e1e-4d2b-b7af-89d3e583e57c/images/aaca4fd9-5aaa-4df5-aebd-02a2ed881c3b.png)


## Tools
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-tools"></a>
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) automatically distributes your incoming traffic across multiple targets, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses, in one or more Availability Zones.
+ [eksctl](https://eksctl.io/) helps you create clusters on Amazon EKS.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) makes it possible to run commands against Kubernetes clusters.
+ [Docker](https://www.docker.com/) helps you build, test, and deliver applications in packages called containers.

## Epics
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-epics"></a>

### Create an Amazon EKS cluster by using eksctl
<a name="create-an-amazon-eks-cluster-by-using-eksctl"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EKS cluster.  | To create an Amazon EKS cluster that uses two t2.small Amazon EC2 instances as nodes, run the following command:<pre>eksctl create cluster --name <your-cluster-name> --version <version-number> --nodes=2 --node-type=t2.small</pre>The process can take 15 to 20 minutes. After the cluster is created, the appropriate Kubernetes configuration is added to your [kubeconfig](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html) file. You can use the `kubeconfig` file with `kubectl` to deploy the application in later steps. | Developer, System Admin | 
| Verify the Amazon EKS cluster. | To verify that the cluster is created and that you can connect to it, run the `kubectl get nodes` command. | Developer, System Admin | 
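
The flags shown above can also be captured in an `eksctl` config file and applied with `eksctl create cluster -f cluster.yaml`. The following is a minimal sketch; the cluster name, node group name, and Region are hypothetical placeholders:

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: java-microservice-demo   # hypothetical cluster name
  region: us-east-1              # replace with your AWS Region
nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 2
```

Keeping the cluster definition in a file makes the configuration reviewable and repeatable across environments.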

### Create an Amazon ECR repository and push the Docker image
<a name="create-an-amazon-ecr-repository-and-push-the-docker-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECR repository. | Follow the instructions from [Creating a private repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) in the Amazon ECR documentation. | Developer, System Admin | 
| Create a POM XML file. | Create a `pom.xml` file based on the *Example POM file* code in the [Additional information](#deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional) section of this pattern. | Developer, System Admin | 
| Create a source file. | Create a source file called `HelloWorld.java` in the `src/main/java/eksExample` path based on the following example:<pre>package eksExample;<br />import static spark.Spark.get;<br /><br />public class HelloWorld {<br />    public static void main(String[] args) {<br />        get("/", (req, res) -> {<br />            return "Hello World!";<br />        });<br />    }<br />}</pre>Be sure to use the following directory structure:<pre>├── Dockerfile<br />├── deployment.yaml<br />├── ingress.yaml<br />├── pom.xml<br />├── service.yaml<br />└── src<br />    └── main<br />        └── java<br />            └── eksExample<br />                └── HelloWorld.java</pre> |  | 
| Create a Dockerfile. | Create a `Dockerfile` based on the *Example Dockerfile* code in the [Additional information](#deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional) section of this pattern. | Developer, System Admin | 
| Build and push the Docker image. | In the directory that contains your `Dockerfile`, run the following commands to build the image, tag it, and push it to Amazon ECR:<pre>aws ecr get-login-password --region <region> | docker login --username <username> --password-stdin <account_number>.dkr.ecr.<region>.amazonaws.com<br />docker buildx build --platform linux/amd64 -t hello-world-java:v1 .<br />docker tag hello-world-java:v1 <account_number>.dkr.ecr.<region>.amazonaws.com/<repository_name>:v1<br />docker push <account_number>.dkr.ecr.<region>.amazonaws.com/<repository_name>:v1</pre>Modify the AWS Region, account number, and repository details in the preceding commands. Be sure to note the image URL for later use. A macOS system with an M1 chip has a problem building an image that’s compatible with Amazon EKS running on an AMD64 platform. To resolve this issue, use [docker buildx](https://docs.docker.com/engine/reference/commandline/buildx/) to build a Docker image that works on Amazon EKS. |  | 
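
The registry host name and image URI in the preceding commands follow a fixed pattern. The following sketch (with hypothetical account, Region, and repository values) shows how the URI is assembled, which can help avoid tagging mistakes:

```shell
# Hypothetical placeholder values -- substitute your own account ID,
# AWS Region, and repository name.
ACCOUNT=123456789012
REGION=us-east-1
REPO=hello-world-java

# An ECR image URI has the form <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
IMAGE_URI="${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:v1"
echo "${IMAGE_URI}"
```

The value printed here is what you use later as the `image:` field in the deployment manifest.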

### Deploy the Java microservices
<a name="deploy-the-java-microservices"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a deployment file.  | Create a YAML file called `deployment.yaml` based on the *Example deployment file* code in the [Additional information](#deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional) section of this pattern. Use the image URL that you noted earlier as the image path for the Amazon ECR repository. | Developer, System Admin | 
| Deploy the Java microservices on the Amazon EKS cluster.  | To create a deployment in your Amazon EKS cluster, run the `kubectl apply -f deployment.yaml` command. | Developer, System Admin | 
| Verify the status of the pods. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer.html) | Developer, System Admin | 
| Create a service. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer.html) | Developer, System Admin | 
| Install the AWS Load Balancer Controller add-on. | Follow the instructions from [Installing the AWS Load Balancer Controller add-on](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) in the Amazon EKS documentation. You must have the add-on installed to create an Application Load Balancer or Network Load Balancer for a Kubernetes service. | Developer, System Admin | 
| Create an ingress resource. | Create a YAML file called `ingress.yaml` based on the *Example ingress resource file* code in the [Additional information](#deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional) section of this pattern. | Developer, System Admin | 
| Create an Application Load Balancer. | To deploy the ingress resource and create an Application Load Balancer, run the `kubectl apply -f ingress.yaml` command. | Developer, System Admin | 

### Test the application
<a name="test-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test and verify the application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer.html) | Developer, System Admin | 

## Related resources
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-resources"></a>
+ [Creating a private repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) (Amazon ECR documentation)
+ [Pushing a Docker image](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html) (Amazon ECR documentation)
+ [Ingress Controllers](https://www.eksworkshop.com/beginner/130_exposing-service/ingress_controller_alb/) (Amazon EKS Workshop)
+ [Docker buildx](https://docs.docker.com/engine/reference/commandline/buildx/) (Docker docs)

## Additional information
<a name="deploy-a-sample-java-microservice-on-amazon-eks-and-expose-the-microservice-using-an-application-load-balancer-additional"></a>

**Example POM file**

```
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>


  <groupId>helloWorld</groupId>
  <artifactId>helloWorld</artifactId>
  <version>1.0-SNAPSHOT</version>


  <dependencies>
    <dependency>
      <groupId>com.sparkjava</groupId><artifactId>spark-core</artifactId><version>2.0.0</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId><artifactId>maven-jar-plugin</artifactId><version>2.4</version>
        <configuration><finalName>eksExample</finalName><archive><manifest>
              <addClasspath>true</addClasspath><mainClass>eksExample.HelloWorld</mainClass><classpathPrefix>dependency-jars/</classpathPrefix>
            </manifest></archive>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId><artifactId>maven-compiler-plugin</artifactId><version>3.1</version>
        <configuration><source>1.8</source><target>1.8</target></configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId><artifactId>maven-assembly-plugin</artifactId>
        <executions>
          <execution>
            <goals><goal>attached</goal></goals><phase>package</phase>
            <configuration>
              <finalName>eksExample</finalName>
              <descriptorRefs><descriptorRef>jar-with-dependencies</descriptorRef></descriptorRefs>
              <archive><manifest><mainClass>eksExample.HelloWorld</mainClass></manifest></archive>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```

**Example Dockerfile**

```
FROM bellsoft/liberica-openjdk-alpine-musl:17

RUN apk add maven
WORKDIR /code

# Prepare by downloading dependencies
ADD pom.xml /code/pom.xml
RUN ["mvn", "dependency:resolve"]
RUN ["mvn", "verify"]

# Adding source, compile and package into a fat jar
ADD src /code/src
RUN ["mvn", "package"]

EXPOSE 4567
CMD ["java", "-jar", "target/eksExample-jar-with-dependencies.jar"]
```

**Example deployment file**

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: java-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: <account_number>.dkr.ecr.<region>.amazonaws.com/<repository_name>:v1
        ports:
        - containerPort: 4567
```

**Example service file**

```
apiVersion: v1
kind: Service
metadata:
  name: "service-java-microservice"
spec:
  ports:
    - port: 80
      targetPort: 4567
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: java-microservice
```

**Example ingress resource file**

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "java-microservice-ingress"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/load-balancer-name: apg2
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app: java-microservice
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: "service-java-microservice"
                port:
                  number: 80
```

# Deploy a gRPC-based application on an Amazon EKS cluster and access it with an Application Load Balancer
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer"></a>

*Kirankumar Chandrashekar and Huy Nguyen, Amazon Web Services*

## Summary
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-summary"></a>

This pattern describes how to host a gRPC-based application on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster and securely access it through an Application Load Balancer.

[gRPC](https://grpc.io/) is an open-source remote procedure call (RPC) framework that can run in any environment. You can use it for microservice integrations and client-server communications. For more information about gRPC, see the AWS blog post [Application Load Balancer support for end-to-end HTTP/2 and gRPC](https://aws.amazon.com/blogs/aws/new-application-load-balancer-support-for-end-to-end-http-2-and-grpc/).

This pattern shows you how to host a gRPC-based application that runs on Kubernetes pods on Amazon EKS. The gRPC client connects to an Application Load Balancer through the HTTP/2 protocol with an SSL/TLS encrypted connection. The Application Load Balancer forwards traffic to the gRPC application that runs on Amazon EKS pods. The number of gRPC pods can be automatically scaled based on traffic by using the [Kubernetes Horizontal Pod Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html). The Application Load Balancer's target group performs health checks on the Amazon EKS nodes, evaluates if the target is healthy, and forwards traffic only to healthy nodes.
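
As a hedged illustration of the autoscaling behavior described above, a minimal `HorizontalPodAutoscaler` manifest might look like the following sketch. It assumes the `grpcserver` Deployment and namespace shown later in this pattern, a hypothetical 70 percent CPU utilization target, and that the Kubernetes Metrics Server is installed in the cluster:

```
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: grpcserver
  namespace: grpcserver
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: grpcserver
  minReplicas: 1
  maxReplicas: 5          # hypothetical upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # hypothetical CPU target
```

Apply it with `kubectl apply -f` like any other manifest; the autoscaler then adjusts the number of gRPC pods between the configured bounds based on observed CPU usage.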

## Prerequisites and limitations
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ [Docker](https://www.docker.com/), installed and configured on Linux, macOS, or Windows.
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html), installed and configured on Linux, macOS, or Windows.
+ [eksctl](https://github.com/eksctl-io/eksctl#installation), installed and configured on Linux, macOS, or Windows.
+ `kubectl`, installed and configured to access resources on your Amazon EKS cluster. For more information, see [Installing or updating kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation. 
+ [gRPCurl](https://github.com/fullstorydev/grpcurl), installed and configured.
+ A new or existing Amazon EKS cluster. For more information, see [Getting started with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html).
+ Your computer terminal configured to access the Amazon EKS cluster. For more information, see [Configure your computer to communicate with your cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl) in the Amazon EKS documentation.
+ [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html), provisioned in the Amazon EKS cluster.
+ An existing DNS host name with a valid SSL or SSL/TLS certificate. You can obtain a certificate for your domain by using AWS Certificate Manager (ACM) or uploading an existing certificate to ACM. For more information about these two options, see [Requesting a public certificate](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html) and [Importing certificates into AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html) in the ACM documentation.

## Architecture
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-architecture"></a>

The following diagram shows the architecture implemented by this pattern.

![\[Architecture for gRPC-based application on Amazon EKS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/abf727c1-ff8b-43a7-923f-bce825d1b459/images/281936fa-bc43-4b4e-a343-ba1eab97df38.png)


 

The following diagram shows a workflow in which SSL/TLS traffic from a gRPC client is offloaded at an Application Load Balancer. Traffic is then forwarded to the gRPC server in plaintext because it stays within the virtual private cloud (VPC).

![\[Workflow for sending SSL/TLS traffic to a gRPC server\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/abf727c1-ff8b-43a7-923f-bce825d1b459/images/09e0c3f6-0c39-40b7-908f-8c4c693a5f02.png)


## Tools
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command line shell.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses in one or more Availability Zones.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable. 
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.  

**Tools**
+ [eksctl](https://eksctl.io/) is a simple CLI tool for creating clusters on Amazon EKS.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) is a command line utility for running commands against Kubernetes clusters.
+ [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) helps you manage AWS Elastic Load Balancers for a Kubernetes cluster.
+ [gRPCurl](https://github.com/fullstorydev/grpcurl) is a command line tool that helps you interact with gRPC services.

**Code repository**

The code for this pattern is available in the GitHub [grpc-traffic-on-alb-to-eks](https://github.com/aws-samples/grpc-traffic-on-alb-to-eks.git) repository.

## Epics
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-epics"></a>

### Build and push the gRPC server’s Docker image to Amazon ECR
<a name="build-and-push-the-grpc-serverrsquor-s-docker-image-to-amazon-ecr"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon ECR repository. | Sign in to the AWS Management Console, open the [Amazon ECR console](https://console.aws.amazon.com/ecr/), and then create an Amazon ECR repository. For more information, see [Creating a repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) in the Amazon ECR documentation. Make sure that you record the Amazon ECR repository’s URL. You can also create an Amazon ECR repository with the AWS CLI by running the following command:<pre>aws ecr create-repository --repository-name helloworld-grpc</pre> | Cloud administrator | 
| Build the Docker image.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html) | DevOps engineer | 
| Push the Docker image to Amazon ECR. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html) | DevOps engineer | 

### Deploy the Kubernetes manifests to the Amazon EKS cluster
<a name="deploy-the-kubernetes-manifests-to-the-amazon-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Modify the values in the Kubernetes manifest file. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html) | DevOps engineer | 
| Deploy the Kubernetes manifest file.  | Deploy the `grpc-sample.yaml` file to the Amazon EKS cluster by running the following `kubectl` command: <pre>kubectl apply -f ./kubernetes/grpc-sample.yaml</pre> | DevOps engineer | 

### Create the DNS record for the Application Load Balancer's FQDN
<a name="create-the-dns-record-for-the-application-load-balancerapos-s-fqdn"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Record the FQDN for the Application Load Balancer. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer.html) | DevOps engineer | 
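
Creating the DNS record is environment-specific. If your zone is hosted in Amazon Route 53, the following hedged sketch shows one way to build the change batch for a CNAME record; the host name and ALB FQDN values are hypothetical placeholders:

```shell
# Hypothetical values -- replace with your host name and the ALB FQDN
# that you recorded in the previous step.
GRPC_HOST="grpc.example.com"
ALB_FQDN="apg2-1234567890.us-east-1.elb.amazonaws.com"

# Write the Route 53 change batch that points the host name at the ALB.
cat > change-batch.json <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "${GRPC_HOST}",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "${ALB_FQDN}" }]
    }
  }]
}
EOF

# Then apply it (requires your hosted zone ID):
#   aws route53 change-resource-record-sets --hosted-zone-id <zone_id> --change-batch file://change-batch.json
```

The final `aws route53` command is left commented out because it requires a real hosted zone; an alias A record targeting the load balancer is an equally valid choice.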

### Test the solution
<a name="test-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the gRPC server.  | Use gRPCurl to list the services that the endpoint exposes:<pre>grpcurl grpc.example.com:443 list</pre>The output should list the available services:<pre>grpc.reflection.v1alpha.ServerReflection<br />helloworld.helloworld</pre>Replace `grpc.example.com` with your DNS name. | DevOps engineer | 
| Test the gRPC server using a gRPC client.  | In the `helloworld_client_ssl.py` sample gRPC client, replace the `grpc.example.com` host name with the host name that you used for the gRPC server. The following example shows the gRPC server's response to the client's request:<pre>python ./app/helloworld_client_ssl.py<br />message: "Hello to gRPC server from Client"<br /><br />message: "Thanks for talking to gRPC server!! Welcome to hello world. Received message is \"Hello to gRPC server from Client\""<br />received: true</pre>This shows that the client can talk to the server and that the connection is successful. | DevOps engineer | 

### Clean up
<a name="clean-up"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Remove the DNS record. | Remove the DNS record that points to the Application Load Balancer's FQDN that you created earlier.  | Cloud administrator | 
| Remove the load balancer. | On the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), choose **Load Balancers**, and then remove the load balancer that the Kubernetes controller created for your ingress resource. | Cloud administrator | 
| Delete the Amazon EKS cluster. | Delete the Amazon EKS cluster by using `eksctl`:<pre>eksctl delete cluster -f ./eks.yaml</pre> | AWS DevOps | 

## Related resources
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-resources"></a>
+ [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html)
+ [Target groups for your Application Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-group-protocol-version)

## Additional information
<a name="deploy-a-grpc-based-application-on-an-amazon-eks-cluster-and-access-it-with-an-application-load-balancer-additional"></a>

**Sample ingress resource:**

```
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/backend-protocol-version: "GRPC"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<AWS-Region>:<AccountId>:certificate/<certificate_ID>
  labels:
    app: grpcserver
    environment: dev
  name: grpcserver
  namespace: grpcserver
spec:
  ingressClassName: alb
  rules:
  - host: grpc.example.com # <----- replace this with the host name for which the SSL certificate is available in ACM
    http:
      paths:
      - backend:
          service:
            name: grpcserver
            port:
              number: 9000
        path: /
        pathType: Prefix
```

**Sample deployment resource:**

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpcserver
  namespace: grpcserver
spec:
  selector:
    matchLabels:
      app: grpcserver
  replicas: 1
  template:
    metadata:
      labels:
        app: grpcserver
    spec:
      containers:
      - name: grpc-demo
        image: <your_aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/helloworld-grpc:1.0   #<------- Change to the URI that the Docker image is pushed to
        imagePullPolicy: Always
        ports:
        - name: grpc-api
          containerPort: 9000
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      restartPolicy: Always
```

**Sample output:**

```
NAME         CLASS    HOSTS            ADDRESS         PORTS   AGE
grpcserver   <none>   <DNS-HostName>   <ELB-address>   80      27d
```

# Deploy containerized applications on AWS IoT Greengrass V2 running as a Docker container
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container"></a>

*Salih Bakir, Giuseppe Di Bella, and Gustav Svalander, Amazon Web Services*

## Summary
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-summary"></a>

AWS IoT Greengrass Version 2, when deployed as a Docker container, doesn't natively support running Docker application containers. This pattern shows you how to create a custom container image based on the latest version of AWS IoT Greengrass V2 that enables Docker-in-Docker (DinD) functionality. With DinD, you can run containerized applications within the AWS IoT Greengrass V2 environment.

You can deploy this pattern as a stand-alone solution or integrate it with container orchestration platforms like Amazon ECS Anywhere. In either deployment model, you maintain full AWS IoT Greengrass V2 functionality including AWS IoT SiteWise Edge processing capabilities, while enabling scalable container-based deployments. 

## Prerequisites and limitations
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ For general AWS IoT Greengrass Version 2 prerequisites, see [Prerequisites](https://docs.aws.amazon.com/greengrass/v2/developerguide/getting-started-prerequisites.html) in the AWS IoT Greengrass Version 2 documentation. 
+ Docker Engine, installed and configured on Linux, macOS, or Windows.
+ Docker Compose (if you use the Docker Compose command line interface (CLI) to run Docker images).
+ A Linux operating system.
+ A hypervisor with a host server that supports virtualization.
+ System requirements:
  + 2 GB of RAM (minimum)
  + 5 GB of available disk space (minimum)
  + For AWS IoT SiteWise Edge, an x86_64 quad-core CPU with 16 GB of RAM and 50 GB of available disk space. For more information about AWS IoT SiteWise data processing, see [Data processing pack requirements](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/configure-gateway-ggv2.html#w2aac17c19c13b7) in the AWS IoT SiteWise documentation.

**Product versions**
+ AWS IoT Greengrass Version 2 version 2.5.3 or later
+ Docker-in-Docker version 1.0.0 or later
+ Docker Compose version 1.22 or later
+ Docker Engine version 20.10.12 or later

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

## Architecture
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-architecture"></a>

**Target technology stack**
+ **Data sources** – IoT devices, sensors, or industrial equipment that generate data for processing
+ **AWS IoT Greengrass V2** – Running as a Docker container with DinD capabilities, deployed on edge infrastructure
+ **Containerized applications** – Custom applications running within the AWS IoT Greengrass V2 environment as nested Docker containers
+ **(Optional) Amazon ECS Anywhere** – Container orchestration that manages the AWS IoT Greengrass V2 container deployment
+ **Other AWS services** – AWS IoT Core, AWS IoT SiteWise, and other AWS services for data processing and management

**Target architecture**

The following diagram shows an example target deployment architecture that uses Amazon ECS Anywhere, which is a container management tool.

![\[Deployment architecture using Amazon ECS Anywhere.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2ecf5354-40e0-4fd9-9798-086719059784/images/5ed2652e-9604-4809-8962-b167e1991658.png)


The diagram shows the following workflow:

**1: Container image storage** – Amazon ECR stores the AWS IoT Greengrass container images and any custom application containers needed for edge processing.

**2 and 3: Container deployment** – Amazon ECS Anywhere deploys the AWS IoT Greengrass container image from Amazon ECR to the edge location, managing the container lifecycle and deployment process.

**4: Component deployment** – The deployed AWS IoT Greengrass core automatically deploys its relevant components based on its configuration. Components include AWS IoT SiteWise Edge and other necessary edge processing components within the containerized environment.

**5: Data ingestion** – After it’s fully configured, AWS IoT Greengrass begins ingesting telemetry and sensor data from various IoT data sources at the edge location.

**6: Data processing and cloud integration** – The containerized AWS IoT Greengrass core processes data locally using its deployed components (including AWS IoT SiteWise Edge for industrial data). Then, it sends processed data to AWS Cloud services for further analysis and storage.

## Tools
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-tools"></a>

**AWS services**
+ [Amazon ECS Anywhere](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch-type-external.html) helps you deploy, use, and manage Amazon ECS tasks and services on your own infrastructure.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) provides scalable computing capacity in the AWS Cloud. You can launch as many virtual servers as you need and quickly scale them up or down.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [AWS IoT Greengrass](https://docs.aws.amazon.com/greengrass/v2/developerguide/what-is-iot-greengrass.html) is an open source Internet of Things (IoT) edge runtime and cloud service that helps you build, deploy, and manage IoT applications on your devices.
+ [AWS IoT SiteWise](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/what-is-sitewise.html) helps you collect, model, analyze, and visualize data from industrial equipment at scale.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container applications.
+ [Docker Engine](https://docs.docker.com/engine/) is an open source containerization technology for building and containerizing applications.

**Code repository**

The code for this pattern is available in the GitHub [AWS IoT Greengrass v2 Docker-in-Docker](https://github.com/aws-samples/aws-iot-greengrass-docker-in-docker) repository.

## Epics
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-epics"></a>

### Build the AWS IoT Greengrass V2 Docker-in-Docker image
<a name="build-the-gg2-docker-in-docker-image"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone and navigate to the repository. | To clone the repository, run the following command: `git clone https://github.com/aws-samples/aws-iot-greengrass-v2-docker-in-docker.git` Then, to navigate to the `docker` directory, run the following command: `cd aws-iot-greengrass-v2-docker-in-docker/docker` | DevOps engineer, AWS DevOps | 
| Build the Docker image. | To build the Docker image with the default (latest) version, run the following command: `docker build -t x86_64/aws-iot-greengrass:latest .` Or, to build the Docker image with a specific version, run the following command: `docker build --build-arg GREENGRASS_RELEASE_VERSION=2.12.0 -t x86_64/aws-iot-greengrass:2.12.0 .` To verify the build, run the following command: `docker images \| grep aws-iot-greengrass`  | AWS DevOps, DevOps engineer, App developer | 
| (Optional) Push to Amazon ECR. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | App developer, AWS DevOps, DevOps engineer | 
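If you push the image to Amazon ECR, the commands might look like the following sketch. The account ID, Region, and repository name are placeholders, not values from this pattern.

```shell
# Placeholders -- replace with your own values.
AWS_ACCOUNT_ID=111122223333
AWS_REGION=us-east-1
REPO=aws-iot-greengrass

# Authenticate Docker to your private Amazon ECR registry.
aws ecr get-login-password --region "$AWS_REGION" | \
  docker login --username AWS --password-stdin \
  "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"

# Create the repository if it doesn't exist yet.
aws ecr create-repository --repository-name "$REPO" --region "$AWS_REGION" || true

# Tag and push the image built in the previous task.
docker tag x86_64/aws-iot-greengrass:latest \
  "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO:latest"
docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO:latest"
```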

### Configure AWS credentials
<a name="configure-aws-credentials"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Select authentication method. | Choose one of the following options:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | AWS administrator | 
| Configure authentication method. | For the authentication method you selected, use the following configuration guidance:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | AWS administrator | 
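If you choose long-term credentials, one common approach (an illustrative sketch, not the repository's prescribed method) is to keep them in a standard AWS credentials file and mount that file read-only into the container:

```ini
# ~/.aws/credentials -- example profile mounted into the container
# (the key values below are the AWS documentation placeholders;
# never commit real keys to source control).
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

You can then mount the file with `-v ~/.aws:/root/.aws:ro`, or pass the values as environment variables instead. Temporary credentials from an assumed role are generally preferable to long-term keys on edge devices.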

### Run with Docker Compose
<a name="run-with-docker-compose"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure `docker-compose.yml`. | Update the `docker-compose.yml` file with environment variables as follows:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 
| Start and verify container. | To start in the foreground, run the following command: `docker-compose up --build` Or, to start in the background, run the following command: `docker-compose up --build -d` To verify status, run the following command: `docker-compose ps` To monitor logs, run the following command: `docker-compose logs -f` | DevOps engineer | 
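The `docker-compose.yml` file might look like the following sketch. The service name, thing names, and Region are illustrative; the `PROVISION` variable and the `/greengrass/v2` path follow AWS IoT Greengrass V2 Docker conventions, but the repository's actual file may differ.

```yaml
version: "3.8"
services:
  greengrass:
    image: x86_64/aws-iot-greengrass:latest
    init: true                  # reap zombie processes from nested containers
    privileged: true            # required for Docker-in-Docker
    environment:
      PROVISION: "true"                # provision the core on first start
      AWS_REGION: us-east-1
      THING_NAME: MyGreengrassCore     # placeholder thing name
      THING_GROUP_NAME: MyGreengrassGroup
    volumes:
      - greengrass-root:/greengrass/v2 # persist core state across restarts
      - ~/.aws:/root/.aws:ro           # AWS credentials for provisioning
volumes:
  greengrass-root:
```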

### Run with Docker CLI
<a name="run-with-docker-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run container with Docker CLI. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 
| Verify container. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 
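A typical Docker CLI invocation might look like the following sketch. The container name, Region, thing name, and mounts are illustrative; the `--init` and `--privileged` flags match the troubleshooting guidance later in this pattern.

```shell
# Run the DinD image in the background (values are placeholders).
docker run -d --init --privileged \
  --name greengrass \
  -e PROVISION=true \
  -e AWS_REGION=us-east-1 \
  -e THING_NAME=MyGreengrassCore \
  -v greengrass-root:/greengrass/v2 \
  -v ~/.aws:/root/.aws:ro \
  x86_64/aws-iot-greengrass:latest

# Verify the container and the nested Docker daemon.
docker ps --filter name=greengrass
docker exec greengrass docker info
```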

### Manage containerized applications
<a name="manage-containerized-applications"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy applications. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | App developer | 
| Access and test Docker-in-Docker. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 

### (Optional) Integrate with Amazon ECS Anywhere
<a name="optional-integrate-with-ecs-anywhere"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Amazon ECS cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | AWS administrator | 
| Deploy Amazon ECS task. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | AWS administrator | 
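An ECS Anywhere task definition for the Greengrass container might look like the following sketch. The image URI and memory size are placeholders; the `EXTERNAL` compatibility targets ECS Anywhere managed instances, and `privileged` plus `initProcessEnabled` mirror the `--privileged` and `--init` Docker flags.

```json
{
  "family": "greengrass-dind",
  "requiresCompatibilities": ["EXTERNAL"],
  "containerDefinitions": [
    {
      "name": "greengrass",
      "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/aws-iot-greengrass:latest",
      "essential": true,
      "privileged": true,
      "linuxParameters": { "initProcessEnabled": true },
      "environment": [
        { "name": "PROVISION", "value": "true" },
        { "name": "AWS_REGION", "value": "us-east-1" }
      ],
      "memory": 1024
    }
  ]
}
```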

### Stop and clean up
<a name="stop-and-cleanup"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Stop container. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | DevOps engineer | 

## Troubleshooting
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Container fails to start with permission errors. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)`--privileged` grants extended privileges to the container. | 
| Provisioning fails with credential errors. | To verify credentials are configured correctly, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)Make sure that IAM permissions include `iot:CreateThing`, `iot:CreatePolicy`, `iot:AttachPolicy`, `iam:CreateRole`, and `iam:AttachRolePolicy`. | 
| Cannot connect to Docker daemon inside container. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | 
| Container runs out of disk space. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)Ensure minimum disk space: 5 GB for basic operations and 50 GB for AWS IoT SiteWise Edge. | 
| Build issues. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html) | 
| Network connectivity issues. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)Verify that the firewall allows outbound HTTPS (443) and MQTT (8883) traffic. | 
| Greengrass components fail to deploy. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)Check component-specific logs in the `/greengrass/v2/logs/` directory. | 
| Container exits immediately after starting. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container.html)If `PROVISION=true` is set, verify that all required environment variables are set correctly. Make sure that the `--init` flag is used when starting the container. | 
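The provisioning permissions listed in the table can be sketched as a minimal IAM policy. This is illustrative only; real deployments typically need additional actions (for example, certificate creation), and you should scope `Resource` more tightly than `*` where possible.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GreengrassProvisioning",
      "Effect": "Allow",
      "Action": [
        "iot:CreateThing",
        "iot:CreatePolicy",
        "iot:AttachPolicy",
        "iam:CreateRole",
        "iam:AttachRolePolicy"
      ],
      "Resource": "*"
    }
  ]
}
```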

## Related resources
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-resources"></a>

**AWS resources**
+ [Amazon Elastic Container Service](https://aws.amazon.com/ecs/)
+ [Configure edge data processing for AWS IoT SiteWise models and assets](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/edge-processing.html)
+ [What is AWS IoT Greengrass](https://docs.aws.amazon.com/greengrass/v2/developerguide/what-is-iot-greengrass.html)

**Other resources**
+ [Docker documentation](https://docs.docker.com/)

## Additional information
<a name="deploy-containerized-applications-on-aws-iot-greengrass-version-2-running-as-a-docker-container-additional"></a>
+ For AWS IoT SiteWise Edge data processing, Docker must be available within the AWS IoT Greengrass environment.
+ To run a nested container, you must run the AWS IoT Greengrass container with administrator-level credentials.

# Deploy containers by using Elastic Beanstalk
<a name="deploy-containers-by-using-elastic-beanstalk"></a>

*Thomas Scott and Jean-Baptiste Guillois, Amazon Web Services*

## Summary
<a name="deploy-containers-by-using-elastic-beanstalk-summary"></a>

On the Amazon Web Services (AWS) Cloud, AWS Elastic Beanstalk supports Docker as a platform, so containers can run in the environments that it creates. This pattern shows how to deploy containers by using Elastic Beanstalk. The deployment in this pattern uses a web server environment based on the Docker platform.

To deploy and scale web applications and services with Elastic Beanstalk, you upload your code, and Elastic Beanstalk handles the deployment automatically, including capacity provisioning, load balancing, automatic scaling, and application health monitoring. At the same time, you retain full control over the AWS resources that Elastic Beanstalk creates on your behalf. There is no additional charge for Elastic Beanstalk; you pay only for the AWS resources that store and run your applications.

This pattern includes instructions for deployment using the [AWS Elastic Beanstalk Command Line Interface (EB CLI)](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-advanced.html) and the AWS Management Console.

**Use cases**

Use cases for Elastic Beanstalk include the following: 
+ Deploy a prototype environment to demo a frontend application. (This pattern uses a Dockerfile as the example.)
+ Deploy an API to handle API requests for a given domain.
+ Deploy an orchestration solution by using Docker Compose. (A `docker-compose.yml` file is not used as the practical example in this pattern.)
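For the first use case, a minimal Dockerfile for a Node.js frontend might look like the following sketch. This is a hypothetical example; the sample repository's actual Dockerfile, base image, and entry point may differ.

```dockerfile
# Illustrative Dockerfile for a containerized Node.js web application.
FROM node:18-alpine
WORKDIR /app

# Install production dependencies first to benefit from layer caching.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and expose the web server port.
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```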

## Prerequisites and limitations
<a name="deploy-containers-by-using-elastic-beanstalk-prereqs"></a>

**Prerequisites**
+ An AWS account
+ AWS EB CLI locally installed
+ Docker installed on a local machine

**Limitations**
+ Docker Hub imposes a pull limit of 100 pulls per 6 hours per IP address on the free plan.

## Architecture
<a name="deploy-containers-by-using-elastic-beanstalk-architecture"></a>

**Target technology stack**
+ Amazon Elastic Compute Cloud (Amazon EC2) instances
+ Security group
+ Application Load Balancer
+ Auto Scaling group

**Target architecture**

![\[Architecture for deploying containers with Elastic Beanstalk.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/dfabcdc2-747f-40e2-a603-08ea31ba71d3/images/1d17ff09-1aea-4c72-adb5-eaf741601428.png)


**Automation and scale**

AWS Elastic Beanstalk can automatically scale based on the number of requests made. AWS resources created for an environment include one Application Load Balancer, an Auto Scaling group, and one or more Amazon EC2 instances. 

The load balancer sits in front of the Amazon EC2 instances, which are part of the Auto Scaling group. Amazon EC2 Auto Scaling automatically starts additional Amazon EC2 instances to accommodate increasing load on your application. If the load on your application decreases, Amazon EC2 Auto Scaling stops instances, but it keeps at least one instance running.

**Automatic scaling triggers**

The Auto Scaling group in your Elastic Beanstalk environment uses two Amazon CloudWatch alarms to initiate scaling operations. The default triggers scale when the average outbound network traffic from each instance is higher than 6 MB or lower than 2 MB over a period of five minutes. To use Amazon EC2 Auto Scaling effectively, configure triggers that are appropriate for your application, instance type, and service requirements. You can scale based on several statistics including latency, disk I/O, CPU utilization, and request count. For more information, see [Auto Scaling triggers](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-autoscaling-triggers.html).
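For example, a hypothetical `.ebextensions/autoscaling.config` file in your source bundle could switch the trigger from network output to CPU utilization. The thresholds and group sizes below are illustrative, not recommendations.

```yaml
# .ebextensions/autoscaling.config -- example trigger overrides.
option_settings:
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization   # scale on CPU instead of NetworkOut
    Statistic: Average
    Unit: Percent
    LowerThreshold: "20"          # scale in below 20% average CPU
    UpperThreshold: "70"          # scale out above 70% average CPU
    Period: "5"                   # minutes per measurement period
    BreachDuration: "5"           # minutes a breach must last before scaling
  aws:autoscaling:asg:
    MinSize: "1"
    MaxSize: "4"
```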

## Tools
<a name="deploy-containers-by-using-elastic-beanstalk-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command-line shell.
+ [AWS EB Command Line Interface (EB CLI)](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install.html) is a command-line client that you can use to create, configure, and manage Elastic Beanstalk environments.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses in one or more Availability Zones.

**Other tools**
+ [Docker](https://www.docker.com/) packages software into standardized units called containers that include libraries, system tools, code, and runtime.

**Code**

The code for this pattern is available in the GitHub [Cluster Sample Application](https://github.com/aws-samples/cluster-sample-app) repository.

## Epics
<a name="deploy-containers-by-using-elastic-beanstalk-epics"></a>

### Build with a Dockerfile
<a name="build-with-a-dockerfile"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the remote repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 
| Initialize the Elastic Beanstalk Docker project. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 
| Test the project locally. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 

### Deploy using EB CLI
<a name="deploy-using-eb-cli"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Run the deployment command. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 
| Access the deployed version. | After the deployment command has finished, access the project using the `eb open` command. | App developer, AWS administrator, AWS DevOps | 
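The EB CLI flow for this pattern can be sketched as follows. The application name, environment name, and Region are placeholders.

```shell
# Initialize the project for the Docker platform (placeholders throughout).
eb init cluster-sample-app --platform docker --region us-east-1

# Create the environment; this provisions the load balancer,
# Auto Scaling group, and EC2 instances on your behalf.
eb create cluster-sample-env

eb deploy     # redeploy after code changes
eb status     # check environment health
eb open       # open the deployed application in a browser
```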

### Deploy using the console
<a name="deploy-using-the-console"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the application by using the browser. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-containers-by-using-elastic-beanstalk.html) | App developer, AWS administrator, AWS DevOps | 
| Access the deployed version. | After deployment finishes, choose the provided URL to access the deployed application. | App developer, AWS administrator, AWS DevOps | 

## Related resources
<a name="deploy-containers-by-using-elastic-beanstalk-resources"></a>
+ [Web server environments](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-webserver.html)
+ [Install the EB CLI on macOS](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-osx.html)
+ [Manually install the EB CLI](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-advanced.html)

## Additional information
<a name="deploy-containers-by-using-elastic-beanstalk-additional"></a>

**Advantages of using Elastic Beanstalk**
+ Automatic infrastructure provisioning
+ Automatic management of the underlying platform
+ Automatic patching and updates to support the application
+ Automatic scaling of the application
+ Ability to customize the number of nodes
+ Ability to access the infrastructure components if needed
+ Ease of deployment over other container deployment solutions

# Generate a static outbound IP address using a Lambda function, Amazon VPC, and a serverless architecture
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture"></a>

*Thomas Scott, Amazon Web Services*

## Summary
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-summary"></a>

This pattern describes how to generate a static outbound IP address in the Amazon Web Services (AWS) Cloud by using a serverless architecture. Your organization can benefit from this approach if it wants to send files to a separate business entity by using Secure File Transfer Protocol (SFTP). In that case, the business entity must be able to allow a known IP address through its firewall. 

The pattern’s approach helps you create an AWS Lambda function that uses an [Elastic IP address](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) as the outbound IP address. By following the steps in this pattern, you can create a Lambda function and a virtual private cloud (VPC) that routes outbound traffic through an internet gateway with a static IP address. To use the static IP address, you attach the Lambda function to the VPC and its subnets. 

## Prerequisites and limitations
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-prereqs"></a>

**Prerequisites**
+ An active AWS account. 
+ AWS Identity and Access Management (IAM) permissions to create and deploy a Lambda function, and to create a VPC and its subnets. For more information about this, see [Execution role and user permissions](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html#vpc-permissions) in the AWS Lambda documentation.
+ If you plan to use infrastructure as code (IaC) to implement this pattern’s approach, you need an integrated development environment (IDE) such as AWS Cloud9. For more information about this, see [What is AWS Cloud9?](https://docs.aws.amazon.com/cloud9/latest/user-guide/welcome.html) in the AWS Cloud9 documentation.

## Architecture
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-architecture"></a>

The following diagram shows the serverless architecture for this pattern.

![\[AWS Cloud VPC architecture with two availability zones, public and private subnets, NAT gateways, and a Lambda function.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/eb1d0b05-df33-45ae-b27e-36090055b300/images/c15cc6da-ce4e-4ea0-9feb-de1c845d3ce8.png)


The diagram shows the following workflow:

1. Outbound traffic leaves `NAT gateway 1` in `Public subnet 1`.

1. Outbound traffic leaves `NAT gateway 2` in `Public subnet 2`.

1. The Lambda function can run in `Private subnet 1` or `Private subnet 2`.

1. `Private subnet 1` and `Private subnet 2` route traffic to the NAT gateways in the public subnets.

1. The NAT gateways send outbound traffic to the internet gateway from the public subnets.

1. Outbound data is transferred from the internet gateway to the external server.



**Technology stack**
+ Lambda
+ Amazon Virtual Private Cloud (Amazon VPC)


**Automation and scale**

You can ensure high availability (HA) by using two public and two private subnets in different Availability Zones. Even if one Availability Zone becomes unavailable, the pattern’s solution continues to work.

## Tools
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-tools"></a>
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – AWS Lambda is a compute service that supports running code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume—there is no charge when your code is not running.
+ [Amazon VPC](https://docs.aws.amazon.com/vpc/) – Amazon Virtual Private Cloud (Amazon VPC) provisions a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

## Epics
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-epics"></a>

### Create a new VPC
<a name="create-a-new-vpc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a new VPC. | Sign in to the AWS Management Console, open the Amazon VPC console, and then create a VPC named `Lambda VPC` that has `10.0.0.0/25` as the IPv4 CIDR range. For more information about creating a VPC, see [Getting started with Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-getting-started.html#getting-started-create-vpc) in the Amazon VPC documentation.  | AWS administrator | 
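Because the VPC uses a `/25` range, the four subnets created in the following epics must fit inside it without overlapping. One quick way to plan the CIDR blocks is shown in this sketch, which splits the range into equal `/27` subnets; the `/27` size is an assumption for illustration, not a requirement of the pattern.

```python
import ipaddress

# Split the VPC range (10.0.0.0/25) into four equal /27 subnets:
# two public and two private, one pair per Availability Zone.
vpc = ipaddress.ip_network("10.0.0.0/25")
subnets = [str(s) for s in vpc.subnets(new_prefix=27)]
print(subnets)
# → ['10.0.0.0/27', '10.0.0.32/27', '10.0.0.64/27', '10.0.0.96/27']
```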

### Create two public subnets
<a name="create-two-public-subnets"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the first public subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the second public subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

### Create two private subnets
<a name="create-two-private-subnets"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the first private subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the second private subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

### Create two Elastic IP addresses for your NAT gateways
<a name="create-two-elastic-ip-addresses-for-your-nat-gateways"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
|  Create the first Elastic IP address. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html)This Elastic IP address is used for your first NAT gateway.  | AWS administrator | 
| Create the second Elastic IP address. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html)This Elastic IP address is used for your second NAT gateway. | AWS administrator | 

### Create an internet gateway
<a name="create-an-internet-gateway"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an internet gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Attach the internet gateway to the VPC. | Select the internet gateway that you just created, and then choose **Actions, Attach to VPC**. | AWS administrator | 

### Create two NAT gateways
<a name="create-two-nat-gateways"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the first NAT gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the second NAT gateway. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

### Create route tables for your public and private subnets
<a name="create-route-tables-for-your-public-and-private-subnets"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the route table for the public-one subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the route table for the public-two subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the route table for the private-one subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Create the route table for the private-two subnet. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

### Create the Lambda function, add it to the VPC, and test the solution
<a name="create-the-lambda-function-add-it-to-the-vpc-and-test-the-solution"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a new Lambda function. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Add the Lambda function to your VPC. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 
| Write code to call an external service. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture.html) | AWS administrator | 

## Related resources
<a name="generate-a-static-outbound-ip-address-using-a-lambda-function-amazon-vpc-and-a-serverless-architecture-resources"></a>
+ [Configuring a Lambda function to access resources in a VPC](https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html)

# Identify duplicate container images automatically when migrating to an Amazon ECR repository
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository"></a>

*Rishabh Yadav and Rishi Singla, Amazon Web Services*

## Summary
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-summary"></a>

This pattern provides an automated solution that identifies whether images stored in different container repositories are duplicates. This check is useful when you plan to migrate images from other container repositories to Amazon Elastic Container Registry (Amazon ECR).

For foundational information, the pattern also describes the components of a container image, such as the image digest, manifest, and tags. When you plan a migration to Amazon ECR, you might decide to synchronize your container images across container registries by comparing the digests of the images. Before you migrate your container images, you need to check whether these images already exist in the Amazon ECR repository to prevent duplication. However, detecting duplication by comparing image digests can be difficult, and this might lead to issues in the initial migration phase. This pattern compares the digests of two similar images that are stored in different container registries and explains why the digests vary, to help you compare images accurately.

## Prerequisites and limitations
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-prereqs"></a>
+ An active AWS account
+ Access to the [Amazon ECR public registry](https://gallery.ecr.aws/)
+ Familiarity with the following AWS services:
  + [AWS CodeCommit](https://aws.amazon.com/codecommit/)
  + [AWS CodePipeline](https://aws.amazon.com/codepipeline/)
  + [AWS CodeBuild](https://aws.amazon.com/codebuild/)
  + [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/)
  + [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/)
+ Configured CodeCommit credentials (see [instructions](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html))

## Architecture
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-architecture"></a>

**Container image components**

The following diagram illustrates some of the components of a container image. These components are described after the diagram.

![\[Manifest,configuration, file system layers, and digests.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7db5020c-6f5b-4e91-b91a-5b8ae844be1b/images/71b99c67-a934-4f94-8af8-2a8431fb91f5.png)


**Terms and definitions**

The following terms are defined in the [Open Container Initiative (OCI) Image Specification](https://github.com/opencontainers/image-spec/blob/main/spec.md).
+ **Registry:** A service for image storage and management.
+ **Client:** A tool that communicates with registries and works with local images.
+ **Push:** The process for uploading images to a registry.
+ **Pull:** The process for downloading images from a registry.
+ **Blob:** The binary form of content that is stored by a registry and can be addressed by a digest.
+ **Index:** A construct that identifies multiple image manifests for different computer platforms (such as x86-64 or ARM 64-bit) or media types. For more information, see the [OCI Image Index Specification](https://github.com/opencontainers/image-spec/blob/main/image-index.md).
+ **Manifest:** A JSON document that defines an image or artifact that is uploaded through the manifest's endpoint. A manifest can reference other blobs in a repository by using descriptors. For more information, see the [OCI Image Manifest Specification](https://github.com/opencontainers/image-spec/blob/main/manifest.md).
+ **Filesystem layer:** System libraries and other dependencies for an image.
+ **Configuration:** A blob that contains artifact metadata and is referenced in the manifest. For more information, see the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md).
+ **Object or artifact:** A conceptual content item that's stored as a blob and associated with an accompanying manifest with a configuration.
+ **Digest:** A unique identifier that's created from a cryptographic hash of the contents of a manifest. The image digest helps uniquely identify an immutable container image. When you pull an image by using its digest, you will download the same image every time on any operating system or architecture. For more information, see the [OCI Image Specification](https://github.com/opencontainers/image-spec/blob/main/descriptor.md#digests).
+ **Tag:** A human-readable manifest identifier. Compared with image digests, which are immutable, tags are dynamic. A tag that points to an image can change and move from one image to another, although the underlying image digest remains the same.
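To illustrate the digest definition above, the following minimal Python sketch (an illustration, not part of the pattern's code) computes an OCI-style digest as the SHA-256 hash of a manifest's exact bytes. Because the hash covers raw bytes, even a whitespace-only difference in the serialized JSON produces a different digest:

```python
import hashlib
import json

def manifest_digest(manifest_bytes: bytes) -> str:
    """Return an OCI-style digest: the sha256 hash of the manifest's exact bytes."""
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

# Same JSON content, serialized two different ways:
compact = json.dumps({"schemaVersion": 2}, separators=(",", ":")).encode()
pretty = json.dumps({"schemaVersion": 2}, indent=2).encode()

print(manifest_digest(compact))
print(manifest_digest(compact) == manifest_digest(pretty))  # False: different bytes, different digest
```

This is why registries serve manifests byte-for-byte as uploaded: re-serializing a manifest would change its digest and break content addressing.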

**Target architecture**

The following diagram shows the high-level architecture of the solution that this pattern provides to identify duplicate container images by comparing images that are stored in Amazon ECR public and private repositories.

![\[Automatically detecting duplicates with CodePipeline and CodeBuild.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/7db5020c-6f5b-4e91-b91a-5b8ae844be1b/images/5ee62bc8-db8d-48a3-9e79-f3392b6e9bf7.png)


## Tools
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-tools"></a>

**AWS services**
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html) is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.
+ [AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html) helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.

**Code**

The code for this pattern is available in the GitHub repository [Automated solution to identify duplicate container images between repositories](https://github.com/aws-samples/automated-solution-to-identify-duplicate-container-images-between-repositories/).

## Best practices
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-best-practices"></a>
+ [CloudFormation best practices](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html)
+ [AWS CodePipeline best practices](https://docs.aws.amazon.com/codepipeline/latest/userguide/best-practices.html)

## Epics
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-epics"></a>

### Pull container images from Amazon ECR public and private repositories
<a name="pull-container-images-from-ecr-public-and-private-repositories"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Pull an image from the Amazon ECR public repository. | From the terminal, run the following command to pull the image `amazonlinux` from the Amazon ECR public repository.<pre>$~ % docker pull public.ecr.aws/amazonlinux/amazonlinux:2018.03 </pre>When the image has been pulled to your local machine, you’ll see the following pull digest, which represents the image index.<pre>2018.03: Pulling from amazonlinux/amazonlinux<br />4ddc0f8d367f: Pull complete <br /><br />Digest: sha256:f972d24199508c52de7ad37a298bda35d8a1bd7df158149b381c03f6c6e363b5<br /><br />Status: Downloaded newer image for public.ecr.aws/amazonlinux/amazonlinux:2018.03<br />public.ecr.aws/amazonlinux/amazonlinux:2018.03</pre> | App developer, AWS DevOps, AWS administrator | 
| Push the image to an Amazon ECR private repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | AWS administrator, AWS DevOps, App developer | 
| Pull the same image from the Amazon ECR private repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | App developer, AWS DevOps, AWS administrator | 

### Compare the image manifests
<a name="compare-the-image-manifests"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Find the manifest of the image stored in the Amazon ECR public repository. | From the terminal, run the following command to pull the manifest of the image `public.ecr.aws/amazonlinux/amazonlinux:2018.03` from the Amazon ECR public repository.<pre>$~ % docker manifest inspect public.ecr.aws/amazonlinux/amazonlinux:2018.03<br />{<br />   "schemaVersion": 2,<br />   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",<br />   "manifests": [<br />      {<br />         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",<br />         "size": 529,<br />         "digest": "sha256:52db9000073d93b9bdee6a7246a68c35a741aaade05a8f4febba0bf795cdac02",<br />         "platform": {<br />            "architecture": "amd64",<br />            "os": "linux"<br />         }<br />      }<br />   ]<br />}</pre> | AWS administrator, AWS DevOps, App developer | 
| Find the manifest of the image stored in the Amazon ECR private repository. | From the terminal, run the following command to pull the manifest of the image `<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest` from the Amazon ECR private repository.<pre>$~ % docker manifest inspect <account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest                                          <br />{<br />	"schemaVersion": 2,<br />	"mediaType": "application/vnd.docker.distribution.manifest.v2+json",<br />	"config": {<br />		"mediaType": "application/vnd.docker.container.image.v1+json",<br />		"size": 1477,<br />		"digest": "sha256:f7cee5e1af28ad4e147589c474d399b12d9b551ef4c3e11e02d982fce5eebc68"<br />	},<br />	"layers": [<br />		{<br />			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",<br />			"size": 62267075,<br />			"digest": "sha256:4ddc0f8d367f424871a060e2067749f32bd36a91085e714dcb159952f2d71453"<br />		}<br />	]<br />}</pre> | AWS DevOps, AWS systems administrator, App developer | 
| Compare the digest pulled by Docker with the manifest digest for the image in the Amazon ECR private repository. | You might also wonder why the digest returned by the **docker pull** command differs from the manifest digest of the image `<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest`. The digest used for **docker pull** is the digest of the image manifest, which is stored in a registry. This digest is the root of a hash chain, because the manifest contains the hashes of the content that is downloaded and imported into Docker. The image ID that Docker uses appears in this manifest as `config.digest`; it represents the image configuration. In other words, the manifest is the envelope, and the image is the envelope's content. The manifest digest always differs from the image ID. A specific manifest always produces the same image ID, but because the manifest digest is the root of a hash chain, a given image ID isn't guaranteed to always produce the same manifest digest. In most cases it does, but Docker can't guarantee this, because Docker doesn't store the gzip-compressed blobs locally. Exporting layers can therefore produce a different digest even though the uncompressed content is unchanged. The image ID verifies that the uncompressed content is the same; that is, the image ID is a content-addressable identifier (`chainID`). To confirm this, compare the output of the **docker inspect** command for the images in the Amazon ECR public and private repositories: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) The results verify that both images have the same image ID digest and layer digest. ID: `f7cee5e1af28ad4e147589c474d399b12d9b551ef4c3e11e02d982fce5eebc68` Layers: `d5655967c2c4e8d68f8ec7cf753218938669e6c16ac1324303c073c736a2e2a2` The digests are computed from the bytes of the object: locally, that's the tar of the container image layer; when you push the blob to a registry, the tar is compressed and the digest is computed over the compressed tar file. The difference in the **docker pull** digest value therefore comes from compression applied at the registry (Amazon ECR private or public) level. This behavior is specific to the Docker client. You won't see it with clients such as **nerdctl** or **Finch**, because they don't automatically compress the image during push and pull operations. | AWS DevOps, AWS systems administrator, App developer | 
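The compression effect described in the preceding epic can be demonstrated outside Docker. This illustrative Python sketch (not the pattern's code) hashes the same payload before and after gzip compression; because the gzip header embeds metadata such as a timestamp, identical uncompressed content can produce different compressed bytes, and therefore different digests:

```python
import gzip
import hashlib

def digest(data: bytes) -> str:
    """Return the sha256 digest of the given bytes, in OCI notation."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Stand-in for a tar archive of a container image layer.
layer = b"pretend this is a tar archive of a container image layer"

# Compress the same bytes with two different header timestamps, as two
# independent pushes at different times could do.
gz_a = gzip.compress(layer, mtime=0)
gz_b = gzip.compress(layer, mtime=1_700_000_000)

print(digest(layer))                                    # uncompressed digest: stable
print(digest(gz_a) == digest(gz_b))                     # False: compressed digests differ
print(gzip.decompress(gz_a) == gzip.decompress(gz_b))   # True: content is identical
```

This is why comparing the image ID (a hash over uncompressed content) is a more reliable duplicate check than comparing registry manifest digests, which cover compressed blobs.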

### Automatically identify duplicate images between Amazon ECR public and private repositories
<a name="automatically-identify-duplicate-images-between-ecr-public-and-private-repositories"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the repository. | Clone the GitHub repository for this pattern into a local folder:<pre>$ git clone https://github.com/aws-samples/automated-solution-to-identify-duplicate-container-images-between-repositories</pre> | AWS administrator, AWS DevOps | 
| Set up a CI/CD pipeline. | The GitHub repository includes a `.yaml` file that creates a CloudFormation stack to set up a pipeline in AWS CodePipeline. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) The pipeline is set up with two stages (CodeCommit and CodeBuild, as shown in the architecture diagram) to identify images in the private repository that also exist in the public repository. The pipeline is configured with the following resources: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | AWS administrator, AWS DevOps | 
| Populate the CodeCommit repository. | To populate the CodeCommit repository, perform these steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | AWS administrator, AWS DevOps | 
| Clean up. | To avoid incurring future charges, delete the resources by following these steps: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | AWS administrator | 
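The duplicate check that the pipeline performs can be sketched as a pure function. In this illustrative Python example, the helper name `find_duplicates` and the tag-to-digest maps are assumptions for demonstration, not the repository's actual code; the key idea from this pattern is to compare images by image (config) digest rather than by tag:

```python
def find_duplicates(private_images: dict, public_images: dict) -> dict:
    """Map each private-repo tag to the public-repo tags that share its image digest.

    Both arguments map tag -> image (config) digest. Comparing digests rather
    than tags catches duplicates even when the same image is tagged differently.
    """
    by_digest: dict = {}
    for tag, dig in public_images.items():
        by_digest.setdefault(dig, []).append(tag)
    return {
        tag: by_digest[dig]
        for tag, dig in private_images.items()
        if dig in by_digest
    }

# Hypothetical inventories (truncated digests for readability):
private = {"test_ecr_repository:latest": "sha256:f7cee5e1af28"}
public = {
    "amazonlinux/amazonlinux:2018.03": "sha256:f7cee5e1af28",
    "amazonlinux/amazonlinux:2023": "sha256:0123456789ab",
}
print(find_duplicates(private, public))
# {'test_ecr_repository:latest': ['amazonlinux/amazonlinux:2018.03']}
```

In the actual pipeline, the inventories would come from the registries (for example, from **docker inspect** or the Amazon ECR APIs) rather than hard-coded dictionaries.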

## Troubleshooting
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| When you try to push, pull, or otherwise interact with a CodeCommit repository from the terminal or command line, you are prompted to provide a user name and password, and you must supply the Git credentials for your IAM user. | The most common causes for this error are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) Depending on your operating system and local environment, you might need to install a credential manager, configure the credential manager that is included in your operating system, or customize your local environment to use credential storage. For example, if your computer is running macOS, you can use the Keychain Access utility to store your credentials. If your computer is running Windows, you can use the Git Credential Manager that is installed with Git for Windows. For more information, see [Setup for HTTPS users using Git credentials](https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html) in the CodeCommit documentation and [Credential Storage](https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage) in the Git documentation. | 
| You encounter HTTP 403 or "no basic auth credentials" errors when you push an image to the Amazon ECR repository. | You might encounter these error messages from the **docker push** or **docker pull** command, even if you have successfully authenticated to Docker by using the **aws ecr get-login-password** command. Known causes are: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository.html) | 

## Related resources
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-resources"></a>
+ [Automated solution to identify duplicate container images between repositories](https://github.com/aws-samples/automated-solution-to-identify-duplicate-container-images-between-repositories/) (GitHub repository)
+ [Amazon ECR public gallery](https://gallery.ecr.aws/)
+ [Private images in Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/images.html) (Amazon ECR documentation)
+ [AWS::CodePipeline::Pipeline resource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codepipeline-pipeline.html) (CloudFormation documentation)
+ [OCI Image Format Specification](https://github.com/opencontainers/image-spec/blob/main/spec.md)

## Additional information
<a name="identify-duplicate-container-images-automatically-when-migrating-to-ecr-repository-additional"></a>

**Output of Docker inspection for image in Amazon ECR public repository**

```
[
    {
        "Id": "sha256:f7cee5e1af28ad4e147589c474d399b12d9b551ef4c3e11e02d982fce5eebc68",
        "RepoTags": [
            "<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest",
            "public.ecr.aws/amazonlinux/amazonlinux:2018.03"
        ],
        "RepoDigests": [
            "<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository@sha256:52db9000073d93b9bdee6a7246a68c35a741aaade05a8f4febba0bf795cdac02",
            "public.ecr.aws/amazonlinux/amazonlinux@sha256:f972d24199508c52de7ad37a298bda35d8a1bd7df158149b381c03f6c6e363b5"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2023-02-23T06:20:11.575053226Z",
        "Container": "ec7f2fc7d2b6a382384061247ef603e7d647d65f5cd4fa397a3ccbba9278367c",
        "ContainerConfig": {
            "Hostname": "ec7f2fc7d2b6",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "#(nop) ",
                "CMD [\"/bin/bash\"]"
            ],
            "Image": "sha256:c1bced1b5a65681e1e0e52d0a6ad17aaf76606149492ca0bf519a466ecb21e51",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "DockerVersion": "20.10.17",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/bash"
            ],
            "Image": "sha256:c1bced1b5a65681e1e0e52d0a6ad17aaf76606149492ca0bf519a466ecb21e51",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 167436755,
        "VirtualSize": 167436755,
        "GraphDriver": {
            "Data": {
                "MergedDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/merged",
                "UpperDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/diff",
                "WorkDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/work"
            },
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:d5655967c2c4e8d68f8ec7cf753218938669e6c16ac1324303c073c736a2e2a2"
            ]
        },
        "Metadata": {
            "LastTagTime": "2023-03-02T10:28:47.142155987Z"
        }
    }
]
```

**Output of Docker inspection for image in Amazon ECR private repository**

```
[
    {
        "Id": "sha256:f7cee5e1af28ad4e147589c474d399b12d9b551ef4c3e11e02d982fce5eebc68",
        "RepoTags": [
            "<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository:latest",
            "public.ecr.aws/amazonlinux/amazonlinux:2018.03"
        ],
        "RepoDigests": [
            "<account-id>.dkr.ecr.us-east-1.amazonaws.com/test_ecr_repository@sha256:52db9000073d93b9bdee6a7246a68c35a741aaade05a8f4febba0bf795cdac02",
            "public.ecr.aws/amazonlinux/amazonlinux@sha256:f972d24199508c52de7ad37a298bda35d8a1bd7df158149b381c03f6c6e363b5"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2023-02-23T06:20:11.575053226Z",
        "Container": "ec7f2fc7d2b6a382384061247ef603e7d647d65f5cd4fa397a3ccbba9278367c",
        "ContainerConfig": {
            "Hostname": "ec7f2fc7d2b6",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "#(nop) ",
                "CMD [\"/bin/bash\"]"
            ],
            "Image": "sha256:c1bced1b5a65681e1e0e52d0a6ad17aaf76606149492ca0bf519a466ecb21e51",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "DockerVersion": "20.10.17",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/bash"
            ],
            "Image": "sha256:c1bced1b5a65681e1e0e52d0a6ad17aaf76606149492ca0bf519a466ecb21e51",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 167436755,
        "VirtualSize": 167436755,
        "GraphDriver": {
            "Data": {
                "MergedDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/merged",
                "UpperDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/diff",
                "WorkDir": "/var/lib/docker/overlay2/c2c2351a82b26cbdf7782507500e5adb5c2b3a2875bdbba79788a4b27cd6a913/work"
            },
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:d5655967c2c4e8d68f8ec7cf753218938669e6c16ac1324303c073c736a2e2a2"
            ]
        },
        "Metadata": {
            "LastTagTime": "2023-03-02T10:28:47.142155987Z"
        }
    }
]
```

# Install SSM Agent on Amazon EKS worker nodes by using Kubernetes DaemonSet
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset"></a>

*Mahendra Revanasiddappa, Amazon Web Services*

## Summary
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-summary"></a>

**Note, September 2021:** The latest Amazon EKS optimized AMIs install SSM Agent automatically. For more information, see the [release notes](https://github.com/awslabs/amazon-eks-ami/releases/tag/v20210621) for the June 2021 AMIs.

In Amazon Elastic Kubernetes Service (Amazon EKS), because of security guidelines, worker nodes don't have Secure Shell (SSH) key pairs attached to them. This pattern shows how you can use the Kubernetes DaemonSet resource type to install AWS Systems Manager Agent (SSM Agent) on all worker nodes, instead of installing it manually or replacing the Amazon Machine Image (AMI) for the nodes. DaemonSet uses a cron job on the worker node to schedule the installation of SSM Agent. You can also use this pattern to install other packages on worker nodes.

When you're troubleshooting issues in the cluster, installing SSM Agent on demand enables you to establish a shell session with the worker node through Session Manager, to collect logs or to inspect instance configuration, without SSH key pairs.

## Prerequisites and limitations
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-prereqs"></a>

**Prerequisites**
+ An existing Amazon EKS cluster with Amazon Elastic Compute Cloud (Amazon EC2) worker nodes.
+ Container instances should have the required permissions to communicate with the SSM service. The AWS Identity and Access Management (IAM) managed role **AmazonSSMManagedInstanceCore** provides the required permissions for SSM Agent to run on EC2 instances. For more information, see the [AWS Systems Manager documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-instance-profile.html).

**Limitations**
+ This pattern isn't applicable to AWS Fargate, because DaemonSets aren't supported on the Fargate platform.
+ This pattern applies only to Linux-based worker nodes.
+ The DaemonSet pods run in privileged mode. If the Amazon EKS cluster has a webhook that blocks pods in privileged mode, the SSM Agent will not be installed.

## Architecture
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-architecture"></a>

The following diagram illustrates the architecture for this pattern.

![\[Using Kubernetes DaemonSet to install SSM Agent on Amazon EKS worker nodes.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/016d53f3-45c1-4913-b542-67124e1462b8/images/3a6dfd00-e54b-44d5-843a-4c26ce9826c9.png)


## Tools
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-tools"></a>

**Tools**
+ [kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) is a command-line utility that is used to interact with an Amazon EKS cluster. This pattern uses `kubectl` to deploy a DaemonSet on the Amazon EKS cluster, which will install SSM Agent on all worker nodes.
+ [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) makes it easy for you to run Kubernetes on AWS without having to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
+ [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) lets you manage your EC2 instances, on-premises instances, and virtual machines (VMs) through an interactive, one-click, browser-based shell or through the AWS Command Line Interface (AWS CLI).

**Code**

Use the following code to create a DaemonSet configuration file that will install SSM Agent on the Amazon EKS cluster. Follow the instructions in the [Epics](#install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-epics) section.

```
cat << EOF > ssm_daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: ssm-installer
  name: ssm-installer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: ssm-installer
  template:
    metadata:
      labels:
        k8s-app: ssm-installer
    spec:
      containers:
      - name: sleeper
        image: busybox
        command: ['sh', '-c', 'echo I keep things running! && sleep 3600']
      initContainers:
      - image: amazonlinux
        imagePullPolicy: Always
        name: ssm
        command: ["/bin/bash"]
        args: ["-c","echo '* * * * * root yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm & rm -rf /etc/cron.d/ssmstart' > /etc/cron.d/ssmstart"]
        securityContext:
          allowPrivilegeEscalation: true
        volumeMounts:
        - mountPath: /etc/cron.d
          name: cronfile
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      volumes:
      - name: cronfile
        hostPath:
          path: /etc/cron.d
          type: Directory
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
EOF
```
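The heart of this configuration is the single line that the init container writes to the host's `/etc/cron.d` directory. The following local simulation is a sketch only; it uses a temporary directory in place of the node's real `/etc/cron.d` to show the one-shot cron entry that lands on each worker node. The trailing `rm` removes the cron file itself, so the installation runs only once each time the pod starts:

```shell
# Simulate the init container's command against a scratch directory
# (a stand-in for the worker node's /etc/cron.d).
CRON_DIR="$(mktemp -d)"

echo '* * * * * root yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm & rm -rf /etc/cron.d/ssmstart' > "$CRON_DIR/ssmstart"

# On a real node, cron picks up this entry within a minute, installs
# SSM Agent in the background, and deletes the cron file so the job
# never runs again.
cat "$CRON_DIR/ssmstart"
```

After you deploy the DaemonSet, `kubectl get daemonset ssm-installer -n kube-system` shows whether the installer pods are scheduled on every worker node.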

## Epics
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-epics"></a>

### Set up kubectl
<a name="set-up-kubectl"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install and configure kubectl to access the EKS cluster. | If `kubectl` isn't already installed and configured to access the Amazon EKS cluster, see [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation. | DevOps | 

### Deploy the DaemonSet
<a name="deploy-the-daemonset"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the DaemonSet configuration file. | Use the code in the [Code](#install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-tools) section earlier in this pattern to create a DaemonSet configuration file called `ssm_daemonset.yaml`, which will be deployed to the Amazon EKS cluster. The pod launched by the DaemonSet has a main container and an `init` container. The main container runs a `sleep` command. The `init` container includes a `command` section that creates a cron job file at the path `/etc/cron.d/` to install SSM Agent. The cron job runs only once, and the file it creates is automatically deleted after the job is complete. When the init container has finished, the main container waits for 60 minutes before exiting. After 60 minutes, a new pod is launched. This pod installs SSM Agent if it's missing, or updates SSM Agent to the latest version. If required, you can modify the `sleep` command to restart the pod once a day or to run more often.  | DevOps | 
| Deploy the DaemonSet on the Amazon EKS cluster. | To deploy the DaemonSet configuration file you created in the previous step on the Amazon EKS cluster, use the following command:<pre>kubectl apply -f ssm_daemonset.yaml </pre>This command creates a DaemonSet to run the pods on worker nodes to install SSM Agent. | DevOps | 

## Related resources
<a name="install-ssm-agent-on-amazon-eks-worker-nodes-by-using-kubernetes-daemonset-resources"></a>
+ [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) (Amazon EKS documentation)
+ [Setting up Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html) (AWS Systems Manager documentation)

# Install the SSM Agent and CloudWatch agent on Amazon EKS worker nodes using preBootstrapCommands
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands"></a>

*Akkamahadevi Hiremath, Amazon Web Services*

## Summary
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-summary"></a>

This pattern provides code samples and steps to install the AWS Systems Manager Agent (SSM Agent) and Amazon CloudWatch agent on Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes in the Amazon Web Services (AWS) Cloud during Amazon EKS cluster creation. You can install the SSM Agent and CloudWatch agent by using the `preBootstrapCommands` property from the `eksctl` [config file schema](https://eksctl.io/usage/schema/) (Weaveworks documentation). Then, you can use the SSM Agent to connect to your worker nodes without using an Amazon Elastic Compute Cloud (Amazon EC2) key pair. Additionally, you can use the CloudWatch agent to monitor memory and disk utilization on your Amazon EKS worker nodes.

## Prerequisites and limitations
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ The [eksctl command line utility](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html), installed and configured on macOS, Linux, or Windows
+ The [kubectl command line utility](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html), installed and configured on macOS, Linux, or Windows

**Limitations**
+ We recommend that you avoid adding long-running scripts to the `preBootstrapCommands` property, because they delay the node from joining the Amazon EKS cluster during scaling activities. Instead, we recommend that you create a [custom Amazon Machine Image (AMI)](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.customenv.html).
+ This pattern applies to Amazon EC2 Linux instances only.

## Architecture
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-architecture"></a>

**Technology stack**
+ Amazon CloudWatch
+ Amazon Elastic Kubernetes Service (Amazon EKS)
+ AWS Systems Manager Parameter Store

**Target architecture**

The following diagram shows an example of a user connecting to Amazon EKS worker nodes by using the SSM Agent, which was installed through the `preBootstrapCommands` property.

![\[User connecting to Amazon EKS worker nodes via Systems Manager, with SSM Agent and CloudWatch agent on each node.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/b37a3cdb-204f-4014-8317-3600a793dac7/images/9a5760af-23bb-4616-97b0-b401a9d080cf.png)


The diagram shows the following workflow:

1. The user creates an Amazon EKS cluster by using the `eksctl` configuration file with the `preBootstrapCommands` property, which installs the SSM Agent and CloudWatch agent.

1. Any new instances that join the cluster later because of scaling activities are created with the SSM Agent and CloudWatch agent preinstalled.

1. The user connects to the EC2 instances through Session Manager, which is enabled by the SSM Agent, and then monitors memory and disk utilization by using the CloudWatch agent.

## Tools
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-tools"></a>
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) helps you monitor the metrics of your AWS resources and the applications that you run on AWS in real time.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) provides secure, hierarchical storage for configuration data management and secrets management.
+ [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) helps you manage your EC2 instances, on-premises instances, and virtual machines through an interactive, one-click, browser-based shell or through the AWS Command Line Interface (AWS CLI).
+ [eksctl](https://eksctl.io/usage/schema/) is a command-line utility for creating and managing Kubernetes clusters on Amazon EKS.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) is a command-line utility for communicating with the cluster API server.

## Epics
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-epics"></a>

### Create an Amazon EKS cluster
<a name="create-an-amazon-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Store the CloudWatch agent configuration file. | Store the CloudWatch agent configuration file in the [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) in the AWS Region where you want to create your Amazon EKS cluster. To do this, [create a parameter](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-create-console.html) in AWS Systems Manager Parameter Store and note the name of the parameter (for example, `AmazonCloudwatch-linux`). For more information, see the *Example CloudWatch agent configuration file* code in the [Additional information](#install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-additional) section of this pattern. | DevOps engineer | 
| Create the eksctl configuration file and cluster.  | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands.html) | AWS DevOps | 

### Verify that the SSM Agent and CloudWatch agent work
<a name="verify-that-the-ssm-agent-and-cloudwatch-agent-work"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the SSM Agent. | Connect to your Amazon EKS cluster nodes by using any of the methods covered in [Start a session](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html) in the AWS Systems Manager documentation. | AWS DevOps | 
| Test the CloudWatch agent. | Use the CloudWatch console to validate the CloudWatch agent:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands.html) | AWS DevOps | 

## Related resources
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-resources"></a>
+ [Installing and running the CloudWatch agent on your servers](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html) (Amazon CloudWatch documentation)
+ [Create a Systems Manager parameter (console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-create-console.html) (AWS Systems Manager documentation)
+ [Create the CloudWatch agent configuration file](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file.html) (Amazon CloudWatch documentation)
+ [Starting a session (AWS CLI)](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#sessions-start-cli) (AWS Systems Manager documentation)
+ [Starting a session (Amazon EC2 console)](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#start-ec2-console) (AWS Systems Manager documentation)

## Additional information
<a name="install-the-ssm-agent-and-cloudwatch-agent-on-amazon-eks-worker-nodes-using-prebootstrapcommands-additional"></a>

**Example CloudWatch agent configuration file**

In the following example, the CloudWatch agent is configured to monitor disk and memory utilization on Amazon Linux instances:

```
{
    "agent": {
        "metrics_collection_interval": 60,
        "run_as_user": "cwagent"
    },
    "metrics": {
        "append_dimensions": {
            "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
            "ImageId": "${aws:ImageId}",
            "InstanceId": "${aws:InstanceId}",
            "InstanceType": "${aws:InstanceType}"
        },
        "metrics_collected": {
            "disk": {
                "measurement": [
                    "used_percent"
                ],
                "metrics_collection_interval": 60,
                "resources": [
                    "*"
                ]
            },
            "mem": {
                "measurement": [
                    "mem_used_percent"
                ],
                "metrics_collection_interval": 60
            }
        }
    }
}
```
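Because the `fetch-config` step in the node bootstrap reads this document from Parameter Store, a malformed JSON file can keep the CloudWatch agent from starting. The following is a quick local validation sketch to run before you upload the file; the file name `config.json` is an assumption, and the document is trimmed to the collected metrics for brevity:

```shell
# Write the agent configuration shown above to a local file
# (config.json is an assumed name) and verify that it parses as JSON.
cat > config.json <<'EOF'
{
    "agent": {
        "metrics_collection_interval": 60,
        "run_as_user": "cwagent"
    },
    "metrics": {
        "metrics_collected": {
            "disk": {
                "measurement": ["used_percent"],
                "metrics_collection_interval": 60,
                "resources": ["*"]
            },
            "mem": {
                "measurement": ["mem_used_percent"],
                "metrics_collection_interval": 60
            }
        }
    }
}
EOF

# json.tool exits nonzero on a parse error, so a malformed file is
# caught here instead of at node bootstrap time.
python3 -m json.tool config.json > /dev/null && echo "config.json is valid JSON"
```

You would then store the validated file as the `AmazonCloudwatch-linux` parameter so that the last line of the `preBootstrapCommands` property can fetch it.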

**Example eksctl configuration file**

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: us-east-2
  version: "1.24"
managedNodeGroups:
  - name: test
    minSize: 2
    maxSize: 4
    desiredCapacity: 2
    volumeSize: 20
    instanceType: t3.medium
    preBootstrapCommands:
    - sudo yum install amazon-ssm-agent -y
    - sudo systemctl enable amazon-ssm-agent
    - sudo systemctl start amazon-ssm-agent
    - sudo yum install amazon-cloudwatch-agent -y
    - sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c ssm:AmazonCloudwatch-linux
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
```

**Additional code details**
+ In the last line of the `preBootstrapCommands` property, `AmazonCloudwatch-linux` is the name of the parameter created in AWS Systems Manager Parameter Store. You must include `AmazonCloudwatch-linux` in Parameter Store in the same AWS Region where you created the Amazon EKS cluster. You can also specify a file path, but we recommend using Systems Manager for easier automation and reusability.
+ If you use `preBootstrapCommands` in the `eksctl` configuration file, you see two launch templates in the AWS Management Console. The first launch template includes the commands specified in `preBootstrapCommands`. The second template includes the commands specified in `preBootstrapCommands` and default Amazon EKS user data. This data is required to get the nodes to join the cluster. The node group’s Auto Scaling group uses this user data to spin up new instances.
+ If you use the `iam` attribute in the `eksctl` configuration file, you must list the default Amazon EKS policies together with any additional policies that your nodes require. In the code snippet from the *Create the eksctl configuration file and cluster* step, `CloudWatchAgentServerPolicy` and `AmazonSSMManagedInstanceCore` are additional policies added to make sure that the CloudWatch agent and SSM Agent work as expected. The `AmazonEKSWorkerNodePolicy`, `AmazonEKS_CNI_Policy`, and `AmazonEC2ContainerRegistryReadOnly` policies are mandatory for the Amazon EKS cluster to function correctly.

# Migrate NGINX Ingress Controllers when enabling Amazon EKS Auto Mode
<a name="migrate-nginx-ingress-controller-eks-auto-mode"></a>

*Olawale Olaleye and Shamanth Devagari, Amazon Web Services*

## Summary
<a name="migrate-nginx-ingress-controller-eks-auto-mode-summary"></a>

[EKS Auto Mode](https://docs.aws.amazon.com/eks/latest/userguide/automode.html) for Amazon Elastic Kubernetes Service (Amazon EKS) can reduce the operational overhead of running your workloads on Kubernetes clusters. In this mode, AWS also sets up and manages the infrastructure on your behalf. When you enable EKS Auto Mode on an existing cluster, you must carefully plan the migration of [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/overview/about/) configurations, because a direct transfer of Network Load Balancers isn't possible.

You can use a blue/green deployment strategy to migrate an NGINX Ingress Controller instance when you enable EKS Auto Mode in an existing Amazon EKS cluster.

## Prerequisites and limitations
<a name="migrate-nginx-ingress-controller-eks-auto-mode-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An [Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) running Kubernetes version 1.29 or later
+ Amazon EKS add-ons that meet the [minimum required versions](https://docs.aws.amazon.com/eks/latest/userguide/auto-enable-existing.html#auto-addons-required)
+ Latest version of [kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html#kubectl-install-update)
+ An existing [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/deploy/#aws) instance
+ (Optional) A [hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-working-with.html) in Amazon Route 53 for DNS-based traffic shifting

## Architecture
<a name="migrate-nginx-ingress-controller-eks-auto-mode-architecture"></a>

A *blue/green deployment* is a deployment strategy in which you create two separate but identical environments. Blue/green deployments provide near-zero downtime release and rollback capabilities. The fundamental idea is to shift traffic between two identical environments that are running different versions of your application.

The following image shows the migration of Network Load Balancers from two different NGINX Ingress Controller instances when enabling EKS Auto Mode. You use a blue/green deployment to shift traffic between the two Network Load Balancers.

![\[Using a blue/green deployment strategy to migrate NGINX Ingress Controller instances.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/57e8c14f-cb50-4027-8ef6-ce8ea3f2db25/images/211a029a-90d8-4c92-8200-19e54062f936.png)


The original namespace is the *blue* namespace. This is where the original NGINX Ingress Controller service and instance run, before you enable EKS Auto Mode. The original service and instance connect to a Network Load Balancer that has a DNS name that is configured in Route 53. The [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.11/) deployed this Network Load Balancer in the target virtual private cloud (VPC).

The diagram shows the following workflow to set up an environment for a blue/green deployment:

1. Install and configure another NGINX Ingress Controller instance in a different namespace, a *green* namespace.

1. In Route 53, configure a DNS name for a new Network Load Balancer.
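For the green controller to coexist with the blue one, it must run in its own namespace and register its own ingress class; otherwise, the two instances would reconcile the same ingress objects. The following sketch shows the relevant Helm chart values under that assumption. The `ingress-nginx-v2` namespace and `nginx-v2` class names match the ones used in the epics of this pattern; verify the key names against the chart version that you deploy.

```yaml
# Hypothetical values.yaml overrides for the second (green) instance of
# the ingress-nginx Helm chart. Verify key names against your chart version.
controller:
  ingressClass: nginx-v2                 # class referenced by new ingresses
  ingressClassResource:
    name: nginx-v2                       # distinct from the blue "nginx" class
    controllerValue: k8s.io/ingress-nginx-v2
  electionID: ingress-nginx-v2-leader    # separate leader election from blue
```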

## Tools
<a name="migrate-nginx-ingress-controller-eks-auto-mode-tools"></a>

**AWS services**
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, and IP addresses in one or more Availability Zones.
+ [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) is a highly available and scalable DNS web service.
+ [Amazon Virtual Private Cloud (Amazon VPC)](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) helps you launch AWS resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

**Other tools**
+ [Helm](https://helm.sh/) is an open source package manager for Kubernetes that helps you install and manage applications on your Kubernetes cluster.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) is a command-line interface that helps you run commands against Kubernetes clusters.
+ [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/overview/about/) connects Kubernetes apps and services with request handling, auth, self-service custom resources, and debugging.

## Epics
<a name="migrate-nginx-ingress-controller-eks-auto-mode-epics"></a>

### Review the existing environment
<a name="review-the-existing-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Confirm that the original NGINX Ingress Controller instance is operational. | Enter the following command to verify that the resources in the `ingress-nginx` namespace are operational. If you have deployed NGINX Ingress Controller in another namespace, update the namespace name in this command.<pre>kubectl get all -n ingress-nginx</pre>In the output, confirm that NGINX Ingress Controller pods are in a running state. The following is an example output:<pre>NAME                                           READY   STATUS      RESTARTS      AGE<br />pod/ingress-nginx-admission-create-xqn9d       0/1     Completed   0             88m<br />pod/ingress-nginx-admission-patch-lhk4j        0/1     Completed   1             88m<br />pod/ingress-nginx-controller-68f68f859-xrz74   1/1     Running     2 (10m ago)   72m<br /><br />NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                     PORT(S)                      AGE<br />service/ingress-nginx-controller             LoadBalancer   10.100.67.255    k8s-ingressn-ingressn-abcdefg-12345.elb.eu-west-1.amazonaws.com   80:30330/TCP,443:31462/TCP   88m<br />service/ingress-nginx-controller-admission   ClusterIP      10.100.201.176   <none>                                                                          443/TCP                      88m<br /><br />NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE<br />deployment.apps/ingress-nginx-controller   1/1     1            1           88m<br /><br />NAME                                                 DESIRED   CURRENT   READY   AGE<br />replicaset.apps/ingress-nginx-controller-68f68f859   1         1         1       72m<br />replicaset.apps/ingress-nginx-controller-d8c96cf68   0         0         0       88m<br /><br />NAME                                       STATUS     COMPLETIONS   DURATION   AGE<br />job.batch/ingress-nginx-admission-create   Complete   1/1           4s         88m<br />job.batch/ingress-nginx-admission-patch    Complete   1/1           5s         88m</pre> | DevOps engineer | 

### Deploy a sample HTTPd workload to use the NGINX Ingress Controller
<a name="deploy-a-sample-httpd-workload-to-use-the-nginx-ingress-controller"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the Kubernetes resources. | Enter the following commands to create a sample Kubernetes deployment, service, and ingress:<pre>kubectl create deployment demo --image=httpd --port=80</pre><pre>kubectl expose deployment demo</pre><pre>kubectl create ingress demo --class=nginx \<br />  --rule nginxautomode.local.dev/=demo:80</pre> | DevOps engineer | 
| Review the deployed resources. | Enter the following command to view a list of the deployed resources:<pre>kubectl get all,ingress</pre>In the output, confirm that the sample HTTPd pod is in a running state. The following is an example output:<pre>NAME                        READY   STATUS    RESTARTS   AGE<br />pod/demo-7d94f8cb4f-q68wc   1/1     Running   0          59m<br /><br />NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE<br />service/demo         ClusterIP   10.100.78.155   <none>        80/TCP    59m<br />service/kubernetes   ClusterIP   10.100.0.1      <none>        443/TCP   117m<br /><br />NAME                   READY   UP-TO-DATE   AVAILABLE   AGE<br />deployment.apps/demo   1/1     1            1           59m<br /><br />NAME                              DESIRED   CURRENT   READY   AGE<br />replicaset.apps/demo-7d94f8cb4f   1         1         1       59m<br /><br />NAME                             CLASS   HOSTS                                  ADDRESS                                                                         PORTS   AGE<br />ingress.networking.k8s.io/demo   nginx   nginxautomode.local.dev                k8s-ingressn-ingressn-abcdefg-12345.elb.eu-west-1.amazonaws.com                 80      56m</pre> | DevOps engineer | 
| Confirm the service is reachable. | Enter the following command to confirm that the service is reachable through the DNS name of the Network Load Balancer:<pre>curl -H "Host: nginxautomode.local.dev" http://k8s-ingressn-ingressn-abcdefg-12345.elb.eu-west-1.amazonaws.com</pre>The following is the expected output:<pre><html><body><h1>It works!</h1></body></html></pre> | DevOps engineer | 
| (Optional) Create a DNS record. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-nginx-ingress-controller-eks-auto-mode.html) | DevOps engineer, AWS DevOps | 

### Enable EKS Auto Mode on the existing cluster
<a name="enable-eks-auto-mode-on-the-existing-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable EKS Auto Mode. | Follow the instructions in [Enable EKS Auto Mode on an existing cluster](https://docs.aws.amazon.com/eks/latest/userguide/auto-enable-existing.html) (Amazon EKS documentation). | AWS DevOps | 

### Install a new NGINX Ingress Controller
<a name="install-a-new-nginx-ingress-controller"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Configure a new NGINX Ingress Controller instance. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-nginx-ingress-controller-eks-auto-mode.html) | DevOps engineer | 
| Deploy the new NGINX Ingress Controller instance. | Enter the following command to apply the modified manifest file:<pre>kubectl apply -f deploy.yaml</pre> | DevOps engineer | 
| Confirm successful deployment. | Enter the following command to verify that the resources in the `ingress-nginx-v2` namespace are operational:<pre>kubectl get all -n ingress-nginx-v2</pre>In the output, confirm that NGINX Ingress Controller pods are in a running state. The following is an example output:<pre>NAME                                            READY   STATUS      RESTARTS   AGE<br />pod/ingress-nginx-admission-create-7shrj        0/1     Completed   0          24s<br />pod/ingress-nginx-admission-patch-vkxr5         0/1     Completed   1          24s<br />pod/ingress-nginx-controller-757bfcbc6d-4fw52   1/1     Running     0          24s<br /><br />NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                     PORT(S)                      AGE<br />service/ingress-nginx-controller             LoadBalancer   10.100.208.114   k8s-ingressn-ingressn-2e5e37fab6-848337cd9c9d520f.elb.eu-west-1.amazonaws.com   80:31469/TCP,443:30658/TCP   24s<br />service/ingress-nginx-controller-admission   ClusterIP      10.100.150.114   <none>                                                                          443/TCP                      24s<br /><br />NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE<br />deployment.apps/ingress-nginx-controller   1/1     1            1           24s<br /><br />NAME                                                  DESIRED   CURRENT   READY   AGE<br />replicaset.apps/ingress-nginx-controller-757bfcbc6d   1         1         1       24s<br /><br />NAME                                       STATUS     COMPLETIONS   DURATION   AGE<br />job.batch/ingress-nginx-admission-create   Complete   1/1           4s         24s<br />job.batch/ingress-nginx-admission-patch    Complete   1/1           5s         24s</pre> | DevOps engineer | 
| Create a new ingress for the sample HTTPd workload. | Enter the following command to create a new ingress for the existing sample HTTPd workload:<pre>kubectl create ingress demo-new --class=nginx-v2 \<br />  --rule nginxautomode.local.dev/=demo:80</pre> | DevOps engineer | 
| Confirm that the new ingress works. | Enter the following command to confirm that the new ingress works:<pre>curl -H "Host: nginxautomode.local.dev" k8s-ingressn-ingressn-2e5e37fab6-848337cd9c9d520f.elb.eu-west-1.amazonaws.com</pre>The following is the expected output:<pre><html><body><h1>It works!</h1></body></html></pre> | DevOps engineer | 

### Cut over
<a name="cut-over"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Cut over to the new namespace. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-nginx-ingress-controller-eks-auto-mode.html) | AWS DevOps, DevOps engineer | 
| Review the two ingresses. | Enter the following command to review the two ingresses that were created for the sample HTTPd workload:<pre>kubectl get ingress</pre>The following is an example output:<pre>NAME       CLASS      HOSTS                                  ADDRESS                                                                         PORTS   AGE<br />demo       nginx      nginxautomode.local.dev   k8s-ingressn-ingressn-abcdefg-12345.elb.eu-west-1.amazonaws.com                              80      95m<br />demo-new   nginx-v2   nginxautomode.local.dev   k8s-ingressn-ingressn-2e5e37fab6-848337cd9c9d520f.elb.eu-west-1.amazonaws.com                80      33s</pre> | DevOps engineer | 
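
If clients reach the controllers through a DNS name that you control, one way to cut over gradually is weighted DNS routing. The following is a hedged sketch, not part of this pattern's prescribed steps: it writes a Route 53 change batch that keeps 90 percent of traffic on the old controller's load balancer and sends 10 percent to the new one. The record name (`app.example.com`) and hosted-zone ID are hypothetical placeholders; you would submit the file with `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://cutover-weights.json`.

```shell
# Hedged sketch: weighted Route 53 CNAME records for a gradual cutover.
# The record name is a placeholder; the load balancer hostnames are the
# two ADDRESS values from "kubectl get ingress" above.
OLD_LB="k8s-ingressn-ingressn-abcdefg-12345.elb.eu-west-1.amazonaws.com"
NEW_LB="k8s-ingressn-ingressn-2e5e37fab6-848337cd9c9d520f.elb.eu-west-1.amazonaws.com"
cat > cutover-weights.json <<EOF
{
  "Comment": "Shift 10% of traffic to the EKS Auto Mode NGINX controller",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "legacy-controller",
        "Weight": 90,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "${OLD_LB}" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "auto-mode-controller",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "${NEW_LB}" }]
      }
    }
  ]
}
EOF
echo "Wrote cutover-weights.json"
```

Increase the new controller's weight in stages while you monitor error rates, then retire the old records.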

## Related resources
<a name="migrate-nginx-ingress-controller-eks-auto-mode-resources"></a>
+ [Enable EKS Auto Mode on an existing cluster](https://docs.aws.amazon.com/eks/latest/userguide/auto-enable-existing.html) (Amazon EKS documentation)
+ [Troubleshoot load balancers created by the Kubernetes service controller in Amazon EKS](https://repost.aws/knowledge-center/eks-load-balancers-troubleshooting) (AWS re:Post Knowledge Center)
+ [NGINX Ingress Controller](https://docs.nginx.com/nginx-ingress-controller/) (NGINX documentation)

# Migrate your container workloads from Azure Red Hat OpenShift (ARO) to Red Hat OpenShift Service on AWS (ROSA)
<a name="migrate-container-workloads-from-aro-to-rosa"></a>

*Naveen Ramasamy, Srikanth Rangavajhala, and Gireesh Sreekantan, Amazon Web Services*

## Summary
<a name="migrate-container-workloads-from-aro-to-rosa-summary"></a>

This pattern provides step-by-step instructions for migrating container workloads from Azure Red Hat OpenShift (ARO) to [Red Hat OpenShift Service on AWS (ROSA)](https://aws.amazon.com/rosa/). ROSA is a managed Kubernetes service provided by Red Hat in collaboration with AWS. It helps you deploy, manage, and scale your containerized applications by using the Kubernetes platform, and benefits from both Red Hat's expertise in Kubernetes and the AWS Cloud infrastructure.

Migrating container workloads from ARO, from other clouds, or from on premises to ROSA involves transferring applications, configurations, and data from one platform to another. This pattern helps ensure a smooth transition while optimizing for AWS Cloud services, security, and cost efficiency. It covers two methods for migrating your workloads to ROSA clusters: CI/CD and Migration Toolkit for Containers (MTC).

The method you choose depends on the complexity and certainty of your migration process. If you have full control over your application's state and can guarantee a consistent setup through a pipeline, we recommend the CI/CD method. However, if your application state involves uncertainties, unforeseen changes, or a complex ecosystem, MTC provides a reliable and controlled path to migrate your application and its data to a new cluster. For a detailed comparison of the two methods, see the [Additional information](#migrate-container-workloads-from-aro-to-rosa-additional) section.

Benefits of migrating to ROSA:
+ ROSA seamlessly integrates with AWS as a native service. It is easily accessible through the AWS Management Console and billed through a single AWS account. It offers full compatibility with other AWS services and provides collaborative support from both AWS and Red Hat.
+ ROSA supports hybrid and multi-cloud deployments. It enables applications to run consistently across on-premises data centers and multiple cloud environments.
+ ROSA benefits from Red Hat's security focus, and provides features such as role-based access control (RBAC), image scanning, and vulnerability assessments to ensure a secure container environment.
+ ROSA is designed to scale applications easily and provides high availability options. It allows applications to grow as needed while maintaining reliability.
+ ROSA automates and simplifies the deployment of a Kubernetes cluster compared with manual setup and management methods. This accelerates the development and deployment process.
+ ROSA benefits from AWS Cloud services, and provides seamless integration with AWS offerings such as database services, storage solutions, and security services.

## Prerequisites and limitations
<a name="migrate-container-workloads-from-aro-to-rosa-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ Permissions configured for AWS services that ROSA relies on to deliver functionality. For more information, see [Prerequisites](https://docs.aws.amazon.com/rosa/latest/userguide/set-up.html) in the ROSA documentation.
+ ROSA enabled on the [ROSA console](https://console.aws.amazon.com/rosa). For instructions, see the [ROSA documentation](https://docs.aws.amazon.com/rosa/latest/userguide/set-up.html#enable-rosa).
+ The ROSA cluster installed and configured. For more information, see [Get started with ROSA](https://docs.aws.amazon.com/rosa/latest/userguide/getting-started.html) in the ROSA documentation. To understand the different methods for setting up a ROSA cluster, see the AWS Prescriptive Guidance guide [ROSA implementation strategies](https://docs.aws.amazon.com/prescriptive-guidance/latest/red-hat-openshift-on-aws-implementation/).
+ Network connectivity established from the on-premises network to AWS through [AWS Direct Connect](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html) (preferred) or [AWS Virtual Private Network (Site-to-Site VPN)](https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html).
+ An Amazon Elastic Compute Cloud (Amazon EC2) instance or another virtual server where you can install tools such as the AWS CLI (`aws`), the OpenShift CLI (`oc`), the ROSA CLI (`rosa`), and Git.
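
Before you begin, you can confirm that the clients mentioned above are present on the EC2 instance or virtual server. The following is a minimal sketch that records which of the tools are on the `PATH`:

```shell
# Minimal sketch: confirm that the CLI tools required for the migration
# are installed on the machine you will run it from.
: > tool-check.txt
for tool in aws oc rosa git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")" >> tool-check.txt
  else
    echo "$tool: MISSING - install before you continue" >> tool-check.txt
  fi
done
cat tool-check.txt
```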

Additional prerequisites for the CI/CD method:
+ Access to the on-premises Jenkins server with permissions to create a new pipeline, add stages, add OpenShift clusters, and perform builds.
+ Access to the Git repository where application source code is maintained, with permissions to create a new Git branch and perform commits to the new branch.

Additional prerequisites for the MTC method:
+ An Amazon Simple Storage Service (Amazon S3) bucket, which will be used as a replication repository.
+ Administrative access to the source ARO cluster. This is required to set up the MTC connection.

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="migrate-container-workloads-from-aro-to-rosa-architecture"></a>

ROSA provides three network deployment patterns: public, private, and [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html). PrivateLink enables Red Hat site reliability engineering (SRE) teams to manage the cluster by using a private subnet connected to the cluster’s PrivateLink endpoint in an existing VPC.

Choosing the PrivateLink option provides the most secure configuration. For that reason, we recommend it for sensitive workloads or environments with strict compliance requirements. For information about the public and private network deployment options, see the [Red Hat OpenShift documentation](https://docs.openshift.com/rosa/architecture/rosa-architecture-models.html#rosa-hcp-architecture_rosa-architecture-models).

**Important**  
You can create a PrivateLink cluster only at installation time. You cannot change a cluster to use PrivateLink after installation.
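
Because PrivateLink must be selected at installation, the flag belongs on the cluster-creation command itself. The following sketch only prints (does not run) an illustrative `rosa create cluster` invocation; the cluster name and subnet IDs are placeholders, and you should verify the flags against your ROSA CLI version before running the command.

```shell
# Hedged sketch: prints an illustrative command for creating a
# PrivateLink, Multi-AZ ROSA cluster. Cluster name and subnet IDs are
# placeholders; verify flags with "rosa create cluster --help".
CLUSTER_NAME="rosa-pl-demo"
PRIVATE_SUBNETS="subnet-0aaa1111,subnet-0bbb2222,subnet-0ccc3333"  # one per AZ
cat > rosa-create-cluster.txt <<EOF
rosa create cluster \\
  --cluster-name ${CLUSTER_NAME} \\
  --sts \\
  --private-link \\
  --multi-az \\
  --subnet-ids ${PRIVATE_SUBNETS}
EOF
cat rosa-create-cluster.txt
```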

The following diagram illustrates the PrivateLink architecture for a ROSA cluster that uses Direct Connect to connect to the on-premises and ARO environments.

![\[ROSA cluster that uses AWS Direct Connect and AWS PrivateLink.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/527cedfb-ec21-42be-bf21-d4e4e4f9db51/images/eff9b017-6fc7-4874-b610-849a42071ef4.png)


**AWS permissions to ROSA**

For AWS permissions to ROSA, we recommend that you use AWS Security Token Service (AWS STS) with short-lived, dynamic tokens. This method uses least-privilege predefined roles and policies to grant ROSA minimal permissions to operate in the AWS account, and supports ROSA installation, control plane, and compute functionality.
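
In practice, the STS setup is driven by a short sequence of ROSA CLI commands that create the account-wide roles, the cluster, and then the cluster-specific operator roles and OIDC provider. The sketch below only prints the typical sequence for review (the cluster name is a placeholder); confirm the exact commands against the ROSA documentation for your CLI version.

```shell
# Hedged sketch: prints the typical command sequence for provisioning the
# STS roles and policies that ROSA uses. The cluster name is a placeholder.
CLUSTER_NAME="rosa-pl-demo"
cat > sts-setup-commands.txt <<EOF
rosa create account-roles --mode auto --yes
rosa create cluster --cluster-name ${CLUSTER_NAME} --sts
rosa create operator-roles --cluster ${CLUSTER_NAME} --mode auto --yes
rosa create oidc-provider --cluster ${CLUSTER_NAME} --mode auto --yes
EOF
cat sts-setup-commands.txt
```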

**CI/CD pipeline redeployment**

CI/CD pipeline redeployment is the recommended method for users who have a mature CI/CD pipeline. When you choose this option, you can use any [DevOps deployment strategy ](https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/deployment-strategies.html)to gradually shift your application load to deployments on ROSA.

**Note**  
This pattern assumes a common use case where you have an on-premises Git, JFrog Artifactory, and Jenkins pipeline. This approach requires that you establish network connectivity from your on-premises network to AWS through Direct Connect, and that you set up the ROSA cluster before you follow the instructions in the [Epics](#migrate-container-workloads-from-aro-to-rosa-epics) section. See the [Prerequisites](#migrate-container-workloads-from-aro-to-rosa-prereqs) section for details.

The following diagram shows the workflow for this method.

![\[Migrating containers from ARO to ROSA by using the CI/CD method.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/527cedfb-ec21-42be-bf21-d4e4e4f9db51/images/f658590e-fbd9-4297-a02c-0b516694d436.png)


**MTC method**

You can use the [Migration Toolkit for Containers (MTC)](https://docs.openshift.com/container-platform/4.13/migration_toolkit_for_containers/about-mtc.html) to migrate your containerized workloads between different Kubernetes environments, such as from ARO to ROSA. MTC simplifies the migration process by automating several key tasks and providing a comprehensive framework for managing the migration lifecycle.

The following diagram shows the workflow for this method.

![\[Migrating containers from ARO to ROSA by using the MTC method.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/527cedfb-ec21-42be-bf21-d4e4e4f9db51/images/979bbc7b-2e39-4dd1-b4f0-ea1032880a38.png)


## Tools
<a name="migrate-container-workloads-from-aro-to-rosa-tools"></a>

**AWS services**
+ [AWS DataSync](https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html) is an online data transfer and discovery service that helps you move files or object data to, from, and between AWS storage services.
+ [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. With this connection, you can create virtual interfaces directly to public AWS services while bypassing internet service providers in your network path.
+ [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) helps you create unidirectional, private connections from your virtual private clouds (VPCs) to services outside of the VPC.
+ [Red Hat OpenShift Service on AWS (ROSA)](https://docs.aws.amazon.com/rosa/latest/userguide/what-is-rosa.html) is a managed service that helps Red Hat OpenShift users to build, scale, and manage containerized applications on AWS.
+ [AWS Security Token Service (AWS STS)](https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html) helps you request temporary, limited-privilege credentials for users.

**Other tools**
+ [Migration Toolkit for Containers (MTC)](https://docs.openshift.com/container-platform/4.13/migration_toolkit_for_containers/about-mtc.html) provides a console and API for migrating containerized applications from ARO to ROSA.

## Best practices
<a name="migrate-container-workloads-from-aro-to-rosa-best-practices"></a>
+ For [resilience](https://docs.aws.amazon.com/ROSA/latest/userguide/disaster-recovery-resiliency.html) and if you have security compliance workloads, set up a Multi-AZ ROSA cluster that uses PrivateLink. For more information, see the [ROSA documentation](https://docs.aws.amazon.com/rosa/latest/userguide/getting-started-classic-private-link.html).
**Note**  
PrivateLink cannot be configured after installation.
+ The S3 bucket that you use as the replication repository should not be public. Use appropriate S3 bucket policies to restrict access.
+ If you choose the MTC method, use the **Stage** migration option to reduce the downtime window during cutover.
+ Review your service quotas before and after you provision the ROSA cluster. If necessary, request a quota increase according to your requirements. For more information, see the [Service Quotas documentation](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html).
+ Review the [ROSA security guidelines](https://docs.aws.amazon.com/ROSA/latest/userguide/security.html) and implement security best practices.
+ We recommend that you remove the default cluster administrator after installation. For more information, see the [Red Hat OpenShift documentation](https://docs.openshift.com/container-platform/4.13/post_installation_configuration/cluster-tasks.html).
+ Use machine pool automatic scaling to scale down unused worker nodes in the ROSA cluster to optimize costs. For more information, see the [ROSA Workshop](https://catalog.workshops.aws/aws-openshift-workshop/en-US/5-nodes-storage/3-autoscale-machine-pool).
+ Use the Red Hat Cost Management service for OpenShift Container Platform to better understand and track costs for clouds and containers. For more information, see the [ROSA Workshop](https://catalog.workshops.aws/aws-openshift-workshop/en-US/10-cost-management).
+ Monitor and audit ROSA cluster infrastructure services and applications by using AWS services. For more information, see the [ROSA Workshop](https://catalog.workshops.aws/aws-openshift-workshop/en-US/8-observability).

## Epics
<a name="migrate-container-workloads-from-aro-to-rosa-epics"></a>

### Option 1: Use a CI/CD pipeline
<a name="option-1-use-a-ci-cd-pipeline"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Add the new ROSA cluster to Jenkins. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS systems administrator, AWS DevOps | 
| Add the `oc` client to your Jenkins nodes. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS systems administrator, AWS DevOps | 
| Create a new Git branch. | Create a new branch in your Git repository for `rosa-dev`. This branch separates the code or configuration parameter changes for ROSA from your existing pipeline. | AWS DevOps | 
| Tag images for ROSA. | In your build stage, use a different tag to identify the images that are built from the ROSA pipeline. | AWS administrator, AWS systems administrator, AWS DevOps | 
| Create a pipeline. | Create a new Jenkins pipeline that's similar to your existing pipeline. For this pipeline, use the `rosa-dev` Git branch that you created earlier, and make sure to include the Git checkout, code scan, and build stages that are identical to your existing pipeline. | AWS administrator, AWS systems administrator, AWS DevOps | 
| Add a ROSA deployment stage. | In the new pipeline, add a stage to deploy to the ROSA cluster and reference the ROSA cluster that you added to the Jenkins global configuration. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Start a new build. | In Jenkins, select your pipeline and choose **Build now**, or start a new build by committing a change to the `rosa-dev` branch in Git. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Verify the deployment. | Use the **oc** command or the [ROSA console](https://console.aws.amazon.com/rosa) to verify that the application has been deployed on your target ROSA cluster. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Copy data to the target cluster. | For stateful workloads, copy the data from the source cluster to the target cluster by using AWS DataSync or open source utilities such as **rsync**, or you can use the MTC method. For more information, see the [AWS DataSync documentation](https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html). | AWS administrator, AWS DevOps, AWS systems administrator | 
| Test your application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS DevOps, AWS systems administrator | 
| Cut over. | If your testing is successful, use the appropriate Amazon Route 53 policy to move the traffic from the ARO-hosted application to the ROSA-hosted application. When you complete this step, your application's workload will fully transition to the ROSA cluster. | AWS administrator, AWS systems administrator | 
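
The ROSA deployment stage added to the new pipeline might look like the following Jenkinsfile fragment. This is a hedged sketch: the credential ID, API server URL, project, and manifest path are hypothetical placeholders, and the fragment is written to a file here only so you can review it before adapting it to your pipeline.

```shell
# Hedged sketch: writes an illustrative Jenkinsfile stage that deploys to
# the ROSA cluster. Credential ID, API URL, project name, and manifest
# path are hypothetical placeholders.
cat > rosa-deploy-stage.groovy <<'EOF'
stage('Deploy to ROSA') {
    steps {
        withCredentials([string(credentialsId: 'rosa-dev-token', variable: 'TOKEN')]) {
            sh '''
                oc login --token="$TOKEN" --server=https://api.rosa-dev.example.com:6443
                oc project my-app-dev
                oc apply -f k8s/deployment-rosa.yaml
                oc rollout status deployment/my-app --timeout=300s
            '''
        }
    }
}
EOF
echo "Wrote rosa-deploy-stage.groovy"
```

The `oc rollout status` step makes the build fail fast if the deployment does not become ready, which keeps the pipeline's pass/fail signal meaningful.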

### Option 2: Use MTC
<a name="option-2-use-mtc"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Install the MTC operator. | Install the MTC operator on both ARO and ROSA clusters:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS DevOps, AWS systems administrator | 
| Configure network traffic to the replication repository. | If you're using a proxy server, configure it to allow network traffic between the replication repository and the clusters. The replication repository is an intermediate storage object that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Add the source cluster to MTC. | On the MTC web console, add the ARO source cluster. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Add Amazon S3 as your replication repository. | On the MTC web console, add the Amazon S3 bucket (see [Prerequisites](#migrate-container-workloads-from-aro-to-rosa-prereqs)) as the replication repository. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Create a migration plan. | On the MTC web console, create a migration plan and specify the data transfer type as **Copy**. This will instruct MTC to copy the data from the source (ARO) cluster to the S3 bucket, and from the bucket to the target (ROSA) cluster. | AWS administrator, AWS DevOps, AWS systems administrator | 
| Run the migration plan. | Run the migration plan by using the **Stage** or **Cutover** option:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-container-workloads-from-aro-to-rosa.html) | AWS administrator, AWS DevOps, AWS systems administrator | 
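
A migration plan that you build in the MTC web console corresponds to a `MigPlan` custom resource on the cluster that runs the MTC controller. The following sketch shows the general shape of such a plan; the cluster, storage, and namespace names are hypothetical, and MTC normally creates this resource for you, so treat it as a reference for inspection (`oc get migplan -n openshift-migration`) rather than something you must author by hand.

```shell
# Hedged sketch: writes an illustrative MTC MigPlan manifest. Names are
# hypothetical; MTC creates the real resource when you build a plan in
# its web console.
cat > mig-plan.yaml <<'EOF'
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: aro-to-rosa-plan
  namespace: openshift-migration
spec:
  srcMigClusterRef:
    name: aro-source-cluster      # the ARO cluster added to MTC
    namespace: openshift-migration
  destMigClusterRef:
    name: host                    # the ROSA cluster running the MTC controller
    namespace: openshift-migration
  migStorageRef:
    name: s3-replication-repo     # the S3-backed replication repository
    namespace: openshift-migration
  namespaces:
    - my-app-namespace            # namespaces to migrate
EOF
echo "Wrote mig-plan.yaml"
```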

## Troubleshooting
<a name="migrate-container-workloads-from-aro-to-rosa-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| Connectivity issues | When you migrate container workloads from ARO to ROSA, you might encounter the connectivity issues listed in this table. Addressing them requires careful planning, coordination with your network and security teams, and thorough testing. A gradual migration strategy that verifies connectivity at each step helps minimize disruptions and ensures a smooth transition from ARO to ROSA. | 
| Network configuration differences | ARO and ROSA might have variations in their network configurations, such as virtual network (VNet) settings, subnets, and network policies. For proper communication between services, make sure that the network settings align between the two platforms. | 
| Security group and firewall rules | ROSA and ARO might have different default security group and firewall settings. Make sure to adjust and update these rules to permit necessary traffic to maintain connectivity among containers and services during migration.  | 
| IP address and DNS changes | When you migrate workloads, IP addresses and DNS names might change. Reconfigure applications that rely on static IPs or specific DNS names.  | 
| External service access | If your application depends on external services such as databases or APIs, you might have to update their connection settings to make sure they can communicate with the new services from ROSA. | 
| Azure Private Link configuration | If you use Azure Private Link or private endpoint services in ARO, you will need to set up the equivalent functionality in ROSA to ensure private connectivity between resources. | 
| Site-to-Site VPN or Direct Connect setup  | If there are existing Site-to-Site VPN or Direct Connect connections between your on-premises network and ARO, you will need to establish similar connections with ROSA for uninterrupted communication with your local resources. | 
| Ingress and load balancer settings | Configurations for ingress controllers and load balancers might differ between ARO and ROSA. Reconfigure these settings to maintain external access to your services. | 
| Certificate and TLS handling | If your applications use SSL certificates or TLS, make sure that the certificates are valid and configured correctly in ROSA. | 
| Container registry access | If your containers are hosted in an external container registry, set up the proper authentication and access permissions for ROSA. | 
| Monitoring and logging | Update monitoring and logging configurations to reflect the new infrastructure on ROSA so you can continue to monitor the health and performance of your containers effectively. | 

## Related resources
<a name="migrate-container-workloads-from-aro-to-rosa-resources"></a>

**AWS references**
+ [What is Red Hat OpenShift Service on AWS?](https://docs.aws.amazon.com/ROSA/latest/userguide/what-is-rosa.html) (ROSA documentation)
+ [Get started with ROSA](https://docs.aws.amazon.com/ROSA/latest/userguide/getting-started.html) (ROSA documentation)
+ [Red Hat OpenShift Service on AWS implementation strategies](https://docs.aws.amazon.com/prescriptive-guidance/latest/red-hat-openshift-on-aws-implementation/) (AWS Prescriptive Guidance)
+ [Red Hat OpenShift Service on AWS Now GA](https://aws.amazon.com/blogs/aws/red-hat-openshift-service-on-aws-now-generally-availably/) (AWS blog post)
+ [ROSA Workshop](https://catalog.workshops.aws/aws-openshift-workshop/en-US/0-introduction)
+ [ROSA FAQ](https://aws.amazon.com/rosa/faqs/)
+ [ROSA Workshop FAQ](https://www.rosaworkshop.io/rosa/14-faq/)
+ [ROSA pricing](https://aws.amazon.com/rosa/pricing/)

**Red Hat OpenShift documentation**
+ [Installing a cluster quickly on AWS](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-aws-default.html)
+ [Installing a cluster on AWS in a restricted network](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-restricted-networks-aws-installer-provisioned.html)
+ [Installing a cluster on AWS into an existing VPC](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-aws-vpc.html)
+ [Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-aws-user-infra.html)
+ [Installing a cluster on AWS in a restricted network with user-provisioned infrastructure](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-restricted-networks-aws.html)
+ [Installing a cluster on AWS with customizations](https://docs.openshift.com/container-platform/4.13/installing/installing_aws/installing-aws-customizations.html)
+ [Getting started with the OpenShift CLI](https://docs.openshift.com/container-platform/4.13/cli_reference/openshift_cli/getting-started-cli.html)

## Additional information
<a name="migrate-container-workloads-from-aro-to-rosa-additional"></a>

**Choosing between MTC and CI/CD pipeline redeployment options**

Migrating applications from one OpenShift cluster to another requires careful consideration. Ideally, you would want a smooth transition by using a CI/CD pipeline to redeploy the application and handle the migration of persistent volume data. However, in practice, a running application on a cluster is susceptible to unforeseen changes over time. These changes can cause the application to gradually deviate from its original deployment state. MTC offers a solution for scenarios where the exact contents of a namespace are uncertain and a seamless migration of all application components to a new cluster is important.

Making the right choice requires evaluating your specific scenario and weighing the benefits of each approach. By doing so, you can ensure a successful and seamless migration that aligns with your needs and priorities. Here are additional guidelines for choosing between the two options.

**CI/CD pipeline redeployment**

The CI/CD pipeline method is the recommended approach if your application can be confidently redeployed by using a pipeline. This ensures that the migration is controlled, predictable, and aligned with your existing deployment practices. When you choose this method, you can use [blue/green deployment](https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/bluegreen-deployments.html) or canary deployment strategies to gradually shift the load to deployments on ROSA. For this scenario, this pattern assumes that Jenkins is orchestrating application deployments from the on-premises environment.

Advantages:
+ You do not require administrative access to the source ARO cluster or need to deploy any operators on the source or destination cluster.
+ This approach helps you switch traffic gradually by using a DevOps strategy.

Disadvantages:
+ It requires more effort to test the functionality of your application.
+ If your application contains persistent data, it requires additional steps to copy the data by using AWS DataSync or other tools.

**MTC migration**

In the real world, running applications can undergo unanticipated changes that cause them to drift away from the initial deployment. Choose the MTC option when you're unsure about the current state of your application on the source cluster. For example, if your application ecosystem spans various components, configurations, and data storage volumes, we recommend that you choose MTC to ensure a complete migration that covers the application and its entire environment.

Advantages:
+ MTC provides complete backup and restore of the workload.
+ It will copy the persistent data from source to target while migrating the workload.
+ It does not require access to the source code repository.

Disadvantages:
+ You need administrative privileges to install the MTC operator on the source and destination clusters.
+ The DevOps team requires training to use the MTC tool and perform migrations. 

# Run Amazon ECS tasks on Amazon WorkSpaces with Amazon ECS Anywhere
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere"></a>

*Akash Kumar, Amazon Web Services*

## Summary
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-summary"></a>

Amazon Elastic Container Service (Amazon ECS) Anywhere supports the deployment of Amazon ECS tasks in any environment, including Amazon Web Services (AWS) managed infrastructure and customer managed infrastructure, while using a fully managed control plane that runs in the cloud and is always up to date.

Enterprises often use Amazon WorkSpaces to develop container-based applications. Previously, testing and running ECS tasks required Amazon Elastic Compute Cloud (Amazon EC2) instances or AWS Fargate with an Amazon ECS cluster. With Amazon ECS Anywhere, you can add Amazon WorkSpaces directly to an ECS cluster as external instances and run your tasks on them. This shortens development time, because you can test your containers against an ECS cluster locally on Amazon WorkSpaces, and it avoids the cost of EC2 or Fargate instances for testing container applications.

This pattern shows how to deploy ECS tasks on Amazon WorkSpaces with Amazon ECS Anywhere. It sets up the ECS cluster and uses AWS Directory Service Simple AD to launch the WorkSpaces. Then an example ECS task runs NGINX on the WorkSpaces.
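
Registering a WorkSpace as an external instance follows the standard ECS Anywhere flow: create a Systems Manager activation, then run the install script on the instance with the returned activation ID and code. The sketch below only prints the commands for review; the role name, cluster name, and Region are placeholders, and you should verify the commands against the registration instructions that the ECS console generates for your cluster.

```shell
# Hedged sketch: prints the typical ECS Anywhere registration flow for an
# external instance. Role name, cluster name, and Region are placeholders.
REGION="eu-west-1"
CLUSTER="workspaces-cluster"
cat > ecs-anywhere-register.txt <<EOF
# 1. Create a Systems Manager activation (the role must trust ssm.amazonaws.com):
aws ssm create-activation --iam-role ecsAnywhereRole --region ${REGION}

# 2. On the WorkSpace, download and run the install script with the
#    activation ID and code returned by the previous command:
curl --proto "https" -o "/tmp/ecs-anywhere-install.sh" \\
  "https://amazon-ecs-agent.s3.amazonaws.com/ecs-anywhere-install-latest.sh"
sudo bash /tmp/ecs-anywhere-install.sh --region ${REGION} --cluster ${CLUSTER} \\
  --activation-id <id> --activation-code <code>
EOF
cat ecs-anywhere-register.txt
```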

## Prerequisites and limitations
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-prereqs"></a>
+ An active AWS account
+ AWS Command Line Interface (AWS CLI)
+ AWS credentials [configured on your machine](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html)

## Architecture
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-architecture"></a>

**Target technology stack**
+ A virtual private cloud (VPC)
+ An Amazon ECS cluster
+ Amazon WorkSpaces
+ AWS Directory Service with Simple AD

**Target architecture**

![\[ECS Anywhere sets up ECS cluster and uses Simple AD to launch WorkSpaces.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/da8b2249-3423-485c-9fef-6f902025e969/images/fd354d14-f29b-4b9e-8f1a-c3cb7ed4d6bf.png)


 

The architecture includes the following services and resources:
+ An ECS cluster with public and private subnets in a custom VPC
+ Simple AD in the VPC to provide user access to Amazon WorkSpaces
+ Amazon WorkSpaces provisioned in the VPC using Simple AD
+ AWS Systems Manager activated for adding Amazon WorkSpaces as managed instances
+ Amazon WorkSpaces added to Systems Manager and to the ECS cluster by using the Amazon ECS container agent and AWS Systems Manager Agent (SSM Agent)
+ An example ECS task to run in the WorkSpaces in the ECS cluster

## Tools
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-tools"></a>
+ [AWS Directory Service Simple Active Directory (Simple AD)](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_simple_ad.html) is a standalone managed directory powered by a Samba 4 Active Directory Compatible Server. Simple AD provides a subset of the features offered by AWS Managed Microsoft AD, including the ability to manage users and to securely connect to Amazon EC2 instances.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a fast and scalable container management service that helps you run, stop, and manage containers on a cluster.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) helps you manage your applications and infrastructure running in the AWS Cloud. It simplifies application and resource management, shortens the time to detect and resolve operational problems, and helps you manage your AWS resources securely at scale.
+ [Amazon WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html) helps you provision virtual, cloud-based Microsoft Windows or Amazon Linux desktops for your users, known as *WorkSpaces*. WorkSpaces eliminates the need to procure and deploy hardware or install complex software.

## Epics
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-epics"></a>

### Set up the ECS cluster
<a name="set-up-the-ecs-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create and configure the ECS cluster. | To create the ECS cluster, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html), including the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere.html) | Cloud architect | 

### Launch Amazon WorkSpaces
<a name="launch-amazon-workspaces"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up Simple AD and launch Amazon WorkSpaces. | To provision a Simple AD directory for your newly created VPC and launch Amazon WorkSpaces, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/workspaces/latest/adminguide/launch-workspace-simple-ad.html). | Cloud architect | 

### Set up AWS Systems Manager for a hybrid environment
<a name="set-up-aws-systems-manager-for-a-hybrid-environment"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the attached scripts. | On your local machine, download the `ssm-trust-policy.json` and `ssm-activation.json` files that are in the *Attachments* section. | Cloud architect | 
| Add the IAM role. | Add environment variables based on your business requirements.<pre>export AWS_DEFAULT_REGION=${AWS_REGION_ID}<br />export ROLE_NAME=${ECS_TASK_ROLE}<br />export CLUSTER_NAME=${ECS_CLUSTER_NAME}<br />export SERVICE_NAME=${ECS_CLUSTER_SERVICE_NAME}</pre>Run the following command.<pre>aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document file://ssm-trust-policy.json</pre> | Cloud architect | 
| Add the AmazonSSMManagedInstanceCore policy to the IAM role. | Run the following command.<pre>aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore</pre> | Cloud architect | 
| Add the AmazonEC2ContainerServiceforEC2Role policy to the IAM role. | Run the following command.<pre>aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role</pre> | Cloud architect | 
| Verify the IAM role. | To verify the IAM role, run the following command.<pre>aws iam list-attached-role-policies --role-name $ROLE_NAME</pre> | Cloud architect | 
| Activate Systems Manager. | Run the following command.<pre>aws ssm create-activation --iam-role $ROLE_NAME | tee ssm-activation.json</pre> | Cloud architect | 
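The activation command above saves its response to `ssm-activation.json`. The activation ID and code in that response are the values the ECS Anywhere install script consumes later. A minimal sketch of extracting them with `jq` (the JSON written here is a sample response for illustration, not real credentials; with a real activation the file already exists from the `tee` command):

```shell
# Sample activation response (illustrative values only).
cat > ssm-activation.json <<'EOF'
{ "ActivationId": "e488f2f6-0000-0000-0000-EXAMPLE11111", "ActivationCode": "abcdEXAMPLEcode" }
EOF

# Pull the ID and code out of the response for use by the install script.
export ACTIVATION_ID=$(jq -r '.ActivationId' ssm-activation.json)
export ACTIVATION_CODE=$(jq -r '.ActivationCode' ssm-activation.json)
echo "activation id: $ACTIVATION_ID"
```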

### Add WorkSpaces to the ECS cluster
<a name="add-workspaces-to-the-ecs-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Connect to your WorkSpaces. | To connect to and set up your WorkSpaces, follow the instructions in the [AWS documentation](https://docs.aws.amazon.com/workspaces/latest/userguide/workspaces-user-getting-started.html). | App developer | 
| Download the ecs-anywhere install script. | At the command prompt, run the following command.<pre>curl -o "ecs-anywhere-install.sh" "https://amazon-ecs-agent-packages-preview.s3.us-east-1.amazonaws.com/ecs-anywhere-install.sh" && sudo chmod +x ecs-anywhere-install.sh</pre> | App developer | 
| Check the integrity of the shell script. | (Optional) Run the following command.<pre>curl -o "ecs-anywhere-install.sh.sha256" "https://amazon-ecs-agent-packages-preview.s3.us-east-1.amazonaws.com/ecs-anywhere-install.sh.sha256" && sha256sum -c ecs-anywhere-install.sh.sha256</pre> | App developer | 
| Add an EPEL repository on Amazon Linux. | To add an Extra Packages for Enterprise Linux (EPEL) repository, run the command `sudo amazon-linux-extras install epel -y`. | App developer | 
| Install Amazon ECS Anywhere. | To run the install script, use the following command.<pre>sudo ./ecs-anywhere-install.sh --cluster $CLUSTER_NAME --activation-id $ACTIVATION_ID --activation-code $ACTIVATION_CODE --region $AWS_REGION</pre> | App developer | 
| Check instance information from the ECS cluster. | To check the Systems Manager and ECS cluster instance information and validate that WorkSpaces were added on the cluster, run the following commands from your local machine.<pre>aws ssm describe-instance-information && aws ecs list-container-instances --cluster $CLUSTER_NAME</pre> | App developer | 
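The optional integrity check above relies on `sha256sum -c`, which reads a checksum file (digest plus filename) and recomputes the digest of the named file. A self-contained illustration with a stand-in file (filenames here are illustrative):

```shell
# Create a stand-in for the downloaded installer and record its digest.
printf 'example installer contents\n' > demo-install.sh
sha256sum demo-install.sh > demo-install.sh.sha256

# Verify the file against the recorded digest; prints "demo-install.sh: OK"
# and exits nonzero if the file was modified.
sha256sum -c demo-install.sh.sha256
```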

### Add an ECS task for the WorkSpaces
<a name="add-an-ecs-task-for-the-workspaces"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a task execution IAM role. | Download `task-execution-assume-role.json` and `external-task-definition.json` from the *Attachments* section. On your local machine, run the following command.<pre>aws iam --region $AWS_DEFAULT_REGION create-role --role-name $ECS_TASK_EXECUTION_ROLE --assume-role-policy-document file://task-execution-assume-role.json</pre> | Cloud architect | 
| Add the policy to the execution role. | Run the following command.<pre>aws iam --region $AWS_DEFAULT_REGION attach-role-policy --role-name $ECS_TASK_EXECUTION_ROLE --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy</pre> | Cloud architect | 
| Create a task role. | Run the following command.<pre>aws iam --region $AWS_DEFAULT_REGION create-role --role-name $ECS_TASK_ROLE --assume-role-policy-document file://task-execution-assume-role.json</pre> | Cloud architect | 
| Register the task definition to the cluster. | On your local machine, run the following command.<pre>aws ecs register-task-definition --cli-input-json file://external-task-definition.json</pre> | Cloud architect | 
| Run the task. | On your local machine, run the following command.<pre>aws ecs run-task --cluster $CLUSTER_NAME --launch-type EXTERNAL --task-definition nginx</pre> | Cloud architect | 
| Validate the task running state. | To fetch the task ID, run the following command.<pre>export TEST_TASKID=$(aws ecs list-tasks --cluster $CLUSTER_NAME | jq -r '.taskArns[0]')</pre>With the task ID, run the following command.<pre>aws ecs describe-tasks --cluster $CLUSTER_NAME --tasks ${TEST_TASKID}</pre> | Cloud architect | 
| Verify the task on the WorkSpace. | To check that NGINX is running on the WorkSpace, run the command `curl http://localhost:8080`. | App developer | 
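The validation step above pipes `describe-tasks` output through `jq`; the task state lives at `.tasks[0].lastStatus`. A sketch using a trimmed sample response (the ARN and values are placeholders):

```shell
# Trimmed sample of an `aws ecs describe-tasks` response, for illustration.
cat > describe-tasks-sample.json <<'EOF'
{ "tasks": [ { "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/example", "lastStatus": "RUNNING", "desiredStatus": "RUNNING" } ] }
EOF

# Extract the current task state; "RUNNING" means the task started cleanly.
TASK_STATUS=$(jq -r '.tasks[0].lastStatus' describe-tasks-sample.json)
echo "task status: $TASK_STATUS"
```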

## Related resources
<a name="run-amazon-ecs-tasks-on-amazon-workspaces-with-amazon-ecs-anywhere-resources"></a>
+ [ECS clusters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html)
+ [Setting up a hybrid environment](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html)
+ [Amazon WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html)
+ [Simple AD](https://docs.aws.amazon.com/workspaces/latest/adminguide/launch-workspace-simple-ad.html)

## Attachments
<a name="attachments-da8b2249-3423-485c-9fef-6f902025e969"></a>

To access additional content that is associated with this document, unzip the following file: [attachment.zip](samples/p-attach/da8b2249-3423-485c-9fef-6f902025e969/attachments/attachment.zip)

# Run an ASP.NET Core web API Docker container on an Amazon EC2 Linux instance
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance"></a>

*Vijai Anand Ramalingam and Sreelaxmi Pai, Amazon Web Services*

## Summary
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-summary"></a>

This pattern is for people who are starting to containerize their applications on the Amazon Web Services (AWS) Cloud. When you begin to containerize applications in the cloud, you usually don't have a container orchestration platform set up yet. This pattern helps you quickly set up infrastructure on AWS to test your containerized applications without needing an elaborate container orchestration infrastructure.

The first step in the modernization journey is to transform the application. If it's a legacy .NET Framework application, you must first change the runtime to ASP.NET Core. Then do the following:
+ Create the Docker container image
+ Run the Docker container using the built image
+ Validate the application before deploying it on any container orchestration platform, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). 

This pattern covers the build, run, and validate aspects of modern application development on an Amazon Elastic Compute Cloud (Amazon EC2) Linux instance.

## Prerequisites and limitations
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-prereqs"></a>

**Prerequisites**
+ An active [Amazon Web Services (AWS) account](https://aws.amazon.com/account/)
+ An [AWS Identity and Access Management (IAM) role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) with sufficient access to create AWS resources for this pattern 
+ [Visual Studio Community 2022](https://visualstudio.microsoft.com/downloads/) or later downloaded and installed
+ A .NET Framework project modernized to ASP.NET Core
+ A GitHub repository

**Product versions**
+ Visual Studio Community 2022 or later

## Architecture
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-architecture"></a>

**Target architecture**

This pattern uses an [AWS CloudFormation template](https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=SSM-SSH-Demo&templateURL=https://aws-quickstart.s3.amazonaws.com/quickstart-examples/samples/session-manager-ssh/session-manager-example.yaml) to create the highly available architecture shown in the following diagram. An Amazon EC2 Linux instance is launched in a private subnet. AWS Systems Manager Session Manager is used to access the private Amazon EC2 Linux instance and to test the API running in the Docker container.

![\[A user accessing the Amazon EC2 Linux instance and testing the API running in the Docker container.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/512e61b2-10ba-43be-bbd8-2bdc597c3de3/images/9c5206f6-32b1-47be-9037-360c0bff713c.png)


1. Access to the Linux instance through Session Manager

## Tools
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-tools"></a>

**AWS services**
+ [AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) – AWS Command Line Interface (AWS CLI) is an open source tool for interacting with AWS services through commands in your command line shell. With minimal configuration, you can run AWS CLI commands that implement functionality equivalent to that provided by the browser-based AWS Management Console.
+ [AWS Management Console](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/learn-whats-new.html) – The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console and offers a single place to access the information you need to perform your AWS related tasks.
+ [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) – Session Manager is a fully managed AWS Systems Manager capability. With Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances. Session Manager provides secure and auditable node management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys.

**Other tools**
+ [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) – Visual Studio 2022 is an integrated development environment (IDE).
+ [Docker](https://www.docker.com/) – Docker is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.

**Code**

```
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
 
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["DemoNetCoreWebAPI/DemoNetCoreWebAPI.csproj", "DemoNetCoreWebAPI/"]
RUN dotnet restore "DemoNetCoreWebAPI/DemoNetCoreWebAPI.csproj"
COPY . .
WORKDIR "/src/DemoNetCoreWebAPI"
RUN dotnet build "DemoNetCoreWebAPI.csproj" -c Release -o /app/build
 
FROM build AS publish
RUN dotnet publish "DemoNetCoreWebAPI.csproj" -c Release -o /app/publish
 
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "DemoNetCoreWebAPI.dll"]
```
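The Dockerfile above uses a multi-stage build: the `sdk` image restores, builds, and publishes the project, and only the published output is copied into the smaller `aspnet` runtime image. Typical build-and-run commands look like the following sketch. The image name and host port are illustrative, and the commands are gated behind `RUN_DOCKER=yes` so you run them deliberately from the directory that contains the Dockerfile:

```shell
# Illustrative image name; adjust to your project.
IMAGE=demo-netcore-webapi

# Set RUN_DOCKER=yes to actually build and run the container.
if [ "${RUN_DOCKER:-no}" = "yes" ]; then
  docker build -t "$IMAGE" .                     # builds all stages, keeps "final"
  docker run -d --name "$IMAGE" -p 8080:80 "$IMAGE"
  curl "http://localhost:8080/WeatherForecast"   # hypothetical default endpoint
fi
echo "image: $IMAGE"
```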

## Epics
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-epics"></a>

### Develop the ASP.NET Core web API
<a name="develop-the-asp-net-core-web-api"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an example ASP.NET Core web API using Visual Studio. | To create an example ASP.NET Core web API, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer | 
| Create a Dockerfile. | To create a Dockerfile, do one of the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html)To push the changes to your GitHub repository, run the following command.<pre>git add --all<br />git commit -m "Dockerfile added"<br />git push</pre> | App developer | 

### Set up the Amazon EC2 Linux instance
<a name="set-up-the-amazon-ec2-linux-instance"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Set up the infrastructure. | Launch the [AWS CloudFormation template](https://console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/new?stackName=SSM-SSH-Demo&templateURL=https://aws-quickstart.s3.amazonaws.com/quickstart-examples/samples/session-manager-ssh/session-manager-example.yaml) to create the infrastructure, which includes the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html)To learn more about accessing a private Amazon EC2 instance using Session Manager without requiring a bastion host, see the [Toward a bastion-less world](https://aws.amazon.com/blogs/infrastructure-and-automation/toward-a-bastion-less-world/) blog post. | App developer, AWS administrator, AWS DevOps | 
| Log in to the Amazon EC2 Linux instance. | To connect to the Amazon EC2 Linux instance in the private subnet, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer | 
| Install and start Docker. | To install and start Docker in the Amazon EC2 Linux instance, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer, AWS administrator, AWS DevOps | 
| Install Git and clone the repository. | To install Git on the Amazon EC2 Linux instance and clone the repository from GitHub, do the following.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer, AWS administrator, AWS DevOps | 
| Build and run the Docker container. | To build the Docker image and run the container inside the Amazon EC2 Linux instance, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance.html) | App developer, AWS administrator, AWS DevOps | 

### Test the web API
<a name="test-the-web-api"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Test the web API using the curl command. | To test the web API, run the following command.<pre>curl -X GET "http://localhost/WeatherForecast" -H  "accept: text/plain"</pre>Verify the API response. You can get the curl commands for each endpoint from Swagger when you are running it locally. | App developer | 

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Delete all resources. | Delete the stack to remove all the resources. This ensures that you aren’t charged for any services that you aren’t using. | AWS administrator, AWS DevOps | 

## Related resources
<a name="run-an-asp-net-core-web-api-docker-container-on-an-amazon-ec2-linux-instance-resources"></a>
+ [Connect to your Linux instance from Windows using PuTTY](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html)
+ [Create a web API with ASP.NET Core](https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-web-api?view=aspnetcore-5.0&tabs=visual-studio)
+ [Toward a bastion-less world](https://aws.amazon.com/blogs/infrastructure-and-automation/toward-a-bastion-less-world/)

# Run stateful workloads with persistent data storage by using Amazon EFS on Amazon EKS with AWS Fargate
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate"></a>

*Ricardo Morais, Rodrigo Bersa, and Lucio Pereira, Amazon Web Services*

## Summary
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-summary"></a>

This pattern provides guidance for enabling Amazon Elastic File System (Amazon EFS) as a storage device for containers that are running on Amazon Elastic Kubernetes Service (Amazon EKS) by using AWS Fargate to provision your compute resources.

The setup described in this pattern follows security best practices and provides security at rest and security in transit by default. To encrypt your Amazon EFS file system, it uses an AWS Key Management Service (AWS KMS) key; you can also specify a key alias, which triggers the creation of a new KMS key for that alias.

You can follow the steps in this pattern to create a namespace and Fargate profile for a proof-of-concept (PoC) application, install the Amazon EFS Container Storage Interface (CSI) driver that integrates the Kubernetes cluster with Amazon EFS, configure the storage class, and deploy the PoC application. These steps result in an Amazon EFS file system that is shared among multiple Kubernetes workloads running on Fargate. The pattern is accompanied by scripts that automate these steps.

You can use this pattern if you want data persistence in your containerized applications and want to avoid data loss during scaling operations. For example:
+ **DevOps tools** – A common scenario is to develop a continuous integration and continuous delivery (CI/CD) strategy. In this case, you can use Amazon EFS as a shared file system to store configurations among different instances of the CI/CD tool or to store a cache (for example, an Apache Maven repository) for pipeline stages among different instances of the CI/CD tool.
+ **Web servers** – A common scenario is to use Apache as an HTTP web server. You can use Amazon EFS as a shared file system to store static files that are shared among different instances of the web server. In this example scenario, modifications are applied directly to the file system instead of static files being baked into a Docker image.

## Prerequisites and limitations
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-prereqs"></a>

**Prerequisites**
+ An active AWS account
+ An existing Amazon EKS cluster with Kubernetes version 1.17 or later (tested up to version 1.27)
+ An existing Amazon EFS file system to bind a Kubernetes StorageClass and provision file systems dynamically
+ Cluster administration permissions
+ Context configured to point to the desired Amazon EKS cluster

**Limitations**
+ There are some limitations to consider when you’re using Amazon EKS with Fargate. For example, some Kubernetes constructs, such as DaemonSets and privileged containers, aren’t supported. For more information about Fargate limitations, see [AWS Fargate considerations](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html#fargate-considerations) in the Amazon EKS documentation.
+ The code provided with this pattern supports workstations that are running Linux or macOS.

**Product versions**
+ AWS Command Line Interface (AWS CLI) version 2 or later
+ Amazon EFS CSI driver version 1.0 or later (tested up to version 2.4.8)
+ eksctl version 0.24.0 or later (tested up to version 0.158.0)
+ jq version 1.6 or later
+ kubectl version 1.17 or later (tested up to version 1.27)
+ Kubernetes version 1.17 or later (tested up to version 1.27)

## Architecture
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-architecture"></a>

![\[Architecture diagram of running stateful workloads with persistent data storage by using Amazon EFS\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/2487e285-269b-415b-a270-877f973e3aaf/images/ec8de63c-3307-4010-9e03-2bd7b9881fff.png)


The target architecture consists of the following infrastructure:
+ A virtual private cloud (VPC)
+ Two Availability Zones
+ A public subnet with a NAT gateway that provides internet access
+ A private subnet with an Amazon EKS cluster and Amazon EFS mount targets (also known as *mount points*)
+ Amazon EFS at the VPC level

The following is the environment infrastructure for the Amazon EKS cluster:
+ AWS Fargate profiles that accommodate the Kubernetes constructs at the namespace level
+ A Kubernetes namespace with:
  + Two application pods distributed across Availability Zones
  + One persistent volume claim (PVC) bound to a persistent volume (PV) at the cluster level
+ A cluster-wide PV that is bound to the PVC in the namespace and that points to the Amazon EFS mount targets in the private subnet, outside of the cluster

## Tools
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-tools"></a>

**AWS services**
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that you can use to interact with AWS services from the command line.
+ [Amazon Elastic File System (Amazon EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) helps you create and configure shared file systems in the AWS Cloud. In this pattern, it provides a simple, scalable, fully managed, and shared file system for use with Amazon EKS.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or operate your own clusters.
+ [AWS Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) is a serverless compute engine for Amazon EKS. It creates and manages compute resources for your Kubernetes applications.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) helps you create and control cryptographic keys to help protect your data.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) is a command-line utility for creating and managing Kubernetes clusters on Amazon EKS.
+ [kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) is a command-line interface that helps you run commands against Kubernetes clusters.
+ [jq](https://stedolan.github.io/jq/download/) is a command-line tool for parsing JSON.

**Code**

The code for this pattern is provided in the GitHub [Persistence Configuration with Amazon EFS on Amazon EKS using AWS Fargate](https://github.com/aws-samples/eks-efs-share-within-fargate) repo. The scripts are organized by epic, in the folders `epic01` through `epic06`, corresponding to the order in the [Epics](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-epics) section in this pattern.

## Best practices
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-best-practices"></a>

The target architecture includes the following services and components, and it follows [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) best practices:
+ Amazon EFS, which provides a simple, scalable, fully managed elastic NFS file system. This is used as a shared file system among all replications of the PoC application that are running in pods, which are distributed in the private subnets of the chosen Amazon EKS cluster.
+ An Amazon EFS mount target for each private subnet. This provides redundancy per Availability Zone within the virtual private cloud (VPC) of the cluster.
+ Amazon EKS, which runs the Kubernetes workloads. You must provision an Amazon EKS cluster before you use this pattern, as described in the [Prerequisites](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-prereqs) section.
+ AWS KMS, which provides encryption at rest for the content that’s stored in the Amazon EFS file system.
+ Fargate, which manages the compute resources for the containers so that you can focus on business requirements instead of infrastructure burden. The Fargate profile is created for all private subnets. It provides redundancy per Availability Zone within the virtual private cloud (VPC) of the cluster.
+ Kubernetes Pods, for validating that content can be shared, consumed, and written by different instances of an application.

## Epics
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-epics"></a>

### Provision an Amazon EKS cluster (optional)
<a name="provision-an-amazon-eks-cluster-optional"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EKS cluster. | If you already have a cluster deployed, skip to the next epic. Create an Amazon EKS cluster in your existing AWS account. In the [GitHub directory](https://github.com/aws-samples/eks-efs-share-within-fargate/tree/master/bootstrap), use one of the patterns to deploy an Amazon EKS cluster by using Terraform or eksctl. For more information, see [Creating an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) in the Amazon EKS documentation. The Terraform pattern also includes examples that show how to link Fargate profiles to your Amazon EKS cluster, create an Amazon EFS file system, and deploy the Amazon EFS CSI driver in your Amazon EKS cluster. | AWS administrator, Terraform or eksctl administrator, Kubernetes administrator | 
| Export environment variables. | Run the `env.sh` script. This provides the information required in the next steps.<pre>source ./scripts/env.sh<br />Inform the AWS Account ID:<br /><12-digit-account-id><br />Inform your AWS Region:<br /><aws-Region-code><br />Inform your Amazon EKS Cluster Name:<br /><amazon-eks-cluster-name><br />Inform the Amazon EFS Creation Token:<br /><self-generated-uuid></pre>If you haven't noted this information yet, you can get it with the following CLI commands.<pre># ACCOUNT ID<br />aws sts get-caller-identity --query "Account" --output text</pre><pre># REGION CODE<br />aws configure get region</pre><pre># CLUSTER EKS NAME<br />aws eks list-clusters --query "clusters" --output text</pre><pre># GENERATE EFS TOKEN<br />uuidgen</pre> | AWS systems administrator | 

### Create a Kubernetes namespace and a linked Fargate profile
<a name="create-a-kubernetes-namespace-and-a-linked-fargate-profile"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a Kubernetes namespace and Fargate profile for application workloads. | Create a namespace for receiving the application workloads that interact with Amazon EFS. Run the `create-k8s-ns-and-linked-fargate-profile.sh` script. You can choose to use a custom namespace name or the default provided namespace `poc-efs-eks-fargate`.**With a custom application namespace name:**<pre>export APP_NAMESPACE=<CUSTOM_NAME><br />./scripts/epic01/create-k8s-ns-and-linked-fargate-profile.sh \<br />-c "$CLUSTER_NAME" -n "$APP_NAMESPACE"</pre>**Without a custom application namespace name:**<pre>./scripts/epic01/create-k8s-ns-and-linked-fargate-profile.sh \<br />    -c "$CLUSTER_NAME"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster. The `-n <NAMESPACE>` parameter is optional; if it is not provided, the default namespace name `poc-efs-eks-fargate` is used. | Kubernetes user with granted permissions | 

### Create an Amazon EFS file system
<a name="create-an-amazon-efs-file-system"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Generate a unique token. | Amazon EFS requires a creation token to ensure idempotent operation (calling the operation with the same creation token has no effect). To meet this requirement, you must generate a unique token through an available technique. For example, you can generate a universally unique identifier (UUID) to use as a creation token. | AWS systems administrator | 
| Create an Amazon EFS file system. | Create the file system for receiving the data files that are read and written by the application workloads. You can create an encrypted or non-encrypted file system. (As a best practice, the code for this pattern creates an encrypted system to enable encryption at rest by default.) You can use a unique, symmetric AWS KMS key to encrypt your file system. If a custom key is not specified, an AWS managed key is used. After you generate a unique creation token for Amazon EFS, use the `create-efs.sh` script to create an encrypted or non-encrypted file system.**With encryption at rest, without a KMS key:**<pre>./scripts/epic02/create-efs.sh \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster and `$EFS_CREATION_TOKEN` is a unique creation token for the file system.**With encryption at rest, with a KMS key:**<pre>./scripts/epic02/create-efs.sh \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN" \<br />    -k "$KMS_KEY_ALIAS"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster, `$EFS_CREATION_TOKEN` is a unique creation token for the file system, and `$KMS_KEY_ALIAS` is the alias for the KMS key.**Without encryption:**<pre>./scripts/epic02/create-efs.sh -d \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster, `$EFS_CREATION_TOKEN` is a unique creation token for the file system, and `-d` disables encryption at rest. | AWS systems administrator | 
| Create a security group. | Create a security group to allow the Amazon EKS cluster to access the Amazon EFS file system. | AWS systems administrator | 
| Update the inbound rule for the security group. | Update the inbound rules of the security group to allow incoming traffic for the following settings:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate.html) | AWS systems administrator | 
| Add a mount target for each private subnet. | For each private subnet of the Kubernetes cluster, create a mount target for the file system and the security group. | AWS systems administrator | 
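The token-generation task above can be sketched in shell. The `uuidgen`-to-Python fallback is an assumption for portability, and the `aws efs create-file-system` call (shown as a comment because it creates billable resources) illustrates the kind of call that the pattern's `create-efs.sh` script makes; the script's actual flags may differ.

```shell
#!/usr/bin/env bash
# Generate a unique creation token for Amazon EFS (the idempotency key).
# Falls back to Python if uuidgen is not installed.
EFS_CREATION_TOKEN=$(uuidgen 2>/dev/null || python3 -c 'import uuid; print(uuid.uuid4())')
echo "EFS_CREATION_TOKEN=$EFS_CREATION_TOKEN"

# create-efs.sh wraps a call similar to the following (commented out here
# because it creates billable AWS resources):
# aws efs create-file-system \
#     --creation-token "$EFS_CREATION_TOKEN" \
#     --encrypted \
#     --tags "Key=Name,Value=$CLUSTER_NAME-efs"
```

Calling the create operation again with the same token returns the existing file system instead of creating a duplicate, which is why the token must stay stable across retries.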

### Install Amazon EFS components into the Kubernetes cluster
<a name="install-amazon-efs-components-into-the-kubernetes-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Amazon EFS CSI driver. | Deploy the Amazon EFS CSI driver into the cluster. The driver provisions storage according to persistent volume claims created by applications. Run the `create-k8s-efs-csi-sc.sh` script to deploy the Amazon EFS CSI driver and the storage class into the cluster.<pre>./scripts/epic03/create-k8s-efs-csi-sc.sh</pre>This script uses the `kubectl` utility, so make sure that your kubectl context has been configured and points to the desired Amazon EKS cluster. | Kubernetes user with granted permissions | 
| Deploy the storage class. | Deploy the storage class into the cluster for the Amazon EFS provisioner (efs.csi.aws.com). | Kubernetes user with granted permissions | 
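The storage class for the Amazon EFS provisioner can be sketched as a small manifest. This is a hypothetical reconstruction of what `create-k8s-efs-csi-sc.sh` deploys; the name `efs-sc` is taken from the example outputs in the Additional information section.

```shell
# Write a StorageClass manifest for the Amazon EFS CSI provisioner.
# The name efs-sc matches the example outputs shown later in this pattern.
cat > efs-storage-class.yaml <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
EOF

# Apply against the cluster that your kubectl context points to:
# kubectl apply -f efs-storage-class.yaml
```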

### Install the PoC application into the Kubernetes cluster
<a name="install-the-poc-application-into-the-kubernetes-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the persistent volume. | Deploy the persistent volume, and link it to the created storage class and to the ID of the Amazon EFS file system. The application uses the persistent volume to read and write content. You can specify any size for the persistent volume in the storage field. Kubernetes requires this field, but because Amazon EFS is an elastic file system, it does not enforce any file system capacity. You can deploy the persistent volume with or without encryption. (The Amazon EFS CSI driver enables encryption by default, as a best practice.) Run the `deploy-poc-app.sh` script to deploy the persistent volume, the persistent volume claim, and the two workloads.**With encryption in transit:**<pre>./scripts/epic04/deploy-poc-app.sh \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$EFS_CREATION_TOKEN` is the unique creation token for the file system.**Without encryption in transit:**<pre>./scripts/epic04/deploy-poc-app.sh -d \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$EFS_CREATION_TOKEN` is the unique creation token for the file system, and `-d` disables encryption in transit. | Kubernetes user with granted permissions | 
| Deploy the persistent volume claim requested by the application. | Deploy the persistent volume claim requested by the application, and link it to the storage class. Use the same access mode as the persistent volume you created previously. You can specify any size for the persistent volume claim in the storage field. Kubernetes requires this field, but because Amazon EFS is an elastic file system, it does not enforce any file system capacity. | Kubernetes user with granted permissions | 
| Deploy workload 1. | Deploy the pod that represents workload 1 of the application. This workload writes content to the file `/data/out1.txt`. | Kubernetes user with granted permissions | 
| Deploy workload 2. | Deploy the pod that represents workload 2 of the application. This workload writes content to the file `/data/out2.txt`. | Kubernetes user with granted permissions | 
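The persistent volume and claim described above can be sketched as follows. This is a hedged reconstruction, not the repository's exact manifests: the names `poc-app-pv`, `poc-app-pvc`, and the `poc-efs-eks-fargate` namespace come from the example outputs in this pattern, and `FILE_SYSTEM_ID` is a placeholder for the ID returned when you created the file system.

```shell
# Sketch of the persistent volume and claim that deploy-poc-app.sh creates.
# FILE_SYSTEM_ID is a placeholder; substitute your actual file system ID.
FILE_SYSTEM_ID="${FILE_SYSTEM_ID:-fs-12345678}"
cat > poc-app-pv.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: poc-app-pv
spec:
  capacity:
    storage: 1Mi            # required by Kubernetes; not enforced by Amazon EFS
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: $FILE_SYSTEM_ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: poc-app-pvc
  namespace: poc-efs-eks-fargate
spec:
  accessModes:
    - ReadWriteMany           # same access mode as the persistent volume
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Mi
EOF
# kubectl apply -f poc-app-pv.yaml
```

`ReadWriteMany` is what lets both workloads mount the same file system concurrently, which the validation epic below exercises.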

### Validate file system persistence, durability, and shareability
<a name="validate-file-system-persistence-durability-and-shareability"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Check the status of the `PersistentVolume`. | Enter the following command to check the status of the `PersistentVolume`.<pre>kubectl get pv</pre>For an example output, see the [Additional information](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-additional) section. | Kubernetes user with granted permissions | 
| Check the status of the `PersistentVolumeClaim`. | Enter the following command to check the status of the `PersistentVolumeClaim`.<pre>kubectl -n poc-efs-eks-fargate get pvc</pre>For an example output, see the [Additional information](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-additional) section. | Kubernetes user with granted permissions | 
| Validate that workload 1 can write to the file system. | Enter the following command to validate that workload 1 is writing to `/data/out1.txt`.<pre>kubectl exec -ti poc-app1 -n poc-efs-eks-fargate -- tail -f /data/out1.txt</pre>The results are similar to the following:<pre>...<br />Thu Sep  3 15:25:07 UTC 2023 - PoC APP 1<br />Thu Sep  3 15:25:12 UTC 2023 - PoC APP 1<br />Thu Sep  3 15:25:17 UTC 2023 - PoC APP 1<br />...</pre> | Kubernetes user with granted permissions | 
| Validate that workload 2 can write to the file system. | Enter the following command to validate that workload 2 is writing to `/data/out2.txt`.<pre>kubectl -n $APP_NAMESPACE exec -ti poc-app2 -- tail -f /data/out2.txt</pre>The results are similar to the following:<pre>...<br />Thu Sep  3 15:26:48 UTC 2023 - PoC APP 2<br />Thu Sep  3 15:26:53 UTC 2023 - PoC APP 2<br />Thu Sep  3 15:26:58 UTC 2023 - PoC APP 2<br />...</pre> | Kubernetes user with granted permissions | 
| Validate that workload 1 can read the file written by workload 2. | Enter the following command to validate that workload 1 can read the `/data/out2.txt` file written by workload 2.<pre>kubectl exec -ti poc-app1 -n poc-efs-eks-fargate -- tail -n 3 /data/out2.txt</pre>The results are similar to the following:<pre>...<br />Thu Sep  3 15:26:48 UTC 2023 - PoC APP 2<br />Thu Sep  3 15:26:53 UTC 2023 - PoC APP 2<br />Thu Sep  3 15:26:58 UTC 2023 - PoC APP 2<br />...</pre> | Kubernetes user with granted permissions | 
| Validate that workload 2 can read the file written by workload 1. | Enter the following command to validate that workload 2 can read the `/data/out1.txt` file written by workload 1.<pre>kubectl -n $APP_NAMESPACE exec -ti poc-app2 -- tail -n 3 /data/out1.txt</pre>The results are similar to the following:<pre>...<br />Thu Sep  3 15:29:22 UTC 2023 - PoC APP 1<br />Thu Sep  3 15:29:27 UTC 2023 - PoC APP 1<br />Thu Sep  3 15:29:32 UTC 2023 - PoC APP 1<br />...</pre> | Kubernetes user with granted permissions | 
| Validate that files are retained after you remove application components. | Next, you use a script to remove the application components (persistent volume, persistent volume claim, and pods), and validate that the files `/data/out1.txt` and `/data/out2.txt` are retained in the file system. Run the `validate-efs-content.sh` script by using the following command.<pre>./scripts/epic05/validate-efs-content.sh \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$EFS_CREATION_TOKEN` is the unique creation token for the file system. The results are similar to the following:<pre>pod/poc-app-validation created<br />Waiting for pod get Running state...<br />Waiting for pod get Running state...<br />Waiting for pod get Running state...<br />Results from execution of 'find /data' on validation process pod:<br />/data<br />/data/out2.txt<br />/data/out1.txt</pre> | Kubernetes user with granted permissions, System administrator | 

### Monitor operations
<a name="monitor-operations"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Monitor application logs. | As part of a day-two operation, ship the application logs to Amazon CloudWatch for monitoring. | AWS systems administrator, Kubernetes user with granted permissions | 
| Monitor Amazon EKS and Kubernetes containers with Container Insights. | As part of a day-two operation, monitor the Amazon EKS and Kubernetes systems by using Amazon CloudWatch Container Insights. This tool collects, aggregates, and summarizes metrics from containerized applications at different levels and dimensions. For more information, see the [Related resources](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-resources) section. | AWS systems administrator, Kubernetes user with granted permissions | 
| Monitor Amazon EFS with CloudWatch. | As part of a day-two operation, monitor the file systems using Amazon CloudWatch, which collects and processes raw data from Amazon EFS into readable, near real-time metrics. For more information, see the [Related resources](#run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-resources) section. | AWS systems administrator | 
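A day-two check on Amazon EFS metrics can be sketched with the AWS CLI. `ClientConnections` is a standard Amazon EFS CloudWatch metric; the file system ID argument is a placeholder, and the script assumes GNU `date` (adjust the `-d` flag on macOS).

```shell
# Sketch of a CloudWatch query for Amazon EFS ClientConnections over the
# past hour; pass your file system ID as the first argument.
cat > monitor-efs.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
FILE_SYSTEM_ID="$1"
aws cloudwatch get-metric-statistics \
    --namespace AWS/EFS \
    --metric-name ClientConnections \
    --dimensions Name=FileSystemId,Value="$FILE_SYSTEM_ID" \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Sum
EOF
chmod +x monitor-efs.sh
# Example: ./monitor-efs.sh fs-12345678
```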

### Clean up resources
<a name="clean-up-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clean up all created resources for the pattern. | After you complete this pattern, clean up all resources, to avoid incurring AWS charges. Run the `clean-up-resources.sh` script to remove all resources after you have finished using the PoC application. Complete one of the following options.**With encryption at rest, with a KMS key:**<pre>./scripts/epic06/clean-up-resources.sh \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN" \<br />    -k "$KMS_KEY_ALIAS"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster, `$EFS_CREATION_TOKEN` is the creation token for the file system, and `$KMS_KEY_ALIAS` is the alias for the KMS key.**Without encryption at rest:**<pre>./scripts/epic06/clean-up-resources.sh \<br />    -c "$CLUSTER_NAME" \<br />    -t "$EFS_CREATION_TOKEN"</pre>where `$CLUSTER_NAME` is the name of your Amazon EKS cluster and `$EFS_CREATION_TOKEN` is the creation token for the file system. | Kubernetes user with granted permissions, System administrator | 

## Related resources
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-resources"></a>

**References**
+ [AWS Fargate for Amazon EKS now supports Amazon EFS](https://aws.amazon.com/blogs/aws/new-aws-fargate-for-amazon-eks-now-supports-amazon-efs/) (announcement)
+ [How to capture application logs when using Amazon EKS on AWS Fargate](https://aws.amazon.com/blogs/containers/how-to-capture-application-logs-when-using-amazon-eks-on-aws-fargate/) (blog post)
+ [Using Container Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html) (Amazon CloudWatch documentation)
+ [Setting Up Container Insights on Amazon EKS and Kubernetes](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-EKS.html) (Amazon CloudWatch documentation)
+ [Amazon EKS and Kubernetes Container Insights metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-EKS.html) (Amazon CloudWatch documentation)
+ [Monitoring Amazon EFS with Amazon CloudWatch](https://docs.aws.amazon.com/efs/latest/ug/monitoring-cloudwatch.html) (Amazon EFS documentation)

**GitHub tutorials and examples**
+ [Static provisioning](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/static_provisioning/README.md)
+ [Encryption in transit](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/encryption_in_transit/README.md)
+ [Accessing the file system from multiple pods](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/multiple_pods/README.md)
+ [Consuming Amazon EFS in StatefulSets](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/statefulset/README.md)
+ [Mounting subpaths](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/volume_path/README.md)
+ [Using Amazon EFS access points](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/access_points/README.md)
+ [Amazon EKS Blueprints for Terraform](https://aws-ia.github.io/terraform-aws-eks-blueprints/)

**Required tools**
+ [Installing the AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
+ [Installing eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html)
+ [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html)
+ [Installing jq](https://stedolan.github.io/jq/download/)

## Additional information
<a name="run-stateful-workloads-with-persistent-data-storage-by-using-amazon-efs-on-amazon-eks-with-aws-fargate-additional"></a>

The following is an example output of the `kubectl get pv` command.

```
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
poc-app-pv   1Mi        RWX            Retain           Bound    poc-efs-eks-fargate/poc-app-pvc   efs-sc                  3m56s
```

The following is an example output of the `kubectl -n poc-efs-eks-fargate get pvc` command.

```
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
poc-app-pvc   Bound    poc-app-pv   1Mi        RWX            efs-sc         4m34s
```

# Set up event-driven auto scaling in Amazon EKS by using Amazon EKS Pod Identity and KEDA
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda"></a>

*Dipen Desai, Abhay Diwan, Kamal Joshi, and Mahendra Revanasiddappa, Amazon Web Services*

## Summary
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-summary"></a>

Orchestration platforms, such as [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html), have streamlined the lifecycle management of container-based applications. This helps organizations focus on building, securing, operating, and maintaining container-based applications. As event-driven deployments become more common, organizations are more frequently scaling Kubernetes deployments based on various event sources. This method, combined with auto scaling, can result in significant cost savings by providing on-demand compute resources and efficient scaling that is tailored to application logic.

[KEDA](https://keda.sh/) is a Kubernetes-based event-driven autoscaler. KEDA helps you scale any container in Kubernetes based on the number of events that need to be processed. It is lightweight and integrates with any Kubernetes cluster. It also works with standard Kubernetes components, such as [Horizontal Pod Autoscaling (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). KEDA also offers [TriggerAuthentication](https://keda.sh/docs/2.14/concepts/authentication/#re-use-credentials-and-delegate-auth-with-triggerauthentication), which is a feature that helps you delegate authentication. It allows you to describe authentication parameters that are separate from the ScaledObject and the deployment containers.

AWS provides AWS Identity and Access Management (IAM) roles that support diverse Kubernetes deployment options, including Amazon EKS, Amazon EKS Anywhere, Red Hat OpenShift Service on AWS (ROSA), and self-managed Kubernetes clusters on Amazon Elastic Compute Cloud (Amazon EC2). These roles use IAM constructs, such as OpenID Connect (OIDC) identity providers and IAM trust policies, to operate across different environments without relying directly on Amazon EKS services or APIs. For more information, see [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) in the Amazon EKS documentation.

[Amazon EKS Pod Identity](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html) simplifies the process for Kubernetes service accounts to assume IAM roles without requiring OIDC providers. It provides the ability to manage credentials for your applications. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance’s role, you associate an IAM role with a Kubernetes service account and configure your Pods to use the service account. This helps you use an IAM role across multiple clusters and simplifies policy management by enabling the reuse of permission policies across IAM roles.

By implementing KEDA with Amazon EKS Pod Identity, businesses can achieve efficient event-driven auto scaling and simplified credential management. Applications scale based on demand, which optimizes resource utilization and reduces costs.

This pattern helps you integrate Amazon EKS Pod Identity with KEDA. It showcases how you can use the `keda-operator` service account and delegate authentication with `TriggerAuthentication`. It also describes how to set up a trust relationship between an IAM role for the KEDA operator and an IAM role for the application. This trust relationship allows KEDA to monitor messages in the event queues and adjust scaling for the destination Kubernetes objects.

## Prerequisites and limitations
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-prereqs"></a>

**Prerequisites**
+ AWS Command Line Interface (AWS CLI) version 2.13.17 or later, [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
+ Python version 3.11.5 or later, [installed](https://www.python.org/downloads/)
+ AWS SDK for Python (Boto3) version 1.34.135 or later, [installed](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html)
+ Helm version 3.12.3 or later, [installed](https://helm.sh/docs/intro/install/)
+ kubectl version 1.25.1 or later, [installed](https://kubernetes.io/docs/tasks/tools/)
+ Docker Engine version 26.1.1 or later, [installed](https://docs.docker.com/engine/install/)
+ An Amazon EKS cluster version 1.24 or later, [created](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
+ Prerequisites for creating the Amazon EKS Pod Identity agent, [met](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html#pod-id-agent-add-on-create)

**Limitations**
+ You must establish a trust relationship between the `keda-operator` role and the `keda-identity` role. Instructions are provided in the [Epics](#event-driven-auto-scaling-with-eks-pod-identity-and-keda-epics) section of this pattern.

## Architecture
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-architecture"></a>

In this pattern, you create the following AWS resources:
+ **Amazon Elastic Container Registry (Amazon ECR) repository** – In this pattern, this repository is named `keda-pod-identity-registry`. This private repository is used to store Docker images of the sample application.
+ **Amazon Simple Queue Service (Amazon SQS) queue** – In this pattern, this queue is named `event-messages-queue`. The queue acts as a message buffer that collects and stores incoming messages. KEDA monitors the queue metrics, such as message count or queue length, and it automatically scales the application based on these metrics.
+ **IAM role for the application** – In this pattern, this role is named `keda-identity`. The `keda-operator` role assumes this role. This role allows access to the Amazon SQS queue.
+ **IAM role for the KEDA operator** – In this pattern, this role is named `keda-operator`. The KEDA operator uses this role to make the required AWS API calls. This role has permissions to assume the `keda-identity` role. Because of the trust relationship between the `keda-operator` and the `keda-identity` roles, the `keda-operator` role has Amazon SQS permissions.
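The trust relationship between the two roles can be sketched as a pair of IAM trust policies. This is a hedged illustration, not the repository's exact policies: `ACCOUNT_ID` is a placeholder, and the `pods.eks.amazonaws.com` principal is the service principal that Amazon EKS Pod Identity uses to assume roles.

```shell
# keda-operator is assumed via EKS Pod Identity, so it trusts the
# pods.eks.amazonaws.com service principal.
cat > keda-operator-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "pods.eks.amazonaws.com" },
    "Action": ["sts:AssumeRole", "sts:TagSession"]
  }]
}
EOF

# keda-identity trusts the keda-operator role, which is the trust
# relationship between the two roles described above.
cat > keda-identity-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::ACCOUNT_ID:role/keda-operator" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

# aws iam create-role --role-name keda-operator --assume-role-policy-document file://keda-operator-trust.json
# aws iam create-role --role-name keda-identity --assume-role-policy-document file://keda-identity-trust.json
```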

Through the `TriggerAuthentication` and `ScaledObject` Kubernetes custom resources, the operator uses the `keda-identity` role to connect with an Amazon SQS queue. Based on the queue size, KEDA automatically scales the application deployment. It adds 1 pod for every 5 unread messages in the queue. In the default configuration, if there are no unread messages in the Amazon SQS queue, the application scales down to 0 pods. The KEDA operator monitors the queue at an interval that you specify.
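The scaling behavior above (1 pod per 5 unread messages, scale to 0) can be sketched as `TriggerAuthentication` and `ScaledObject` manifests. This is an assumption-laden sketch rather than the repository's exact files: `ACCOUNT_ID`, the Region, and the `sample-app` deployment name are placeholders; `provider: aws` with `roleArn` is KEDA's pod-identity delegation for AWS.

```shell
# Sketch of the TriggerAuthentication and ScaledObject custom resources.
cat > keda-scaling.yaml <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth
  namespace: security
spec:
  podIdentity:
    provider: aws                # delegate auth to the keda-operator identity
    roleArn: arn:aws:iam::ACCOUNT_ID:role/keda-identity
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sqs-scaledobject
  namespace: security
spec:
  scaleTargetRef:
    name: sample-app             # hypothetical deployment name
  minReplicaCount: 0             # scale to zero when the queue is empty
  pollingInterval: 30            # queue-check interval, in seconds
  triggers:
    - type: aws-sqs-queue
      authenticationRef:
        name: keda-trigger-auth
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/ACCOUNT_ID/event-messages-queue
        queueLength: "5"         # target of 5 unread messages per pod
        awsRegion: us-east-1
EOF
# kubectl apply -f keda-scaling.yaml
```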

 

The following image shows how you use Amazon EKS Pod Identity to provide the `keda-operator` role with secure access to the Amazon SQS queue.

![\[Using KEDA and Amazon EKS Pod Identity to automatically scale a Kubernetes-based application.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/56f7506d-e8d3-43e5-bec6-42267fedd0ae/images/05bdbd09-9eb8-4c0b-8c0d-efe38aecb683.png)


The diagram shows the following workflow:

1. You install the Amazon EKS Pod Identity agent in the Amazon EKS cluster.

1. You deploy the KEDA operator in the `keda` namespace in the Amazon EKS cluster.

1. You create the `keda-operator` and `keda-identity` IAM roles in the target AWS account.

1. You establish a trust relationship between the IAM roles.

1. You deploy the application in the `security` namespace.

1. The KEDA operator polls messages in an Amazon SQS queue.

1. KEDA initiates HPA, which automatically scales the application based on the queue size.

## Tools
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-tools"></a>

**AWS services**
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.
+ [AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) helps you securely manage access to your AWS resources by controlling who is authenticated and authorized to use them.
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) provides a secure, durable, and available hosted queue that helps you integrate and decouple distributed software systems and components.

**Other tools**
+ [KEDA](https://keda.sh/) is a Kubernetes-based event-driven autoscaler.

**Code repository**

The code for this pattern is available in the GitHub [Event-driven auto scaling using EKS Pod Identity and KEDA](https://github.com/aws-samples/event-driven-autoscaling-using-podidentity-and-keda/tree/main) repository.

## Best practices
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-best-practices"></a>

We recommend that you adhere to the following best practices:
+ [Amazon EKS best practices](https://docs.aws.amazon.com/eks/latest/best-practices/introduction.html)
+ [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
+ [Amazon SQS best practices](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-best-practices.html)

## Epics
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-epics"></a>

### Create AWS resources
<a name="create-aws-resources"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create the IAM role for the KEDA operator. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | AWS administrator | 
| Create the IAM role for the sample application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | AWS administrator | 
| Create an Amazon SQS queue. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | General AWS | 
| Create an Amazon ECR repository. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | General AWS | 

### Set up the Amazon EKS cluster
<a name="set-up-the-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Deploy the Amazon EKS Pod Identity agent. | For the target Amazon EKS cluster, set up the Amazon EKS Pod Identity agent. Follow the instructions in [Set up the Amazon EKS Pod Identity Agent](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html#pod-id-agent-add-on-create) in the Amazon EKS documentation. | AWS DevOps | 
| Deploy KEDA. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Assign the IAM role to the Kubernetes service account. | Follow the instructions in [Assign an IAM role to a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-association.html) in the Amazon EKS documentation. Use the following values:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | AWS DevOps | 
| Create a namespace. | Enter the following command to create a `security` namespace in the target Amazon EKS cluster:<pre>kubectl create ns security</pre> | DevOps engineer | 
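The service-account association task above can be sketched with the AWS CLI's `create-pod-identity-association` command. The cluster name and account ID arguments are placeholders; the `keda` namespace and `keda-operator` service account match the values used in this pattern.

```shell
# Sketch of associating the keda-operator IAM role with the keda-operator
# Kubernetes service account through EKS Pod Identity.
cat > associate-keda-role.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
CLUSTER_NAME="$1"
ACCOUNT_ID="$2"
aws eks create-pod-identity-association \
    --cluster-name "$CLUSTER_NAME" \
    --namespace keda \
    --service-account keda-operator \
    --role-arn "arn:aws:iam::$ACCOUNT_ID:role/keda-operator"
EOF
chmod +x associate-keda-role.sh
# Example: ./associate-keda-role.sh my-cluster 111122223333
```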

### Deploy the sample application
<a name="deploy-the-sample-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the application files. | Enter the following command to clone the [Event-driven auto scaling using EKS Pod Identity and KEDA repository](https://github.com/aws-samples/event-driven-autoscaling-using-podidentity-and-keda/tree/main) from GitHub:<pre>git clone https://github.com/aws-samples/event-driven-autoscaling-using-podidentity-and-keda.git</pre> | DevOps engineer | 
| Build the Docker image. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Push the Docker image to Amazon ECR. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html)You can find push commands by navigating to the Amazon ECR repository page and then choosing **View push commands**. | DevOps engineer | 
| Deploy the sample application. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Assign the IAM role to the application service account. | Do one of the following to associate the `keda-identity` IAM role with the service account for the sample application:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Deploy `ScaledObject` and `TriggerAuthentication`. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 

### Test auto scaling
<a name="test-auto-scaling"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Send messages to the Amazon SQS queue. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
| Monitor the application pods. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | DevOps engineer | 
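Driving the scaler can be sketched as a small script that sends a batch of messages and then watches the pods. The queue URL is a placeholder for your `event-messages-queue` URL, and the `security` namespace matches the one created earlier in this pattern.

```shell
# Sketch of a load generator for the auto scaling test: send N messages
# to the queue, then watch KEDA add pods (roughly 1 per 5 unread messages).
cat > send-test-messages.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
QUEUE_URL="$1"
COUNT="${2:-20}"
for i in $(seq 1 "$COUNT"); do
  aws sqs send-message --queue-url "$QUEUE_URL" --message-body "test message $i"
done
EOF
chmod +x send-test-messages.sh
# Example:
# ./send-test-messages.sh "https://sqs.us-east-1.amazonaws.com/111122223333/event-messages-queue" 20
# kubectl get pods -n security --watch
```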

## Troubleshooting
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The KEDA operator cannot scale the application. | Enter the following command to check the logs of the KEDA operator:<pre>kubectl logs -n keda -l app=keda-operator -c keda-operator</pre> If there is an `HTTP 403` response code, then the application and the KEDA scaler do not have sufficient permissions to access the Amazon SQS queue. Complete the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html)If there is an `Assume-Role` error, then an [Amazon EKS node IAM role](https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html) is unable to assume the IAM role that is defined for `TriggerAuthentication`. Complete the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/event-driven-auto-scaling-with-eks-pod-identity-and-keda.html) | 

## Related resources
<a name="event-driven-auto-scaling-with-eks-pod-identity-and-keda-resources"></a>
+ [Set up the Amazon EKS Pod Identity Agent](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html) (Amazon EKS documentation)
+ [Deploying KEDA](https://keda.sh/docs/2.14/deploy/) (KEDA documentation)
+ [ScaledObject specification](https://keda.sh/docs/2.16/reference/scaledobject-spec/) (KEDA documentation)
+ [Authentication with TriggerAuthentication](https://keda.sh/docs/2.14/concepts/authentication/) (KEDA documentation)

# Streamline PostgreSQL deployments on Amazon EKS by using PGO
<a name="streamline-postgresql-deployments-amazon-eks-pgo"></a>

*Shalaka Dengale, Amazon Web Services*

## Summary
<a name="streamline-postgresql-deployments-amazon-eks-pgo-summary"></a>

This pattern integrates the Postgres Operator from Crunchy Data (PGO) with Amazon Elastic Kubernetes Service (Amazon EKS) to streamline PostgreSQL deployments in cloud-native environments. PGO provides automation and scalability for managing PostgreSQL databases in Kubernetes. When you combine PGO with Amazon EKS, it forms a robust platform for deploying, managing, and scaling PostgreSQL databases efficiently.

This integration provides the following key benefits:
+ Automated deployment: Simplifies PostgreSQL cluster deployment and management.
+ Custom resource definitions (CRDs): Uses Kubernetes primitives for PostgreSQL management.
+ High availability: Supports automatic failover and synchronous replication.
+ Automated backups and restores: Streamlines backup and restore processes.
+ Horizontal scaling: Enables dynamic scaling of PostgreSQL clusters.
+ Version upgrades: Facilitates rolling upgrades with minimal downtime.
+ Security: Enforces encryption, access controls, and authentication mechanisms.
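The CRD-driven workflow above centers on a single `PostgresCluster` custom resource. The following is a hypothetical minimal manifest for illustration (the cluster name, replica count, and volume sizes are placeholders, not values from this pattern):

```shell
# Sketch of a minimal PostgresCluster manifest for PGO; names and sizes
# are illustrative only.
cat > hippo-cluster.yaml <<'EOF'
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 16
  instances:
    - name: instance1
      replicas: 2            # horizontal scaling: increase this to add replicas
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:              # PGO's automated backup and restore component
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
EOF
# kubectl apply -f hippo-cluster.yaml
```

Editing this one resource and reapplying it is how PGO drives deployment, scaling, and backup changes, rather than through imperative commands.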

## Prerequisites and limitations
<a name="streamline-postgresql-deployments-amazon-eks-pgo-prereqs"></a>

**Prerequisites**
+ An active AWS account.
+ [AWS Command Line Interface (AWS CLI) version 2](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html), installed and configured on Linux, macOS, or Windows.
+ [AWS CLI configuration](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html), to connect to AWS resources from the command line.
+ [eksctl](https://github.com/eksctl-io/eksctl#installation), installed and configured on Linux, macOS, or Windows.
+ `kubectl`, installed and configured to access resources on your Amazon EKS cluster. For more information, see [Set up kubectl and eksctl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) in the Amazon EKS documentation. 
+ Your computer terminal configured to access the Amazon EKS cluster. For more information, see [Configure your computer to communicate with your cluster](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-configure-kubectl) in the Amazon EKS documentation.

**Product versions**
+ Kubernetes version 1.21 or later. For the exact versions that your PGO release supports, see the [PGO documentation](https://access.crunchydata.com/documentation/postgres-operator/5.2.5/).
+ PostgreSQL version 10 or later. This pattern uses PostgreSQL version 16.

**Limitations**
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see the [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) page, and choose the link for the service.

## Architecture
<a name="streamline-postgresql-deployments-amazon-eks-pgo-architecture"></a>

**Target technology stack**
+ Amazon EKS
+ Amazon Virtual Private Cloud (Amazon VPC)
+ Amazon Elastic Compute Cloud (Amazon EC2)

**Target architecture**

![\[Architecture for using PGO with three Availability Zones and two replicas, PgBouncer, and PGO operator.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/4c164012-7527-4ebe-b6a7-c129600328d6/images/26a5572b-405b-4634-b96a-91254c3ea2c1.png)


This pattern builds an architecture that contains an Amazon EKS cluster with three nodes. Each node runs on a set of EC2 instances in the backend. This PostgreSQL setup follows a primary replica architecture, which is particularly effective for read-heavy use cases. The architecture includes the following components:
+ **Primary database container (pg-primary)** hosts the main PostgreSQL instance where all write operations are directed.
+ **Secondary replica containers (pg-replica)** host the PostgreSQL instances that replicate the data from the primary database and handle read operations.
+ **PgBouncer** is a lightweight connection pooler for PostgreSQL databases that's included with PGO. It sits between the client and the PostgreSQL server, and acts as an intermediary for database connections.
+ **PGO** automates the deployment and management of PostgreSQL clusters in this Kubernetes environment.
+ **Patroni** is an open-source tool that manages and automates high availability configurations for PostgreSQL. It's included with PGO. When you use Patroni with PGO in Kubernetes, it plays a crucial role in ensuring the resilience and fault tolerance of a PostgreSQL cluster. For more information, see the [Patroni documentation](https://patroni.readthedocs.io/en/latest/).

The workflow includes these steps:
+ **Deploy the PGO operator**. You deploy the PGO operator on your Kubernetes cluster that runs on Amazon EKS. This can be done by using Kubernetes manifests or Helm charts. This pattern uses Kubernetes manifests.
+ **Define PostgreSQL instances**. When the operator is running, you create custom resources (CRs) to specify the desired state of PostgreSQL instances. This includes configurations such as storage, replication, and high availability settings.
+ **Operator management**. You interact with the operator through Kubernetes API objects such as CRs to create, update, or delete PostgreSQL instances.
+ **Monitoring and maintenance**. You can monitor the health and performance of the PostgreSQL instances running on Amazon EKS. Operators often provide metrics and logging for monitoring purposes. You can perform routine maintenance tasks such as upgrades and patching as necessary. For more information, see [Monitor your cluster performance and view logs](https://docs.aws.amazon.com/eks/latest/userguide/eks-observe.html) in the Amazon EKS documentation.
+ **Scaling and backup**. You can use the features provided by the operator to scale PostgreSQL instances and manage backups.

This pattern doesn't cover monitoring, maintenance, and backup operations.
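
The "define PostgreSQL instances" step in this workflow is driven by a `PostgresCluster` custom resource. The following manifest is an illustrative sketch only — the cluster name, namespace, and storage request are placeholder values, not settings prescribed by this pattern:

```yaml
# Hypothetical PostgresCluster custom resource for PGO; adjust the
# name, namespace, and storage request for your environment.
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  namespace: postgres-operator
spec:
  postgresVersion: 16
  instances:
    - name: pg-1
      replicas: 3               # one primary plus two read replicas
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
  proxy:
    pgBouncer:                  # connection pooler shown in the diagram
      replicas: 1
```

Applying a manifest like this with `kubectl apply` hands the desired state to PGO, which then creates and reconciles the underlying StatefulSets, Services, and volumes.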

**Automation and scale**
+ You can use CloudFormation to automate the infrastructure creation. For more information, see [Create Amazon EKS resources with CloudFormation](https://docs.aws.amazon.com/eks/latest/userguide/creating-resources-with-cloudformation.html) in the Amazon EKS documentation.
+ You can use GitVersion or Jenkins build numbers to automate the deployment of database instances.

## Tools
<a name="streamline-postgresql-deployments-amazon-eks-pgo-tools"></a>

**AWS services**
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) helps you run Kubernetes on AWS without needing to install or maintain your own Kubernetes control plane or nodes.  
+ [AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html) is an open-source tool that helps you interact with AWS services through commands in your command line shell.

**Other tools**
+ [eksctl](https://eksctl.io/) is a simple command line tool for creating clusters on Amazon EKS.
+ [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) is a command line utility for running commands against Kubernetes clusters.
+ [PGO](https://github.com/CrunchyData/postgres-operator) automates and scales the management of PostgreSQL databases in Kubernetes.

## Best practices
<a name="streamline-postgresql-deployments-amazon-eks-pgo-best-practices"></a>

Follow these best practices to ensure a smooth and efficient deployment:
+ **Secure your EKS cluster**. Implement security best practices for your EKS cluster, such as using AWS Identity and Access Management (IAM) roles for service accounts (IRSA), network policies, and VPC security groups. Limit access to the EKS cluster API server, and encrypt communications between nodes and the API server by using TLS.
+ **Ensure version compatibility** between PGO and Kubernetes running on Amazon EKS. Some PGO features might require specific Kubernetes versions or introduce compatibility limitations. For more information, see [Components and Compatibility](https://access.crunchydata.com/documentation/postgres-operator/5.2.5/references/components/) in the PGO documentation.
+ **Plan resource allocation** for your PGO deployment, including CPU, memory, and storage. Consider the resource requirements of both PGO and the PostgreSQL instances it manages. Monitor resource usage and scale resources as needed.
+ **Design for high availability**. Design your PGO deployment for high availability to minimize downtime and ensure reliability. Deploy multiple replicas of PGO across multiple Availability Zones for fault tolerance.
+ **Implement backup and restore procedures** for your PostgreSQL databases that PGO manages. Use features provided by PGO or third-party backup solutions that are compatible with Kubernetes and Amazon EKS.
+ **Set up monitoring and logging** for your PGO deployment to track performance, health, and events. Use tools such as Prometheus for monitoring metrics and Grafana for visualization. Configure logging to capture PGO logs for troubleshooting and auditing.
+ **Configure networking** properly to allow communications between PGO, PostgreSQL instances, and other services in your Kubernetes cluster. Use Amazon VPC networking features and Kubernetes networking plugins such as Calico or [Amazon VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s) for network policy enforcement and traffic isolation.
+ **Choose appropriate storage options** for your PostgreSQL databases, considering factors such as performance, durability, and scalability. Use Amazon Elastic Block Store (Amazon EBS) volumes or AWS managed storage services for persistent storage. For more information, see [Store Kubernetes volumes with Amazon EBS](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html) in the Amazon EKS documentation.
+ **Use infrastructure as code (IaC) tools** such as CloudFormation to automate the deployment and configuration of PGO on Amazon EKS. Define infrastructure components—including the EKS cluster, networking, and PGO resources—as code for consistency, repeatability, and version control.
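
To make the networking best practice concrete, a Kubernetes `NetworkPolicy` can restrict ingress to the database pods. This is a hedged sketch — the namespace and the `postgres-operator.crunchydata.com/cluster` label selector are assumptions about how PGO labels its pods, so verify them with `kubectl get pods --show-labels` before relying on the policy:

```yaml
# Illustrative policy: only pods in the "app" namespace may reach
# PostgreSQL (port 5432) in the postgres-operator namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-postgres
  namespace: postgres-operator
spec:
  podSelector:
    matchLabels:
      postgres-operator.crunchydata.com/cluster: hippo  # assumed PGO label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: app
      ports:
        - protocol: TCP
          port: 5432
```

Note that enforcement requires a network policy engine such as Calico or the Amazon VPC CNI network policy feature.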

## Epics
<a name="streamline-postgresql-deployments-amazon-eks-pgo-epics"></a>

### Create an IAM role
<a name="create-an-iam-role"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an IAM role. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | AWS administrator | 

### Create an Amazon EKS cluster
<a name="create-an-eks-cluster"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an Amazon EKS cluster. | If you've already deployed a cluster, skip this step. Otherwise, deploy an Amazon EKS cluster in your current AWS account by using `eksctl`, Terraform, or CloudFormation. This pattern uses `eksctl` for cluster deployment and Amazon EC2 as the node group for Amazon EKS. If you want to use AWS Fargate, see the `managedNodeGroups` configuration in the [eksctl documentation](https://eksctl.io/usage/schema/#managedNodeGroups). [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | AWS administrator, Terraform or eksctl administrator, Kubernetes administrator | 
| Validate the status of the cluster. | Run the following command to see the current status of nodes in the cluster:<pre>kubectl get nodes</pre>If you encounter errors, see the [troubleshooting section](https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html) of the Amazon EKS documentation. | AWS administrator, Terraform or eksctl administrator, Kubernetes administrator | 
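
Instead of passing flags to `eksctl create cluster`, you can describe the cluster in a config file. The following is a minimal sketch; the cluster name, Region, and node settings are placeholders rather than values this pattern requires:

```yaml
# cluster.yaml — illustrative eksctl cluster definition
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: pgo-demo        # placeholder cluster name
  region: us-west-2     # placeholder Region
managedNodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 3  # matches the three-node architecture above
```

Create the cluster with `eksctl create cluster -f cluster.yaml`; the file can then be version-controlled alongside the rest of your infrastructure code.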

### Create an OIDC identity provider
<a name="create-an-oidc-identity-provider"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Enable the IAM OIDC provider. | As a prerequisite for the Amazon EBS Container Storage Interface (CSI) driver, you must have an existing IAM OpenID Connect (OIDC) provider for your cluster. Enable the IAM OIDC provider by using the following command:<pre>eksctl utils associate-iam-oidc-provider --region={region} --cluster={YourClusterNameHere} --approve</pre>For more information about this step, see the [Amazon EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html). | AWS administrator | 
| Create an IAM role for the Amazon EBS CSI driver. | Use the following `eksctl` command to create the IAM role for the CSI driver:<pre>eksctl create iamserviceaccount \<br />  --region {RegionName} \<br />  --name ebs-csi-controller-sa \<br />  --namespace kube-system \<br />  --cluster {YourClusterNameHere} \<br />  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \<br />  --approve \<br />  --role-only \<br />  --role-name AmazonEKS_EBS_CSI_DriverRole</pre>If you use encrypted Amazon EBS drives, you have to configure the policy further. For instructions, see the [Amazon EBS CSI driver documentation](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/install.md#installation-1). | AWS administrator | 
| Add the Amazon EBS CSI driver. | Use the following `eksctl` command to add the Amazon EBS CSI driver:<pre>eksctl create addon \<br />  --name aws-ebs-csi-driver \<br />  --cluster <YourClusterName> \<br />  --service-account-role-arn arn:aws:iam::$(aws sts get-caller-identity \<br />  --query Account \<br />  --output text):role/AmazonEKS_EBS_CSI_DriverRole \<br />  --force</pre> | AWS administrator | 
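
If you manage the cluster through an eksctl config file, the add-on can also be declared there rather than created imperatively. A sketch, assuming the `AmazonEKS_EBS_CSI_DriverRole` from the previous step already exists (the account ID is a placeholder):

```yaml
# Fragment of an eksctl ClusterConfig; declares the EBS CSI add-on
# with the IAM role created earlier.
addons:
  - name: aws-ebs-csi-driver
    serviceAccountRoleARN: arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole
```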

### Install PGO
<a name="install-pgo"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Clone the PGO repository. | Clone the GitHub repository for PGO:<pre>git clone https://github.com/CrunchyData/postgres-operator-examples.git </pre> | AWS DevOps | 
| Provide the role details for service account creation. | To grant the Amazon EKS cluster access to the required AWS resources, specify the Amazon Resource Name (ARN) of the OIDC role that you created earlier in the `service_account.yaml` file that is located in [GitHub](https://github.com/CrunchyData/postgres-operator/blob/main/config/rbac/cluster/service_account.yaml).<pre>cd postgres-operator-examples</pre><pre>---<br />metadata:<br />  annotations:<br />    eks.amazonaws.com/role-arn: arn:aws:iam::<accountId>:role/<role_name> # Update the OIDC role ARN created earlier</pre> | AWS administrator, Kubernetes administrator | 
| Create the namespace and PGO prerequisites. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | Kubernetes administrator | 
| Verify the creation of pods. | Verify that the namespace and default configuration were created:<pre>kubectl get pods -n postgres-operator</pre> | AWS administrator, Kubernetes administrator | 
| Verify PVCs. | Use the following command to verify persistent volume claims (PVCs):<pre>kubectl describe pvc -n postgres-operator</pre> | AWS administrator, Kubernetes administrator | 

### Create and deploy an operator
<a name="create-and-deploy-an-operator"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create an operator. | Revise the contents of the file located at `/kustomize/postgres/postgres.yaml` to match the following:<pre>spec:<br />  instances:<br />    - name: pg-1<br />      replicas: 3<br />  patroni:<br />    dynamicConfiguration:<br />      postgresql:<br />        pg_hba:<br />          - "host all all 0.0.0.0/0 trust" # enables logical replication with programmatic access<br />          - "host all postgres 127.0.0.1/32 md5"<br />      synchronous_mode: true<br />  users:<br />  - name: replicator<br />    databases:<br />      - testdb<br />    options: "REPLICATION"</pre>These updates do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | AWS administrator, DBA, Kubernetes administrator | 
| Deploy the operator. | Deploy the PGO operator to enable the streamlined management and operation of PostgreSQL databases in Kubernetes environments:<pre>kubectl apply -k kustomize/postgres</pre> | AWS administrator, DBA, Kubernetes administrator | 
| Verify the deployment. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) From the command output, note the primary replica (`primary_pod_name`) and read replica (`read_pod_name`). You will use these in the next steps. | AWS administrator, DBA, Kubernetes administrator | 

### Verify streaming replication
<a name="verify-streaming-replication"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Write data to the primary replica. | Use the following commands to connect to the PostgreSQL primary replica and write data to the database:<pre>kubectl exec -it <primary_pod_name> bash -n postgres-operator</pre><pre>psql</pre><pre>CREATE TABLE customers (firstname text, customer_id serial, date_created timestamp);<br />\dt</pre> | AWS administrator, Kubernetes administrator | 
| Confirm that the read replica has the same data. | Connect to the PostgreSQL read replica and check whether the streaming replication is working correctly:<pre>kubectl exec -it {read_pod_name} bash -n postgres-operator</pre><pre>psql</pre><pre>\dt</pre>The read replica should have the table that you created in the primary replica in the previous step. | AWS administrator, Kubernetes administrator | 
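
Beyond comparing table listings, you can inspect replication health directly from the primary. The following query is a generic PostgreSQL sketch (run it inside `psql` on the primary pod); with `synchronous_mode` enabled, at least one replica should report a `sync_state` of `sync`:

```sql
-- One row per connected standby; "state" should be "streaming".
SELECT application_name, client_addr, state, sync_state
FROM pg_stat_replication;
```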

## Troubleshooting
<a name="streamline-postgresql-deployments-amazon-eks-pgo-troubleshooting"></a>


| Issue | Solution | 
| --- | --- | 
| The pod doesn’t start. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | 
| Replicas are significantly behind the primary database. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | 
| You don’t have visibility into the performance and health of the PostgreSQL cluster. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | 
| Replication doesn’t work. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/streamline-postgresql-deployments-amazon-eks-pgo.html) | 

## Related resources
<a name="streamline-postgresql-deployments-amazon-eks-pgo-resources"></a>
+ [Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/amazon-elastic-kubernetes-service.html) (*Overview of Deployment Options on AWS* whitepaper)
+  [CloudFormation](https://docs.aws.amazon.com/whitepapers/latest/overview-deployment-options/aws-cloudformation.html) (*Overview of Deployment Options on AWS* whitepaper)
+ [Get started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) (*Amazon EKS User Guide*)
+ [Set up kubectl and eksctl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) (*Amazon EKS User Guide*)
+ [Create a role for OpenID Connect federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_oidc.html) (*IAM User Guide*)
+ [Configuring settings for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) (*AWS CLI User Guide*)
+ [Crunchy Postgres for Kubernetes documentation](https://access.crunchydata.com/documentation/postgres-operator/latest)
+ [Crunch & Learn: Crunchy Postgres for Kubernetes 5.0](https://www.youtube-nocookie.com/embed/IIf9WZO3K50) (video)

# Simplify application authentication with mutual TLS in Amazon ECS by using Application Load Balancer
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs"></a>

*Olawale Olaleye and Shamanth Devagari, Amazon Web Services*

## Summary
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-summary"></a>

This pattern helps you to simplify your application authentication and offload security burdens with mutual TLS in Amazon Elastic Container Service (Amazon ECS) by using [Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/mutual-authentication.html). With ALB, you can authenticate X.509 client certificates from AWS Private Certificate Authority. This powerful combination helps to achieve secure communication between your services, reducing the need for complex authentication mechanisms within your applications. In addition, the pattern uses Amazon Elastic Container Registry (Amazon ECR) to store container images.

The example in this pattern uses Docker images from a public gallery to create the sample workloads initially. Subsequently, new Docker images are built and stored in Amazon ECR. For the source, consider a Git-based system such as GitHub, GitLab, or Bitbucket, or use Amazon Simple Storage Service (Amazon S3). Consider using AWS CodeBuild to build the subsequent Docker images.
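
If you adopt CodeBuild for the image builds, the build is typically described by a `buildspec.yaml` in the repository root. The snippet below is a generic sketch, not the buildspec from this pattern's repository; `$AWS_DEFAULT_REGION` and `$ECR_REPO_URI` are assumed to be set as CodeBuild project environment variables:

```yaml
# Illustrative buildspec: log in to Amazon ECR, build the image, push it.
version: 0.2
phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REPO_URI
  build:
    commands:
      - docker build -t $ECR_REPO_URI:latest .
  post_build:
    commands:
      - docker push $ECR_REPO_URI:latest
```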

## Prerequisites and limitations
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-prereqs"></a>

**Prerequisites**
+ An active AWS account with access to deploy AWS CloudFormation stacks. Make sure that you have AWS Identity and Access Management (IAM) [user or role permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/control-access-with-iam.html) to deploy CloudFormation.
+ AWS Command Line Interface (AWS CLI) [installed](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). [Configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) your AWS credentials on your local machine or in your environment by either using the AWS CLI or by setting the environment variables in the `~/.aws/credentials` file.
+ OpenSSL [installed](https://www.openssl.org/).
+ Docker [installed](https://www.docker.com/get-started/).
+ Familiarity with the AWS services described in [Tools](#simplify-application-authentication-with-mutual-tls-in-amazon-ecs-tools).
+ Knowledge of Docker and NGINX.

**Limitations**
+ Mutual TLS for Application Load Balancer only supports X.509v3 client certificates. X.509v1 client certificates are not supported.
+ The CloudFormation template that is provided in this pattern’s code repository doesn’t include provisioning a CodeBuild project as part of the stack.
+ Some AWS services aren’t available in all AWS Regions. For Region availability, see [AWS Services by Region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For specific endpoints, see [Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html), and choose the link for the service.

**Product versions**
+ Docker version 27.3.1 or later
+ AWS CLI version 2.14.5 or later

## Architecture
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-architecture"></a>

The following diagram shows the architecture components for this pattern.

![\[Workflow to authenticate with mutual TLS using Application Load Balancer.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/images/pattern-img/a343fa4e-097f-416b-9c83-01a28eb57dc3/images/e1371297-b987-4487-9b13-8120933c921f.png)


 The diagram shows the following workflow:

1. Create a Git repository, and commit the application code to the repository.

1. Create a private certificate authority (CA) in AWS Private CA.

1. Create a CodeBuild project. The CodeBuild project is triggered by commit changes; it builds the Docker image and publishes the built image to Amazon ECR.

1. Copy the certificate chain and certificate body from the CA, and upload the certificate bundle to Amazon S3.

1. Create a trust store with the CA bundle that you uploaded to Amazon S3. Associate the trust store with the mutual TLS listeners on the Application Load Balancer (ALB).

1. Use the private CA to issue client certificates for the container workloads. Also create a private TLS certificate using AWS Private CA.

1. Import the private TLS certificate into AWS Certificate Manager (ACM), and use it with the ALB.

1. The container workload in `ServiceTwo` uses the issued client certificate to authenticate with the ALB when it communicates with the container workload in `ServiceOne`.

1. The container workload in `ServiceOne` uses the issued client certificate to authenticate with the ALB when it communicates with the container workload in `ServiceTwo`.
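
Steps 2 and 6 of this workflow can be rehearsed locally with OpenSSL before you involve AWS Private CA: a throwaway root CA signs a client certificate signing request, and the issued certificate verifies against the CA bundle. The subject names below are placeholders, and in the pattern itself the CA operations run through `acm-pca`, not a local key pair:

```shell
# Create a short-lived local root CA (stand-in for AWS Private CA, step 2).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout ca.key -out ca.pem -subj "/CN=demo-root-ca"

# Generate a client key and CSR for a service (step 6).
openssl req -newkey rsa:2048 -nodes \
    -keyout client.key -out client.csr -subj "/CN=service-one"

# Sign the CSR with the local CA to issue the client certificate.
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -out client.pem -days 1

# Verify the client certificate against the CA bundle, as the ALB
# trust store does during the mutual TLS handshake.
openssl verify -CAfile ca.pem client.pem
```

A successful verification prints `client.pem: OK`, which mirrors the check that the ALB trust store performs at connection time.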

**Automation and scale**

This pattern can be fully automated by using CloudFormation, AWS Cloud Development Kit (AWS CDK), or API operations from an SDK to provision the AWS resources.

You can use AWS CodePipeline to implement a continuous integration and continuous deployment (CI/CD) pipeline that uses CodeBuild to automate the container image build process and deploy new releases to the Amazon ECS cluster services.

## Tools
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-tools"></a>

**AWS services**
+ [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) helps you create, store, and renew public and private SSL/TLS X.509 certificates and keys that protect your AWS websites and applications.
+ [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) helps you set up AWS resources, provision them quickly and consistently, and manage them throughout their lifecycle across AWS accounts and AWS Regions.
+ [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html) is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.
+ [Amazon Elastic Container Registry (Amazon ECR)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is a managed container image registry service that’s secure, scalable, and reliable.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) is a highly scalable, fast container management service for running, stopping, and managing containers on a cluster. You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances that you manage.
+ [Amazon ECS Exec](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html) allows you to directly interact with containers without needing to first interact with the host container operating system, open inbound ports, or manage SSH keys. You can use ECS Exec to run commands in, or get a shell to, a container running on an Amazon EC2 instance or on AWS Fargate.
+ [Elastic Load Balancing (ELB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) distributes incoming application or network traffic across multiple targets. For example, you can distribute traffic across Amazon EC2 instances, containers, and IP addresses, in one or more Availability Zones. ELB monitors the health of its registered targets, and routes traffic only to the healthy targets. ELB scales your load balancer as your incoming traffic changes over time. It can automatically scale to the majority of workloads.
+ [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html) helps you run containers without needing to manage servers or Amazon EC2 instances. Fargate is compatible with both Amazon ECS and Amazon Elastic Kubernetes Service (Amazon EKS). You can run your Amazon ECS tasks and services with the Fargate launch type or a Fargate capacity provider. To do so, package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and doesn’t share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
+ [AWS Private Certificate Authority](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) enables creation of private certificate authority (CA) hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating an on-premises CA.

**Other tools**
+ [Docker](https://www.docker.com/) is a set of platform as a service (PaaS) products that use virtualization at the operating-system level to deliver software in containers.
+ [GitHub](https://docs.github.com/en/repositories/creating-and-managing-repositories/quickstart-for-repositories), [GitLab](https://docs.gitlab.com/ee/user/get_started/get_started_projects.html), and [Bitbucket](https://support.atlassian.com/bitbucket-cloud/docs/tutorial-learn-bitbucket-with-git/) are commonly used Git-based source control systems for tracking source code changes.
+ [NGINX Open Source](https://nginx.org/en/docs/?_ga=2.187509224.1322712425.1699399865-405102969.1699399865) is an open source load balancer, content cache, and web server. This pattern uses it as a web server.
+ [OpenSSL](https://www.openssl.org/) is an open source library that provides services that are used by the OpenSSL implementations of TLS and CMS. 

**Code repository**

The code for this pattern is available in the GitHub [mTLS-with-Application-Load-Balancer-in-Amazon-ECS](https://github.com/aws-samples/mTLS-with-Application-Load-Balancer-in-Amazon-ECS) repository.

## Best practices
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-best-practices"></a>
+ Use Amazon ECS Exec to run commands or get a shell to a container running on Fargate. You can also use ECS Exec to help collect diagnostic information for debugging.
+ Use security groups and network access control lists (ACLs) to control inbound and outbound traffic between the services. Fargate tasks receive an IP address from the configured subnet in your virtual private cloud (VPC).

## Epics
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-epics"></a>

### Create the repository
<a name="create-the-repository"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Download the source code. | To download this pattern’s source code, fork or clone the GitHub [mTLS-with-Application-Load-Balancer-in-Amazon-ECS](https://github.com/aws-samples/mTLS-with-Application-Load-Balancer-in-Amazon-ECS) repository. | DevOps engineer | 
| Create a Git repository. | To create a Git repository to contain the Dockerfile and the `buildspec.yaml` files, use the following steps:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)`git clone https://github.com/aws-samples/mTLS-with-Application-Load-Balancer-in-Amazon-ECS.git` | DevOps engineer | 

### Create CA and generate certificates
<a name="create-ca-and-generate-certificates"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Create a private CA in AWS Private CA. | To create a private certificate authority (CA), run the following commands in your terminal. Replace the values in the example variables with your own values. <pre>export AWS_DEFAULT_REGION="us-west-2"<br />export SERVICES_DOMAIN="www.example.com"<br /><br />export ROOT_CA_ARN=`aws acm-pca create-certificate-authority \<br />    --certificate-authority-type ROOT \<br />    --certificate-authority-configuration \<br />    "KeyAlgorithm=RSA_2048,<br />    SigningAlgorithm=SHA256WITHRSA,<br />    Subject={<br />        Country=US,<br />        State=WA,<br />        Locality=Seattle,<br />        Organization=Build on AWS,<br />        OrganizationalUnit=mTLS Amazon ECS and ALB Example,<br />        CommonName=${SERVICES_DOMAIN}}" \<br />        --query CertificateAuthorityArn --output text`</pre>For more details, see [Create a private CA in AWS Private CA](https://docs.aws.amazon.com/privateca/latest/userguide/create-CA.html) in the AWS documentation. | DevOps engineer, AWS DevOps | 
| Create and install your private CA certificate. | To create and install a certificate for your private root CA, run the following commands in your terminal:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html) | AWS DevOps, DevOps engineer | 
| Request a managed certificate. | To request a private certificate in AWS Certificate Manager to use with your private ALB, use the following command:<pre>export TLS_CERTIFICATE_ARN=`aws acm request-certificate \<br />    --domain-name "*.${SERVICES_DOMAIN}" \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --query CertificateArn --output text`</pre> | DevOps engineer, AWS DevOps | 
| Use the private CA to issue a client certificate. | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)`openssl req -out client_csr1.pem -new -newkey rsa:2048 -nodes -keyout client_private-key1.pem``openssl req -out client_csr2.pem -new -newkey rsa:2048 -nodes -keyout client_private-key2.pem`These commands return the CSR and the private key for each of the two services. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)<pre>SERVICE_ONE_CERT_ARN=`aws acm-pca issue-certificate \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --csr fileb://client_csr1.pem \<br />    --signing-algorithm "SHA256WITHRSA" \<br />    --validity Value=5,Type="YEARS" --query CertificateArn --output text` <br /><br />echo "SERVICE_ONE_CERT_ARN: ${SERVICE_ONE_CERT_ARN}"<br /><br />aws acm-pca get-certificate \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --certificate-arn ${SERVICE_ONE_CERT_ARN} \<br />     | jq -r '.Certificate' > client_cert1.cert<br /><br />SERVICE_TWO_CERT_ARN=`aws acm-pca issue-certificate \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --csr fileb://client_csr2.pem \<br />    --signing-algorithm "SHA256WITHRSA" \<br />    --validity Value=5,Type="YEARS" --query CertificateArn --output text` <br /><br />echo "SERVICE_TWO_CERT_ARN: ${SERVICE_TWO_CERT_ARN}"<br /><br />aws acm-pca get-certificate \<br />    --certificate-authority-arn ${ROOT_CA_ARN} \<br />    --certificate-arn ${SERVICE_TWO_CERT_ARN} \<br />     | jq -r '.Certificate' > client_cert2.cert</pre>For more information, see [Issue private end-entity certificates](https://docs.aws.amazon.com/privateca/latest/userguide/PcaIssueCert.html) in the AWS documentation. | DevOps engineer, AWS DevOps | 
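The `openssl req` commands in this table prompt for the subject fields interactively. The following non-interactive sketch generates the CSR and key for service 1, passing example subject values through `-subj` (repeat with the `2` file names for service 2; the subject values are assumptions, replace them with your own):

```shell
# Generate the CSR and private key for service 1 without interactive prompts;
# the -subj values are examples only.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout client_private-key1.pem -out client_csr1.pem \
    -subj "/C=US/ST=WA/L=Seattle/O=Build on AWS/CN=service-one.${SERVICES_DOMAIN:-www.example.com}"

# Inspect the subject before submitting the CSR to AWS Private CA.
openssl req -in client_csr1.pem -noout -subject
```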

### Provision AWS services
<a name="provision-aws-services"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Provision AWS services with the CloudFormation template. | To provision the virtual private cloud (VPC), Amazon ECS cluster, Amazon ECS services, Application Load Balancer, and Amazon Elastic Container Registry (Amazon ECR), use the CloudFormation template. | DevOps engineer | 
| Get variables. | Verify that you have an Amazon ECS cluster with two services running. To retrieve the resource details and store them as variables, use the following commands:<pre><br />export LoadBalancerDNS=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`LoadBalancerDNS`].OutputValue')<br /><br />export ECRRepositoryUri=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`ECRRepositoryUri`].OutputValue')<br /><br />export ECRRepositoryServiceOneUri=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`ECRRepositoryServiceOneUri`].OutputValue')<br /><br />export ECRRepositoryServiceTwoUri=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`ECRRepositoryServiceTwoUri`].OutputValue')<br /><br />export ClusterName=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`ClusterName`].OutputValue')<br /><br />export BucketName=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`BucketName`].OutputValue')<br /><br />export Service1ListenerArn=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`Service1ListenerArn`].OutputValue')<br /><br />export Service2ListenerArn=$(aws cloudformation describe-stacks --stack-name ecs-mtls \<br />--output text \<br />--query 'Stacks[0].Outputs[?OutputKey==`Service2ListenerArn`].OutputValue')</pre> | DevOps engineer | 
| Create a CodeBuild project. | To use a CodeBuild project to create the Docker images for your Amazon ECS services, do the following:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)For more details, see [Create a build project in AWS CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html) in the AWS documentation. | AWS DevOps, DevOps engineer | 
| Build the Docker images. | You can use CodeBuild to perform the image build process. CodeBuild needs permissions to interact with Amazon ECR and to work with Amazon S3. As part of the process, the Docker image is built and pushed to the Amazon ECR registry. For details about the template and the code, see [Additional information](#simplify-application-authentication-with-mutual-tls-in-amazon-ecs-additional).(Optional) To build locally for test purposes, use the following commands:<pre># login to ECR<br />aws ecr get-login-password | docker login --username AWS --password-stdin $ECRRepositoryUri<br /><br /># build image for service one<br />cd /service1<br />aws s3 cp s3://$BucketName/serviceone/ service1/ --recursive<br />docker build -t $ECRRepositoryServiceOneUri .<br />docker push $ECRRepositoryServiceOneUri<br /><br /># build image for service two<br />cd ../service2<br />aws s3 cp s3://$BucketName/servicetwo/ service2/ --recursive<br />docker build -t $ECRRepositoryServiceTwoUri .<br />docker push $ECRRepositoryServiceTwoUri</pre> | DevOps engineer | 
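The first task in this table can also be run from the command line; a minimal sketch, assuming the template from the cloned repository is saved as `template.yaml` (the file name is an assumption):

```shell
# Deploy the stack that creates the VPC, ECS cluster, services, ALB, and ECR
# repositories. The template file name is an assumption.
aws cloudformation deploy \
    --stack-name ecs-mtls \
    --template-file template.yaml \
    --capabilities CAPABILITY_IAM \
    --region "$AWS_DEFAULT_REGION"
```

The stack name `ecs-mtls` matches the name used by the `describe-stacks` commands in the next task.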

### Enable mutual TLS
<a name="enable-mutual-tls"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Upload the CA certificate to Amazon S3. | To upload the CA certificate to the Amazon S3 bucket, use the following example command:`aws s3 cp ca-cert.pem s3://$BucketName/acm-trust-store/ ` | AWS DevOps, DevOps engineer | 
| Create the trust store. | To create the trust store, use the following example command:<pre>TrustStoreArn=`aws elbv2 create-trust-store --name acm-pca-trust-certs \<br />    --ca-certificates-bundle-s3-bucket $BucketName \<br />    --ca-certificates-bundle-s3-key acm-trust-store/ca-cert.pem --query 'TrustStores[].TrustStoreArn' --output text`</pre> | AWS DevOps, DevOps engineer | 
| Upload client certificates. | To upload client certificates to Amazon S3 for Docker images, use the following example command:<pre># for service one<br />aws s3 cp client_cert1.cert s3://$BucketName/serviceone/<br />aws s3 cp client_private-key1.pem s3://$BucketName/serviceone/<br /><br /># for service two<br />aws s3 cp client_cert2.cert s3://$BucketName/servicetwo/<br />aws s3 cp client_private-key2.pem s3://$BucketName/servicetwo/</pre> | AWS DevOps, DevOps engineer | 
| Modify the listener. | To enable mutual TLS on the ALB, modify the HTTPS listeners by using the following commands:<pre>aws elbv2 modify-listener \<br />    --listener-arn $Service1ListenerArn \<br />    --certificates CertificateArn=$TLS_CERTIFICATE_ARN \<br />    --ssl-policy ELBSecurityPolicy-2016-08 \<br />    --protocol HTTPS \<br />    --port 8080 \<br />    --mutual-authentication Mode=verify,TrustStoreArn=$TrustStoreArn,IgnoreClientCertificateExpiry=false<br /><br />aws elbv2 modify-listener \<br />    --listener-arn $Service2ListenerArn \<br />    --certificates CertificateArn=$TLS_CERTIFICATE_ARN \<br />    --ssl-policy ELBSecurityPolicy-2016-08 \<br />    --protocol HTTPS \<br />    --port 8090 \<br />    --mutual-authentication Mode=verify,TrustStoreArn=$TrustStoreArn,IgnoreClientCertificateExpiry=false<br /></pre>For more information, see [Configuring mutual TLS on an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/configuring-mtls-with-elb.html) in the AWS documentation. | AWS DevOps, DevOps engineer | 
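After the listeners are modified, you can check the behavior from a client that holds one of the issued certificates; a sketch, assuming service 1 listens on port 8080 (use 8090 for service 2):

```shell
# -k skips server-certificate host name checks, because the ACM certificate
# was issued for *.www.example.com rather than the ALB DNS name.

# Without a client certificate the TLS handshake should be rejected...
curl -k "https://${LoadBalancerDNS}:8080/" || echo "rejected without a client certificate"

# ...and with the issued client certificate the request should succeed.
curl -k --cert client_cert1.cert --key client_private-key1.pem \
    "https://${LoadBalancerDNS}:8080/"
```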

### Update the services
<a name="update-the-services"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Update the Amazon ECS task definition. | To update the Amazon ECS task definition, modify the `image` parameter in the new revision. Update the task definitions with the URIs of the new Docker images that you built in the previous steps. To get the value for the respective service, run `echo $ECRRepositoryServiceOneUri` or `echo $ECRRepositoryServiceTwoUri`.<pre><br />    "containerDefinitions": [<br />        {<br />            "name": "nginx",<br />            "image": "public.ecr.aws/nginx/nginx:latest",   # <----- change to new Uri<br />            "cpu": 0,</pre>For more information, see [Updating an Amazon ECS task definition using the console](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-task-definition-console-v2.html) in the AWS documentation.  | AWS DevOps, DevOps engineer | 
| Update the Amazon ECS service. | Update the service with the latest task definition. This task definition is the blueprint for the newly built Docker images, and it contains the client certificate that’s required for the mutual TLS authentication.  To update the service, use the following procedure:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html)Repeat the steps for the other service. | AWS administrator, AWS DevOps, DevOps engineer | 
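The console steps above can also be scripted; a sketch, where the service and task definition names are assumptions — substitute the names created by your CloudFormation stack:

```shell
# Point the service at the latest task definition revision and force a new
# deployment; repeat for the other service. Names below are assumptions.
aws ecs update-service \
    --cluster "$ClusterName" \
    --service service-one \
    --task-definition service-one-taskdef \
    --force-new-deployment
```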

### Access the application
<a name="access-the-application"></a>


| Task | Description | Skills required | 
| --- | --- | --- | 
| Copy the task ID. | Use the Amazon ECS console to view the task. When the task status has been updated to **Running**, select the task. In the **Task** section, copy the task ID. | AWS administrator, AWS DevOps | 
| Test your application. | To test your application, use ECS Exec to access the tasks.[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/simplify-application-authentication-with-mutual-tls-in-amazon-ecs.html) | AWS administrator, AWS DevOps | 
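A sketch of the ECS Exec session, assuming the container is named `nginx` and `TASK_ID` holds the task ID copied in the previous step (both are assumptions):

```shell
# Open an interactive shell inside the running task. Requires ECS Exec to be
# enabled on the service; TASK_ID and the container name are assumptions.
aws ecs execute-command \
    --cluster "$ClusterName" \
    --task "$TASK_ID" \
    --container nginx \
    --interactive \
    --command "/bin/sh"
```

From inside the task, you can call the other service with the client certificate that was baked into the image.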

## Related resources
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-resources"></a>

**Amazon ECS documentation**
+ [Creating an Amazon ECS task definition using the console](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html)
+ [Creating a container image for use on Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html)
+ [Amazon ECS clusters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html)
+ [Amazon ECS for AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-container-image.html#create-container-image-next-steps)
+ [Amazon ECS networking best practices](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/networking-best-practices.html)
+ [Amazon ECS service definition parameters](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html)

**Other AWS resources**
+ [How do I use AWS private CA to configure mTLS on the Application Load Balancer?](https://repost.aws/knowledge-center/elb-alb-configure-private-ca-mtls) (AWS re:Post)

## Additional information
<a name="simplify-application-authentication-with-mutual-tls-in-amazon-ecs-additional"></a>

**Editing the Dockerfile**

The following code shows the commands that you edit in the Dockerfile for service 1:

```
FROM public.ecr.aws/nginx/nginx:latest
WORKDIR /usr/share/nginx/html
RUN echo "Returning response from Service 1: Ok" > /usr/share/nginx/html/index.html
ADD client_cert1.cert client_private-key1.pem /usr/local/share/ca-certificates/
RUN chmod -R 400 /usr/local/share/ca-certificates/
```

The following code shows the commands that you edit in the Dockerfile for service 2:

```
FROM public.ecr.aws/nginx/nginx:latest
WORKDIR /usr/share/nginx/html
RUN echo "Returning response from Service 2: Ok" > /usr/share/nginx/html/index.html
ADD client_cert2.cert client_private-key2.pem /usr/local/share/ca-certificates/
RUN chmod -R 400 /usr/local/share/ca-certificates/
```

If you’re building the Docker images with CodeBuild, the `buildspec` file uses the first seven characters of the resolved source commit hash as the image tag to uniquely identify image versions. You can change the `buildspec` file to fit your requirements, as shown in the following `buildspec` custom code:

```
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REPOSITORY_URI
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
        # change the S3 path depending on the service
      - aws s3 cp s3://$YOUR_S3_BUCKET_NAME/serviceone/ $CODEBUILD_SRC_DIR/ --recursive 
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $ECR_REPOSITORY_URI:latest .
      - docker tag $ECR_REPOSITORY_URI:latest $ECR_REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $ECR_REPOSITORY_URI:latest
      - docker push $ECR_REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      # for ECS deployment reference
      - printf '[{"name":"%s","imageUri":"%s"}]' $CONTAINER_NAME $ECR_REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json   

artifacts:
  files:
    - imagedefinitions.json
```
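The tag derivation in the `pre_build` phase can be checked locally; the hash value below is an example:

```shell
# IMAGE_TAG is the first seven characters of the resolved commit hash,
# falling back to "latest" when the hash is empty.
CODEBUILD_RESOLVED_SOURCE_VERSION="0123456789abcdef0123456789abcdef01234567"
COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
IMAGE_TAG=${COMMIT_HASH:=latest}
echo "$IMAGE_TAG"   # prints 0123456
```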

# More patterns
<a name="containersandmicroservices-more-patterns-pattern-list"></a>

**Topics**
+ [Automate deletion of AWS CloudFormation stacks and associated resources](automate-deletion-cloudformation-stacks-associated-resources.md)
+ [Automate dynamic pipeline management for deploying hotfix solutions in Gitflow environments by using AWS Service Catalog and AWS CodePipeline](automate-dynamic-pipeline-management-for-deploying-hotfix-solutions.md)
+ [Automatically build CI/CD pipelines and Amazon ECS clusters for microservices using AWS CDK](automatically-build-ci-cd-pipelines-and-amazon-ecs-clusters-for-microservices-using-aws-cdk.md)
+ [Build and push Docker images to Amazon ECR using GitHub Actions and Terraform](build-and-push-docker-images-to-amazon-ecr-using-github-actions-and-terraform.md)
+ [Containerize mainframe workloads that have been modernized by Blu Age](containerize-mainframe-workloads-that-have-been-modernized-by-blu-age.md)
+ [Create a custom log parser for Amazon ECS using a Firelens log router](create-a-custom-log-parser-for-amazon-ecs-using-a-firelens-log-router.md)
+ [Deploy agentic systems on Amazon Bedrock with the CrewAI framework by using Terraform](deploy-agentic-systems-on-amazon-bedrock-with-the-crewai-framework.md)
+ [Deploy an environment for containerized Blu Age applications by using Terraform](deploy-an-environment-for-containerized-blu-age-applications-by-using-terraform.md)
+ [Deploy preprocessing logic into an ML model in a single endpoint using an inference pipeline in Amazon SageMaker](deploy-preprocessing-logic-into-an-ml-model-in-a-single-endpoint-using-an-inference-pipeline-in-amazon-sagemaker.md)
+ [Deploy workloads from Azure DevOps pipelines to private Amazon EKS clusters](deploy-workloads-from-azure-devops-pipelines-to-private-amazon-eks-clusters.md)
+ [Implement AI-powered Kubernetes diagnostics and troubleshooting with K8sGPT and Amazon Bedrock integration](implement-ai-powered-kubernetes-diagnostics-and-troubleshooting-with-k8sgpt-and-amazon-bedrock-integration.md)
+ [Manage blue/green deployments of microservices to multiple accounts and Regions by using AWS code services and AWS KMS multi-Region keys](manage-blue-green-deployments-of-microservices-to-multiple-accounts-and-regions-by-using-aws-code-services-and-aws-kms-multi-region-keys.md)
+ [Manage on-premises container applications by setting up Amazon ECS Anywhere with the AWS CDK](manage-on-premises-container-applications-by-setting-up-amazon-ecs-anywhere-with-the-aws-cdk.md)
+ [Migrate from Oracle WebLogic to Apache Tomcat (TomEE) on Amazon ECS](migrate-from-oracle-weblogic-to-apache-tomcat-tomee-on-amazon-ecs.md)
+ [Modernize ASP.NET Web Forms applications on AWS](modernize-asp-net-web-forms-applications-on-aws.md)
+ [Monitor Amazon ECR repositories for wildcard permissions using AWS CloudFormation and AWS Config](monitor-amazon-ecr-repositories-for-wildcard-permissions-using-aws-cloudformation-and-aws-config.md)
+ [Monitor application activity by using CloudWatch Logs Insights](monitor-application-activity-by-using-cloudwatch-logs-insights.md)
+ [Set up a CI/CD pipeline for hybrid workloads on Amazon ECS Anywhere by using AWS CDK and GitLab](set-up-a-ci-cd-pipeline-for-hybrid-workloads-on-amazon-ecs-anywhere-by-using-aws-cdk-and-gitlab.md)
+ [Set up end-to-end encryption for applications on Amazon EKS using cert-manager and Let's Encrypt](set-up-end-to-end-encryption-for-applications-on-amazon-eks-using-cert-manager-and-let-s-encrypt.md)
+ [Simplify Amazon EKS multi-tenant application deployment by using Flux](simplify-amazon-eks-multi-tenant-application-deployment-by-using-flux.md)
+ [Streamline machine learning workflows from local development to scalable experiments by using SageMaker AI and Hydra](streamline-machine-learning-workflows-by-using-amazon-sagemaker.md)
+ [Structure a Python project in hexagonal architecture using AWS Lambda](structure-a-python-project-in-hexagonal-architecture-using-aws-lambda.md)
+ [Test AWS infrastructure by using LocalStack and Terraform Tests](test-aws-infra-localstack-terraform.md)
+ [Coordinate resource dependency and task execution by using the AWS Fargate WaitCondition hook construct](use-the-aws-fargate-waitcondition-hook-construct.md)
+ [Use Amazon Bedrock agents to automate creation of access entry controls in Amazon EKS through text-based prompts](using-amazon-bedrock-agents-to-automate-creation-of-access-entry-controls-in-amazon-eks.md)